---
author:
- '[^1]'
- '[^2]'
title: Pulsating White Dwarfs
---
Introduction {#sec:intro}
============
White dwarf stars are the final evolutionary state of stars with initial masses up to 8.5–10.6 M$_\odot$[@Woosley15], corresponding to 95–97% of all stars. The fraction depends on the stellar metallicity, which affects both the Initial Mass Function and the Initial-to-Final Mass Relation. For single stars, the minimum mass of a present day white dwarf is around 0.30–0.45 M$_\odot$[@Kilic07], because progenitors that would become lower mass white dwarfs have main-sequence evolution times larger than the age of the Universe. Such masses correspond, considering the mass-radius relation of white dwarfs, to a minimum $\log g\simeq 6.5$. Evolutionary models, e.g. by Ref. [@Romero15], indicate that the maximum surface gravity for main sequence A stars, which have optical spectra similar to DA white dwarfs, is $\log g \leq 4.75$, even at very low metallicity. There is therefore a gap between low mass white dwarfs and main sequence stars, $4.75 \leq \log g \leq 6.5$.
Most white dwarfs do not generate energy from nuclear fusion, but radiate due to residual gravitational contraction. Because of the degenerate equation of state, contraction is accompanied by a loss of thermal energy instead of an increase as in the case of ideal gases; the evolution of white dwarfs is therefore often simply described as cooling. The radius of an average white dwarf star is of the same order as the Earth’s radius, which implies that they have small surface areas, resulting in very large cooling times; it takes approximately $10^{10}$ years for the effective temperature of a $\sim 0.6\,M_\odot$ white dwarf to decrease from $100\,000$ K to near $5\,000$ K. Consequently, even the cool $\sim 0.6\,M_\odot$ ones are still visible and among the oldest objects in the Galaxy[@GarciaBerro16]. Therefore, studying white dwarfs is extremely important to comprehend the processes of stellar formation and evolution in the Milky Way[@Winget87; @Campos16].
The progenitors of white dwarfs lose most of their envelopes in the giant phases, where mass loss depends on metallicity. If the remaining H mass were above $\simeq 10^{-4} M_\star$, or the He mass above $\simeq 10^{-2} M_\star$, there would be observable nuclear burning in the white dwarf phase; the limits depend on the mass of the white dwarf. Most white dwarfs have atmospheres dominated by H, and the remainder by He. All other elements are present only in small traces, much less abundant than in the Sun, due to separation in the strong gravitational field[@Schatzman48]; the lightest elements float to the surface as the white dwarf cools. The He-core white dwarf stars in the mass range $0.2-0.45~M_\odot$, referred to as low-mass white dwarfs, are usually found in close binaries, often double degenerate systems[@Marsh95], being most likely a product of interacting binary star evolution. More than 70% of those studied by Ref. [@Kilic11] with masses below $0.45~M_\odot$, and all but a few with masses below $0.3~M_\odot$, show radial velocity variations[@Brown13; @Gianninas14]. Ref. [@Kilic07] suggests single low-mass white dwarfs result from the evolution of old metal-rich stars whose evolution is truncated before the helium flash by severe mass loss. They also conclude that all white dwarfs with masses below $\simeq 0.3~M_\odot$ must be a product of binary star evolution involving interaction between the components; otherwise the lifetime of the progenitor on the main sequence would be larger than the age of the Universe.
In Fig. \[single\] we show the results of our effective temperature and surface gravity determinations for all candidates from SDSS. We calculated the single star evolutionary models shown in the figure with the MESA[@MESA] evolutionary code, including diffusion. In Fig. \[double\] the evolutionary models are those with rotation and diffusion of Ref. [@Istrate16].
![image](single.pdf){width="90.00000%"}
Even though the low resolution hydrogen lines observed in SDSS spectra are poor surface gravity indicators below $T_\mathrm{eff} \simeq 10\,000$ K, and considering that the SDSS spectra are concentrated mainly outside the Galactic disk, we were surprised that several thousand stars were classified by the SDSS pipeline as A and B stars. Considering that their lifetimes on the main sequence are smaller than 1 Gyr, and their distance modulus is $(m-M)\geq 14.5$ at the SDSS bright saturation limit, if these stars were main sequence stars there would be a considerable population of young stars very far from the Galactic disk. Using their measured radial velocities, and proper motions when available, [@Pelisoli16] estimated their U, V, W velocities and showed there would be a large number of hypervelocity A stars, not detected to date. If these stars are in fact low mass counterparts of interacting binary evolution, similar to the models of [@Althaus13; @Istrate16], they are mainly concentrated in the Galactic disk. Considering that we do not know their metallicities, and that low ionization potential metals contribute significantly to the electron pressure, we estimated their surface gravities with two sets of models, a pure hydrogen model and a solar composition model. The surface gravities estimated with solar metallicity models were on average $\Delta \log g \simeq 0.5$ dex smaller, but not systematically so. Our plotted values are the solar metallicity ones.
![image](double.pdf){width="90.00000%"}
Interacting Binaries
--------------------
Ref. [@Pietrzynski12] found an RR Lyrae star with 0.26 $M_\odot$, and Ref. [@Latour16] found a 0.23 $M_\odot$ pulsating subdwarf (sdBV). The interacting binary models with mass exchange of Ref. [@Istrate16] show that during a hydrogen shell burning pulse, an extremely low mass white dwarf can cross the main sequence, the horizontal branch, and even the giant instability strip. Ref. [@Karczmarek17] estimates that up to 5% of stars that cross the RR Lyrae and Cepheid instability strips are binaries.
DA white dwarf stars with masses $M\leq 0.45~M_\odot$ and $T_\mathrm{eff} < 20\,000$ K are classified as Low Mass and Extremely Low Mass (ELM) white dwarfs, as found by Refs. [@Brown10], [@Kilic11], [@Brown12], [@Brown13], [@Gianninas14], [@Gianninas15] and [@Brown16]. Refs. [@Hermes12] – [@Bell16a] found pulsations in eight of these ELMs, similar to the pulsations seen in DAVs (ZZ Ceti stars), as described in Ref. [@VanGrootel13]. Ref. [@Maxted14] found 17 pre-ELMs, i.e., helium–core white dwarf precursors, and Refs. [@Maxted14a; @Gianninas16] report pulsations in six of them. Pulsations are an important tool to study the stellar interior, and Refs. [@Corsico14] – [@Istrate16a] report on theoretical models and pulsations of ELMs. Refs. [@Kepler16a] and [@Kepler16b] show there are thousands of stars, photometrically classified as blue horizontal branch stars by Refs. [@Xue08; @Xue11; @Carollo16], that have spectroscopically estimated surface gravities much higher than those of main sequence stars ($\log g \geq 4.75$) and therefore must have radii smaller than the Sun's, classifying them as sdAs, in line with the hot subdwarfs reviewed by Ref. [@Heber16]. Ref. [@Pelisoli16] discusses that they are possibly Extremely Low Mass white dwarf stars. Refs. [@Kepler16a; @Fusillo15] show that photometrically selected white dwarf samples have a contamination of around 40%. Even samples selected additionally by proper motion, as in Ref. [@Munn17], show significant contamination by non-white-dwarf objects when spectra are available.
Most stars that produce white dwarfs are born in binaries or multiple systems. Ref. [@Lada06] demonstrates that while around 70% of stars more massive than the Sun are in binaries, two-thirds of the most common stars, M type dwarf stars, are single. More than 10% of the spectroscopically identified white dwarfs in SDSS have red companions[@Kepler16a; @Rebassa16]. Refs. [@Farihi10; @Nebot11] show that nearly 25% of all main sequence binaries are close enough that mass transfer interactions occur when the more massive star becomes a red giant or an asymptotic giant star. If mass transfer exceeds the Eddington limit, the secondary star is not able to accrete the transferred material and the system evolves through a common envelope phase, i.e., the core of the giant and the main sequence companion orbit within the outer layers of the giant star, leading to shrinkage of the orbit and the release of orbital energy. The orbital energy deposited into the envelope eventually ejects it. Therefore a close binary is formed by the core of the giant star and the main sequence companion, which later becomes a close white dwarf–main sequence binary. An ELM will be formed if the envelope is ejected before the helium flash; this requires a low initial mass, i.e., $M\lesssim 2 M_\odot$, since more massive stars reach conditions to fuse helium in the core before it becomes degenerate.
Mass Distribution
=================
We estimated the masses of all DA white dwarfs found by Refs. [@Kleinman13], [@Kepler15] and [@Kepler16a]. There were no new optical stellar spectra in SDSS Data Release 13. For the DA mass distribution, we only consider spectra with S/N$\geq 15$, to ensure reliable mass determinations. From the $T_\mathrm{eff}$ and $\log g$ values obtained from our fits, after correcting to 3D convection following Ref. [@Tremblay13a], we use the mass–radius relations of Refs. [@Althaus05], [@Renedo10] and [@Romero15] to calculate the stellar mass.
Considering that white dwarfs with larger mass have smaller radii, and therefore can only be seen to smaller distances in a magnitude limited survey such as SDSS, we calculated the density by correcting the visible volume with the $1/V_\mathrm{max}$ method of Ref. [@Schmidt68], up to a maximum magnitude of g=19. For DAs we estimate the following mean masses:

  $T_\mathrm{eff}$ cut   N      $\langle M \rangle$ \[$M_\odot$\]
  ---------------------- ------ -----------------------------------
  $\geq 10\,000$ K       4054   $0.647\pm 0.002$
  $\geq 13\,000$ K       3637   $0.646\pm 0.002$
  $\geq 16\,000$ K       3012   $0.641\pm 0.002$
  $\geq 25\,000$ K       1121   $0.613\pm 0.003$

The DA and DB mass distributions have very different shapes: the DA distribution has a tail towards larger masses, while the DB distribution extends to lower masses. This probably reflects some limitation on the progenitors that can undergo very late thermal pulses and become DBs.
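A minimal sketch of the $1/V_\mathrm{max}$ weighting is given below. The absolute magnitudes, the sky fraction and the function names are illustrative assumptions, not the actual SDSS footprint or sample values.

```python
import math

def d_max_pc(abs_mag, mag_limit=19.0):
    """Maximum distance (pc) at which a star of absolute magnitude
    abs_mag is still brighter than the survey magnitude limit."""
    return 10.0 ** ((mag_limit - abs_mag + 5.0) / 5.0)

def space_density(abs_mags, mag_limit=19.0, sky_fraction=0.25):
    """Sum of 1/V_max over the sample (stars per pc^3).
    sky_fraction is the fraction of the sky covered by the survey
    (an illustrative value)."""
    density = 0.0
    for m in abs_mags:
        v_max = sky_fraction * (4.0 / 3.0) * math.pi * d_max_pc(m, mag_limit) ** 3
        density += 1.0 / v_max
    return density
```

Intrinsically fainter (here, more massive) stars get smaller $V_\mathrm{max}$ and hence larger weights, which is the correction described above.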
Pulsations
==========
During the cooling of a white dwarf star, partial ionization zones of C, O, He and H develop at successively lower $T_\mathrm{eff}$. Such partial ionization zones increase the opacity and drive pulsations. For C/O partial ionization, the stars are called pulsating PG 1159 stars, DOVs or GW Vir stars, and pulsations occur at $140\,000~\mathrm{K} \gtrsim T_\mathrm{eff} \gtrsim 75\,000$ K. For He, pulsations occur at $32\,000~\mathrm{K} \gtrsim T_\mathrm{eff} \gtrsim 22\,000$ K and the stars are called DBVs. For H, $13\,000~\mathrm{K} \gtrsim T_\mathrm{eff} \gtrsim 10\,500$ K, and the stars are called DAVs or ZZ Ceti stars. Recent additions are the DQVs, with $22\,000~\mathrm{K} \gtrsim T_\mathrm{eff} \gtrsim 18\,000$ K, and, with $10\,000~\mathrm{K} \gtrsim T_\mathrm{eff} \gtrsim 8\,000$ K, the ELMVs and the pre-ELMVs or EL CVn stars.
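The instability strips quoted above can be summarized in a small lookup; the temperature ranges are those given in the text, and the boundary values are only approximate.

```python
# Instability-strip ranges in K, as quoted in the text (approximate).
STRIPS = [
    ("DOV (GW Vir)", 75000, 140000),   # C/O partial ionization
    ("DBV", 22000, 32000),             # He
    ("DQV", 18000, 22000),
    ("DAV (ZZ Ceti)", 10500, 13000),   # H
    ("ELMV / pre-ELMV", 8000, 10000),
]

def classify_pulsator(teff):
    """Return the pulsator classes whose instability strip contains teff."""
    return [name for name, lo, hi in STRIPS if lo <= teff <= hi]
```

Composition is of course also required to assign a class to a real star; the lookup only encodes the temperature ranges.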
Class Number
------------ --------
DAVs 181
DBVs 23
DOVs$^{1}$ 22
ELMVs 11
pre-ELMVs 5
DQVs 3
: Number of known pulsating white dwarfs.
\[tab:tab-1\]\
[$^{1}$ Pulsating PG 1159 stars.]{}
Magnetic Fields
===============
Ref. [@GarciaBerro16a] presents a review of magnetic fields in white dwarf stars. When examining each candidate SDSS spectrum by eye, Refs. [@Kleinman13; @Kepler15; @Kepler16a] found 822 stars with Zeeman splittings indicating magnetic fields above 2 MG, the limit below which the line splitting becomes too small to be identified at the SDSS spectral resolution [@Kepler13]. The mean fields, estimated following [@Kulebi09], range from 2 MG to 700 MG. We caution that stars with large fields are difficult to identify, because fields above around 30 MG intermix subcomponents of the different hydrogen series lines so much that, depending on effective temperature and signal-to-noise, it becomes difficult to identify the star as containing hydrogen at all, and even the colors are significantly affected. Both the low field and the high field limits are therefore dominated by systematic effects and do not represent the real limits of the field distribution. The effect of the magnetic field on pulsations has been estimated by, e.g., [@Jones89; @Tremblay15].
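For reference, the linear-regime Zeeman shift of the $\sigma$ components follows the standard formula $\Delta\lambda \simeq 4.67\times10^{-13}\,\lambda^2 B$, with $\lambda$ in Å and $B$ in G. Whether a given splitting is identifiable in practice depends on the Stark-broadened line widths, the resolution and the signal-to-noise, as discussed above; the sketch below only evaluates the formula.

```python
def zeeman_splitting_A(lambda_A, b_gauss):
    """Linear-regime Zeeman shift of the sigma components (Angstrom):
    delta_lambda = 4.67e-13 * lambda^2[A] * B[G]."""
    return 4.67e-13 * lambda_A ** 2 * b_gauss

# At the 2 MG limit quoted in the text, H-alpha (6562.8 A) shifts by ~40 A.
dl = zeeman_splitting_A(6562.8, 2.0e6)
```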
Rotation
========
In general the measured rotation period for single white dwarfs ranges from 1 h to 18 d, with a median around 1 d[@Kawaler15]. The fastest single white dwarf rotator from asteroseismological measurements (Table \[rot\]) is the $0.79~M_\odot$ DAV SDSS J161218.08+083028.1 discovered by Ref. [@Castanheira13], assuming the two observed periods at 115.0 s and 117.0 s are two components of a rotation triplet.
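The quoted rotation period follows from the frequency splitting of the two observed modes, assuming they are adjacent ($\Delta m = 1$) components of an $l=1$ triplet with the asymptotic g-mode coefficient $C_{kl} \simeq 1/(l(l+1)) = 0.5$; these assumptions are ours for illustration, and the published analysis may differ in detail.

```python
def rotation_period_h(p1_s, p2_s, c_kl=0.5, delta_m=1):
    """Rotation period (hours) from two adjacent components of a
    rotationally split multiplet, using
        delta_nu = delta_m * (1 - C_kl) * nu_rot,
    with the asymptotic g-mode value C_kl ~ 1/(l(l+1)) = 0.5 for l = 1."""
    delta_nu = abs(1.0 / p1_s - 1.0 / p2_s)   # frequency splitting in Hz
    nu_rot = delta_nu / (delta_m * (1.0 - c_kl))
    return 1.0 / nu_rot / 3600.0
```

Applied to the 115.0 s and 117.0 s modes, this reproduces the 0.93 h entry in the table below.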
-------------------------- --------------------- ---------------------- --------- ---------------------
Star $P_{\rm rot}$ \[h\] $T_\mathrm{eff}$[^3] Type $M$ \[$M_{\odot}$\]
RX J2117.1+3412 28 170000 GW Vir 0.72
PG 1159-035 33 140000 GW Vir 0.54
NGC 1501 28 134000 \[WCE\] 0.56
PG 2131+066 5 95000 GW Vir 0.55
PG 1707+427 16 85000 GW Vir 0.53
PG 0122+200 37 80000 GW Vir 0.53
PG 0112+104 10.17 31040 DBV 0.58
KIC 8626021 43 29700 DBV 0.56
EC 20058-5234 2 25500 DBV 0.65
GD 358 29 23740 DBV 0.54
SDSS J083702.16+185613.4 1.13 13590 ZZ Ceti 0.88
G 226-29 9 12510 ZZ Ceti 0.83
G 185-32 15 12470 ZZ Ceti 0.67
SDSS J113655.17+040952.6 2.6 12330 ZZ Ceti 0.55
SDSS J161218.08+083028.1 [**0.93**]{} 12330 ZZ Ceti 0.79
Ross 548 37 12300 ZZ Ceti 0.63
GD 165 50 12220 ZZ Ceti 0.68
LP 133-144 41.8 12150 ZZ Ceti 0.59
KIC 11911480 86.4 12160 ZZ Ceti 0.58
L 19-2 13 12070 ZZ Ceti 0.69
HS 0507+0435 41 12010 ZZ Ceti 0.73
EC 14012-1446 14.4 12020 ZZ Ceti 0.72
KUV 11370+4222 5.56 11940 ZZ Ceti 0.72
G 29-38 32 11910 ZZ Ceti 0.72
KUV 02464+3239 90.7 11620 ZZ Ceti 0.70
HL Tau 76 53 11470 ZZ Ceti 0.55
SDSS J171113.01+654158.3 16.4 11130 ZZ Ceti 0.90
GD 154 50.4 11120 ZZ Ceti 0.65
KIC 4552982 15.0 10860 ZZ Ceti 0.71
SDSS J094000.27+005207.1 11.8 10590 ZZ Ceti 0.82
-------------------------- --------------------- ---------------------- --------- ---------------------
: Rotation periods of white dwarfs as determined via asteroseismology.
\[rot\]
Differential rotation in white dwarfs was studied by Refs. [@Charpinet09] – [@Hermes16], using the change in rotation splitting of non-radial pulsations.
Axions and Dark Matter
====================
Axions are among the best candidates for dark matter[@Ringwald16]. Refs. [@Isern03; @Isern10; @Corsico12; @Corsico12a; @Corsico16; @Battich16] show that white dwarf pulsations and the white dwarf luminosity function are consistent with extra cooling caused by axions with masses around $17\pm 4$ meV.
Fitting Models
==============
The pulsation spectra exhibited by ZZ Ceti stars depend strongly on the inner chemical profile. Several processes affecting the chemical profiles are still not accurately determined. See [@Geronimo17] for a study of the impact of the current uncertainties in stellar evolution on the expected pulsation properties of ZZ Ceti stars.
Each single period is determined by an integral of the pulsation kernel (or work function) over the whole star; it cannot by itself distinguish among different distributions, as demonstrated for example by [@Montgomery03]. If different modes are not independent, i.e., they sample the same regions of the structure, they carry less information than independent modes. [@Giammichele16] propose that one could determine the whole chemical distribution profile from pulsations. [@Giammichele17] performed a test using ten periods, namely the modes $l=1$, $k=2,3,4,5,6$ and $l=2$, $k=3,4,5,6,7$. Note that in these tests a sequence of consecutive modes $k=2,3,4,5,6$ is needed to sample the structure, a sequence that is usually not seen in real stars. ZZ Ceti stars show different pulsation spectra: hot stars show a few short-period modes that sample the inner parts of the star, while cool stars show many long-period modes, which sample the outer parts. These characteristics must be taken into account when asteroseismology is applied to white dwarfs. A good example of a hot ZZ Ceti pulsator is G 117-B15A, with three modes. Were it not for the convection description problem, which introduces an uncertainty of $\Delta T_{\rm eff} \simeq 500$ K, and the problem with line broadening, which gives different $\log g$ from different spectral lines, we could use the three modes of G 117-B15A to obtain three structural parameters, plus $dP/dt$ to estimate the core mean molecular weight.
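A common quality function for such period fits is the mean distance between each observed period and the closest model period in a grid; a minimal sketch follows (the period values used in the test are illustrative, not a published fit).

```python
def seismic_merit(observed_s, model_s):
    """Mean absolute distance (s) between each observed period and
    the closest period of a model -- a simple quality function for
    grid-based asteroseismic fits."""
    return sum(min(abs(po - pm) for pm in model_s)
               for po in observed_s) / len(observed_s)
```

A grid search then selects the model minimizing this merit; with only three observed modes, the number of free structural parameters that can be constrained is correspondingly small, as noted above.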
Kepler satellite
================
Observations of pulsating white dwarfs with the Kepler satellite are limited to the brightest objects due to the relatively small size of the telescope, but its long observations have allowed not only exquisite precision in the pulsation spectra but also the discovery of outbursts lasting hours ([@Bell15; @Hermes15; @Bell16]). These outbursts resemble the [*forte*]{} episode observed in 1996 for the DBV GD 358 by [@Nitta98; @Kepler03].
0.2cm [*Acknowledgments*]{}: S.O.K. and A.D.R. are financed by Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brasil. This research has made use of NASA’s Astrophysics Data System and of the cross-match service provided by CDS, Strasbourg. Funding for the Sloan Digital Sky Survey has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. The SDSS web site is www.sdss.org. Part of the work has been based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, Inovações e Comunicações (MCTIC) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
Woosley, S. E., & Heger, A., ApJ, **810**, 34 (2015) Kilic, M., Stanek, K. Z., & Pinsonneault, M. H., ApJ, **671**, 761 (2007) Romero, A. D., Campos, F., & Kepler, S. O., MNRAS, **450**, 3708 (2015) Garc[í]{}a-Berro, E., & Oswalt, T. D., New Ast. Rev., **72**, 1 (2016) Winget, D. E., Hansen, C. J., Liebert, J., et al., ApJL, **315**, L77 (1987) Campos, F., Bergeron, P., Romero, A. D., et al., MNRAS, **456**, 3729 (2016) Schatzman, E., Nature, **161**, 61 (1948) Marsh, T. R., Dhillon, V. S., & Duck, S. R., MNRAS, **275**, 828 (1995) Kilic, M., Brown, W. R., Allende Prieto, C., et al., ApJ, **727**, 3 (2011) Brown W. R., Kilic M., Allende Prieto C., Gianninas A., Kenyon S. J., ApJ, **769**, 66 (2013) Gianninas, A., Hermes, J. J., Brown, W. R., et al., ApJ, **781**, 104 (2014) Paxton, B., Marchant, P., Schwab, J., et al., ApJS, **220**, 15 (2015) Istrate, A. G., Marchant, P., Tauris, T. M., et al., A&A, **595**, A35 (2016) Pelisoli, I., Kepler, S. O., Koester, D., & Romero, A. D., ASPC 20th European White Dwarf Workshop, **509**, 447 (2017) Althaus, L. G., Miller Bertolami, M. M., & C[ó]{}rsico, A. H., A&A, **557**, A19 (2013) Latour, M., Heber, U., Irrgang, A., et al., A&A, 585, A115 (2016) Pietrzy[ń]{}ski, G., Thompson, I. B., Gieren, W., et al., Nature, **484**, 75 (2012) Karczmarek, P., Wiktorowicz, G., I[ł]{}kiewicz, K., et al., MNRAS, **466**, 2842 (2017) Brown, W. R., Kilic, M., Allende Prieto, C., & Kenyon, S. J., ApJ, **723**, 1072 (2010) Brown W. R., Kilic M., Allende Prieto C., Kenyon S. J., ApJ, **744**, 142 (2012) Gianninas, A., Kilic, M., Brown, W. R., Canton, P., & Kenyon, S. J., ApJ, **812**, 167 (2015) Brown, W. R., Gianninas, A., Kilic, M., Kenyon, S. J., & Allende Prieto, C., ApJ, **818**, 155 (2016) Hermes, J. J., Montgomery, M. H., Winget, D. E., et al., ApJL, **750**, L28 (2012) Hermes, J. J., Montgomery, M. H., Winget, D. E., et al., ApJL, **765**, 102 (2013) Hermes, J. J., Montgomery, M. 
H., Gianninas, A., et al., MNRAS, **436**, 3573 (2013) Bell, K. J., Kepler, S. O., Montgomery, M. H., et al., 19th European Workshop on White Dwarfs, **493**, 217 (2015) Bell, K. J., Gianninas, A., Hermes, J. J., et al., ApJ, **835**, 180 (2017) Hermes, J. J., Charpinet, S., Barclay, T., et al., ApJ, **789**, 85 (2014) Van Grootel, V., Fontaine, G., Brassard, P., & Dupret, M.-A., ApJ, **762**, 57 (2013) Maxted, P. F. L., Bloemen, S., Heber, U., et al., MNRAS, **437**, 1681 (2014) Maxted, P. F. L., Serenelli, A. M., Marsh, T. R., et al., MNRAS, **444**, 208 (2014) Gianninas, A., Curd, B., Fontaine, G., Brown, W. R., & Kilic, M., ApJL, **822**, L27 (2016) C[ó]{}rsico, A. H., & Althaus, L. G., A&A, **569**, A106 (2014) C[ó]{}rsico, A. H., & Althaus, L. G., ApJL, **793**, L17 (2014) Istrate, A. G., Tauris, T. M., & Langer, N., A&A, **571**, A45 (2014) Istrate, A. G., Tauris, T. M., Langer, N., & Antoniadis, J., A&A, **571**, L3 (2014) Istrate, A. G., Fontaine, G., Gianninas, A., et al., A&A, **595**, L12 (2016) Kepler, S. O., Pelisoli, I., Koester, D., et al., MNRAS, **455**, 3413 (2016) Kepler, S. O., Koester, D., Romero, A. D., Ourique, G., & Pelisoli, I., ASPC 20th European White Dwarf Workshop, **509**, 421 (2017) Tremblay, P.-E., Ludwig, H.-G., Steffen, M., & Freytag, B., A&A, **552**, A13 (2013) Xue, X. X., Rix, H. W., Zhao, G., et al., ApJ, **684**, 1143 (2008) Xue, X.-X., Rix, H.-W., Yanny, B., et al., ApJ, **738**, 79 (2011) Carollo, D., Beers, T. C., Placco, V. M., et al., Nature Phys., **12**, 1170 (2016) Heber, U., PASP, **128**, 082001 (2016) Gentile Fusillo N. P., G[ä]{}nsicke B. T., Greiss S., MNRAS, **448**, 2260 (2015) Munn, J. A., Harris, H. C., von Hippel, T., et al., AJ, **153**, 10 (2017) Lada, C. J., ApJL, **640**, L63 (2006) Rebassa-Mansergas, A., Ren, J. J., Parsons, S. G., et al., MNRAS, **458**, 3808 (2016) Farihi, J., Hoard, D. W., & Wachter, S., ApJS, **190**, 275 (2010) Nebot G[ó]{}mez-Mor[á]{}n, A., G[ä]{}nsicke, B. T., Schreiber, M. 
R., et al., A&A, **536**, A43 (2011) Kleinman, S. J., Kepler, S. O., Koester, D., et al., ApJS, **204**, 5 (2013) Kepler, S. O., Pelisoli, I., Koester, D., et al., MNRAS, **446**, 4078 (2015) Althaus, L. G., Garc[í]{}a-Berro, E., Isern, J., & C[ó]{}rsico, A. H., A&A, **441**, 689 (2005) Renedo, I., Althaus, L. G., Miller Bertolami, M. M., et al., ApJ, **717**, 183 (2010) Tremblay P.-E., Ludwig H.-G., Steffen M., Freytag B., A&A, **559**, A104 (2013) Schmidt, M., ApJ, **151**, 393 (1968) Garc[í]{}a-Berro, E., Kilic, M., & Kepler, S. O., Intl. Journal Modern Phys. D, **25**, 1630005 (2016) Kepler S. O., et al., MNRAS, **429**, 2934 (2013) Külebi B., Jordan S., Euchner F., Gänsicke B. T., Hirsch H., A&A, **506**, 1341 (2009) Kawaler, S. D., 19th European Workshop on White Dwarfs, ASPC, **493**, 65 (2015) Castanheira, B. G., Kepler, S. O., Kleinman, S. J., Nitta, A., & Fraga, L., MNRAS, **430**, 50 (2013) Charpinet, S., Fontaine, G., & Brassard, P., Nature, **461**, 501 (2009) C[ó]{}rsico, A. H., Althaus, L. G., Kawaler, S. D., et al., MNRAS, **418**, 2519 (2011) Fontaine, G., Brassard, P., & Charpinet, S., 18th European White Dwarf Workshop, **469**, 115 (2013) Hermes, J. J., Kawaler, S. D., Bischoff-Kim, A., et al., ApJL, **841**, L2 (2017) Cantiello, M., Mankovich, C., Bildsten, L., Christensen-Dalsgaard, J., & Paxton, B., ApJ, **788**, 93 (2014) Fuller, J., Lecoanet, D., Cantiello, M., & Brown, B., ApJ, **796**, 17 (2014) Ringwald, A., preprint ([](https://arxiv.org/abs/1612.08933)) (2016) Isern, J., & Garc[í]{}a-Berro, E., Nuclear Phys. B Proc. Suppl., **114**, 107 (2003) Isern, J., Garc[í]{}a-Berro, E., Althaus, L. G., & C[ó]{}rsico, A. H., A&A, **512**, A86 (2010) C[ó]{}rsico, A. H., Althaus, L. G., Miller Bertolami, M. M., et al., MNRAS, **424**, 2792 (2012) C[ó]{}rsico, A. H., Althaus, L. G., Romero, A. D., et al., Journal of Cosm. Astrop. Phys., **12**, 010 (2012) C[ó]{}rsico, A. H., Romero, A. D., Althaus, L. G., et al., Journal of Cosm. Astrop. 
Phys., **7**, 036 (2016) Battich, T., C[ó]{}rsico, A. H., Althaus, L. G., & Miller Bertolami, M. M., Journal of Cosm. Astrop. Phys., **8**, 062 (2016) De Ger[ó]{}nimo, F. C., Althaus, L. G., C[ó]{}rsico, A. H., Romero, A. D., & Kepler, S. O., A&A, **599**, A21 (2017) Giammichele, N., Fontaine, G., Brassard, P., & Charpinet, S., ApJS, **223**, 10 (2016) Giammichele, N., Charpinet, S., Fontaine, G., & Brassard, P., ApJ, **834**, 136 (2017) Giammichele, N., Charpinet, S., Brassard, P., & Fontaine, G., A&A, **598**, A109 (2017) Montgomery, M. H., Metcalfe, T. S., & Winget, D. E., MNRAS, **344**, 657 (2003) Jones, P.W., Pesnell, W.D., Hansen, C.J., and Kawaler, S.D., ApJ, **336**, 403 (1989) Tremblay, P.-E., Fontaine, G., Freytag, B., et al., ApJ, **812**, 19 (2015) Bell, K. J., Hermes, J. J., Bischoff-Kim, A., et al., ApJ, **809**, 14 (2015) Hermes, J. J., Montgomery, M. H., Bell, K. J., et al., ApJ, **810**, L5 (2015) Bell, K. J., Hermes, J. J., Montgomery, M. H., et al., ApJ, **829**, 82 (2016) Nitta, A., Kepler, S. O., Winget, D. E., et al., Baltic Astr., **7**, 203 (1998) Kepler, S. O., Nather, R. E., Winget, D. E., et al., A&A, **401**, 639 (2003)
[^1]: <kepler@if.ufrgs.br>
[^2]: <alejandra.romero@ufrgs.br>
[^3]: The effective temperatures and masses are corrected to 3D convection[@Tremblay13].
---
author:
- |
AAAI Press\
Association for the Advancement of Artificial Intelligence\
2275 East Bayshore Road, Suite 160\
Palo Alto, California 94303\
bibliography:
- 'references.bib'
title: 'Supplementary material for learning generative networks from off-target samples '
---
Proof for theorems
===================
We first have the following lemma, which readily follows from Proposition 1 and Theorem 1 of Goodfellow et al. [@GoodfellowNIPS2014].
For a fixed generator $G$, the optimal discriminator $\hat{\D}$ that achieves the minimum of equation (6) is $$\hat{\D}^*(\x) = \frac{\hatw(\x)q(\x)}{\hatw(\x)q(\x) + \pg(\x)}$$ and the $\pg$ that is optimal against $\hat{D}^*$ is $\pg(\x) = \hatw(\x)q(\x)$.
Recall the definition of $\rho$, $$\rho = \sup_{x \in \suppp} \frac{q(\x)}{p(\x)}$$
We first prove the upper-bound on KL divergence between $p$ and $\pg$.
If $w(\x) \geq \epsilon\ \forall x\in \suppq$, and $J(\hatw) \leq \epsilon^2$, then $$\KL(p||\pg) \leq \log \Big(\frac{1}{1-\epsilon \rho}\Big)$$
$$\begin{aligned}
\KL(p||\pg) &= \int_\x p(\x) \log \frac{p(\x)}{\hatw(\x)q(\x)}d\x\\\end{aligned}$$
Since $J(\hatw)\leq \epsilon^2$, we have $|\hatw(\x) - w(\x)|\leq \epsilon$, and $\forall \x\in \suppq$, $\hat{w}(\x) \geq w(\x) - \epsilon$. Since decreasing the denominator increases the value of the whole expression, we may substitute $w(\x)-\epsilon$ to upper-bound $\KL(p||\pg)$. $$\begin{aligned}
\KL(p||\pg) &\leq \int_\x p(\x) \log \frac{p(\x)}{q(\x)(w(\x)-\epsilon)}\\
&= -\int_\x p(\x) \log \frac{q(\x)w(\x) - q(\x)\epsilon}{p(\x)}\\
&= -\int_\x p(\x) \log \Big( 1 - \frac{q(\x)}{p(\x)}\epsilon \Big)\\
&\leq -\int_\x p(\x) \log \Big( 1 - \rho\epsilon \Big)\end{aligned}$$ where the last line follows from the definition of $\rho$, and decreasing the quantity inside log increases the value of the whole expression. Now this is equivalent to $$\begin{aligned}
\KL(p||\pg) \leq -\log(1-\rho\epsilon)\end{aligned}$$ Notice that $0\leq\rho \epsilon\leq 1$ because $\rho = \sup_{ x \in \suppp} \frac{1}{w(\x)}$ and $\epsilon \leq w(\x)$.
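The bound can be checked numerically on a discrete toy example (the distributions below are ours for illustration). Note that, as in the proof, $\pg = \hatw q$ need not be normalized; the KL expression is applied directly to it, with the worst-case estimate $\hatw = w - \epsilon$.

```python
import math

# Toy target p and off-target sampler q.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
w = [pi / qi for pi, qi in zip(p, q)]       # true importance weights p/q
rho = max(qi / pi for pi, qi in zip(p, q))  # sup q/p = 4/3 here
eps = 0.1                                   # eps < min(w) and rho*eps < 1

# Worst-case w_hat allowed by |w_hat - w| <= eps: w_hat = w - eps.
p_g = [(wi - eps) * qi for wi, qi in zip(w, q)]
kl = sum(pi * math.log(pi / pgi) for pi, pgi in zip(p, p_g))
bound = -math.log(1.0 - rho * eps)
```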
We now prove the upper-bound on reverse-KL divergence between $p$ and $\pg$.
If $J(\hatw) \leq \epsilon^2$, then $$KL(p_G || p) \leq (1 + \epsilon) \log (1 + \epsilon\rho)$$
By Lemma 1, $p_G(\x) = \hat{w}(\x) q(\x)$, so $$KL(p_G || p) = \int_\x \hat{w}(\x) q(\x) \log \frac{\hat{w}(\x) q(\x)}{p(\x)} d\x$$ Since $J(\hatw)\leq \epsilon^2$, we have $|\hatw(\x) - w(\x)|\leq \epsilon$, and $\forall x\in \suppq$, $\hat{w}(\x) \leq w(\x) +
\epsilon$. So, $$\begin{aligned}
KL(p_G || p) & \leq \int_\x (w(\x) + \epsilon)q(\x) \log \frac{(w(\x) + \epsilon)
q(\x)}{p(\x)} d\x \\
& \leq \int_\x (w(\x)q(\x) + \epsilon q(\x)) \log \frac{w(\x)q(\x) + \epsilon
q(\x)}{p(\x)} d\x \\
& \leq \int_\x (p(\x) + \epsilon q(\x)) \log \frac{p(\x) + \epsilon q(\x)}{p(\x)} d\x \\
& \leq \int_\x (p(\x) + \epsilon q(\x)) \log \left(1 +
\epsilon\frac{q(\x)}{p(\x)}\right) d\x\\
& \leq \int_\x (p(\x) + \epsilon q(\x)) \log (1 + \epsilon\rho) d\x\\
& \leq \log (1 + \epsilon\rho) \int_\x (p(\x) + \epsilon q(\x)) d\x\\
& \leq (1 + \epsilon) \log (1 + \epsilon\rho)\\\end{aligned}$$
Details of experiments
======================
We used Keras[^1], a deep learning software package, to implement our neural networks.
Architecture for experiments
-----------------------------
For the first domain, $D$, $G$, and $\hatw$ take as input the number of objects to be placed and the shape of the object; they are not conditioned on a state. For the second domain, they take the poses of objects as input, but do not condition on the objects already placed. For the last domain, they take the poses of all the objects on the table as input.
For the first domain, we use three dense layers for $D$, $G$, and $\hatw$. For the second and last domains, we use convolutional layers, as the location of each pose in the vector is unimportant. Specifically, for $D$, we have an input convolutional layer with filter size 2 by 2, stride 2 by 1, and 256 filters. This is followed by a max pooling layer of size 2 by 1, and this conv–max-pooling pair is repeated one more time. It is then followed by two more convolutional layers with 256 filters of size 3 by 1. The output then gets merged with the action input, which is followed by three dense layers of sizes 32, 256, and 32. Lastly, it uses a sigmoid activation in its output layer.
The generator has exactly the same architecture, except that the last three dense layers have 32, 32, and 32 nodes, and the output layer is linear. $\hatw$ has exactly the same architecture as the generator.
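As a rough sketch of the output-shape arithmetic implied by the stack described above, we can trace the pose axis through the layers, treating each stage as one-dimensional along the object axis. The input length and the padding choices ('valid' for the strided stages, 'same' for the 3-by-1 convolutions) are our assumptions; the text does not specify them.

```python
def conv1d_len(n, kernel, stride, padding="valid"):
    """Output length of a 1-D convolution or pooling stage."""
    if padding == "same":
        return -(-n // stride)  # ceil division
    return (n - kernel) // stride + 1

def discriminator_trace(n):
    """Length of the object axis after each stage of the stack:
    [conv(2, stride 2) -> maxpool(2)] x 2, then two conv(3) layers
    assumed to use 'same' padding."""
    sizes = [n]
    for _ in range(2):
        sizes.append(conv1d_len(sizes[-1], 2, 2))  # strided conv
        sizes.append(conv1d_len(sizes[-1], 2, 2))  # max pooling
    for _ in range(2):
        sizes.append(conv1d_len(sizes[-1], 3, 1, padding="same"))
    return sizes
```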
The maximum number of epochs for training $D$ and $G$ is 500, with batch size of 32. We used Adam with learning rate 0.001 for $D$ and $G$, and Adadelta with learning rate 1.0 for $\hatw$.
[^1]: <https://github.com/fchollet/keras>
---
abstract: 'The equations for fully compressible rotating magnetoconvection are numerically solved in a Cartesian box assuming conditions roughly suitable for the geodynamo. The mean electromotive force describing the generation of mean magnetic flux by convective turbulence in the rotating fluid is directly calculated from the simulations, and the corresponding $\alpha$-coefficients are derived. Due to the very weak density stratification the $\alpha$-effect changes its sign in the middle of the box. It is positive at the top and negative at the bottom of the convection zone. For strong magnetic fields we also find a clear downward advection of the mean magnetic field. Both of the simulated effects have been predicted by quasi-linear computations. Finally, the possible connection of the obtained profiles of the EMF with mean-field models of oscillating $\alpha^2$-dynamos is discussed.'
address: 'Astrophysikalisches Institut Potsdam, An der Sternwarte 16, D-14482 Potsdam, Germany'
author:
- 'A. Giesecke'
- 'U. Ziegler'
- 'G. Rüdiger'
bibliography:
- 'references.bib'
title: 'Geodynamo $\alpha$-effect derived from box simulations of rotating magnetoconvection'
---
magnetoconvection, geodynamo, $\alpha$-effect
Introduction
============
Convective motions in the fluid outer core influenced by rotation and magnetic fields are able to maintain the Earth’s magnetic field for long times interrupted by occasionally occurring reversals of the dominating dipole component. The equations describing the physical processes in the fluid outer part of the Earth’s core are rather stiff [@1995GAFD...79....1K; @2000RvMP...72.1081R] showing a broad range of timescales on which the characteristic behavior is observable. The resulting and dominating advective timescale $\tau_{\mathrm{adv}}=d/u'$ leads to a very short coherence length of the physical quantities like velocity or magnetic field which, as a consequence, requires a very high spatial resolution in numerical simulations to include the effects of small-scale turbulence [@2003GDS...31..181]. In the case of a conducting and rotating fluid with large-scale density stratification a convectively driven turbulence can generate a mean electromotive force (EMF) parallel to the mean magnetic field – a process known as $\alpha$-effect [@1980mfmd.book.....K]. But the Earth’s fluid outer core, though a rapid rotator, is rather weakly stratified so that it remains unclear whether the amplitude of the $\alpha$-effect is sufficient to maintain dynamo action. Present-day global numerical simulations are unable to include small-scale motions because of computational restrictions. Nevertheless, they show quite satisfying results in the sense that they are able to reproduce several features of the observed geomagnetic field [@1996PEPI...98..207G; @1999GeoJI.138..393C; @1999JCoPh.153...51K]. In virtue of these results it seems to be justified to ignore small-scale fluctuations but, on the other hand, there still remain many open questions when comparing such simulations with their restricted parameter range with observational data (see e.g. @2000GGG...1..62, [-@2000GGG...1..62]). 
The background of our calculations is the idea of describing a dynamo by an induction equation for the mean magnetic field $\left<{\mbox{\boldmath$B$}}\right>$ with a prescribed production term, due to the small-scale turbulence, that ensures the existence of dynamo action.
In mean-field dynamo theory the magnetic field and the velocity are split into a mean part, $\left<{\mbox{\boldmath$B$}}\right>$ and $\left<{\mbox{\boldmath$u$}}\right>$, and a fluctuating component, ${\mbox{\boldmath$B$}}'$ and ${\mbox{\boldmath$u$}}'$. The time behavior of the mean magnetic field $\left<{\mbox{\boldmath$B$}}\right>={\mbox{\boldmath$B$}}-{\mbox{\boldmath$B'$}}$ is described by $$\partial_t\left<{\mbox{\boldmath$B$}}\right>=\nabla\!\times\!\left(\left<{\mbox{\boldmath$u$}}\right>\!\times\!\left<{\mbox{\boldmath$B$}}\right>\!+\!{\mbox{\boldmath$\mathcal{E}$}}\!-\!{\eta}\ \!\nabla\!\times\!\left<{\mbox{\boldmath$B$}}\right>\right)\label{mean_field_ind}$$ with the mean electromotive force ${\mbox{\boldmath$\mathcal{E}$}}=\left<{\mbox{\boldmath$u$}}'\times{\mbox{\boldmath$B$}}'\right>$, which is usually expressed as $${\mathcal{E}}_i = \left<{\mbox{\boldmath$u$}}'\times{\mbox{\boldmath$B$}}'\right>_i=
\alpha_{ij}\left<{B}_j\right>+\beta_{ijk}\partial_k\left<B_j\right>.
\label{eq_alpha}$$ The tensor $\alpha_{ij}$ correlates the turbulent EMF due to small-scale motions with the large-scale magnetic field, including the effects of anisotropy. The tensor $\beta_{ijk}$ is related to the turbulent diffusivity $\eta_T$ by $\beta_{ijk}=\eta_{\mathrm{T}} \epsilon_{ijk}$. The easiest way to look for dynamo action is to solve equation (\[mean\_field\_ind\]) with zero mean flow $\left<{\mbox{\boldmath$u$}}\right>$ and with an EMF taken from equation (\[eq\_alpha\]) without turbulent diffusivity ($\beta_{ijk}=0$). Oscillating solutions for so-called $\alpha^2$-dynamos have recently been presented by @2003PhRvE..67b7302S. They determined a radial profile of a spherically symmetric and isotropic $\alpha$ under the conditions that the dynamo mode with the lowest eigenvalue is an oscillating solution and all stationary modes (which usually dominate $\alpha^2$-models) are damped. This constraint leads to an $\alpha$-effect with characteristic zeros in the radial profile. Calculations with a uniform radial $\alpha$-coefficient, which instead included a latitudinal variation of $\alpha$ and effects of anisotropy, have also been performed; they yielded oscillating $\alpha^2$-dynamos only in exotic exceptional cases. @2001GApFD..94..263H analyzed the behavior of a geodynamo-like $\alpha{\it{\Omega}}$-dynamo model which produces an axial dipole with reversals that are induced by a fluctuating $\alpha$. The amplitude of the fundamental dipole mode behaves like a damped particle under the influence of a random force in a bistable potential. Here, in contrast, a configuration without any differential rotation and, hence, without ${\it{\Omega}}$-effect is considered. The purpose of the present paper is to calculate the turbulent EMF directly from numerical solutions of the full set of nonlinear MHD-equations for a convectively driven turbulent flow under the influence of rotation and subject to an imposed magnetic field. 
Our aim is to obtain a representation of the EMF based on simulations under conditions characteristic of the geodynamo. The results include, in principle, effects of anisotropy of the $\alpha$-effect, its radial dependence and its quenching properties. The derived $\alpha$-coefficients will serve as input data for future mean-field $\alpha^2$-dynamo calculations. The simplified geometry of a Cartesian box, representing a small part of a rotating spherical shell, makes it possible to examine the small-scale behavior of the fluid motions and the magnetic field, and allows one to consider effects of the turbulence that are neglected in global simulations.
The model
=========
General properties
------------------
Our model is an adaptation of configurations used in earlier studies of rotating magnetoconvection in a box suitable for the solar convection zone. Figure \[modell\_skizze\] shows a sketch of the box placed on a spherical shell at some latitude $\theta$. The coordinate system is chosen such that the unit vectors ${{\mbox{\boldmath$\hat{x},\hat{y},\hat{z}$}}}$ form a right-handed corotating system with ${\mbox{\boldmath$\hat{x}$}}$ pointing towards the equator, ${\mbox{\boldmath$\hat{y}$}}$ pointing in the toroidal direction (from west to east) and ${\mbox{\boldmath$\hat{z}$}}$ pointing from the bottom to the top of the box. Translating this Cartesian system into global spherical coordinates, ${\mbox{\boldmath$\hat{z}$}}$ represents the radial direction ${\mbox{\boldmath$\hat{r}$}}$ directed from inside to outside, ${\mbox{\boldmath$\hat{y}$}}$ the azimuthal direction ${\mbox{\boldmath$\hat{\phi}$}}$ and ${\mbox{\boldmath$\hat{x}$}}$ the meridional direction ${\mbox{\boldmath$\hat{\theta}$}}$. The angular velocity ${\mbox{\boldmath${\it{\Omega}}$}}$ in the local box coordinate system is then given by ${\mbox{\boldmath${\it{\Omega}}$}}=-{\it{\Omega}}_0\sin\theta\hat{{\mbox{\boldmath$x$}}}+{\it{\Omega}}_0\cos\theta\hat{{\mbox{\boldmath$z$}}}$, where ${\it{\Omega}}_0$ is the angular velocity of the rotating spherical shell.
![ Model box being part of a rotating spherical shell at latitude $\theta$.[]{data-label="modell_skizze"}](./skizze.ps){width="6cm"}
We try to construct a simple model that at least roughly represents the conditions in the Earth’s fluid interior. Our Cartesian box model consists of a single convectively unstable layer with a weak density stratification. The parameters have been chosen such that, ultimately, a low Mach number flow with ${\rm Ma} \sim O(10^{-2})$ results and compressibility effects, though present, are rather small. We further restrict our computations to a rapidly rotating box, expressed by a Rossby number ${\rm Ro}={u'/2{\it{\Omega}}d} \ll 1$. Our principal interest focuses on the presence of strong magnetic fields with a significant dynamical influence on the flow. To investigate the transition from the weak-field case to the strong-field case, the strength of the imposed magnetic field is successively increased, covering a wide range of magnitudes. Here, we present results obtained for a magnetic field applied in the $y$-direction, corresponding to a toroidal field in spherical coordinates; i.e., the production of a poloidal field from a toroidal field via the $\alpha$-coefficient $\alpha_{yy}$ is examined. Imposing other field components will be the subject of subsequent studies. In the strong-field case this configuration ensures that the inertial and viscous forces are negligible and the main balance between the forces governing the magnetoconvection state is that between the Coriolis force and the Lorentz force, as is supposed to be the case in the fluid core [@1996PEPI...98..163H; @2000AnRFM..32..409Z].
Equations
---------
The MHD-equations for a rotating fluid including the effects of thermal conduction, compressibility, viscous friction and losses due to magnetic diffusivity are solved numerically using the code NIRVANA [@1998CPC...109..111Z; @1999CPC...116..65Z]. The equations in the local corotating system are\
$$\begin{aligned}
\partial_t\rho&=&-\nabla\cdot(\rho{\mbox{\boldmath$u$}}) \label{conteq}\\
\partial_t(\rho{\mbox{\boldmath$u$}})&=&-\nabla\cdot(\rho{\mbox{\boldmath$uu$}})\!-\!\nabla\!
P\!+\!\nabla\!\cdot\!\sigma\!+\!\rho{\mbox{\boldmath$g$}}\!
-\!2\rho{\mbox{\boldmath${\it{\Omega}}$}}\!\times\!{\mbox{\boldmath$u$}}+\!\frac{1}{\mu_0}\!(\nabla\!\times\!{\mbox{\boldmath$B$}}\mathrm)\!\times\!{\mbox{\boldmath$B$}}\label{nseq}\\
\partial_t e&=&-\nabla\cdot(e{\mbox{\boldmath$u$}})-P\nabla\!\cdot\!{\mbox{\boldmath$u$}}+\sigma
\!\circ\!\nabla{\mbox{\boldmath$u$}}
+\frac{\eta}{\mu_0}|\nabla\!\times\!{\mbox{\boldmath$B$}}|^2+\nabla\!\cdot\!(\chi\nabla
T)\label{eneq}\\
\partial_t{\mbox{\boldmath$B$}}&=&\nabla\times({\mbox{\boldmath$u$}} \times
{\mbox{\boldmath$B$}}-{\eta}\nabla\!\times\!{\mbox{\boldmath$B$}})\label{indeq}\end{aligned}$$ with the density $\rho$, velocity ${\mbox{\boldmath$u$}}$, pressure $P$, magnetic flux density ${\mbox{\boldmath$B$}}$, temperature $T$ and the thermal energy density $e$. We assume a constant gravitational field ${\mbox{\boldmath$g$}}=-g\hat{{\mbox{\boldmath$z$}}}$ within the domain. The viscous stress tensor $\sigma$ is given by $\sigma_{ij}=\nu\rho\left(\partial{u}_{i}/\partial
x_j\!+\!\partial{u}_{j}/\partial x_i\!-\!\nicefrac{2}{3}\nabla\!\cdot\!{\mbox{\boldmath$u$}}\
\delta_{ij}\right)$. $\nu$ denotes the kinematic viscosity and $\chi$ the thermal conductivity coefficient. The values of $\chi$, the dynamic viscosity $\nu_{\mathrm{dyn}}=\nu\rho$ and the magnetic diffusivity $\eta$ are constant over the box volume. An ideal gas equation of state is assumed with $P=(\gamma-1)e={k}{(m\bar \mu)^{-1}}\rho T$ where $k$ is the Boltzmann constant, $m$ the atomic mass unit, $\bar\mu$ the mean molecular weight ($\bar{\mu}=1$ for all runs) and $\gamma=C_P/C_V=5/3$ is the ratio of the specific heats. The permeability $\mu_0$ is given by the vacuum value $\mu_0=4\pi\times 10^{-7}\mathrm{VsA^{-1}m^{-1}}$.
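The ideal-gas closure above translates directly into code. This minimal Python sketch (helper names are ours, not from NIRVANA) evaluates $P=(\gamma-1)e$ and inverts $P = k(m\bar\mu)^{-1}\rho T$ for the temperature:

```python
K_B = 1.380649e-23       # Boltzmann constant [J/K]
M_U = 1.66053906660e-27  # atomic mass unit [kg]
GAMMA = 5.0 / 3.0        # ratio of specific heats used in the paper
MU_BAR = 1.0             # mean molecular weight (all runs)

def pressure_from_energy(e):
    """P = (gamma - 1) e for the ideal-gas closure above."""
    return (GAMMA - 1.0) * e

def temperature(rho, P):
    """Invert P = k/(m mu_bar) rho T for the temperature T."""
    return P * M_U * MU_BAR / (K_B * rho)
```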
The initial state
-----------------
From the equation of state and the condition for hydrostatic equilibrium, $\partial_z P = -\rho g$, together with the assumption of a polytropic temperature distribution, $T=T_0\left({\rho}/{\rho_0}\right)^{\Gamma}$, the initial density distribution can be calculated as $$\rho(z) = \rho_0\left(1+\frac{\partial_zT}{T_0}(d-z)\right)^{1/\Gamma}
\label{eq_rhodistribution}$$ where $d$ stands for the vertical box extension and the polytropic index $\Gamma$ is given by $\Gamma=\ln \left(1+d\partial_zT/T_0\right)/\ln \xi$. The stratification index $\xi =\rho_{\mathrm{bot}}/\rho_{\mathrm{top}}$, the temperature $T_{\mathrm{0}}$ and the global temperature gradient $\partial_{{z}}T$ are prescribed input parameters whose values are given below. The subscript 0 refers to values taken at the top boundary of the box. The gravitational acceleration can be calculated from the hydrostatic equilibrium condition and the initial density distribution (\[eq\_rhodistribution\]) and is given by $$g =\frac{\Gamma+1}{\Gamma}\frac{k}{m\bar \mu}\partial_z T.
\label{g}$$ To obtain a convectively unstable state the condition $\Gamma > \gamma-1$ must be fulfilled. In fact, $\Gamma=\gamma-1$ leads to a Rayleigh number $${\rm Ra}=\frac{\rho{g C_P}d^4}{\chi\nu}T\left({\partial_{z}T}-\frac{g}{C_{P}}\right)=0
\label{gg}$$ with $C_P={k}({m\bar{\mu}})^{-1}\gamma(\gamma-1)^{-1}$ the specific heat at constant pressure. Here, parameters are chosen such that $\Gamma>\gamma-1$.
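In the scaled units used below ($T_0=\rho_0=d=\partial_zT=1$ and $k/(m\bar\mu)=1$), the initial state follows directly from these relations. The following Python sketch (our own illustration, not the actual setup routine) assembles $\Gamma$, $g$ and $\rho(z)$:

```python
import numpy as np

def polytropic_setup(xi=1.1, T0=1.0, dTdz=1.0, d=1.0, rho0=1.0):
    """Initial hydrostatic stratification of the box.

    Scaled units with k/(m mu_bar) = 1. Returns the polytropic index
    Gamma, the gravity g and the density profile rho(z).
    """
    # Gamma = ln(1 + d dT/dz / T0) / ln(xi)
    Gamma = np.log(1.0 + d * dTdz / T0) / np.log(xi)
    # g = (Gamma + 1)/Gamma * k/(m mu_bar) * dT/dz, with k/(m mu_bar) = 1
    g = (Gamma + 1.0) / Gamma * dTdz
    # rho(z) = rho0 (1 + dT/dz / T0 * (d - z))^(1/Gamma)
    rho = lambda z: rho0 * (1.0 + dTdz / T0 * (d - z)) ** (1.0 / Gamma)
    return Gamma, g, rho
```

A quick consistency check: by construction $\rho(0)/\rho(d)=\xi$, and with the standard parameters $\Gamma=\ln 2/\ln 1.1\approx 7.3$, so the instability condition $\Gamma>\gamma-1$ is comfortably satisfied.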
Boundary Conditions
-------------------
All quantities are subject to periodic boundary conditions in the horizontal directions. At the top and at the bottom of the computational domain constant values for density and temperature are imposed. The vertical boundary condition for the magnetic field is a perfect conductor condition, and a stress-free boundary condition is adopted for the horizontal components of the velocity $u_x$ and $u_y$. Impermeable box walls at the top and at the bottom lead to a vanishing $u_z$ at the vertical boundaries. Table \[bc\_tab\] summarizes these conditions and gives the initial values for density and temperature which describe the overall stratification and the global temperature gradient.
              $\rho$   $T$   ${\mbox{\boldmath$u$}}$   ${\mbox{\boldmath$B$}}$
  ----------- -------- ----- ------------------------- -------------------------
  [top]{}     $1$      $1$   $\partial_z u_x = 0$      $\partial_z B_x = 0$
  $(z=d)$                    $\partial_z u_y = 0$      $\partial_z B_y = 0$
                             $u_{z} = 0$               $B_{z} = 0$
  [bottom]{}  $1.1$    $2$   $\partial_z u_x = 0$      $\partial_z B_x = 0$
  $(z=0)$                    $\partial_z u_y = 0$      $\partial_z B_y = 0$
                             $u_{z} = 0$               $B_{z} = 0$

  : Vertical boundary conditions

\[bc\_tab\]
Input parameters
----------------
All input quantities are measured at the top of the box. This makes sense since the density variation with depth is negligible and the temperature varies only by a factor of 2. The parameters $\nu, \chi,\eta$ and ${\it{\Omega}}$ are calculated from the Rayleigh number Ra as defined above, the Prandtl number ${\rm Pr}=\nu\rho
C_{{P}}/\chi$, the magnetic Prandtl number ${\rm Pm}=\nu/\eta$ and the Taylor number ${\rm Ta}={4{{\it{\Omega}}}^2d^4}/{\nu^2}$. The basic parameter set used for all simulations that are presented in this paper is given by ${\rm Ra}=10^6, {\rm
Pr}=0.5, {\rm Pm}=0.5$ and ${\rm Ta}=10^7$. The Elsässer number $$\Lambda={{\mbox{\boldmath$B$}}^2\over{2{\it{\Omega}}\mu_0\rho\eta}}
\label{els}$$ serves as an input parameter for the magnitude of the imposed magnetic field, whose influence is investigated by varying ${\Lambda}$ from $10^{-2}$ to $10^3$, covering the full range from weak to very strong fields. The box, with an aspect ratio of 8:8:1, is placed at a latitudinal angle of $45^{\circ}$ on the northern hemisphere of the rotating spherical shell, and a standard resolution of $100\times100\times80$ grid points is used in all calculations. For all simulations temperature and density at the top of the box are scaled to unity, as are the global temperature gradient $\partial_z T$ and the box height $d$. A stratification index of $\xi=1.1$ is used.
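Given the non-dimensional input set, the diffusivities and the rotation rate follow from the definitions above. The sketch below is our own illustration (scaled units with $k/(m\bar\mu)=1$, quantities taken at the top of the box, and the Rayleigh number as defined above); it eliminates $\chi$ from Ra via the Prandtl number:

```python
import numpy as np

def dimensional_parameters(Ra=1e6, Pr=0.5, Pm=0.5, Ta=1e7,
                           xi=1.1, T0=1.0, dTdz=1.0, d=1.0, rho0=1.0):
    """Recover nu, chi, eta and Omega from Ra, Pr, Pm and Ta (scaled units)."""
    gamma = 5.0 / 3.0
    C_P = gamma / (gamma - 1.0)                 # k/(m mu_bar) = 1 in code units
    Gamma = np.log(1.0 + d * dTdz / T0) / np.log(xi)
    g = (Gamma + 1.0) / Gamma * dTdz            # hydrostatic initial state
    # Eliminating chi with Pr = nu rho C_P / chi in the Ra definition gives
    # Ra = Pr g d^4 T (dT/dz - g/C_P) / nu^2
    nu = np.sqrt(Pr * g * d**4 * T0 * (dTdz - g / C_P) / Ra)
    chi = nu * rho0 * C_P / Pr
    eta = nu / Pm
    Omega = np.sqrt(Ta) * nu / (2.0 * d**2)     # Ta = 4 Omega^2 d^4 / nu^2
    return nu, chi, eta, Omega
```

With the basic parameter set this yields a superadiabatic gradient ($\partial_zT - g/C_P > 0$), consistent with the instability condition $\Gamma>\gamma-1$.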
Results
=======
General properties and energetics
---------------------------------
At first, a non-rotating non-magnetic convection model is computed and the resulting statistically steady state is used as initial condition for the full problem of rotating magnetoconvection. Typical values for the turbulent velocity ${\mbox{\boldmath$u$}}'$ and the turbulent magnetic field ${\mbox{\boldmath$B$}}'$ are obtained by an averaging procedure that includes the whole box volume. In the following, volume averages are indicated by double brackets, $\left<\left<\cdot\right>\right>$, whereas horizontal averages are denoted by single brackets, $\left<\cdot\right>$. As root mean square value of fluctuations we define $\left<\left<{f}'^2\right>\right>=(N_x N_y N_z)^{-1}\sum\limits_{i,j,k}
\left({f_{ijk}}-\left<{f}\right>_k\right)^2$. Here, $N_x (N_y, N_z)$ denotes the number of grid cells in $x, (y, z)$ direction and ${f_{ijk}}-\left<{f}\right>_k$ is the deviation of the fluctuating quantity at a certain grid cell labeled $ijk$ from its horizontal average. Note that due to the horizontal averaging procedure and the periodic horizontal boundary conditions the mean quantities have no dependence on $x$ or $y$. Time averages are labeled by $\overline{f}$ and are computed only over time intervals that show no significant change in the average itself. In the following, time is measured relative to the turnover time given by $\tau_{\mathrm{adv}} = d/u_{\mathrm{rms}}$, where $u_{\mathrm{rms}}=\overline{\sqrt{\left<\left<{{\mbox{\boldmath$u'$}}^2}\right>\right>}}$. All time averages are calculated over a time range of at least $20\tau_{\mathrm{adv}}$ starting at a certain time after the effects of magnetic field and rotation have been introduced and $\left<\left<{\mbox{\boldmath$u'$}}^2\right>\right>$ has reached a new statistically steady state. A comparison with long-term computations shows that this is a sufficient timespan in order to obtain meaningful results. The longest run has been performed for $\Lambda=4$ – namely more than 110 turnover times – which corresponds to about two magnetic diffusion times $\tau_{\eta}=d^2/\eta$. Note that $\tau_{\mathrm{adv}}$ evolves as a part of the solution and depends on the imposed magnetic field whereas $\tau_{\eta}$ can be obtained from the input parameters.
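The rms definition above translates directly into code; a brief sketch (the array layout is our own assumption):

```python
import numpy as np

def rms_fluctuation(f):
    """Root-mean-square fluctuation <<f'^2>>^(1/2) as defined above.

    f has shape (Nx, Ny, Nz); the mean <f>_k is the horizontal average
    at each depth k, so mean quantities depend on z only.
    """
    f_mean = f.mean(axis=(0, 1), keepdims=True)   # <f>_k, shape (1, 1, Nz)
    return np.sqrt(np.mean((f - f_mean) ** 2))
```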
![\[ekin/emag\] Time dependence of the kinetic energy and the magnetic energy. $x$-component (solid), $y$-component (dotted), $z$-component (dashed). $\Lambda=1$.](./ekin_lam01.ps "fig:"){width="6.5cm"} ![\[ekin/emag\] Time dependence of the kinetic energy and the magnetic energy. $x$-component (solid), $y$-component (dotted), $z$-component (dashed). $\Lambda=1$.](./emag_lam01.ps "fig:"){width="6.5cm"}
Figure \[ekin/emag\] shows the temporal behavior of the kinetic energies $E_{\mathrm{kin}}^{x,y,z}=\int dV\rho
u^2_{x,y,z}/2$ (left) and the magnetic energies $E_{\mathrm{mag}}^{x,y,z}=\int dV(2\mu_0)^{-1}
B^2_{x,y,z}$ (right) for the simulation with $\Lambda=1$. The first part of the simulation, up to $t\approx 17\tau_{\mathrm{adv}}$, corresponds to thermal convection without rotation and without magnetic field. At $t\approx 17\tau_{\mathrm{adv}}$ the effects of rotation and magnetic field are added, as seen by a significant drop in the kinetic energies (left panel of figure \[ekin/emag\]) and a sharp increase of the magnetic energies (right panel of figure \[ekin/emag\]). After a short transition phase the energy of the different components remains approximately constant for the rest of the run. A large-scale magnetic field in the $x$-direction is established during the timespan $t=17\tau_{\mathrm{adv}}...20\tau_{\mathrm{adv}}$ (see also Figure \[meanbfield\] below), which is associated with a remarkable amount of magnetic energy stored in the $x$-component (solid line in the right panel of figure \[ekin/emag\]). Since the induction equation (\[mean\_field\_ind\]) for the mean magnetic field gives $\partial_t\left<B_z\right>=0$, no $\left<B_z\right>$ can evolve during the simulations. Therefore, the magnetic energy in the vertical field component results from the fluctuating component $B_z'$. The time dependence of the energies is qualitatively similar for all runs. Major differences appear in the amount of quenching after the effects of rotation and magnetic field have abruptly been introduced.
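The component-wise energies shown in Figure \[ekin/emag\] are plain volume integrals; a minimal sketch on a uniform grid (cell volume `dV`; our own illustration, not the diagnostic routine of the code):

```python
import numpy as np

def component_energies(rho, u, B, dV, mu0=1.0):
    """Kinetic and magnetic energy per component on a uniform grid.

    E_kin^i = sum dV rho u_i^2 / 2,  E_mag^i = sum dV B_i^2 / (2 mu0).
    u, B have shape (3, Nx, Ny, Nz); rho may be a scalar or an array.
    """
    E_kin = 0.5 * np.sum(rho * u**2, axis=(1, 2, 3)) * dV
    E_mag = np.sum(B**2, axis=(1, 2, 3)) * dV / (2.0 * mu0)
    return E_kin, E_mag
```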
![\[usbs\] Left: Quenching of turbulent velocity and (normalized) magnetic field fluctuations. Right: Ratio of the magnetic energy to the kinetic energy as a function of $\Lambda$.](./us_bs_by_045_lam.ps "fig:"){width="7cm"} ![\[usbs\] Left: Quenching of turbulent velocity and (normalized) magnetic field fluctuations. Right: Ratio of the magnetic energy to the kinetic energy as a function of $\Lambda$.](./emag_ekin_by_045_lam.ps "fig:"){width="7cm"}
The behavior of the fluctuating quantities is shown in Figure \[usbs\], where the turbulent velocity ${u_{\mathrm{rms}}}$ and the normalized turbulent magnetic field ${B}_{\mathrm{rms}}/B_y^0$ (with $B_{\mathrm{rms}}$ defined analogously to $u_{\mathrm{rms}}$) are presented as functions of the imposed magnetic field strength, i.e. of $\Lambda$. Both quantities are significantly reduced compared to a non-magnetic but rotating convective state, indicating a trend towards a more laminar flow. The ratio of the total magnetic energy to the kinetic energy is plotted in Fig. [\[usbs\]]{} (right). Equipartition is reached for $\Lambda \approx 0.6$, indicated by the dotted lines. For $\Lambda>0.6$ the magnetic energy clearly exceeds the kinetic energy; for $\Lambda = 100$ it dominates the kinetic energy by a factor of 100.
When increasing $\Lambda$ from 0.01 to 1000 the combined effects of rotation and magnetic field lead to Rossby numbers from $7\cdot10^{-2}$ to $1\cdot10^{-2}$ and turbulent Mach numbers $\mathrm{Ma}={u_{\mathrm{rms}}}/c_s$ from $7\cdot10^{-2}$ to $1\cdot 10^{-2}$. This is still much larger than the real Mach number in the fluid outer core, which is assumed to be of the order of $10^{-7}$. Reynolds numbers, $\mathrm{Re}={u_{\mathrm{rms}}}d/\nu$, range from $\mathrm{Re}=210$ for $\Lambda=0.01$ to $\mathrm{Re}=32$ for $\Lambda=1000$.
Patterns of flow and magnetic field
-----------------------------------
The dynamical influence of the imposed magnetic field can be seen in figure \[vz3Dpattern\], where the $z$-component of the velocity near the domain faces and in a horizontal plane at $z=0.5$ is shown. The left (right) panel shows a snapshot of the developed rotating magnetoconvection for $\Lambda\approx 1$ (10). Upflows are visualized in grey whereas downflows are represented in dark tones. Compared to non-magnetic, rotating convection the magnetoconvection pattern remains nearly unchanged for $\Lambda
{\raisebox{-0.6ex}{$\stackrel{{\displaystyle<}}{\sim}$}}1$. Many topologically disconnected columnar convection cells can be seen; they are tilted by an angle of $45^{\circ}$ with respect to the $z$-axis and thereby aligned with the rotation axis (Taylor-Proudman theorem).
![\[vz3Dpattern\] Vertical velocity pattern of rotating magnetoconvection for $\Lambda=1$ (left) and $\Lambda=10$ (right).](./ta_1e7_lam01.ps "fig:"){width="6.5cm"} ![\[vz3Dpattern\] Vertical velocity pattern of rotating magnetoconvection for $\Lambda=1$ (left) and $\Lambda=10$ (right).](./ta_1e7_lam10.ps "fig:"){width="6.5cm"}
Stronger magnetic fields lead to remarkable changes in the flow pattern. Between $\Lambda=1$ and $\Lambda=4$ the quasi-regular pattern becomes more and more disintegrated and evolves towards a nearly two-dimensional flow, as illustrated in the right panel of Figure \[vz3Dpattern\]. The convection cells are clearly elongated along the imposed magnetic field direction (the $y$-direction) and show little variation of the convective velocity along the field lines. The sheetlike convection cells are again tilted inside the box and aligned with the rotation axis ${\mbox{\boldmath${\it{\Omega}}$}}$, as in the weak-field calculations. Compared to the cases of rotating convection or weak-field rotating magnetoconvection, the strong-field case is further characterized by a significant reduction of the number of convective cells. The near two-dimensionality of the flow can be explained by a condition similar to the Taylor-Proudman theorem for rotating spheres: for a stationary state with small deviations from the basic unperturbed state, and neglecting diffusive terms, it follows from the induction equation that $B_i\partial_i u_j=0$, i.e. motions cannot vary in the direction of the imposed magnetic field [@1961hhs..book.....C].
Mean fields and $\alpha$-coefficients
-------------------------------------
Figure \[meanbfield\] shows the components of the horizontally averaged magnetic field in units of the initial field $B_0$ for the cases $\Lambda=0.1,1,10$. The light gray lines represent different snapshots within the averaging period indicating substantial fluctuations, and the thick dashed line gives the time average of these individual curves.
![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmx_z_lam0.1.ps "fig:"){width="4.5cm"} ![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmx_z_lam1.ps "fig:"){width="4.5cm"} ![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmx_z_lam10.ps "fig:"){width="4.5cm"}\
![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmy_z_lam0.1.ps "fig:"){width="4.5cm"} ![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmy_z_lam1.ps "fig:"){width="4.5cm"} ![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmy_z_lam10.ps "fig:"){width="4.5cm"}\
![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmz_z_lam0.1.ps "fig:"){width="4.5cm"} ![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmz_z_lam1.ps "fig:"){width="4.5cm"} ![$z$-dependence of mean magnetic fields for $\Lambda = 0.1, 1, 10$ (from left to right). From top to bottom: $\left<B_x\right>,
\left<B_y\right>$ and $\left<B_z\right>$[]{data-label="meanbfield"}](./bmz_z_lam10.ps "fig:"){width="4.5cm"}\
In all cases there exists a significant magnetic field component $\left<B_x\right>$ close to the boundaries and in the bulk, with opposite signs in these regions. This behavior strongly differs from results reported for highly stratified convection, where the $x$-component of the magnetic field was found to be negligible ($\left<B_x\right> \ll
\left<B_y\right>$). This is probably due to the much lower Taylor number employed in those simulations and to their initial two-layer configuration consisting of a convectively unstable layer on top of a stable layer. The generation of $\left<B_x\right>$ at the vertical boundaries is a result of the perfect conductor boundary conditions for the magnetic field, which mean that no magnetic flux can cross the boundaries, favoring a concentration of magnetic flux at the top and at the bottom of the domain. As expected, no significant mean magnetic field in the $z$-direction develops. For weak imposed fields local dynamo action leads to a slight amplification of $\left<B_y\right>$ close to the boundaries. This effect does not take place for stronger imposed fields where, in contrast, a reduction of $\left<B_y\right>$ occurs at the vertical boundaries.
Usually, the dynamo $\alpha$-coefficients are computed in a simplified way by $$\mathcal{E}_x=\alpha_{xy}\left<B_y\right>,\qquad\mathcal{E}_y=\alpha_{yy}\left<B_y\right>,\qquad\mathcal{E}_z=\alpha_{zy}\left<B_y\right>.
\label{alpha}$$ This may not be justified for the mean-field configuration obtained here, since the presence of $\left<B_x\right>$ and of gradients of the mean-field components close to the vertical boundaries leads to contributions from non-diagonal coefficients of the $\alpha$-tensor (describing the anisotropy) and from the turbulent diffusivity. These contributions cannot easily be separated without contradiction, so it seems more reasonable to regard the calculated EMF-components as the more relevant quantities. This is confirmed by calculations showing that the expressions (\[alpha\]) together with the mean-field induction equation (\[mean\_field\_ind\]) are too simple to reproduce the mean-field components as computed numerically from solving the full set of MHD-equations. In those calculations, however, this deviation was assumed to occur on shorter timescales, whereas on longer timescales the correct time dependence of the mean field is retained if a suitable quenching function for $\alpha$ is used. Because of the well-known theoretical background of characteristic properties of the $\alpha$-coefficients, and in order to provide input data for a mean-field $\alpha^2$-dynamo model, we nevertheless use relationship (\[alpha\]), keeping in mind that it is a rather rough approximation.
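Under the simplification (\[alpha\]), the $\alpha$-profiles follow from a pointwise division of the EMF profiles by $\left<B_y\right>(z)$. A sketch of this step (our own illustration; it guards against near-zero mean fields, where the approximation breaks down):

```python
import numpy as np

def alpha_profiles(emf, B_mean_y, eps=1e-12):
    """alpha_xy(z), alpha_yy(z), alpha_zy(z) from E_i = alpha_iy <B_y>.

    emf: mean EMF profile, shape (3, Nz); B_mean_y: <B_y>(z), shape (Nz,).
    Where |<B_y>| is close to zero the division is meaningless and NaN
    is returned instead.
    """
    safe = np.where(np.abs(B_mean_y) > eps, B_mean_y, np.nan)
    return emf / safe   # rows: alpha_xy, alpha_yy, alpha_zy
```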
In principle, with the present simulations it is possible to calculate the three coefficients $\alpha_{xy},\alpha_{yy}\mbox{ and }\alpha_{zy}$. Here, we want to concentrate on the first two coefficients $\alpha_{xy}$ and $\alpha_{yy}$. $\alpha_{xy}$ describes turbulent (radial) advection of magnetic field (pumping), whereas $\alpha_{yy}$ describes the production of magnetic field perpendicular to $\left<B_y\right>$ via the $\alpha$-effect and is therefore of profound interest. The vertical profile of the calculated $\alpha$-coefficients is plotted in Figure \[alpha\_plot\] for $\Lambda=0.1,1,10,100$. The top row shows $\alpha_{xy}$ and the bottom row shows $\alpha_{yy}$. Again the grey lines represent snapshots of the coefficients at different times and the dashed line in each plot represents the time-averaged $z$-profile.
![$z$-dependence of $\alpha_{xy}$ (top) and $\alpha_{yy}$ (bottom) for $\Lambda = 0.1, 1, 10,
100$ (from left to right) at $\theta = 45^\circ$. Note the reduction in scale by a factor of 5 for the $\Lambda=100$ case.[]{data-label="alpha_plot"}](./alpha_xy_z_lam0.1.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ (top) and $\alpha_{yy}$ (bottom) for $\Lambda = 0.1, 1, 10,
100$ (from left to right) at $\theta = 45^\circ$. Note the reduction in scale by a factor of 5 for the $\Lambda=100$ case.[]{data-label="alpha_plot"}](./alpha_xy_z_lam1.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ (top) and $\alpha_{yy}$ (bottom) for $\Lambda = 0.1, 1, 10,
100$ (from left to right) at $\theta = 45^\circ$. Note the reduction in scale by a factor of 5 for the $\Lambda=100$ case.[]{data-label="alpha_plot"}](./alpha_xy_z_lam10.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ (top) and $\alpha_{yy}$ (bottom) for $\Lambda = 0.1, 1, 10,
100$ (from left to right) at $\theta = 45^\circ$. Note the reduction in scale by a factor of 5 for the $\Lambda=100$ case.[]{data-label="alpha_plot"}](./alpha_xy_z_lam100.ps "fig:"){width="3.4cm"}\
![$z$-dependence of $\alpha_{xy}$ (top) and $\alpha_{yy}$ (bottom) for $\Lambda = 0.1, 1, 10,
100$ (from left to right) at $\theta = 45^\circ$. Note the reduction in scale by a factor of 5 for the $\Lambda=100$ case.[]{data-label="alpha_plot"}](./alpha_yy_z_lam0.1.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ (top) and $\alpha_{yy}$ (bottom) for $\Lambda = 0.1, 1, 10,
100$ (from left to right) at $\theta = 45^\circ$. Note the reduction in scale by a factor of 5 for the $\Lambda=100$ case.[]{data-label="alpha_plot"}](./alpha_yy_z_lam1.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ (top) and $\alpha_{yy}$ (bottom) for $\Lambda = 0.1, 1, 10,
100$ (from left to right) at $\theta = 45^\circ$. Note the reduction in scale by a factor of 5 for the $\Lambda=100$ case.[]{data-label="alpha_plot"}](./alpha_yy_z_lam10.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ (top) and $\alpha_{yy}$ (bottom) for $\Lambda = 0.1, 1, 10,
100$ (from left to right) at $\theta = 45^\circ$. Note the reduction in scale by a factor of 5 for the $\Lambda=100$ case.[]{data-label="alpha_plot"}](./alpha_yy_z_lam100.ps "fig:"){width="3.4cm"}
We note that $\alpha_{yy}(z)$ shows a nearly antisymmetric behavior with respect to the middle of the box: it is always negative in the lower part of the layer and positive in the upper part, with the transition between negative and positive $\alpha$-effect occurring roughly at mid-depth. The peaks of $\alpha_{yy}$ are remarkably broad, leading to two extended zones of positive and negative $\alpha$-effect with nearly equal amplitude. Comparable solutions proportional to $\sin(2\pi z)$ have been found in quasi-linear calculations [@1979PEPI...20..134S]. For increasing magnetic field the time fluctuations of the $\alpha$-coefficients are clearly reduced, which comes from the fact that the flow tends to become laminar for large $\Lambda$. The resulting vertical profiles of $\alpha_{yy}$ differ from computations with stronger stratification, where the $z$-dependence is much more asymmetric: there the zero line is crossed in the lower half of the box and the peak of the $\alpha$-coefficients is higher in the upper half, leading to a non-zero $\alpha$-effect when averaged over the entire box volume. In contrast to this, the volume-averaged $\alpha$-effect is rather small in our model. $\alpha_{xy}(z)$ shows a drastic change in its behavior between $\Lambda=0.1$ and $\Lambda=1$. For $\Lambda=0.1$, $\alpha_{xy}$ has a broad peak located in the lower part of the layer and changes its sign close to the upper vertical boundary. For $\Lambda\geq 1$, $\alpha_{xy}$ shows a reversed $z$-dependence. This feature is correlated with the change of behavior of $\left<B_y\right>$ at the boundaries between $\Lambda=0.1$ and $\Lambda=1$.
Computations with a box located at $\theta = 135^\circ$ reveal an $\alpha_{xy}$ that is symmetric with respect to the equator but an $\alpha_{yy}$ that is antisymmetric (see Figure \[symetry\]). The resulting $\left<B\right>(z)$ closely resemble the symmetry properties of the $\alpha$-coefficients ($\left<B_x\right>$ antisymmetric, $\left<B_y\right>$ symmetric). It follows that the coefficient $\alpha_{xy}$ plays the role of a (negative) advection velocity ($u^{\rm esc} = - \alpha_{xy}$). The $z$-profile of $\alpha_{xy}$ shows no preferred symmetry with respect to the middle of the unstable layer, and $\alpha_{xy}$ is – independently of $\Lambda$ – predominantly positive. The escape velocity in the vertical direction is thus directed [*downwards*]{} for strong magnetic fields. Exactly this behavior has been obtained in a quasi-linear approximation by , who characterized it as ’turbulent buoyancy’. The same result has been found in numerical simulations by , and . It can now be considered a well-established phenomenon that magnetically induced turbulent pumping transports the mean magnetic field downwards rather than upwards to the surface.
$\hspace*{2.8cm}\Lambda=1\hspace*{6cm}\Lambda=10$\
![$z$-dependence of $\alpha_{xy}$ and $\alpha_{yy}$ (upper row) and the corresponding $\left<B_x\right>$ and $\left<B_y\right>$ (lower row) for $\Lambda = 1$ and $\Lambda=10$ at $\theta = 135^\circ$[]{data-label="symetry"}](./alpha_xy_z_lam1_theta135.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ and $\alpha_{yy}$ (upper row) and the corresponding $\left<B_x\right>$ and $\left<B_y\right>$ (lower row) for $\Lambda = 1$ and $\Lambda=10$ at $\theta = 135^\circ$[]{data-label="symetry"}](./alpha_yy_z_lam1_theta135.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ and $\alpha_{yy}$ (upper row) and the corresponding $\left<B_x\right>$ and $\left<B_y\right>$ (lower row) for $\Lambda = 1$ and $\Lambda=10$ at $\theta = 135^\circ$[]{data-label="symetry"}](./alpha_xy_z_lam10_theta135.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ and $\alpha_{yy}$ (upper row) and the corresponding $\left<B_x\right>$ and $\left<B_y\right>$ (lower row) for $\Lambda = 1$ and $\Lambda=10$ at $\theta = 135^\circ$[]{data-label="symetry"}](./alpha_yy_z_lam10_theta135.ps "fig:"){width="3.4cm"}\
![$z$-dependence of $\alpha_{xy}$ and $\alpha_{yy}$ (upper row) and the corresponding $\left<B_x\right>$ and $\left<B_y\right>$ (lower row) for $\Lambda = 1$ and $\Lambda=10$ at $\theta = 135^\circ$[]{data-label="symetry"}](./bmx_z_lam01_theta135.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ and $\alpha_{yy}$ (upper row) and the corresponding $\left<B_x\right>$ and $\left<B_y\right>$ (lower row) for $\Lambda = 1$ and $\Lambda=10$ at $\theta = 135^\circ$[]{data-label="symetry"}](./bmy_z_lam01_theta135.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ and $\alpha_{yy}$ (upper row) and the corresponding $\left<B_x\right>$ and $\left<B_y\right>$ (lower row) for $\Lambda = 1$ and $\Lambda=10$ at $\theta = 135^\circ$[]{data-label="symetry"}](./bmx_z_lam10_theta135.ps "fig:"){width="3.4cm"} ![$z$-dependence of $\alpha_{xy}$ and $\alpha_{yy}$ (upper row) and the corresponding $\left<B_x\right>$ and $\left<B_y\right>$ (lower row) for $\Lambda = 1$ and $\Lambda=10$ at $\theta = 135^\circ$[]{data-label="symetry"}](./bmy_z_lam10_theta135.ps "fig:"){width="3.4cm"}
Kinetic helicity
----------------
Figure \[helicity\_plot\] shows the $z$-profile of the kinetic helicity ${H}_{\mathrm{kin}}=\left<{\mbox{\boldmath$u$}}'\cdot\nabla\times{{\mbox{\boldmath$u$}}'}\right>$. The kinetic helicity is negative in the upper part of the box and positive in the lower part.
![$z$-dependence of the kinetic helicity for $\Lambda=0.1,1,10,100$[]{data-label="helicity_plot"}](./hkin_z_lam0.1_theta45.ps "fig:"){width="3.4cm"} ![$z$-dependence of the kinetic helicity for $\Lambda=0.1,1,10,100$[]{data-label="helicity_plot"}](./hkin_z_lam1_theta45.ps "fig:"){width="3.4cm"} ![$z$-dependence of the kinetic helicity for $\Lambda=0.1,1,10,100$[]{data-label="helicity_plot"}](./hkin_z_lam10_theta45.ps "fig:"){width="3.4cm"} ![$z$-dependence of the kinetic helicity for $\Lambda=0.1,1,10,100$[]{data-label="helicity_plot"}](./hkin_z_lam100_theta45.ps "fig:"){width="3.4cm"}
A comparison of figure \[helicity\_plot\] with the corresponding $\alpha_{yy}$-profiles in figure \[alpha\_plot\] confirms the well-known relation between the signs of the $\alpha$-coefficient and the kinetic helicity, i.e. $\alpha\sim-1/3\tau_{\mathrm{cor}}\left<{\mbox{\boldmath$u$}}'\cdot\nabla\times{{\mbox{\boldmath$u$}}'}\right>$ with a correlation time $\tau_{\mathrm{cor}}$ (see e.g. @1980mfmd.book.....K, [-@1980mfmd.book.....K]). A similar relation was found by . With increasing strength of the imposed magnetic field, the change in amplitude of the helicity is roughly in accordance with the change in amplitude of $\alpha_{yy}$. The correlation time $\tau_{\mathrm{cor}}$ is roughly of the order of $10\%$ of the turnover time $\tau_{\mathrm{adv}}$.
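As a rough numerical check of this sign rule (a sketch using an assumed model helicity profile and $\tau_{\mathrm{cor}}=0.1\,\tau_{\mathrm{adv}}$, not the simulation data), one can tabulate $\alpha(z)\sim-\frac{1}{3}\tau_{\mathrm{cor}}{H}_{\mathrm{kin}}(z)$ and verify that $\alpha$ and ${H}_{\mathrm{kin}}$ carry opposite signs at every height:

```python
import numpy as np

# Sketch with an assumed model profile (not the simulation data):
# H_kin(z) > 0 in the lower half and < 0 in the upper half of the box,
# as in figure [helicity_plot]; tau_cor is taken as 10% of tau_adv.
z = np.linspace(0.0, 1.0, 201)          # vertical coordinate, 0 = bottom
H_kin = np.sin(2.0 * np.pi * z)         # model kinetic helicity profile
tau_adv = 1.0                           # turnover time (arbitrary units)
tau_cor = 0.1 * tau_adv                 # correlation time ~ 10% of tau_adv
alpha = -tau_cor * H_kin / 3.0          # mixing-length estimate of alpha_yy

# alpha is negative in the lower half and positive in the upper half,
# i.e. opposite in sign to the helicity, as observed for alpha_yy.
```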
$\alpha$-quenching
------------------
In order to estimate the quenching behavior of the $\alpha$-effect, we investigate the variation of the local maximum (minimum) value of the time-averaged $z$-profile $\alpha_{yy}(z)$ as a function of the quantity $(B_y^0/B_{\mathrm{eq}})^2$, where $B_y^0$ denotes the initially imposed magnetic field strength and ${B}_{\mathrm{eq}}$ is the so-called equipartition field, defined by ${B}_{\mathrm{eq}}^2=\mu_0\rho{u}_{\mathrm{rms}}^2({B}\!\rightarrow\! 0)$, i.e., $u_{\mathrm{rms}}$ refers to the non-magnetic case. In a rough approximation we have $\Lambda \approx 6.24\cdot (B_y^0/B_{\mathrm{eq}})^2$. The quenching curves are plotted in figure \[quench\_plot\], where the solid (dotted) line corresponds to the maximum (minimum) of $\alpha_{yy}(z)$. The dashed line corresponds to a simple analytic quenching function of the form $$\alpha({B})=\alpha({B}\!\rightarrow\! 0)\cdot \frac{1}{1+\left(B_y^0/B_{\mathrm{eq}}\right)^2}.
\label{quench}$$ A detailed analysis of the $\alpha$-quenching phenomenon for slow rotation, however, can be found in . The corresponding expression for $\alpha_{yy}$ is given by the dashed-dotted line in figure \[quench\_plot\]. It overestimates the suppression of $\alpha_{yy}$, so that $\alpha$-quenching is probably better described by equation (\[quench\]) for our case of a fast rotator. Note that the curves associated with the maximum and minimum of $\alpha_{yy}$, respectively, are almost identical for $(B_y^0/B_{\mathrm{eq}})^2
{\raisebox{-0.6ex}{$\stackrel{{\displaystyle>}}{\sim}$}}0.2$, and the deviation is at most about $10\%$ for $(B_y^0/B_{\mathrm{eq}})^2 {\raisebox{-0.6ex}{$\stackrel{{\displaystyle<}}{\sim}$}}0.2$.
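Evaluating the algebraic law (\[quench\]) is straightforward; the sketch below (illustrative only) tabulates the suppression factor at field strengths corresponding to the four Elsasser numbers used above, with the mapping $\Lambda \approx 6.24\,(B_y^0/B_{\mathrm{eq}})^2$ inverted:

```python
import numpy as np

def alpha_ratio(b2):
    """Quenching factor alpha(B)/alpha(B->0) of eq. (quench); b2 = (B_y^0/B_eq)^2."""
    return 1.0 / (1.0 + b2)

Lambda = np.array([0.1, 1.0, 10.0, 100.0])
b2 = Lambda / 6.24                  # invert Lambda ~ 6.24 * (B_y^0/B_eq)^2
for lam, b, r in zip(Lambda, b2, alpha_ratio(b2)):
    print(f"Lambda = {lam:6.1f}  (B/B_eq)^2 = {b:7.3f}  alpha/alpha_0 = {r:.3f}")
```

At the equipartition value, $(B_y^0/B_{\mathrm{eq}})^2=1$, the law suppresses $\alpha$ to exactly half of its kinematic value.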
![Quenching behavior of $\alpha_{yy}$. A value of $(B_y^0/B_{\mathrm{eq}})^2=1$ (equipartition value) corresponds to $\Lambda \approx 6.2$. The solid (dotted) line corresponds to the maximum (minimum) of $\alpha_{yy}(z)$, the dashed line corresponds to the analytic quenching function (\[quench\]) and the dashed-dotted line represents the analytic expression derived by . []{data-label="quench_plot"}](./alpha_yy_theta045_lam.ps){width="9cm"}
For the dynamo number, $C_{\mathrm{\alpha}}=\alpha d/\eta$, we obtain a value of the order of $10$ for imposed fields below $(B_y^0/B_{\mathrm{eq}})^2\approx 1$. This value might be large enough to allow global dynamo action in the sense of $\alpha^2$-dynamos, but this, of course, is a very crude estimate.
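The estimate itself is elementary; the values below are illustrative code units assumed for this sketch, chosen only to reproduce the quoted order of magnitude:

```python
# Rough dynamo-number estimate C_alpha = alpha * d / eta.
# The numbers are hypothetical code units (assumed for this sketch),
# picked so that C_alpha comes out of order 10 as quoted in the text.
alpha = 0.5     # typical alpha-effect amplitude
d = 1.0         # depth of the unstable layer
eta = 0.05      # magnetic diffusivity
C_alpha = alpha * d / eta
print(f"C_alpha = {C_alpha:.1f}")
```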
Conclusions
===========
The main result of our computations is that the $y$-component of the EMF, describing the $\alpha$-effect in the azimuthal direction, is positive in the upper part of the unstable layer and negative in the lower part on the northern hemisphere. We obtained the $z$-profiles of the $\alpha$-coefficient for a wide range of imposed magnetic field strengths, with the amplitude of the $\alpha$-effect probably sufficient to ensure dynamo action in mean-field calculations. The $\alpha$-effect is quenched under the influence of strong external magnetic fields, but the suppression is not catastrophic, and significant quenching only sets in for magnetic fields above the equipartition value ${B_{\mathrm{eq}}}$. Some weak points of the above model calculations should be mentioned. First, we neglected the radial dependence of the gravitational force and the effects of spherical geometry; thus, the influence of curvature effects remains unknown. Compositional convection is ignored, and the adopted parameter values are still far from realistic values for the Earth. Somewhat unphysical boundary conditions have been used by assuming perfect-conductor conditions at both vertical boundaries, so it cannot be ruled out that the obtained profiles are profoundly affected by the vertical boundary conditions. Test calculations with rigid boundary conditions for the velocity and/or slightly different boundary conditions for the magnetic field, as used e.g. by or , show, however, no major differences in the EMF profiles. A promising ansatz for future mean-field calculations of the geodynamo therefore remains to take into account the special radial profile of the $\alpha$-effect obtained in the present paper.
[This work was supported by the DFG SPP 1097 “Erdmagnetische Variationen: Raum-Zeitliche Struktur, Prozesse und Wirkungen auf das System Erde”. The computations have been performed using the Hitachi SR8000 and the PC Cluster at the Astrophysikalisches Institut Potsdam (AIP).]{}
---
abstract: 'We discuss the hierarchy of subphase transitions in first-order-like nucleation processes for an exemplified aggregation transition of heteropolymers. We perform an analysis of the microcanonical entropy, i.e., the density of states is considered as the central statistical quantity of the system, since it connects system-specific entropic and energetic information in a natural and unique way.'
address:
- |
Max-Planck-Institut für Polymerforschung Mainz, Ackermannweg 10,\
D-55128 Mainz, Germany
- |
Institut für Theoretische Physik and Centre for Theoretical Sciences (NTZ),\
Universität Leipzig, Postfach 100920,\
D-04009 Leipzig, Germany
- |
Soft Matter Systems Research Group, Institut für Festkörperforschung (IFF-2) and Institute for Advanced Simulation (IAS-2), Forschungszentrum Jülich,\
D-52425 Jülich, Germany
author:
- Christoph Junghans
- Wolfhard Janke
- Michael Bachmann
title: Hierarchies in Nucleation Transitions
---
nucleation ,first-order transition ,polymer ,structural phases ,microcanonical analysis
Introduction {#intro}
============
The noncovalent cooperative effects in structure formation processes on mesoscopic scales make linear polymers very interesting objects for studies of the statistical mechanics and thermodynamics of nucleation processes, even on a fundamental level. Structural properties of polymers can typically be well described by means of simple, coarse-grained models of beads and sticks (or springs) representing the monomers and the covalent bonds between adjacent monomers in the chain, respectively. Contemporary, sophisticated generalized-ensemble Monte Carlo simulation techniques as well as large-scale computational resources enable the precise and systematic analysis of thermodynamic properties of all structural phases of coarse-grained polymer models by means of computer simulations. Among the most efficient simulation methods are multicanonical sampling [@muca1; @muca2], replica-exchange techniques [@huku1; @geyer1], and the Wang-Landau method [@wl1].
The high precision of the numerical data for quantities that are hardly accessible in analytic calculations – one of the most prominent and, as it will turn out in the following, most relevant system-specific quantities is the density (or number) of states with energy $E$, $g(E)$ – opens new perspectives for the physical interpretation and classification of cooperative processes such as phase transitions. This is particularly interesting for small systems, where conventional statistical analyses are often unsystematic and a general concept seems to be missing. This is apparently reflected in conformational studies in the biosciences, where novel terminologies are often invented for basically the same classes of transitions. The introduction of a unifying scheme appears difficult insofar as finite-size effects in transitions of small systems influence the thermal fluctuations of transition indicators like order parameters. Maximum energetic fluctuations, represented by peaks in the specific heat, do not necessarily coincide with the peak temperatures of fluctuations of structural quantities such as the radius of gyration [@bj0; @bj2]. For extremely large systems, which are well described by the theoretical concept of the “thermodynamic limit”, this *canonical* approach is appealing, as the fluctuation peaks scale with system size and finally collapse at the same temperature, allowing for the definition of a unique transition point. However, for many systems, in particular biomolecules, the assumption of the thermodynamic limit is meaningless, and the explicit consideration and understanding of finite-size effects is relevant.
Microcanonical vs. canonical temperature {#temp}
========================================
The *microcanonical* analysis [@gross1] allows for such an in-depth analysis of smallness effects. It is completely based on the entropy as a function of the system energy, which is related to the density of states via $S(E)=k_{\rm B}\ln\, g(E)$, where $k_{\rm B}$ is the Boltzmann constant. A major advantage of this quantity is the possibility to introduce the temperature as a *derived* quantity via $$\label{eq:TE}
T(E)=\left[\frac{\partial S(E)}{\partial E}\right]^{-1},$$ which is commonly referred to as the “microcanonical temperature”. This terminology is misleading, since $S(E)$ and thus $T(E)$ do not depend on the choice of any statistical ensemble associated with a certain thermal environment of the system. Thus, the physical meaning of $S(E)$ and $T(E)$ is not restricted to systems that are well described by the microcanonical ensemble (i.e., systems with constant energy). The introduction of the temperature via Eq. (\[eq:TE\]) is also useful for another reason: it applies independently of the system size and is not coupled to any equilibrium condition. It is not necessary to invoke theoretical concepts like the thermodynamic limit or quasiadiabatic process flow.
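Since Eq. (\[eq:TE\]) only requires a tabulated $S(E)$, the temperature can be extracted by finite differences. The sketch below uses the classical ideal-gas entropy as a stand-in (an assumption for illustration; $k_{\rm B}=1$), for which the exact caloric curve $T(E)=2E/3N$ is recovered:

```python
import numpy as np

# Sketch: T(E) = [dS/dE]^(-1) by finite differences (k_B = 1).
# Stand-in entropy: classical ideal gas, S(E) = (3N/2) ln E + const,
# for which the exact result is T(E) = 2E/(3N).
N = 100
E = np.linspace(1.0, 10.0, 500)
S = 1.5 * N * np.log(E)                 # tabulated "measured" entropy
T = 1.0 / np.gradient(S, E)             # eq. (TE): inverse slope of S(E)
T_exact = 2.0 * E / (3.0 * N)
```

The same two lines of numerics apply unchanged to an entropy estimated from simulation data, e.g. from a multicanonical density of states.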
The heatbath concept typically used for introducing the temperature, in the context of the thermodynamic equilibrium of heatbath and system, is useful for large systems, where thermal fluctuations become less relevant and finite-size effects disappear. However, as has been known since the early days of statistical mechanics, the statistical ensembles converge to the microcanonical ensemble in the thermodynamic limit. This is easily seen from the example of the fluctuations about the mean energy in the canonical ensemble. In this case, the canonical partition function, linked to the free energy $F(T^{\rm can}_{\rm system})$, can be written as an integral over the energy space, $$\label{eq:Z}
Z(T^{\rm can}_{\rm system})=\int_{E_{\rm min}}^{E_{\rm max}} dE g(E) e^{-E/k_{\rm B}T^{\rm can}_{\rm system}}
=e^{-F/k_{\rm B}T^{\rm can}_{\rm system}},$$ where the canonical system temperature $T^{\rm can}_{\rm system}$ corresponds in equilibrium to the heatbath temperature $T_{\rm heatbath}$ (which is an adjustable external thermal control parameter in experiments): $$\label{eq:equ}
T^{\rm can}_{\rm system}\equiv T_{\rm heatbath}.$$ Therefore, in equilibrium, the mean energy at a given heatbath temperature can simply be calculated as: $$\label{eq:meanE}
\langle E\rangle(T_{\rm heatbath})=\frac{1}{Z(T_{\rm heatbath})}\int dE\, E\,g(E)e^{-E/k_{\rm B}T_{\rm heatbath}}.$$ The heat capacity $C_V(T_{\rm heatbath})=d\langle E\rangle/dT_{\rm heatbath}=(\langle E^2\rangle-\langle E\rangle^2)/k_{\rm B}T_{\rm heatbath}^2$ must always be nonnegative because of the thermodynamic stability of matter, i.e., $\langle E\rangle$ increases monotonically with $T_{\rm heatbath}$. For this reason, the dependence of $\langle E\rangle$ on the temperature can trivially be inverted to give $T^{\rm can}_{\rm system}(\langle E\rangle)$, where we have made use of the equilibrium condition (\[eq:equ\]). In complete analogy to the microcanonical definition of the temperature in Eq. (\[eq:TE\]), we introduce the canonical entropy via the relation $$\label{eq:Tcan}
T^{\rm can}_{\rm system}(\langle E\rangle)=
\left[\frac{\partial S^{\rm can}(\langle E\rangle)}{\partial \langle E\rangle}\right]_{N,V}^{-1},$$ where particle number $N$ and volume $V$ are kept constant. The canonical entropy can explicitly be expressed by the celebrated equation that links thermodynamics and statistical mechanics: $$\label{eq:S}
S^{\rm can}(\langle E\rangle)=\frac{1}{T^{\rm can}_{\rm system}(\langle E\rangle)}
\left[\langle E\rangle - F(T^{\rm can}_{\rm system}(\langle E\rangle))\right].$$ Instead of considering the canonical ensemble by fixing $T_{\rm heatbath}$, $V$, and $N$ as external parameters, we have turned to the caloric representation, where $\langle E\rangle$, $V$, and $N$ are treated as independent variables. If the fluctuations of energy about the mean value $\langle E\rangle$ vanish, the canonical ensemble thus corresponds to the microcanonical ensemble. This is obvious in the thermodynamic limit, where the relative width of the canonical energy distribution, $\Delta E/\langle E\rangle=\sqrt{\langle E^2\rangle-\langle E\rangle^2}/\langle
E\rangle=\sqrt{k_{\rm B}T_{\rm heatbath}^2C_V(T_{\rm heatbath})}/\langle
E\rangle\propto 1/\sqrt{N}$ vanishes, $\lim_{N\to\infty}(\Delta E/\langle E\rangle) = 0$, since $C_V$ and $\langle E\rangle$ are extensive variables as they scale with $N$. Thus, $\langle E\rangle = E$ and $T(E)=T^{\rm can}_{\rm system}(\langle E\rangle)$ in the thermodynamic limit.
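The $1/\sqrt{N}$ scaling of the relative width is easily verified numerically. The sketch below assumes a model ideal-gas density of states $g(E)\propto E^{3N/2-1}$ with $k_{\rm B}=T_{\rm heatbath}=1$ (illustrative, not tied to any particular system) and reproduces $\Delta E/\langle E\rangle=\sqrt{2/3N}$:

```python
import numpy as np

# Sketch (k_B = T_heatbath = 1): canonical moments from a model density
# of states g(E) ~ E^(3N/2 - 1) (classical ideal gas). Exact results:
# <E> = 3N/2 and Delta E / <E> = sqrt(2/(3N)).
def relative_width(N):
    E, dE = np.linspace(1e-6, 6.0 * N, 20000, retstep=True)
    log_w = (1.5 * N - 1.0) * np.log(E) - E     # log of g(E) exp(-E)
    w = np.exp(log_w - log_w.max())             # rescale to avoid overflow
    Z = w.sum() * dE
    E_mean = (E * w).sum() * dE / Z
    E_sq = (E * E * w).sum() * dE / Z
    return np.sqrt(E_sq - E_mean**2) / E_mean

widths = {N: relative_width(N) for N in (10, 100, 1000)}
```

Working with the logarithm of the Boltzmann weight, as above, is essential for large $N$, since $g(E)$ itself overflows any floating-point format.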
Small systems {#small}
=============
Microcanonical and canonical temperatures typically do not coincide if finite-size effects matter. This is particularly apparent under conditions where cooperative changes of macrostates, such as conformational transitions of finite molecular systems, occur. In transitions with structure formation, the conformational entropies associated with volume and surface effects in the formation of compact states of the system compete with the energetic differences between particles located at the surface and in the interior of the structure. Examples of such morphologies of finite size are atomic clusters, spin clusters, the interface of demixed fluids, globular polymers or proteins, crystals, etc. Since many interesting systems, such as heterogeneous biomolecules like proteins, are “small” in the sense that a thermodynamic limit does not exist at all, it is useful to build the analysis of transitions of small systems on the most general grounds. These are, as we have argued above, best provided by the microcanonical or caloric approach.
The folding of a protein is an example of a subtle structure formation process in a finite system, where effects on nanoscopic scales (e.g., hydrogen bonds being responsible for local secondary structures such as helices and sheets) and cooperative behavior on mesoscopic scales (such as the less well-defined hydrophobic effect, which primarily drives the formation of the global, tertiary structures) contribute to the stable assembly of the native fold, which is connected to the biological function of the protein. Proteins are linear polypeptides, i.e., they are composed of amino acids linked by peptide bonds. There are twenty types of amino acids that have been identified in bioproteins; all of them differ in their chemical composition and thus possess substantial differences in their physical properties and chemical reactivity. The mechanism of folding depends on the sequence of amino acids, not only on their content, i.e., a protein is a disordered system. One of the central questions is which mutations of a given amino-acid sequence can lead to a relevant change of morphology and thus to the loss of functionality. The atomistic interaction types, scales, and ranges differ as well. For this reason, frustration effects may occur in compact folds. Disorder and frustration cause glassy behavior, but how glassy is a single protein, and can this be generalized?
Since details seem to be relevant for folding, it has commonly been believed that the folding of a given protein is a highly individual process. It was therefore a rather surprising discovery that the search for a stable fold can be a cooperative one-step process, which can be understood qualitatively and quantitatively by means of the statistical analysis of a single effective, mesoscopic order parameter. For this class of “two-state folders”, the free-energy landscape turned out to be very simple, exhibiting only a single barrier between folded and unfolded conformations [@fersht1; @fersht2]. Of course, folding pathways of other proteins can be more complex, as intermediate states can also occur [@sbj1; @sbj2]. This, however, raises the question of cooperativity and of the generalization of folding behavior in terms of conformational transitions similar to phase transitions in other fields of statistical mechanics. In the following, we discuss a structure formation process, the aggregation of a finite system of heteropolymers, within a general microcanonical approach, in order to show how the conformational transition behavior of a small system is indeed related to thermodynamic phase transitions.
Exemplified nucleation process: Aggregation of proteins {#first}
=======================================================
As an example of the occurrence of hierarchies of subphase transitions accompanying a nucleation process, we discuss molecular aggregation [@jbj1; @jbj2; @jbj3] by means of a simple coarse-grained hydrophobic-polar heteropolymer model [@still1; @baj1]. In this so-called AB model, only hydrophobic (A) and hydrophilic (B) monomers line up in linear heteropolymer sequences. In the following, we consider the aggregation of four chains with 13 monomers each [@jbj2]. All chains have the same Fibonacci sequence $AB_2\-AB_2\-ABAB_2\-AB$ [@still1]. Folding and aggregation of this heteropolymer have already been the subject of previous studies [@jbj1; @jbj2].
In the model used here, bonds between adjacent monomers have fixed unit length. Nonbonded monomers of individual chains, and likewise monomers of different chains, interact pairwise via Lennard-Jones-like potentials. The explicit form of the potentials depends on the types of the interacting monomers. Pairs of hydrophobic and unlike monomers attract, pairs of polar monomers repel each other. This effectively accounts for the hydrophobic-core formation of proteins in a polar solvent. Details of our aggregation model and of the implementation of the multicanonical Monte Carlo simulation method are described in Ref. [@jbj2].
![\[fig:mic\] Microcanonical entropy $S(e)$, the Gibbs hull ${\cal H}(e)$, and the deviation $\Delta S(e)={\cal H}(e)-S(e)$ as functions of the energy per monomer.](./mic.eps){width="90mm"}
The multicanonical computer simulations enabled us to obtain a precise estimate of the microcanonical entropy $S(E)$ of the multiple-chain system [@jbj2], shown in Fig. \[fig:mic\] as a function of the energy per monomer, $e=E/N$. The entropy curve is convex in the energetic aggregation transition region, as expected for a first-order-like nucleation transition of a finite system [@gross1]. The Gibbs tangent ${\cal H}(e)$, connecting the two coexistence points where concave and convex behavior change, provides the least possible overall concave shape of $S(e)$ in this region. The difference between the Gibbs hull and the entropy curve, $\Delta S(e)={\cal H}(e)-S(e)$, is also shown in Fig. \[fig:mic\]. Not only is the entropic suppression in the transition region clearly visible; it is also apparent that the transition possesses an internal structure.
In order to better understand the subphases, we discuss in the following the inverse derivative of the entropy, the microcanonical temperature (\[eq:TE\]), $T(e)$, which is plotted in Fig. \[fig:tcal\]. The slope of the Gibbs tangent corresponds to the Maxwell line in Fig. \[fig:tcal\] at the aggregation temperature $T_{\rm agg}\approx 0.217$. Therefore, the intersection points of the Maxwell line and the temperature curve define the energetic phase boundaries $e_{\rm agg}$ and $e_{\rm frag}$, respectively, as both phases coexist at $T_{\rm agg}$.
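The Gibbs construction itself is a purely geometric algorithm: ${\cal H}(e)$ is the upper convex hull of the points $(e,S(e))$, $\Delta S(e)={\cal H}(e)-S(e)\ge 0$, and the slope of the linear hull segment equals $1/T_{\rm agg}$, which coincides with $dS/de$ at the point of maximum deviation. The sketch below applies this to a toy entropy with a single convex intruder (a model curve chosen for illustration, not our simulation data):

```python
import numpy as np

# Toy backbending model (illustrative only): beta(e) = dS/de is chosen
# non-monotonic, so S(e) has a convex intruder; the Gibbs hull H(e) is
# the upper convex hull of (e, S(e)) and Delta S = H - S >= 0.
e = np.linspace(-1.0, 1.0, 2001)
beta = 3.0 - e + 0.6 * np.sin(3.0 * e)             # inverse temperature 1/T(e)
S = 3.0 * e - 0.5 * e**2 - 0.2 * np.cos(3.0 * e)   # entropy, with S' = beta

def gibbs_hull(x, y):
    """Upper convex hull of (x, y) for ascending x (monotone-chain scan)."""
    hull = []
    for px, py in zip(x, y):
        while len(hull) >= 2:
            (ax, ay), (bx, by) = hull[-2], hull[-1]
            if (bx - ax) * (py - ay) - (by - ay) * (px - ax) >= 0.0:
                hull.pop()             # middle point lies on or below the chord
            else:
                break
        hull.append((px, py))
    hx, hy = zip(*hull)
    return np.interp(x, hx, hy)

H = gibbs_hull(e, S)
dS = H - S                              # entropic suppression Delta S(e)
T_agg = 1.0 / beta[np.argmax(dS)]       # Maxwell slope = S'(e) at max Delta S
```

For this toy $\beta(e)$ the equal-area construction gives the Maxwell slope $\beta^*=3$ exactly (the deviation $\beta-3$ is odd in $e$), so the recovered $T_{\rm agg}=1/3$ serves as a consistency check of the hull code.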
![\[fig:tcal\] Microcanonical temperature $T(e)$ as a function of energy per monomer, $e=E/N$, where $N$ is the total number of all monomers in the system. The horizontal Maxwell line marks the aggregation temperature $T_{\rm agg}$, obtained by the Gibbs construction. Vertical dashed lines separate the different phases and subphases, respectively.](./Tcal.eps){width="90mm"}
For energies $e<e_{\rm agg}\approx -0.43$, conformations of a single aggregate, composed of all four chains, dominate. On the other hand, conformations with $e>e_{\rm frag}\approx 0.05$ are mostly entirely fragmented, i.e., all chains can form individual conformations almost independently of each other. The entropy is governed by the contributions of the individual translational entropies of the chains, which outweigh the conformational entropies. The translational entropies are limited only by the volume, which corresponds to the simulation box size. The energetic difference $\Delta q=e_{\rm frag}-e_{\rm agg}$, serving as an estimator for the latent heat, is obviously larger than zero, $\Delta q \approx 0.48$. It corresponds to the total energy necessary to entirely melt the aggregate into fragments at the aggregation (or melting) temperature.
![\[fig:pics\] Representative conformations in the different structural phases. ](./pics.eps){width="90mm"}
The relation between energy and microcanonical temperature in the aggregate phase and in the fragment phase is as intuitively expected: with increasing energy, $T(e)$ also increases. Much more interesting, however, is the behavior of the system in the energy interval $e_{\rm agg}<e<e_{\rm frag}$, which represents the energetic nucleation transition region. Figure \[fig:tcal\] clearly shows that in our example $T(e)$ changes its monotonic behavior three times in this regime. Representative conformations in the different structural phases are shown in Fig. \[fig:pics\]. The change of monotonicity of the microcanonical temperature curve is called the backbending effect, because the temperature decreases while the energy is increased. This rather counterintuitive behavior is a consequence of the suppression of entropy in this regime ($S(E)$ is convex in the backbending region). The surface entropy per monomer vanishes in the thermodynamic limit [@jbj2].
If two chains aggregate (subphase 1 in Fig. \[fig:tcal\]), the total translational entropy of the individual chains is reduced by $k_B\ln\, V$, where $V$ is the volume (corresponding to the simulation box size), whereas the energy of the aggregate is much smaller than the total energy of the system with the individual chains separated. Thus, the energy associated with the interaction between different chains, i.e., the cooperative formation of inter-chain contacts between monomers of different chains, is highly relevant here. This causes the latent heat between the completely fragmented phase and the two-chain aggregate phase to be nonzero, which signals a first-order-like transition. This process continues when an additional chain joins a two-chain cluster. Energetically, the system enters subphase 2. Qualitatively, the energetic and entropic reasons for the transition into this subphase are the same as explained before, since it is the same kind of nucleation process. In our example of four chains interacting with each other, there is another subphase 3 which also shows the described behavior. The energetic width of each of the subphase transitions corresponds to the respective latent heat gained by adding another chain to the already formed cluster. The subphase boundaries (vertical dashed, gray lines in Fig. \[fig:tcal\]) have been defined as the inflection points in the rising temperature branches, thus enclosing a complete “oscillation” of the temperature as a function of energy. The energetic subphase transition points are located at $e_{\rm 12}\approx -0.11$ and $e_{\rm 23}\approx -0.26$, respectively. Therefore, the latent heat associated with these subphase transitions is in all three cases about $\Delta q_{ij}\approx 0.16$ ($i,j=1,2,3$, $i\neq j$), thus being one third of the total latent heat of the complete nucleation process. This reflects the highly systematic nature of subphase transitions in first-order nucleation processes.
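The inflection-point criterion can be cast as a short numerical scan: locate the zero crossings of $T''(e)$ and keep those lying on rising branches, $T'(e)>0$. The sketch below applies it to a toy caloric curve with three oscillations (a model curve mimicking Fig. \[fig:tcal\], not the simulation data; the quoted values $e_{12}\approx-0.11$ and $e_{23}\approx-0.26$ come from the actual $T(e)$):

```python
import numpy as np

# Toy caloric curve (model only): three temperature oscillations inside
# the transition region, mimicking the backbending of figure [fig:tcal].
e = np.linspace(-0.5, 0.1, 2001)
T = 0.217 + 0.004 * np.sin(10.0 * np.pi * (e + 0.5))

dT = np.gradient(T, e)
d2T = np.gradient(dT, e)

# Subphase boundaries: zeros of T''(e) on rising branches (T'(e) > 0),
# restricted to the interior of the transition region.
cross = np.where(np.sign(d2T[:-1]) != np.sign(d2T[1:]))[0]
boundaries = [e[i] for i in cross if dT[i] > 0.0 and -0.45 < e[i] < 0.05]
```

For this model curve the scan returns two boundaries, near $e=-0.3$ and $e=-0.1$, splitting the transition region into three subphases as in the text.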
Summary
=======
The most interesting result from this heteropolymer aggregation study is that first-order phase transitions such as nucleation processes can be understood as a composite of hierarchical subphase transitions, each of which exhibits features of first-order-like transitions. Since with increasing number of chains the microcanonical entropy per chain converges to the Gibbs hull in the transition region, the “amplitudes” of the backbending oscillations and the individual latent heats of the subphases become smaller and smaller [@jbj2; @jbj3]. Thus, in the thermodynamic limit, the heteropolymer aggregation transition is a first-order nucleation process composed of an infinite number of infinitesimally “weak” first-order-like subphase transitions.
Acknowledgments {#acknowledgments .unnumbered}
===============
Supercomputer time has been provided by the Forschungszentrum Jülich under Project Nos. hlz11, jiff39, and jiff43.
[99]{}
B. A. Berg and T. Neuhaus, Phys. Lett. B **267**, 249 (1991); Phys. Rev. Lett. **68**, 9 (1992). W. Janke, Physica A **254**, 164 (1998); B. A. Berg, Fields Inst. Comm. **26**, 1 (2000). K. Hukushima and K. Nemoto, J. Phys. Soc. Jpn. **65**, 1604 (1996). C. J. Geyer, in *Computing Science and Statistics*, Proceedings of the 23rd Symposium on the Interface, ed. by E. M. Keramidas (Interface Foundation, Fairfax Station, 1991), p. 156. F. Wang and D. P. Landau, Phys. Rev. Lett. **86**, 2050 (2001). M. Bachmann and W. Janke, in: *Rugged Free Energy Landscapes: Common Computational Approaches to Spin Glasses, Structural Glasses and Biological Macromolecules*, edited by W. Janke, Lect. Notes Phys. **736** (Springer, Berlin, 2008), p. 203. M. Bachmann and W. Janke, J. Chem. Phys. **120**, 6779 (2004). D. H. E. Gross, *Microcanonical Thermodynamics* (World Scientific, Singapore, 2001). S. E. Jackson and A. R. Fersht, Biochemistry **30**, 10428 (1991). A. R. Fersht, *Structure and Mechanisms in Protein Science: A Guide to Enzyme Catalysis and Protein Folding* (Freeman, New York, 1999). S. Schnabel, M. Bachmann, and W. Janke, Phys. Rev. Lett. **98**, 048103 (2007). S. Schnabel, M. Bachmann, and W. Janke, J. Chem. Phys. **126**, 105102 (2007). C. Junghans, M. Bachmann, and W. Janke, Phys. Rev. Lett. **97**, 218103 (2006). C. Junghans, M. Bachmann, and W. Janke, J. Chem. Phys. **128**, 085103 (2008). C. Junghans, M. Bachmann, and W. Janke, Europhys. Lett. **87**, 40002 (2009). F. H. Stillinger, T. Head-Gordon, and C. L. Hirshfeld, Phys. Rev. E **48**, 1469 (1993); F. H. Stillinger and T. Head-Gordon, Phys. Rev. E **52**, 2872 (1995). M. Bachmann, H. Ark[i]{}n, and W. Janke, Phys. Rev. E **71**, 031906 (2005).
---
abstract: 'We use the Wilkinson Microwave Anisotropy Probe (WMAP) data on the spectrum of cosmic microwave background anisotropies to put constraints on the present amount of lepton asymmetry $L$, parameterized by the dimensionless chemical potential (also called degeneracy parameter) $\xi$ and on the effective number of relativistic particle species. We assume a flat cosmological model with three thermally distributed neutrino species having all the same mass and chemical potential, plus an additional amount of effectively massless exotic particle species. The extra energy density associated to these species is parameterized through an effective number of additional species ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$. We find that $0<|\xi|<1.1 $ and correspondingly $0<|L|<0.9$ at $2\sigma$, so that WMAP data alone cannot firmly rule out scenarios with a large lepton number; moreover, a small preference for this kind of scenarios is actually found. We also discuss the effect of the asymmetry on the estimation of other parameters and in particular of the neutrino mass. In the case of perfect lepton symmetry, we obtain the standard results. When the amount of asymmetry is left free, we find $\sum m_\nu < 3.6$ eV at $2\sigma$. Finally we study how the determination of $|L|$ is affected by the assumptions on ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$. We find that lower values of the extra energy density allow for larger values of the lepton asymmetry, effectively ruling out, at $2\sigma$ level, lepton symmetric models with ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}\simeq 0$.'
author:
- Massimiliano Lattanzi
- Remo Ruffini
- 'Gregory V. Vereshchagin'
title: Joint constraints on the lepton asymmetry of the Universe and neutrino mass from the Wilkinson Microwave Anisotropy Probe
---
Introduction
============
It is a remarkable fact that our observational knowledge of the Universe can be accounted for in terms of a model, the so-called power-law ${\Lambda\mathrm{CDM}}$ model, characterized by just six parameters, describing the matter content of our Universe (the physical density of baryons $\omega_b$, the physical density of matter $\omega_m$, the Hubble constant $h$), the initial conditions from which it evolved (the amplitude $A$ and the spectral index $n$ of the primordial power spectrum) and the optical depth at reionization $(\tau)$. In particular, this model provides a good fit to both the cosmic microwave background (CMB) [@WMAP:parameters] and large scale structure (LSS) data (although in this last case one additional parameter, the bias parameter $b$, is needed) [@Te04]. Nevertheless, the data leave room for more refined models, described by additional parameters: among them, the spatial curvature, the amplitude of tensor fluctuations, a running spectral index for scalar modes, the equation of state for dark energy, the neutrino fraction in the dark matter component, a non-standard value for the relativistic energy density. All have been considered in previous works. In particular, the last two have been studied in order to gain deeper information on the properties of neutrinos [@Ha02; @WMAP:parameters; @Ha03; @El03; @El02; @Allen03; @Barger04; @Cr03; @Pi03; @DiBari02; @DiBari03; @Barger03; @Cr04; @Ha04; @Ha03b; @Cu03]. Before the measurements of the CMB anisotropy spectrum carried out by the Wilkinson Microwave Anisotropy Probe (WMAP) [@WMAP:mission; @WMAP:Hinshaw; @WMAP:Kogut; @WMAP:Page; @WMAP:Peiris; @WMAP:parameters; @WMAP:methodology], the combined CMB and LSS data yielded the following upper bound on the sum of neutrino masses: $\sum m_\nu \le 3$ eV [@Ha02]. The WMAP precision data made it possible to strengthen this limit.
Under rather simplifying assumptions, i.e., assuming three thermalized neutrino families, all with the same mass and a null chemical potential (thus implying perfect lepton symmetry), the WMAP team found that the neutrino mass should be lower than 0.23 eV [@WMAP:parameters]. This tight limit has been somewhat relaxed to $\sum m_\nu \le 1$ eV [@Ha03] owing to a more careful treatment of the Ly-$\alpha$ data, and its dependence on the priors has been examined [@El03]. The LSS data can also be used to put similar constraints, although they are usually weaker. Using the data from the 2 Degree Field Galaxy Redshift Survey (2dFGRS) and assuming “concordance” values for the matter density $\Omega_m$ and the Hubble constant $h$, it is found that $\sum m_\nu \le 1.8$ eV [@El02]. A combined analysis of the Sloan Digital Sky Survey (SDSS) and WMAP data gives a similar bound: $\sum m_\nu \le 1.7$ eV [@Te04]. Quite interestingly, the authors of Ref. [@Te04] claim that, from a conservative point of view (i.e., making as few assumptions as possible), the WMAP data alone do not give any information about the neutrino mass and are indeed consistent with neutrinos making up 100% of the dark matter. In Ref. [@Allen03] it is claimed that the cosmological data favor a non-zero neutrino mass at the 68% confidence limit, while the authors of Ref. [@Barger04] find the limit $\sum m_\nu<0.74$ eV.
At the same time, more detailed scenarios with a different structure of the neutrino sector have been studied. The first and most natural extension of the standard scenario is the one in which a certain degree of lepton asymmetry (parameterized by the so-called *degeneracy parameter* $\xi$, i.e., the dimensionless chemical potential) is introduced [@Freese83; @Ruffini83; @Ruffini88]. Although standard models of baryogenesis (for example those based on $SU(5)$ grand unification models) predict the lepton charge asymmetry to be of the same order as the baryonic asymmetry $B\sim 10^{-10}$, there are many particle-physics motivated scenarios in which a lepton asymmetry much larger than the baryonic one is generated [@Harvey81; @Dolgov91; @Dolgov92; @Foot96; @Casas99; @March99; @Dolgov00; @McDonald00; @Kawasaki02; @DiBari02b; @Yama03; @Chiba04; @Taka04]. In some cases, the predicted lepton asymmetry can be of order unity. One of the interesting cosmological implications of a net leptonic asymmetry is the possibility of generating the small observed baryonic asymmetry of the Universe [@Buch04; @Falc01] via the so-called sphaleron process [@Kuz85]. The process of Big Bang Nucleosynthesis (BBN) is very sensitive to a lepton asymmetry in the electronic sector, since an excess (deficit) of electron neutrinos with respect to their antiparticles alters the equilibrium of beta reactions and leads to a lower (higher) cosmological neutron-to-proton ratio $n/p$. On the other hand, an asymmetry in the $\mu$ or $\tau$ sector, even if it does not influence the beta reactions directly, can increase the equilibrium $n/p$ ratio owing to a faster cosmological expansion. This can be used to constrain the value of the degeneracy parameter [@Bi91], leading to the bounds $-0.01 <\xi_e<0.22$ and $|\xi_{\mu,\tau}|<2.6$ [@Kn01; @Hans02].
The effect of a relic neutrino asymmetry on the CMB anisotropy and matter power spectrum was first studied in Ref. [@Le99], and is mainly related to the fact that a lepton asymmetry implies an energy density in relativistic particles larger than in the standard case. The cosmological observables can then be used to constrain this extra energy density, parameterized by the effective number of relativistic neutrino species ${N^\mathrm{eff}}$. Although this is somewhat more general than the case of a lepton asymmetry, in the sense that the extra energy density can arise due to other effects as well, the case of a non-null chemical potential is not strictly covered by the introduction of ${N^\mathrm{eff}}$. This is because the increased relativistic energy density is not the only effect connected to the lepton asymmetry (an additional side effect is, for example, a change in the Jeans mass of neutrinos [@Freese83; @Ruffini86; @Ruffini88]). Under the hypothesis of a negligible neutrino mass, it has been shown that the WMAP data constrain ${N^\mathrm{eff}}$ to be smaller than 9; when other CMB and LSS data are taken into account, the bound shrinks to $1.4 \le {N^\mathrm{eff}}\le 6.8$ [@Cr03; @Pi03]. A combined analysis of CMB and BBN data leads to even tighter bounds [@DiBari02; @DiBari03; @Ha03; @Barger03]. A more detailed analysis, in which the effective number of relativistic relics and the neutrino mass are both left arbitrary and varied independently, can be found in Ref. [@Cr04]. In the same paper, the effect of different mass splittings is also studied. Finally, an extension of these arguments to the case in which additional relativistic, low-mass relics (such as a fourth, sterile neutrino or a QCD axion) are present has been studied in Ref. [@Ha04].
The goal of this paper is to perform an analysis of the WMAP data using the degeneracy parameter, together with the effective number of relativistic particles, as additional free parameters, in order to put constraints on the lepton number of the Universe. We work in the framework of an extended cosmological model with three thermally distributed neutrino families, all having the same mass and chemical potential, plus a certain amount of exotic particle species, considered to be effectively massless. We use the physical neutrino density $\omega_\nu\equiv\Omega_\nu h^2$, the degeneracy parameter $\xi$ and the extra energy density in exotic particles ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ as additional parameters that describe the neutrino sector. We perform an analysis in an 8-dimensional parameter space that includes the standard, “core” cosmological parameters.
The paper is organized as follows. After a discussion on the motivations that drive our work in Sec. \[sec:Motivation\], we shortly review some basic formulae in Sec. \[sec:Basic formulae\] and discuss the impact of a non-null degeneracy parameter on the CMB spectrum in Sec. \[sec:Effect\]. In Sec. \[sec:Method\] we describe the analysis pipeline, while in Sec. \[sec:Results\] we present our basic results. Finally, we draw our conclusions in Sec. \[sec:Conclusions\].
Motivation for this work {#sec:Motivation}
========================
The main motivation for this work comes from the fact that, even though several analyses have been performed which were aimed at putting constraints on the number of effective relativistic degrees of freedom, a statistical analysis of the CMB data aimed at putting bounds directly on the degeneracy parameter, instead of on ${N^\mathrm{eff}}$, is still missing. There are two reasons for this: first of all, in the limit of a vanishing neutrino mass, the increase in ${N^\mathrm{eff}}$ is in effect all that is needed to implement the non-null chemical potential into the standard model of the evolution of perturbations [@Le99; @Ma95]. It is then argued that, since neutrinos with mass smaller than roughly $0.3$ eV, being still relativistic at the time of last scattering, would behave as massless, the distinction between $\xi$ and ${N^\mathrm{eff}}$ is no longer relevant in this case for what concerns their effect on the CMB anisotropy spectrum. Although this is certainly true, it is our opinion that this does not allow one to neglect *a priori* the difference between the two parameters. One reason is that the most conservative bound on neutrino mass, coming from the tritium beta decay experiments, reads $m_\nu < 2.2$ eV (at the $2\sigma$ level) [@Wein99; @Bonn02], which is considerably higher than the value of 0.3 eV quoted above. The main evidence for a neutrino mass in the sub-eV range comes, in the field of particle physics, from the experiments on neutrinoless double beta decay [@Kl02; @Kl04], whose interpretation depends on assumptions about the Majorana nature of neutrinos and on the details of the mixing matrix. Other indications of a sub-eV mass come, as stated above, from cosmology and in particular from the power spectrum of anisotropies, but since we want to keep our results as independent as possible of other analyses, we should not use information on neutrino mass derived from the previous analyses of the CMB data.
Moreover, let us note that CMB data analyses are often refined using the results from LSS experiments. Since structure formation, which starts close to the epoch of matter-radiation equality, goes on until very late times, even very light neutrinos (in the range $10^{-3}\div0.3$ eV) cannot be considered massless for the purpose of evaluating their effect on the matter power spectrum. This means in particular that using ${N^\mathrm{eff}}$ would lead one to overlook the change in the free streaming length and in the Jeans mass of neutrinos due to the increased velocity dispersion [@La03]. It is then our opinion that the use of ${N^\mathrm{eff}}$, even if correct with respect to the interpretation of CMB data, precludes the possibility of correctly implementing the LSS data as a subsequent step in the analysis pipeline.
The second point against the cosmological significance of the degeneracy parameter is related to the constraint from BBN. It was recently shown that, if the Large Mixing Angle (LMA) solution to the solar neutrino problem is correct (as the results of the KamLAND experiment suggest [@KamLAND03]), then the flavor neutrino oscillations equalize the chemical potentials of $e$, $\mu$ and $\tau$ neutrinos prior to the onset of BBN, so that a stringent limit $\xi\lesssim 0.07$ actually applies to all flavours [@Do02; @Ab02; @Wo02]. This would constrain the lepton asymmetry of the Universe to such small values that it could be safely ignored in cosmological analyses. However, the presence of another relativistic particle or scalar field would relax these limits [@Barger03b], while the effect of the mixing with a light sterile neutrino, whose existence is required in order to account for the results of the Liquid Scintillation Neutrino Detector (LSND) experiment [@LSND], is still unclear [@Ab02]. Moreover, it has been recently shown that a hypothetical neutrino-majoron coupling can suppress neutrino flavor oscillations, thus reopening a window for a large lepton asymmetry [@Do04; @Do04b]. For all these reasons, we judge it interesting to study whether CMB data alone can constrain or maybe even rule out such exotic scenarios.
Basic formulae {#sec:Basic formulae}
==============
It is customary in cosmology to call ultrarelativistic (or simply relativistic) a species $x$ that decouples from the photon bath at a temperature $T_d$ such that its thermal energy is much larger than its rest mass energy: $k_B T_d \gg m_x c^2$.
Owing to Liouville’s theorem, the distribution function in momentum space $f_x(p\,;\,T_x,\xi_x)$ of the species $x$ is given, after decoupling, by (we shall use all throughout the paper units in which $c=\hbar=k_B=1$): $$f_x(p\,;\,T_x,\xi_x) =\frac{g_x}{(2\pi)^3}\left[\exp\left(\frac{p}{T_x}-\xi_x\right)\pm1\right]^{-1},$$ where $\xi \equiv \mu_d/T_d$ is the dimensionless chemical potential, often called degeneracy parameter, the sign $+\,(-)$ corresponds to the case in which the $x$’s are fermions (bosons), $g$ is the number of quantum degrees of freedom, and the temperature $T$ evolves in time as the inverse of the cosmological scale factor $a$, so that $T(t)\cdot a(t)=\mathrm{const.}$
The energy density of the $x$’s at a given temperature is readily calculated: $$\begin{gathered}
\rho_x(T_x,\xi_x)=\int \,E(p)\,f(p\,;\,T_x,\xi_x) d^3{\vec p}=\\[0.2cm]
=\frac{g_x}{2\pi^2}\int_0^\infty \,p^2 \sqrt{p^2+m_x^2}\,f(p\,;\,T_x,\xi_x)dp.
\label{eq:rhox}\end{gathered}$$ Using the dimensionless quantities $y\equiv p/T$ and $\beta\equiv~m_x/T$, the expression for the energy density can be put in the form: $$\rho_x(T_x,\xi_x)
=\frac{g_x}{2\pi^2}T_x^4\int_0^\infty dy\,y^2 \frac{\sqrt{y^2+\beta^2}}{\exp(y-\xi_x)\pm 1}.$$ We stress the fact that a temperature dependence is still present in the integral through the term $\beta$. However, the temperature dependence disappears from the integral in two notable limits, the ultrarelativistic (UR) and non-relativistic (NR) ones, corresponding respectively to the two opposite cases $\beta\ll 1$ and $\beta\gg 1$ [@Ruffini83]. Then, defining $$J_n^\pm(\xi) \equiv \left( \int_0^\infty \frac{y^n}{e^{y-\xi}\pm 1}dy \right)
\left( \int_0^\infty \frac{y^n}{e^{y}\pm 1}dy \right)^{-1},$$ so that $J_n^\pm(0)=1$, we have $$\rho_x(T_x,\xi_x)=\left\{
\begin{array}{l}
\left(
\begin{array}{c}
1 \\
7/8
\end{array}
\right)
g_x \displaystyle \frac{\pi^2}{30} J_3^\pm(\xi_x) T_x^4 \qquad \mathrm{UR} \\[0.5 cm]
\left(
\begin{array}{c}
1 \\
3/4
\end{array}
\right)
g_x \displaystyle \frac{\zeta(3)}{\pi^2} m_x J_2^\pm(\xi_x) T_x^3 \qquad \mathrm{NR}
\end{array}
\right.$$ where the upper and lower values in parentheses on the right-hand side hold for bosons and fermions, respectively, and $\zeta(n)$ is the Riemann zeta function of order $n$.
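The normalized occupation integrals $J_n^\pm(\xi)$ are straightforward to evaluate numerically. A minimal sketch (Python with SciPy; the helper name `J` is ours, not from the paper) checks the normalization $J_n^\pm(0)=1$ and the growth with $\xi$:

```python
import numpy as np
from scipy.integrate import quad

def J(n, xi, sign=+1):
    """J_n^+-(xi): occupation integral normalized so that J_n^+-(0) = 1.

    sign=+1 for fermions, sign=-1 for bosons (the text's +- convention).
    """
    num, _ = quad(lambda y: y**n / (np.exp(y - xi) + sign), 0.0, np.inf)
    den, _ = quad(lambda y: y**n / (np.exp(y) + sign), 0.0, np.inf)
    return num / den

print(J(3, 0.0))   # 1.0 by construction
print(J(3, 1.0))   # > 1: a positive degeneracy parameter raises the energy density
```

Note that for bosons (`sign=-1`) the integrand develops a pole for $\xi>0$, so only $\xi\le 0$ is meaningful in that branch.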
It is useful to express $\rho_x(t)$ in terms of the present day energy density of the cosmic background photons: $$\begin{gathered}
\rho_x(t)
=
\left(
\begin{array}{c}
1 \\
7/8
\end{array}
\right)
\left[ \frac{g_x}{2} \left( \frac{T^0_x}{T^0_\gamma}
\right)^4 J_3^\pm(\xi_x) \right] \rho_\gamma^0 (1+z)^4
\equiv\\[0.2cm]
\equiv g_x^\mathrm{eff}\rho_\gamma^0 \left(1+z\right)^4,
\label{eq:rhox_UR}\end{gathered}$$ having defined an effective number of relativistic degrees of freedom $g_x^\mathrm{eff}$ as $$g_x^{\mathrm{eff}}
\equiv
\frac{g_x}{2}
\left(
\begin{array}{c}
1 \\
7/8
\end{array}
\right)
\cdot
\left[ \left( \frac{T^0_x}{T^0_\gamma}
\right)^4 J_3^\pm(\xi_x) \right].
\label{eq:geff_def}$$ It is often the case that one has to consider a fermion species $x$ together with its antiparticle $\bar x$, the most notable example being the relic neutrinos and antineutrinos. In chemical equilibrium, the relation $\xi_x = -\xi_{\bar x}$ holds owing to the conservation of chemical potential, as can be seen considering the reaction: $$x+\bar x \longleftrightarrow\,\ldots\,\longleftrightarrow \gamma+\gamma$$ and noting that the chemical potential in the final state vanishes [@book:Weinberg]. This relation holds for neutrinos and antineutrinos in several cosmological scenarios. There are some exceptions to this, most notably early Universe scenarios in which lepton asymmetry is generated [@Fo97] or destroyed [@Ab04] by active-sterile neutrino oscillations at low temperatures. However, we shall assume all throughout the paper that the relation $\xi_\nu = -\xi_{\bar \nu}$ holds.
It can then be shown that $$\begin{gathered}
{g^\mathrm{eff}}_{x+\bar x} =
{g^\mathrm{eff}}_x + {g^\mathrm{eff}}_{\bar x} = \\[0.2cm]
= \frac{7}{8}g_x \left[1 + \frac{30}{7} \left( \frac{\xi_x}{\pi} \right)^2 + \frac{15}{7} \left( \frac{\xi_x}{\pi} \right)^4 \right] \left( \frac{T^0_x}{T^0_\gamma}
\right)^4,
\label{eq:geff x+xbar}\end{gathered}$$ where the factor in square brackets can be recognized as what is often quoted as the contribution of a non-vanishing chemical potential to the effective number of relativistic species ${N^\mathrm{eff}}$.
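The quartic polynomial in eq. (\[eq:geff x+xbar\]) is not an approximation: for the particle-antiparticle combination it reproduces the sum of Fermi-Dirac energy integrals exactly. A quick numerical verification (a sketch with SciPy; the helper name is ours):

```python
import numpy as np
from scipy.integrate import quad

def F3(xi):
    """Fermi-Dirac energy integral: integral_0^inf y^3 / (e^(y-xi) + 1) dy."""
    val, _ = quad(lambda y: y**3 / (np.exp(y - xi) + 1.0), 0.0, np.inf)
    return val

norm = 7.0 * np.pi**4 / 120.0          # F3(0), the xi = 0 value
for xi in (0.5, 1.0, 2.0):
    exact = (F3(xi) + F3(-xi)) / (2.0 * norm)
    poly = 1.0 + (30.0/7.0) * (xi/np.pi)**2 + (15.0/7.0) * (xi/np.pi)**4
    print(xi, exact, poly)             # the two columns agree to quadrature precision
```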
The definitions introduced above can be easily extended to the case when several ultra-relativistic species $x_i$ are present: $${g^\mathrm{eff}}\equiv
\sum_i
{g^\mathrm{eff}}_i,$$ where photons are excluded from the summation. This means that, since ${g^\mathrm{eff}}_\gamma=g_\gamma/2=1$, the actual number of relativistic degrees of freedom is $(1+{g^\mathrm{eff}})$.
The total density of ultrarelativistic particles at a given time is thus: $$\rho_\mathrm{rad}=\rho_\gamma^0\left(1+{g^\mathrm{eff}}\right)(1+z)^4.$$
Finally we can use this expression to find the dependence on ${g^\mathrm{eff}}$ of the redshift of radiation-matter equality $z_{eq}$ (the subscripts $b$ and $\mathrm{CDM}$ stand for baryons and cold dark matter, respectively):
$$1+z_{eq}=\frac{\rho_b^0+\rho_{\mathrm{CDM}}^0}{\rho_\gamma^0}
\left(1+{g^\mathrm{eff}}\right)^{-1}.$$
So the larger the energy density of ultra-relativistic particles in the Universe, parameterized by the effective number of degrees of freedom ${g^\mathrm{eff}}$, the smaller $z_{eq}$ will be, i.e., the later the equality between radiation and matter will occur. In other words, supposing that the density of non-relativistic particles (baryons + CDM) is well known and fixed, having more relativistic degrees of freedom will shift $z_{eq}$ closer to us and to the CMB decoupling.
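The size of this shift can be illustrated numerically. The following sketch uses assumed fiducial inputs that are not taken from the paper: $\omega_m = 0.14$ and a present photon density $\omega_\gamma \simeq 2.47\times10^{-5}$ (corresponding to $T_\gamma^0 \simeq 2.725$ K):

```python
# Sketch of the shift of radiation-matter equality with extra relativistic
# energy. Assumed fiducial inputs (NOT from the paper): omega_m = 0.14,
# omega_gamma ~ 2.47e-5 (photon density today for T_gamma ~ 2.725 K).
omega_m = 0.14
omega_gamma = 2.47e-5
g_per_nu = (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0)   # ~0.227 per neutrino species

def z_eq(n_eff):
    """Redshift of radiation-matter equality for a given effective N_eff."""
    return omega_m / omega_gamma / (1.0 + g_per_nu * n_eff) - 1.0

print(round(z_eq(3.04)))   # standard scenario
print(round(z_eq(6.0)))    # extra relativistic energy: equality happens later
```

With these fiducial values, going from ${N^\mathrm{eff}}=3.04$ to ${N^\mathrm{eff}}=6$ lowers $z_{eq}$ by roughly a thousand, which is the background effect constrained by the CMB.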
In the standard cosmological scenario, the only contribution to the energy density of relativistic particles other than photons is the one due to the three families of standard neutrinos, with zero chemical potential. The ratio of the neutrino temperature to the photon temperature is $T^0_\nu/T^0_\gamma=(4/11)^{1/3}$, due to the entropy transfer that followed the electron-positron annihilation, shortly after neutrino decoupling. Then
$${g^\mathrm{eff}}=\frac{7}{8}\left(\frac{4}{11}\right)^{4/3}N_{\nu}\simeq 0.23\,N_\nu ,
\label{eq:geff vs N}$$
where $N_\nu=3$ is the number of neutrino families. The energy density in a single neutrino species is:
$$\rho_\nu^{\mathrm{std}}=\frac{7\pi^2}{120} \left(\frac{4}{11}\right)^{4/3}T_\gamma^4.$$
However, several mechanisms that could increase (or even decrease) the energy density of relativistic particles have been proposed. In the presence of some extra relics (such as sterile neutrinos, majorons, axions, etc.) the energy density of radiation would obviously increase. A non-zero chemical potential for neutrinos or an unaccounted-for change of $\rho_\gamma$, due for example to particle decays that increase the photon temperature, would produce the same result. In all cases the effect is the same: a change in ${g^\mathrm{eff}}$, as can be seen by looking at eq. (\[eq:geff\_def\]). It is usual in the literature to parameterize the extra energy density by introducing an effective number of neutrino families ${N^\mathrm{eff}}$, defined as:
$${N^\mathrm{eff}}\equiv \frac{\sum_i \rho_i}{\rho_\nu^{\mathrm{std}}},$$
where again the sum runs over all ultrarelativistic species with the exception of photons. It is clear from this definition that ${N^\mathrm{eff}}$ is actually the energy density in ultrarelativistic species (apart from photons) normalized to the energy density of a single neutrino species with zero chemical potential and standard thermal history. It is easy to show that a relation formally similar to (\[eq:geff vs N\]) holds in the non-standard scenario:
$${g^\mathrm{eff}}=0.23\,{N^\mathrm{eff}}.$$
In addition, it should be noted that even in the standard scenario ${N^\mathrm{eff}}\neq N_\nu=3$, but instead ${N^\mathrm{eff}}\simeq 3.04$. This is due to the fact that neutrino decoupling is not instantaneous, so that neutrinos actually share some of the entropy transfer of the $e^+ e^-$ annihilation, on one side, and to finite-temperature quantum electrodynamics corrections on the other [@Dolgov97; @Mangano02].
It is also useful to introduce the effective number of additional relativistic species $\Delta{N^\mathrm{eff}}$ defined as: $$\Delta {N^\mathrm{eff}}\equiv{N^\mathrm{eff}}-3.04\,,$$ so that $\Delta {N^\mathrm{eff}}=0$ in the standard scenario. Please note that $\Delta {N^\mathrm{eff}}$ can also be negative, for example in very low reheating scenarios [@Giud01].
In this paper we shall consider a scenario in which the radiation content of the Universe at the time of radiation-matter equality is shared among photons, three neutrino families with standard temperature but possibly non-zero chemical potential, and some other relic particle. We shall suppose that the presence of the latter can be completely taken into account through its effect on ${N^\mathrm{eff}}$. This is true if the species has been in its ultrarelativistic regime for most of the history of the Universe. The presence of this extra relic is required for our analysis, in order to circumvent the equalization of neutrino chemical potentials, as explained at the end of section \[sec:Motivation\]. We also assume that the degeneracy parameters for neutrinos and antineutrinos are equal and opposite, and that $e$, $\mu$ and $\tau$ neutrinos all have the same chemical potential.
The extra energy density can thus be split into two distinct contributions, the first due to the non-zero degeneracy parameter of neutrinos and the second due to the extra relic(s): $$\Delta {N^\mathrm{eff}}= \Delta{N^\mathrm{eff}}_\nu(\xi)+\Delta{N^\mathrm{eff}}_{\mathrm{others}}.
\label{eq:DNtot}$$ Following our assumptions, $\Delta{N^\mathrm{eff}}_\nu$ can be expressed as a function of the chemical potential only: $$\Delta{N^\mathrm{eff}}_\nu(\xi)=3\left[\frac{30}{7} \left( \frac{\xi}{\pi} \right)^2 + \frac{15}{7} \left( \frac{\xi}{\pi} \right)^4 \right].
\label{eq:DNnu}$$
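Eq. (\[eq:DNnu\]) is easy to evaluate. A short sketch (Python; the function name is ours) contrasting the BBN-equalized regime with the degeneracy parameters considered in this analysis:

```python
import numpy as np

def delta_neff_nu(xi):
    """Extra effective neutrino species from a degeneracy parameter xi
    shared by the three families, as in eq. (DNnu)."""
    return 3.0 * ((30.0 / 7.0) * (xi / np.pi) ** 2
                  + (15.0 / 7.0) * (xi / np.pi) ** 4)

print(delta_neff_nu(0.07))  # BBN-equalized value: negligible (~0.006)
print(delta_neff_nu(1.1))   # the 2-sigma bound found in this paper: ~1.67
```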
Effect of a non-null chemical potential {#sec:Effect}
=======================================
As anticipated above, the main effect connected to the presence of a non-vanishing degeneracy parameter, is an increase in ${g^\mathrm{eff}}$ (or, equivalently, in ${N^\mathrm{eff}}$). The presence of this extra number of effective relativistic degrees of freedom can in principle be detected from observations of the CMB radiation. The shift of matter-radiation equality has important consequences for the CMB anisotropy spectrum, these being due to the larger amplitude of the oscillations that enter the horizon during the radiation dominated phase, and to a larger early integrated Sachs-Wolfe (ISW) effect. However these effects, basically due to the speeding up of the cosmological expansion, can be similarly produced by the variation of other cosmological parameters, for example by a smaller CDM density.
Moreover since the change in the redshift of matter-radiation equality depends on the ultra-relativistic species only through the quantity ${g^\mathrm{eff}}$, it cannot be used to distinguish between the different species (i.e., it is “flavor blind”), nor to understand if the excess energy density is due to the presence of some unconventional relic, to an extra entropy transfer to photons, or to a non-null chemical potential (i.e., to a lepton asymmetry), or maybe to all of the previous.
However ultrarelativistic particles, other than changing the background evolution, also affect the evolution of perturbations, as was pointed out in [@Ba04] with particular regard to the case of neutrinos. First of all, the high velocity dispersion of ultrarelativistic particles damps all perturbations under the horizon scale. Second, the anisotropic part of the neutrino stress-energy tensor couples with the tensor part of the metric perturbations. It was shown in [@We04] that this reduces the amplitude squared of tensor modes by roughly 30% at small scales. Finally, the authors of Ref. [@Ba04] claimed that the perturbations of relativistic neutrinos produce a distinctive phase shift of the CMB acoustic oscillations. These effects can thus be used to break the degeneracy between ${g^\mathrm{eff}}$ and other parameters. It remains to establish whether they can be used to break the degeneracy between the different contributions to ${g^\mathrm{eff}}$ or not. Even without performing a detailed analysis, it can be seen by looking at the relevant equations in Refs. [@Ba04] and [@We04] that both the absorption of tensor modes and the phase shift depend on the quantity $f_\nu \equiv \rho_\nu/(\rho_\gamma + \rho_\nu)$. The effect of free streaming, even if more difficult to express analytically, is also mainly dependent on the value of $f_\nu$ [@Hu98]. If we consider the case where the three standard neutrinos are the only contribution to the radiation energy density other than photons, but we allow for the possibility of a non-vanishing chemical potential or of a different $T_\nu^0/T_\gamma^0$ ratio, we see again that the changes in the shape of the CMB anisotropy spectrum depend only on ${g^\mathrm{eff}}$ as a whole, as long as eq. (\[eq:rhox\_UR\]) retains its validity, i.e., as long as neutrinos are in their ultra-relativistic regime. Even considering the presence of some additional relic particle $x$ does not seem to change this picture.
Supposing that the other ultrarelativistic particles behave as neutrinos for what concerns the effects under consideration, we can argue that in all cases the relevant quantity is $f_x\equiv\rho_x/\rho_\mathrm{rad}$, so that we are again led to the conclusion that ${g^\mathrm{eff}}$ is the only relevant parameter. This means, for example, that in the case of neutrinos with mass less than $\sim 0.3$ eV, which stay ultrarelativistic until the time of last scattering and for some time after, the effect on CMB perturbations is exactly the same as for massless neutrinos, and every change in their temperature or chemical potential, as well as the presence of an additional, sterile neutrino, is absorbed in ${g^\mathrm{eff}}$ (moreover, we obviously have no way to extract information about their mass).
However the picture changes when considering neutrinos (or other relic particles) that leave the ultrarelativistic regime before matter-radiation decoupling. If the neutrino mass is larger than $\sim 0.3$ eV, the effect of its finite mass is felt by the perturbations that enter the horizon after neutrinos have left the ultrarelativistic regime, because from some point of the evolution onwards the energy density will no longer be given by the approximate formula (\[eq:rhox\_UR\]), but instead by eq. (\[eq:rhox\]), which contains a dependence on mass through the term $\beta$. A side effect of this is that it will not be possible to single out the dependence of $\rho$ on $T$ and $\xi$ as an overall factor, so that these two contributions become distinguishable.
Let us make this clearer with an example. Consider a gravitational wave entering the horizon after neutrinos became non-relativistic, but before matter-radiation decoupling. This wave will be absorbed, according to [@We04], proportionally to $\rho_\nu$. On the other hand, the free streaming length of neutrinos will vary according to the velocity dispersion $\langle v^2\rangle$ [@Freese83; @Ruffini86]. The key point is that, for a gas of non-relativistic particles, $\rho_\nu$ and $\langle v^2\rangle$ depend on $T_\nu$ and $\xi_\nu$ in different ways, so that by measuring the absorption factor and the free streaming length independently, it would be possible, at least in principle, to obtain the values of $T_\nu$ and $\xi_\nu$ without any ambiguity left.
What we have just said applies even more to the LSS data, since even neutrinos with mass greater than $10^{-3}$ eV are in their non-relativistic regime during the late stages of the process of structure formation. We conclude by stressing that one should be careful when parameterizing the lepton asymmetry by means of an effective number of degrees of freedom.
Method {#sec:Method}
======
We used the `CMBFAST` code [@Se96], modified as described in Ref. [@Le99] in order to account for a non-vanishing chemical potential of neutrinos, to compute the temperature (TT) and polarization (TE) CMB spectra for different combinations of the cosmological parameters. As a first step, we added three more parameters, namely the effective number of additional relativistic species ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$, the neutrino degeneracy parameter $\xi$ (both defined in Sec. \[sec:Basic formulae\]) and the neutrino physical energy density $\omega_\nu\equiv\Omega_\nu h^2$ to the standard six-parameter ${\Lambda\mathrm{CDM}}$ model that accounts remarkably well for the WMAP data. As anticipated above, ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ accounts only for the extra energy density due to the presence of additional relic relativistic particles other than the three Standard Model neutrinos. We shall refer to the $(\omega_\nu,\, \xi,\,{\Delta{N^\mathrm{eff}}_{\mathrm{others}}})$ subspace as the “neutrino sector” of the parameter space (although, as we have just noticed, ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ does not refer directly to neutrinos).
With the above-mentioned choice of the parameters we can make a consistency check of our results, by verifying that, imposing the priors $\xi = 0$, $\omega_\nu=0$ and ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0$, we obtain results that are in agreement with those of the WMAP collaboration. Moreover, by choosing a sufficiently wide range for the variation of the three additional parameters, we can check how much their introduction affects the estimation of the best-fit values of the core parameters. Thus we choose to use the following parameters: the physical baryon density $\omega_b \equiv \Omega_b h^2$, the total density of non-relativistic matter $\omega_m\equiv(\Omega_b+\Omega_\mathrm{CDM})h^2$, the scalar spectral index $n$, the optical depth to reionization $\tau$, the overall normalization of the CMB spectrum $A$, the physical neutrino density $\omega_\nu =\Omega_\nu h^2$, the neutrino degeneracy parameter $\xi$ and the extra energy density in non-standard relics ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$. We will be considering the scenario in which the three standard model neutrinos all have the same mass and chemical potential. We take the chemical potential to be positive (this corresponds to an excess of neutrinos over antineutrinos), but since the effects on the CMB do not depend on the sign of $\xi$, we quote the limits that we obtain in terms of its absolute value. We do not include as a free parameter the Hubble constant $H_0$, whose degeneracy with the effective number of relativistic degrees of freedom and with the neutrino mass has been studied in previous works [@El03]. Instead we decided, according to the recent measurements of the Hubble Space Telescope (HST) Key Project [@Fr01], to assume that $h = 0.72$. Moreover, we restrict ourselves to the case of a flat Universe, so that the density parameter of the cosmological constant $\Omega_\Lambda$ is equal to $1-(\omega_m + \omega_\nu)/h^2$.
We are thus dealing with an 8-dimensional parameter space.
Let us discuss in a bit more detail how we deal with priors in the neutrino sector of parameter space, i.e., with information coming from other observations, in particular from BBN. As we have stressed in section \[sec:Motivation\], the standard BBN scenario, together with the equalization of chemical potentials, constrains the neutrino degeneracy parameter to values lower than the ones considered in this paper; on the other hand, this conclusion possibly does not hold in non-standard scenarios where additional relativistic relics are present. However, even non-standard scenarios of this kind usually single out some preferred region in parameter space. At present, several non-standard scenarios that can account for the observed helium abundance exist (see for example Refs. [@DiBari02] and [@DiBari03]), so we adopt a conservative approach and choose not to impose any prior on the neutrino sector, other than the ones that emerge “naturally” as a consequence of our choice of parameters. This does not, however, preclude the possibility of subsequently using the BBN information: once the likelihood function in the neutrino sector has been calculated, it can be convolved with the relevant priors coming from non-standard BBN scenarios.
We span the following region in parameter space: $0.020\leq\omega_b\leq0.028$, $0.10\leq\omega_m \leq 0.18$, $0.9\leq n \leq 1.10$, $0 \leq \tau \leq 0.3$, $0.70\leq A \leq 1.10$, $0\leq\omega_\nu\leq0.30$, $0 \leq |\xi| \leq 2.0$, $0\leq{\Delta{N^\mathrm{eff}}_{\mathrm{others}}}\leq 2.0 $. We shall call this our “(5+3) parameter space”. In order to obtain the likelihood function ${\cal L}({\omega_b,\, \omega_m,\, n,\, \tau,\, A,\, \omega_\nu,\, \xi, {\Delta{N^\mathrm{eff}}_{\mathrm{others}}}})$ in this region, we sample it over a grid consisting of 5 equally spaced points in each dimension. For each point on our grid, corresponding to a combination of the parameters, we compute the likelihood relative to the TT [@WMAP:Hinshaw] and TE [@WMAP:Kogut] angular power spectra observed by WMAP, using the software developed by the WMAP collaboration [@WMAP:methodology] and kindly made publicly available at their website[^1]. To obtain the likelihood function for a single parameter, we should marginalize over the remaining ones. However, for simplicity we approximate the multi-dimensional integration required for the marginalization procedure with a maximization of the likelihood, as is common practice in this kind of likelihood analysis. This approximation relies on the fact that the likelihood for cosmological parameters is expected to have a Gaussian shape (at least in the vicinity of its maximum) and that integration and maximization are known to be equivalent for a multivariate Gaussian distribution.
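As an illustration, the maximization-based ("profile likelihood") marginalization can be sketched as follows. The two-parameter grid and the $\chi^2$ surface below are toy values invented for illustration; they are not the paper's actual grids or data.

```python
import math

# Toy 2-parameter chi^2 surface on a coarse grid (5 points per dimension,
# mirroring the paper's grid sampling; the surface itself is illustrative).
grid_a = [0.0, 0.5, 1.0, 1.5, 2.0]
grid_b = [0.0, 0.5, 1.0, 1.5, 2.0]
chi2 = {(a, b): (a - 0.7) ** 2 + (b - 1.2) ** 2 + 0.5 * a * b
        for a in grid_a for b in grid_b}

# "Marginalization by maximization": the profile likelihood of parameter a
# is the maximum of L = exp(-chi^2 / 2) over the remaining parameter(s).
profile_a = {a: max(math.exp(-0.5 * chi2[(a, b)]) for b in grid_b)
             for a in grid_a}
best_a = max(profile_a, key=profile_a.get)
```

For a Gaussian likelihood, maximizing over the nuisance dimensions gives the same one-dimensional curve (up to normalization) as integrating over them, which is the justification used above.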
According to Bayes’ theorem, in order to interpret the likelihood functions as probability densities, they have to be inverted through a convolution with the relevant priors, representing our knowledge of and assumptions about the parameters we want to constrain. Here we shall assume uniform priors, i.e., we will assume that all values of the parameters are equally probable.
For each of the core parameters, we quote the maximum likelihood value (which we shall also refer to as the “best-fit” value) over the grid and the expectation value over the marginalized distribution function. We also quote the best chi-square value $\chi_0^2$ (we recall that $\chi^2\equiv-2\ln{\cal L}$) divided by the number of degrees of freedom, which is equal to the number of data points (for WMAP, this is 1348) minus the number of parameters. For the parameters of the neutrino sector, we quote the maximum likelihood and the expectation value as well, and in addition we report a $2\sigma$ confidence interval. Using a Bayesian approach, we define the 95% confidence limits as the values at which the marginalized likelihood is equal to $\exp[-(\chi_0^2-4)/2]$, i.e., the values at which the likelihood is reduced by a factor $\exp(2)$ with respect to its maximum value [^2]. There is one exception to this procedure, namely, when the maximum likelihood value for a positively defined parameter (such as $\omega_\nu$ or the absolute value of $\xi$), let us call it $\theta$, is equal to zero. In this case, instead of computing the expectation value, we just give an upper bound. In order to do this, we compute the cumulative distribution function ${\cal C}(\theta)=\left(\int_0^{\theta}{\cal L}(\bar\theta) d\bar\theta\right) / \left(\int_0^\infty{\cal L}(\bar\theta) d\bar\theta\right)$ and quote as the upper limit at the 95% confidence level the value of $\theta$ at which ${\cal C}(\theta) = 0.95$.
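A minimal sketch of the cumulative-distribution upper bound, using a toy half-Gaussian likelihood on a fine grid (the paper's actual grids have only 5 points per dimension; the width $\sigma=0.05$ is illustrative). For this toy case the 95% bound should land near $1.96\,\sigma$.

```python
import math

# Toy half-Gaussian likelihood for a positively defined parameter theta
theta = [0.001 * i for i in range(301)]             # theta in [0, 0.3]
like = [math.exp(-0.5 * (t / 0.05) ** 2) for t in theta]

# Cumulative distribution C(theta), normalized so that C(theta_max) = 1
total = sum(like)
cdf, cum = [], 0.0
for w in like:
    cum += w
    cdf.append(cum / total)

# 95% upper limit: smallest theta with C(theta) >= 0.95
upper95 = next(t for t, c in zip(theta, cdf) if c >= 0.95)
```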
Once we have obtained constraints on $\omega_\nu$, $\xi$ and ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$, we translate them to limits on the neutrino mass $m_\nu$, the lepton asymmetry $L$ and the extra number of effective relativistic species $\Delta{N^\mathrm{eff}}$, using eqs. (\[eq:DNtot\]) and (\[eq:DNnu\]) together with the following relations [[@Freese83; @Ruffini86; @La03]]{}: $$\begin{aligned}
&&\Omega_\nu h^2 = \sum_\nu \frac{m_\nu F(\xi_\nu)}{93.5\,\mathrm{eV}},\\[0.2 cm]
&&L\equiv \sum_\nu \frac{n_\nu-n_{\bar \nu}}{n_\gamma}= \nonumber\\[0.2cm]
&&=\frac{1}{12\zeta(3)}\left[\sum_\nu\left(\xi_\nu^3+\pi^2 \xi_\nu\right)\left(\frac{T^0_\nu}{T^0_\gamma}\right)^3\right],
\end{aligned}$$ where $$\begin{gathered}
F(\xi)\equiv\frac{2}{3\zeta(3)}\left[ \sum_{k=1}^\infty (-1)^{k+1}\frac{e^{+ k \xi}+e^{-k \xi}}{k^3} \right] = \\
= \frac{1}{3\zeta(3)}\left[\frac13\,\xi^3+\frac{\pi^2}{3}\xi
+4\sum_{k=1}^\infty(-1)^{k+1}\,\displaystyle\frac{e^{-k\xi}}{k^3}\right].\end{gathered}$$
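These relations are straightforward to evaluate numerically. Note that the first series representation of $F(\xi)$ diverges term by term for $\xi\neq 0$ (it is to be read as an analytic continuation), so the sketch below uses the second, convergent form. The function names are ours; the numerical check against $|\xi|=0.56 \to |L|\simeq0.43$ uses values quoted in the Results section, and $(T^0_\nu/T^0_\gamma)^3 = 4/11$ is assumed.

```python
import math

ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)

def F(xi, kmax=500):
    """F(xi) from the second, numerically convergent form above."""
    x = abs(xi)
    tail = sum((-1) ** (k + 1) * math.exp(-k * x) / k ** 3
               for k in range(1, kmax + 1))
    return (x ** 3 / 3 + math.pi ** 2 * x / 3 + 4 * tail) / (3 * ZETA3)

def lepton_asymmetry(xi, n_families=3):
    """L for n degenerate families, with (T0_nu / T0_gamma)^3 = 4/11."""
    return n_families * (xi ** 3 + math.pi ** 2 * xi) * (4 / 11) / (12 * ZETA3)

def omega_nu(m_nu_eV, xi, n_families=3):
    """Omega_nu h^2 for n degenerate families of mass m_nu (in eV)."""
    return n_families * m_nu_eV * F(xi) / 93.5
```

As a sanity check, $F(0)=1$, so the standard relation $\Omega_\nu h^2 = \sum m_\nu / 93.5\,\mathrm{eV}$ is recovered for vanishing chemical potential.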
Results and discussion {#sec:Results}
======================
We start our analysis by looking at the effect of the introduction of the additional parameters on the estimation of the core parameters $({\omega_b,\,\omega_m,\,n,\, \tau,\, A})$. First of all, we check that, imposing the priors $\xi = 0$, $\omega_\nu = 0$ and ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0$, our results are in good agreement with those of the WMAP collaboration (we refer to the values quoted in Table I of Ref. [@WMAP:parameters]). The mean and maximum likelihood values that we obtain for each parameter are summarized in the second and third column of Table \[tab:core\_summary\]. We see that in all cases our results lie within the 68% confidence interval of the WMAP values. Then we remove the prior on $\omega_\nu$, while still retaining the ones on $\xi$ and ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$. The maximum likelihood model still has $\omega_\nu=0$. The best-fit values of the core parameters are left unchanged, and the same happens for the best-fit $\chi^2$, thus suggesting that a non-zero $\omega_\nu$ is not required in order to improve the goodness of fit. The results for the core parameters are summarized in the fourth and fifth column of Table \[tab:core\_summary\].
  Priors: columns (1)–(2): $\xi=0$, $\omega_\nu=0$, ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0$; columns (3)–(4): $\xi=0$, ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0$; columns (5)–(6): no priors. Odd columns give mean values, even columns maximum likelihood values.

  Parameter                      (1)     (2)     (3)     (4)     (5)     (6)
  Baryon density, $\omega_b$     0.024   0.024   0.024   0.024   0.023   0.022
  Matter density, $\omega_m$     0.15    0.16    0.15    0.16    0.14    0.14
  Hubble constant[^3], $h$       0.72    0.72    0.72    0.72    0.72    0.72
  Spectral index, $n$            1.00    1.00    1.00    1.00    0.98    0.95
  Optical depth, $\tau$          0.13    0.075   0.13    0.075   0.12    0.075
  Amplitude, $A$                 0.8     0.8     0.8     0.8     0.8     0.7
  $\chi^2/\nu$
Finally, we compute the likelihood over our whole parameter space. The results for the core parameters are summarized in the sixth and seventh column of Table \[tab:core\_summary\]. The maximum likelihood model over the grid has $({\omega_b,\, \omega_m,\, n,\, \tau,\, A,\, \omega_\nu,\, \xi, {\Delta{N^\mathrm{eff}}_{\mathrm{others}}}})=(0.022,\,0.14,\,0.95,\,0.075,\,0.7,\,0,\,0.5,\,0)$. We see that this time the best-fit values for the five core parameters are slightly changed with respect to the standard case. The changes in $\omega_m$ and $n$ could seem strange at first sight, since intuitively one would expect the opposite behaviour, i.e., a shift to larger values for both: a larger $\omega_m$ could keep the time of matter-radiation equality fixed, while a larger $n$ would increase the power on small scales, thus leaving more room for neutrino free streaming. The reason is that the goodness of fit of a particular model with respect to the WMAP data is mainly determined by its ability to fit the first and second peak. Jointly increasing $\omega_m$, $n$, $\xi$ and ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ would increase the height of the first peak, which can then be lowered back by decreasing the overall amplitude $A$. We show in fig. \[fig:compare\_spectra\] a comparison of the best-fit spectrum in the $(\xi = 0,\,{\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0)$ subspace with the best-fit spectrum over the whole space.
Now let us turn our attention to the neutrino sector of parameter space. The best-fit model over the $(\xi=0,\,{\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0)$ subspace of the grid has $\omega_\nu=0$, and the $\chi^2$ changes from 1437 to 1541 when going from $\omega_\nu = 0$ to the next value in our grid, $\omega_\nu = 0.075$. We can compute an upper bound for $\omega_\nu$, but since the region in which ${\cal L}(\omega_\nu)$ significantly differs from zero lies entirely between the first two values in our grid, $\{0, 0.075\}$, the result is rather dependent on the particular interpolation scheme we choose. Using a simple, first-order interpolation scheme, we find the bound $\omega_\nu < 0.0045$ (95% CL), corresponding to $m_\nu<0.14$ eV, while using higher-order interpolation schemes the bound weakens to $\omega_\nu < 0.015$ ($m_\nu<0.47$ eV). This result should therefore be taken with caution, and we shall simply consider it as an indication that, although we are using a grid-based method with a rather wide grid spacing rather than the more sophisticated Markov Chain Monte Carlo (MCMC) method [@Chris01; @Lewis02; @book:Gamerman], we basically obtain the same results as the WMAP collaboration, namely, $\omega_\nu\le0.0072$ [@WMAP:parameters], when imposing the priors $\xi =0,\,{\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0$.
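One way to realize a first-order interpolation scheme that reproduces a bound of this size is to interpolate $\chi^2$ (i.e. $\ln{\cal L}$) linearly between the first two grid points, which makes the likelihood a decaying exponential. Whether this is exactly the scheme used is our assumption; the $\chi^2$ values are the ones quoted above.

```python
import math

chi2_0, chi2_1 = 1437.0, 1541.0   # chi^2 at omega_nu = 0 and 0.075 (quoted above)
step = 0.075                      # grid spacing in omega_nu

# Linear interpolation of chi^2 gives L(omega_nu) ~ exp(-lam * omega_nu),
# with decay rate lam = (Delta chi^2 / 2) per grid step.
lam = (chi2_1 - chi2_0) / 2 / step

# 95% point of the cumulative distribution: 1 - exp(-lam * omega) = 0.95
upper95 = math.log(20) / lam
```

With these numbers the 95% upper limit comes out close to the quoted $\omega_\nu < 0.0045$.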
We make a second check by imposing $\omega_\nu=0,\,\xi=0$ and computing the 95% confidence region for ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$. We find that $0\le{\Delta{N^\mathrm{eff}}_{\mathrm{others}}}\le1.4$. Since the degeneracy parameter vanishes, the same limit applies to $\Delta{N^\mathrm{eff}}={\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$. This is in good agreement with the results quoted in Ref. [@Cr03], although it is more restrictive, probably because we impose a stronger prior on $h$, keeping it fixed at 0.72. This is confirmed by a visual inspection of fig. 2 of Ref. [@Cr03].
The best-fit value for the neutrino density over the whole parameter space is still $\omega_\nu=0$, but this time $\chi^2$ changes from 1431 to 1441 as $\omega_\nu$ goes from 0 to 0.075, so that the probability density spreads out to higher values of $\omega_\nu$ with respect to the preceding case. The result is that the upper bound rises to $\omega_\nu<0.044$, quite independently of the interpolation scheme used. This is probably related to the already observed trend that, when the energy density of relativistic relics is increased, the possibility of larger neutrino masses reopens [@Ha03; @El03; @Ha04; @Lesg01].
The maximum likelihood value for the degeneracy parameter is $|\xi| = 0.5$, while the expectation value over the distribution function is $|\xi| = 0.56$ (corresponding to $|L|=0.43$). At the $2\sigma$ level, the degeneracy parameter is constrained to the range $0\le|\xi|\le1.07$, corresponding to $0\le |L| \le 0.9$. As for the additional number of relativistic relics, the maximum likelihood model has ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0$, and the expectation value over the marginalized probability function is ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0.3$. The 95% confidence region is $-0.7\le{\Delta{N^\mathrm{eff}}_{\mathrm{others}}}\le1.3$. This opening towards smaller, negative values of ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ can be ascribed to the fact that such values lower the acoustic peaks, which can be compensated by a larger degeneracy parameter. The quoted bounds on $\omega_\nu$ and $\xi$ translate into the following bound on the neutrino mass: $m_\nu<1.2$ eV (95% CL). In fig. \[fig:6Like\_5+3\] we show the likelihood functions, while in Table \[tab:nu\_summary\] we summarize our results for the basic and derived parameters describing the neutrino sector.
We remark that, although the maximum likelihood model over the whole grid has $\omega_\nu=0$, this is not in contradiction with our choice of considering $\xi$ and ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ as independent parameters, in spite of the fact that in this limit they should be degenerate. The basic reason is that, as can be seen from the likelihood curves, models with $\omega_\nu>0$ can be statistically significant. For these models, $\Delta{N^\mathrm{eff}}_\nu$ and ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ are not exactly degenerate.
In order to better study the partial degeneracy between $|\xi|$ and ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$, and thus to understand how the value of ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ affects the estimation of the degeneracy parameter, we compute the likelihood curve for the degeneracy parameter at particular values of ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$. The results are shown in Table \[tab:xi\_vs\_DNoth\]. From this table a clear trend appears, namely that for large values of ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$, smaller values of $|\xi|$ are preferred, and vice versa. As already noted, this is probably related to the fact that when ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$ is increased, there remains less room for the extra energy density of neutrinos coming from the non-vanishing degeneracy parameter. It is worth noting that, for ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}\simeq 0$, the case $\xi=0$ lies outside the 95% confidence region. We stress that, according to theoretical predictions, in models of degenerate BBN with “3+1” neutrino mixing, if chemical potentials are large ($\xi>0.05$), the production of sterile neutrinos is suppressed, effectively resulting in ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}=0$ [@DiBari02; @DiBari03].
  Parameter                                                                                         Value
  Physical neutrino density, $\omega_\nu$                                                           $<0.044$
  Degeneracy parameter, $|\xi|$                                                                     $0.60^{+0.50}_{-0.60}$
  Neutrino mass in eV, $m_\nu$                                                                      $<1.2$
  Lepton asymmetry, $|L|$                                                                           $0.46^{+0.43}_{-0.46}$
  Effective number of additional relativistic relics, ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$  $0.30\pm1.0$
  Effective number of additional relativistic relics, $\Delta N^{\mathrm{eff}}$                     $0.70^{+1.40}_{-1.15}$
  ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$   $|\xi|$
  0                                              $0.65\pm0.58$
  0.5                                            $0.42^{+0.58}_{-0.42}$
  1.0                                            $0.18^{+0.58}_{-0.18}$
  1.5                                            $<0.53$
  2.0                                            $<0.29$
Conclusions and perspectives {#sec:Conclusions}
============================
In this paper, we have studied the possibility of constraining the lepton asymmetry of the Universe, the sum of neutrino masses, and the energy density of relativistic particles using the WMAP data, in the framework of an extended flat ${\Lambda\mathrm{CDM}}$ model. Although the current amount of cosmological data can be rather coherently explained by the standard picture with three thermally distributed neutrinos, vanishing lepton asymmetry and no additional particle species, we think it is useful to explore how non-standard scenarios are constrained by the cosmological observables. We have concentrated our attention on models with a (possibly large) net lepton asymmetry (corresponding to a non-zero degeneracy parameter for neutrinos). Such models are motivated in the framework of extensions of the standard model of particle physics, and can possibly explain the observed amount of baryon asymmetry in the Universe. With this in mind, we have also included the energy density of relativistic species as an independent parameter. In this last aspect, our approach differs from previous ones, where the two parameters were considered degenerate. We have remarked that, although an approximate degeneracy between the two exists, it could be broken by finite-mass effects, especially in the case of neutrino masses saturating the tritium beta decay bound.
When considering perfect lepton symmetry, our results are in agreement with previous ones. In the more general case, we have found that, at the $2\sigma$ level, the bounds on the degeneracy parameter and lepton asymmetry are respectively $0\le |\xi| \le 1.1$ and $0\le |L| \le 0.9$. The effective number of additional relativistic species (excluding the contribution from the non-standard thermal distribution of neutrinos) is bounded as follows (95% CL): $-0.7~\le~{\Delta{N^\mathrm{eff}}_{\mathrm{others}}}~\le~1.3$. Including also neutrinos, this reads $-0.45\le\Delta{N^\mathrm{eff}}\le2.10$. This limit is much more restrictive than the ones found in similar analyses [@Cr03; @Pi03]. This is probably due to the fact that we assume a very strong prior on the Hubble parameter, fixing $h=0.72$. The physical explanation is that the later matter-radiation equality due to $\Delta{N^\mathrm{eff}}>0$ can be compensated by making $\omega_m=\Omega_mh^2$ larger, and vice versa. This gives rise to a partial degeneracy between $\Delta{N^\mathrm{eff}}$ and $h$, thus making the constraints on both parameters looser unless some external prior is imposed to break the degeneracy.
We also find that the data are compatible with $\omega_\nu$ and $m_\nu$ equal to $0$, with upper bounds (95% CL) $\omega_\nu\le 0.044$ and $m_\nu\le 1.2\,\operatorname{eV}$. These bounds are larger than the ones usually found, probably due on the one hand to the presence of a larger energy density of ultra-relativistic particles, and on the other hand to the wide grid spacing we have used.
The usual scenario, with $|L|=0$ and $\Delta{N^\mathrm{eff}}=0$, is then compatible with the WMAP data at the $2\sigma$ level; however, the likelihood curves show that alternative scenarios with $\xi\simeq 0.6$ and $\Delta{N^\mathrm{eff}}\simeq 0.7$ have a larger likelihood given the data. In fact, the standard scenario lies outside the $1\sigma$ confidence region. Even if this is not enough to definitely claim evidence of exotic physics in the CMB anisotropy spectrum, we find it interesting that non-standard models are not only not ruled out, but actually preferred by the WMAP data.
We have also studied how the results on the lepton asymmetry change when more precise information on the energy density of relativistic particles is given. We have shown that the smaller the extra energy density, the larger the allowed lepton asymmetry. In particular, for models with vanishing ${\Delta{N^\mathrm{eff}}_{\mathrm{others}}}$, perfect lepton symmetry is ruled out at the $2\sigma$ level. This is probably due to the approximate degeneracy between $\Delta {N^\mathrm{eff}}$ and $\xi$. The issue of the exact extent of this degeneracy is still open, and we think that it deserves deeper attention. It would be desirable to investigate whether future precision CMB experiments, and in particular the PLANCK mission, can clearly disentangle the two parameters. Even more promising for this purpose is the power spectrum of LSS, as we have stressed in Sec. \[sec:Effect\].
The results presented in this paper have been derived assuming that the three neutrino families all have the same mass and chemical potential, owing to the structure of the `CMBFast` code used to generate the theoretical power spectra. The role of a non-uniform distribution of the lepton asymmetry between the different families remains to be investigated. Similarly, we did not investigate models in which chemical equilibrium between neutrinos and antineutrinos does not hold, implying $\xi_\nu \neq - \xi_{\bar \nu}$.
We conclude that the WMAP data still cannot exclude the presence of non-standard physics in the early evolution of the Universe. In particular, they do not exclude the presence of a large neutrino asymmetry, and consequently they do not rule out exotic leptogenesis scenarios where a large lepton number is produced.
ML would like to thank Giovanni Montani for useful discussion and comments.
[^1]: <http://lambda.gsfc.nasa.gov/>
[^2]: The 95% confidence level defined in this way is not in general equal to the $2\sigma$ region, defined by computing the variance of the probability distribution. However, the two are equal for a Gaussian probability density. As we shall see, almost all the marginalized distributions have a nearly Gaussian shape. When this is not so, we shall point it out.
[^3]: The value of the Hubble constant is kept fixed to $h=0.72$.
---
abstract: 'We consider revenue maximization in online auction/pricing problems. A seller sells an identical item in each period to a new buyer, or a new set of buyers. For the online pricing problem, we show regret bounds that scale with the *best fixed price*, rather than the range of the values. We also show regret bounds that are *almost scale free*, and match the offline sample complexity, when comparing to a benchmark that requires a *lower bound on the market share*. These results are obtained by generalizing the classical learning from experts and multi-armed bandit problems to their *multi-scale* versions. In this version, the reward of each action is in a *different range*, and the regret with respect to a given action scales with its *own range*, rather than the maximum range.'
author:
- |
Sébastien Bubeck sebubeck@microsoft.com\
Microsoft Research,\
1 Microsoft Way,\
Redmond, WA 98052, USA. Nikhil Devanur nikdev@microsoft.com\
Microsoft Research,\
1 Microsoft Way,\
Redmond, WA 98052, USA. Zhiyi Huang zhiyi@cs.hku.hk\
Department of Computer Science,\
The University of Hong Kong,\
Pokfulam, Hong Kong. Rad Niazadeh rad@cs.stanford.edu\
Department of Computer Science,\
Stanford University,\
Stanford, CA 94305, USA.
bibliography:
- 'bibliography.bib'
title: 'Multi-scale Online Learning and its Applications to Online Auctions'
---
online learning, multi-scale learning, auction theory, bandit information, sample complexity
---
abstract: 'We study the leaf-to-leaf distances on one-dimensionally ordered, full and complete $m$-ary tree graphs using a recursive approach. In our formulation, unlike in traditional graph theory approaches, leaves are ordered along a line emulating a one-dimensional lattice. We find explicit analytical formulae for the sum of all paths for arbitrary leaf separation $r$ as well as the average distances and the moments thereof. We show that the resulting explicit expressions can be recast in terms of Hurwitz-Lerch transcendents. Results for periodic trees are also given. For incomplete random binary trees, we provide first results by numerical techniques; we find a rapid drop of leaf-to-leaf distances for large $r$.'
author:
- 'Andrew M. Goldsborough'
- 'S. Alex Rautu'
- 'Rudolf A. Römer'
title: 'Leaf-to-leaf distances and their moments in finite and infinite ordered $m$-ary tree graphs'
---
Introduction
============
The study of graphs and trees, i.e. objects (or *vertices*) with pairwise relations (or *edges*) between them, has a long and distinguished history throughout nearly all the sciences. In computer science, graphs, trees and their study are closely connected, e.g. with sorting and search algorithms [@SedF13]; in chemistry the Wiener number is a topological index intimately correlated with, e.g., chemical and physical properties of alkane molecules [@Wie47]. In physics, graphs are equally ubiquitous, not least because of their immediate usefulness for systematic perturbation calculations in quantum field theories [@PesS95]. In mathematics, graph theory is in itself an accepted branch of mainstream research and graphs are a central part of the field of discrete mathematics [@Ros12].

An important concept that appears in all these fields is the *distance* in a graph, i.e. the number of edges connecting two vertices [@LecNS13; @SzeWW11; @Wan10]. For trees, i.e. undirected graphs in which any two vertices are connected by only one path, various results exist [@Jac88; @KirPS89; @KirPS94], for example, that compute the distance from the top of the tree to its leaves.

Tree-like structures have recently also become more prominent in quantum physics of interacting particles with the advent of so-called tensor network methods [@Sch11]. These provide elegant and powerful tools for the simulation of low dimensional quantum many-body systems. In a recent publication [@GolR14] we show that certain correlation functions and measures of quantum entanglement can be constructed by a holographic distance and connectivity dependence along a tree network connecting certain leaves [@EveV11]. In these quantum systems, the leaves are ordered according to their physical position, for example the location of magnetic ions in a quantum wire. This ordering imposes a new restriction on the tree itself and the lengths which become important are leaf-to-leaf distances across the ordered tree.
We emphasize that these distances therefore correspond to quite different measures than those studied in the various sciences mentioned before. We also note that in tensor networks the leaf-to-leaf distance is referred to as the *path length* [@EveV11], but in graph theory this term usually refers to the sum of the levels of each of the vertices in the tree [@SedF13].
In the present work, we shall concentrate on full and complete trees that have the same structure as regular tree tensor networks [@SilGMR10; @GerSRF14]. We derive the average leaf-to-leaf distances for varying leaf separation with leaves ordered in a one-dimensional line as shown e.g. in Fig. \[fig-binarytree\](a) for a binary tree [^1].
\(a) ![image](complete_definitions_small.eps){width="0.95\columnwidth"} (b) ![image](complete_BT_box.eps){width="0.95\columnwidth"}
\[fig-binarytree\]
The method is then generalized to $m$-ary trees and the moments of the leaf-to-leaf distances. Explicit analytical results are derived for finite and infinite trees. We also consider the case of periodic trees. We then illustrate how such properties may arise in the field of tensor networks. Last, we numerically study the case of incomplete random trees, which is closest related to the tree tensor networks considered in Ref. [@GolR14].
Average leaf-to-leaf distance in complete binary trees {#sec-binary-averagepathlength}
======================================================
Recursive formulation {#sec-binary-averagepathlength-recursion}
---------------------
Let us start by considering the complete binary tree shown in Figure \[fig-binarytree\](a). It is a connected graph where each vertex is $3$-valent and there are no loops. The *root node* is the vertex with just two degrees at the top of Figure \[fig-binarytree\](a). The rest of the vertices each have two *child* nodes and one parent. A *leaf node* has no children. The *depth* of the tree denotes the number of vertices from the root node with the root node at depth zero. With these definitions, a binary tree is *complete* or *perfect* if all of the leaf nodes are at the same depth and all the levels are completely filled. We now denote by the *level*, $n$, a complete set of vertices that have the same depth. These are enumerated with the root level as $0$. We will refer to a *level $n$ tree* as a complete tree where the leaves are at level $n$. The *leaf-to-leaf distance*, $\ell$, is the number of edges that are passed to go from one leaf node to another (cp. Figure \[fig-binarytree\](a)).
Let us now impose an *order* on the tree of Figure \[fig-binarytree\](a) such that the leaves are enumerated from left to right to indicate position values, $x_i$, for leaf $i$. Then we can define a leaf *separation* $r=|x_{i} - x_{j}|$ for any pair of leaves $i$ and $j$. This is equivalent to the notion of distance on a one-dimensional physical lattice. Let the *length* $L$ be the length of the lattice, i.e. number of leaf nodes. Then for such a complete binary tree, we have $L = 2^{n}$.
Clearly, there are many pairs of leaves separated by $r$ from each other (cp. Figure \[fig-binarytree\](a)). Let $\{\ell_{n}(r)\}$ denote the set of all corresponding leaf-to-leaf distances. We now want to calculate the average leaf-to-leaf distance ${\cal L}_{n}(r)$ from the set $\{\ell_{n}(r)\}$. We first note that for a level $n$ tree the number of possible paths with separation $r$ is $2^{n}-r$. In Figure \[fig-binarytree-maximalpaths\](b), we see that any complete level $n$ tree can be decomposed into two level $n-1$ sub-trees each of which contains $2^{n-1}$ leaves. Let ${\cal S}_n(r)$ denote the sum of all possible leaf-to-leaf distances encoded in the set $\{\ell_{n}(r)\}$. The structure of the decomposition in Figure \[fig-binarytree-maximalpaths\](b) suggests that we need to distinguish two classes of separations $r$. First, for $r<2^{n-1}$, paths are either completely contained within each of the two level $n-1$ trees or they bridge from the left level $n-1$ tree to the right level $n-1$ tree. Those which are completely contained sum to $2{\cal S}_{n-1}(r)$. For those paths with separation $r$ that bridge across the two level $(n-1)$ trees, there are $r$ such paths and each has length $2n$. Next, for $r\geq 2^{n-1}$, paths no longer fit into a level $n-1$ tree and always bridge from left to right. Again, each such path is $2n$ long and there are $L-r=2^{n}-r$ such paths. Putting it all together, we find that $${\cal S}_{n}(r) = \left\{ \begin{array}{l l}
2{\cal S}_{n-1}(r) + 2nr, & \quad r < 2^{n-1}, \\
2n(2^{n} - r), & \quad r \geq 2^{n-1}.
\end{array} \right.
\label{eqn-rec-sumpathlength}$$ for $n>1$ and with ${\cal S}_1(1)=2$. Dividing by the total number of possible paths with separation $r$ then gives the desired average leaf-to-leaf distance $${\cal L}_{n}(r) \equiv \frac{{\cal S}_{n}(r)}{2^n-r}.
\label{eqn-rec-avgpathlength}$$
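The recursion can be checked against a direct enumeration over the ordered leaves. The bit-twiddling identity used below (the distance between leaves with 0-indexed positions $x$ and $y$ is twice the position of the highest bit in which $x$ and $y$ differ, since the path climbs to their lowest common ancestor and back) is our own shortcut, not part of the text.

```python
def S(n, r):
    """Sum of all leaf-to-leaf distances at separation r in a complete
    binary tree with leaves at level n, via the recursion above."""
    if r >= 2 ** (n - 1):
        return 2 * n * (2 ** n - r)
    return 2 * S(n - 1, r) + 2 * n * r

def avg_distance(n, r):
    """Average leaf-to-leaf distance L_n(r) = S_n(r) / (2^n - r)."""
    return S(n, r) / (2 ** n - r)

def S_brute(n, r):
    """Direct enumeration: the path between leaves x and x + r has length
    2 * (index of the highest differing bit + 1) = 2 * bit_length(x XOR (x+r))."""
    return sum(2 * ((x ^ (x + r)).bit_length()) for x in range(2 ** n - r))
```

For example, the level-2 tree with $r=1$ gives path lengths $2,4,2$, hence ${\cal S}_2(1)=8$ and ${\cal L}_2(1)=8/3$.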
An explicit expression {#sec-binary-averagepathlength-expression}
----------------------
As long as $r < 2^{n-1}$, equation (\[eqn-rec-sumpathlength\]) can be recursively expanded, i.e.
$$\begin{aligned}
\label{eqn-ana-Sexpand}
{\cal S}_{n}(r) & = 2{\cal S}_{n-1}(r) + 2nr\\
& = 2 \left[ 2{\cal S}_{n-2}(r) + 2(n-1)r \right] + 2nr \\
& = \ldots \nonumber\end{aligned}$$
After $\nu$ such expansions, we arrive at $${\cal S}_{n}(r) = 2^{\nu} {\cal S}_{n-\nu}(r) + \sum_{k=0}^{\nu-1} 2^{k+1} (n-k) r.
\label{eqn-ana-Snsum}$$ The expansion can continue while $r < 2^{n-\nu-1}$. It terminates when $n-\nu$ becomes so small that the leaf separation $r$ is no longer contained within the level-$(n-\nu)$ tree. Hence the smallest permissible value of $n-\nu$ is given by $$n_c (r)= \lfloor \log_2 r \rfloor +1 ,
\label{eqn-ana-nc}$$ where $\lfloor \cdot \rfloor$ denotes the floor function. For clarity, we will suppress the $r$ dependence, i.e. we write $n_c\equiv n_c(r)$ in the following. Continuing with the expansion of ${\cal S}_n(r)$ up to the $n_{c}$ term, we find
$$\begin{aligned}
{\cal S}_{n}(r) & = 2^{n-n_c} {\cal S}_{n_c}(r) + \sum_{k=0}^{n-n_c-1} 2^{k+1}(n-k)\ r \label{eqn-ana-Sn-sum-1}\\
& = 2^{n-n_c} {\cal S}_{n_c}(r) + [2^{n-n_c+1}(n_c+2)-2(n+2)]\ r \, .
\label{eqn-ana-Snrec}\end{aligned}$$
Details for the summations occurring in Equation (\[eqn-ana-Snrec\]) are given in Appendix \[sec-series\]. From Equation (\[eqn-rec-sumpathlength\]), we have ${\cal S}_{n_c}(r) =2n_c(2^{n_c} - r)$, so equation (\[eqn-ana-Snrec\]) becomes $${\cal S}_{n}(r)= 2^{n+1} ( n_c + 2^{1-n_c}r ) -2(n+2)r \, .
\label{eqn-ana-SnN}$$ Hence the average leaf-to-leaf distances are given by $${\cal L}_n(r)= \frac{2}{2^n-r}\left[ 2^{n} ( n_c + 2^{1-n_c}r ) -(n+2)r \right] .
\label{eqn-ana-Lq}$$
(a)![image](n20tree.eps){width="45.00000%"} (b)![image](lim_m_1048575.eps){width="45.00000%"}
In the limit of $n \rightarrow\infty$ for fixed $r$, we have $$\lim_{n\rightarrow \infty} {\cal L}_n(r)\equiv {\cal L}_{\infty}(r)= 2 \left( n_c + 2^{1-n_c}r \right).
\label{eqn-ana-Lq-inf}$$ We emphasize that ${\cal L}_{\infty}(r) < \infty$ $\forall\ r < \infty$.
In Figure \[fig-bin-Lr\](a) we show finite and infinite leaf-to-leaf distances ${\cal L}_n(r)$. We see that whenever $r = 2^i$, $i \in \mathbb{N}$, we have a cusp in the ${\cal L}_n(r)$ curves. Between these points, the $\lfloor \cdot \rfloor$ function enhances deviations from the leading $\log_2 r$ behavior. This behavior stems from the self-similar structure of the tree. Consider a sub-tree with $\nu$ levels; the largest separation that can occur in that sub-tree is $r = 2^{\nu}$, which has average distance $2\nu$. When $r$ becomes larger than the sub-tree size, the leaf-to-leaf distance can no longer be $2\nu$ but is always larger, so there is a cusp where this distance is removed from the possibilities. The average distance is constant for $r \geq \frac{L}{2}$ because every such pair of leaves is connected by the maximal path through the root, which is clear from (\[eqn-rec-sumpathlength\]).
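As a numerical sanity check, the recursion (\[eqn-rec-sumpathlength\]) and the closed form (\[eqn-ana-Lq\]) can be compared with a brute-force enumeration of leaf pairs: with the leaves of a complete level-$n$ binary tree labelled $0,\dots,2^n-1$, the distance between two leaves is twice the number of halvings needed to reach their lowest common ancestor. A short Python sketch (function names are ours, not from the text; $n_c$ is computed with an integer loop to avoid floating-point $\log_2$ rounding at exact powers of two):

```python
def dist(a, b):
    # distance between leaves a and b: twice the number of levels
    # up to their lowest common ancestor
    t = 0
    while a != b:
        a //= 2
        b //= 2
        t += 1
    return 2 * t

def S(n, r):
    # recursion of eqn-rec-sumpathlength; the second branch also serves
    # as the base case, since it always applies once n is small enough
    if r >= 2 ** (n - 1):
        return 2 * n * (2 ** n - r)
    return 2 * S(n - 1, r) + 2 * n * r

def L_avg(n, r):
    # closed form eqn-ana-Lq, with n_c = floor(log2(r)) + 1 in integers
    nc, power = 1, 2
    while power <= r:
        nc += 1
        power *= 2
    return 2 * (2 ** n * (nc + 2 ** (1 - nc) * r) - (n + 2) * r) / (2 ** n - r)

n = 6
for r in range(1, 2 ** n):
    brute = sum(dist(a, a + r) for a in range(2 ** n - r))
    assert brute == S(n, r)
    assert abs(L_avg(n, r) - brute / (2 ** n - r)) < 1e-9
```

For example, at $n=2$, $r=1$ the enumeration gives path lengths $2,4,2$, so ${\cal L}_2(1)=8/3$, and for $r \geq 2^{n-1}$ every path has the maximal length $2n$.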
Generalization to complete $m$-ary trees {#sec-general-averagepathlength}
======================================
Average leaf-to-leaf distance in complete ternary trees {#sec-ternary-averagepathlength}
-------------------------------------------------------
Ternary trees are those where each node has *three* children. Let us denote by ${\cal S}^{(3)}_n(r)$ and ${\cal L}^{(3)}_n(r)$ the sum and average, respectively, of all possible leaf-to-leaf distances $\{ \ell^{(3)}_n(r)\}$ for given $r$ in analogy to the binary case discussed before. Furthermore, $L=3^n$. Following the arguments which led to Equation (\[eqn-rec-sumpathlength\]), we have $${\cal S}^{(3)}_{n}(r) = \left\{ \begin{array}{l l}
3{\cal S}^{(3)}_{n-1}(r) + 4nr, & \quad r < 3^{n-1} ,\\
2n(3^{n} - r), & \quad r \geq 3^{n-1}.
\end{array} \right.
\label{eqn-ternary-rec-sumpathlength}$$ This recursive expression can again be understood readily when looking at the structure of a ternary tree. Clearly, ${\cal S}^{(3)}_{n}(r)$ will now consist of the sum of leaf-to-leaf distances for three level $n-1$ trees, plus the sum of the lengths of all paths that connect leaves across these three level $n-1$ trees. The length of these paths is solely determined by $n$, irrespective of the number of children, and hence remains $2n$. As before, we need to distinguish between the case when $r$ fits within a level $n-1$ tree, i.e. $r< 3^{n-1}$, and when it connects different level $n-1$ trees, $r\geq 3^{n-1}$. For $r< 3^{n-1}$, there are now $2 r$ such paths, i.e., $r$ between the left and center level $n-1$ trees and $r$ between the center and right level $n-1$ trees. For $r\geq 3^{n-1}$ there are $L-r = 3^{n}-r$ paths. We again expand the recursion (\[eqn-ternary-rec-sumpathlength\]) and find, with $n^{(3)}_{c} = \lfloor \log_{3}r \rfloor +1$ in analogy to (\[eqn-ana-nc\]), that $${\cal S}^{(3)}_n(r)= 3^{n} \left[ 2n_{c}^{(3)} + 3^{1-n_{c}^{(3)}}r \right] - (2n+3)r
\label{eqn-ternary-ana-Sq}$$ and $$\begin{aligned}
{\cal L}^{(3)}_n(r) &= \frac{S^{(3)}_n(r)}{3^n-r},\label{eqn-ternary-ana-Lq}\\
{\cal L}^{(3)}_{\infty}(r) &= 2n_{c}^{(3)} + 3^{1-n_{c}^{(3)}}r.\label{eqn-ternary-ana-Lq-inf}\end{aligned}$$
Average leaf-to-leaf distance in complete $m$-ary trees {#sec-mary-averagepathlength}
-------------------------------------------------------
The methodology and discussion of the binary and ternary trees can be generalized to trees in which each node has $m>1$ children, known as *$m$-ary* trees. The maximal leaf-to-leaf distance for any such tree is independent of $m$ and determined entirely by the geometry of the tree: each leaf node is at depth $n$ and a maximal path has the root node as its lowest common ancestor, so the maximal path length is $2n$.
A recursive function can be obtained using similar logic to before. For a given $n$, there are $m$ subgraphs with the structure of a tree with $n-1$ levels. When $r$ is less than the size of each subgraph ($r < m^{n-1}$), the sum of the path lengths is therefore $m$ copies of the level $n-1$ sum together with the lengths of the paths that bridge the $m-1$ neighboring pairs of subgraphs. When $r$ is at least the size of a subgraph ($r \geq m^{n-1}$), the paths are all maximal. Taking all of this into account, the recursive function is $${\cal S}^{(m)}_{n}(r) = \left\{ \begin{array}{l l}
m{\cal S}^{(m)}_{n-1}(r) + 2(m-1)nr, & \quad r < m^{n-1}, \\
2n(m^{n} - r), & \quad r \geq m^{n-1}.
\end{array} \right.
\label{eqn-mary-rec-sumpathlength}$$ This can be solved in the same way as the binary case to obtain an expression for the sum of the paths for a given $m$, $n$ and $r$ $$\mathcal{S}^{(m)}_{n}(r) = 2 m^{n} \left[ n_{c}^{(m)} + \frac{m^{1-n_{c}^{(m)}}r}{(m-1)} \right] - 2r \left( n + \frac{m}{m-1} \right),
\label{eqn-mary-ana-Sq}$$ The average leaf-to-leaf distance is then $${\cal L}^{(m)}_{n}(r) = \frac{{\cal S}^{(m)}_{n}(r)}{m^{n}-r}
\label{eqn-mary-ana-Lq}$$ and $${\cal L}^{(m)}_{\infty}(r) = 2 \left[ n_{c}^{(m)} + \frac{m^{1-n_{c}^{(m)}}r}{(m-1)} \right] .
\label{eqn-mary-ana-Lq-inf}$$ We note that in analogy with Equation (\[eqn-ana-nc\]), we have used $$n^{(m)}_{c}= \lfloor \log_m r \rfloor + 1$$ in deriving these expressions. Figure \[fig-bin-Lr\](b) shows the resulting leaf-to-leaf distances in the $n\rightarrow\infty$ limit for various values of $m$.
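The closed form (\[eqn-mary-ana-Sq\]) can likewise be checked against a direct enumeration, since in a complete $m$-ary tree with leaves labelled $0,\dots,m^n-1$ the distance between two leaves is twice the number of base-$m$ digit shifts needed to reach their lowest common ancestor. A short sketch (function names are ours):

```python
def dist_m(a, b, m):
    # twice the number of levels up to the lowest common ancestor
    t = 0
    while a != b:
        a //= m
        b //= m
        t += 1
    return 2 * t

def S_m(m, n, r):
    # closed form eqn-mary-ana-Sq, with n_c computed in integer arithmetic
    nc, power = 1, m
    while power <= r:
        nc += 1
        power *= m
    return (2 * m ** n * (nc + m ** (1 - nc) * r / (m - 1))
            - 2 * r * (n + m / (m - 1)))

for m in (2, 3, 4):
    n = 4
    for r in range(1, m ** n):
        brute = sum(dist_m(a, a + r, m) for a in range(m ** n - r))
        assert abs(S_m(m, n, r) - brute) < 1e-6
```

Setting $m=2$ recovers Equation (\[eqn-ana-SnN\]), e.g. ${\cal S}^{(2)}_{2}(1)=8$.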
Moments of the leaf-to-leaf distance distribution in complete $m$-ary trees
=========================================================================
Variance of leaf-to-leaf distances in complete $m$-ary trees {#sec-mary-variance}
------------------------------------------------------------
In addition to the average leaf-to-leaf distance ${\cal L}^{(m)}_{n}(r)$, it is also of interest to ascertain its variance $\textrm{var}[{\cal L}^{(m)}_{n}](r)= \langle [\ell^{(m)}_{n}(r)]^2 \rangle - [{\cal L}^{(m)}_{n}(r)]^2$. Here $\langle \cdot \rangle$ denotes the average over all paths for given $r$ in an $m$-ary tree as before. In order to obtain the variance, we obviously need to obtain an expression for the sum of the squares of leaf-to-leaf distances. This can again be done recursively, i.e. with ${\cal Q}^{(m)}_n(r)$ denoting this sum of squared leaf-to-leaf distances for an $m$-ary tree at leaf separation $r$, we have similarly to Equation (\[eqn-mary-rec-sumpathlength\]) $${\cal Q}^{(m)}_{n}(r) = \left\{ \begin{array}{l l}
m{\cal Q}^{(m)}_{n-1}(r) + (m-1)4n^2r, & \quad r < m^{n-1}, \\
4n^2(m^{n} - r), & \quad r \geq m^{n-1} .
\end{array} \right.
\label{eqn-mary-rec-sumpathlengthsquare}$$ Here, the difference from Equation (\[eqn-mary-rec-sumpathlength\]) is that we have squared the distance terms $2n$. As before, expanding down to $n_c$ (here and in the following, we suppress the $(m)$ superscript of $n_{c}^{(m)}$ for clarity) gives a term containing ${\cal Q}^{(m)}_{n_c}(r)$,
$$\begin{aligned}
{\cal Q}^{(m)}_{n}(r) & = m^{n-n_c} {\cal Q}^{(m)}_{n_c}(r) + \nonumber \\ & \qquad \sum_{k=0}^{n-n_c-1} 4(m-1)(n-k)^2 m^k r \\
& = m^{n-n_c} {\cal Q}^{(m)}_{n_c}(r) + \nonumber \\ & \qquad 4r(m-1) \sum_{k=0}^{n-n_{c}-1} \left[ n^{2} m^{k} - 2nk m^{k} + k^{2} m^{k} \right] \label{eqn-mary-Qn-sum-2}\\
& = \frac{4}{(m-1)^{2}} \left\{ r m^{n-n_{c}+1} \left[m + 2n_{c}(m-1) + 1 \right] - \right. \nonumber \\
& \qquad \left. r \left[ n^{2} + m^2(n+1)^{2} + m(1-2n(n+1)) \right] + \right. \nonumber \\ & \left. \qquad \qquad m^{n}(m-1)^{2}n_{c}^{2} \right\} .
\nonumber \\
\label{eqn-mary-ana-Qnrec}\end{aligned}$$
As before, details for the summations occurring in Equation (\[eqn-mary-Qn-sum-2\]) are given in Appendix \[sec-series\]. We can therefore write for the variance $$\begin{aligned}
\textrm{var}[{\cal L}^{(m)}_{n}](r)
&=\frac{{\cal Q}^{(m)}_n(r)}{m^n -r} - \left[ {\cal L}^{(m)}_n(r) \right]^2 \nonumber \\
&=\frac{{\cal Q}^{(m)}_n(r)}{m^n -r} - \left[ \frac{{\cal S}^{(m)}_n(r)}{m^n -r} \right]^2 .\end{aligned}$$ Using Equations (\[eqn-mary-ana-Qnrec\]), (\[eqn-mary-ana-Lq\]) and (\[eqn-mary-ana-Sq\]), we then have explicitly
$$\begin{aligned}
\textrm{var}[{\cal L}^{(m)}_{n}](r) & = \frac{4r}{m^{2n_{c}-2}(m^{n}-r)^{2}(m-1)^{2}} \Bigl( m^{2n} \Bigl[ m^{n_{c}-1}(m+1)-r \Bigr] + m^{2n_{c}-1}r -
m^{n} \Bigl\{ m^{n_{c}-1}(2n-2n_{c}+1)(m-1)r - \nonumber \\
& \qquad \quad m^{2n_{c}-2}(n_{c}-n)^{2} + m^{2n_{c}}(n-n_{c}+1)^{2} -
m^{2n_{c}-1}\Bigl[2n^{2}-n(4n_{c}-2)+2n_{c}(n_{c}-1)-1\Bigr] \Bigr\} \Bigr),\end{aligned}$$
and also $$\textrm{var}[{\cal L}^{(m)}_{\infty}](r)
= \frac{4r \left[ m^{n_{c}-1} (m+1) -r \right]}{m^{2 n_{c}-2} (m-1)^2}.$$
(a)![image](m2_n20_var_fin_in.eps){width="45.00000%"} (b)![image](lim_m_1048575_variance.eps){width="45.00000%"}
When $r= m^i$, $i \in \mathbb{N}^{0}$, $\textrm{var}[{\cal L}^{(m)}_{\infty}]$ has a local minimum and we find that $\textrm{var}[{\cal L}^{(m)}_{\infty}](m^i)= \frac{4 m}{(m-1)^2}$. Similarly, it can be shown that the local maxima occur at $r=\frac{1}{2}m^{i}(m+1)$, where $\textrm{var}[{\cal L}^{(m)}_{\infty}]= \frac{4 m}{(m-1)^2} + 1$. These values are indicated in Figure \[fig-mary-moments-lim\_m\] for selected $m$.
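Both the recursion (\[eqn-mary-rec-sumpathlengthsquare\]) and the stated extrema of the limiting variance are easy to verify numerically; the following sketch (function names are ours) checks the recursion against brute-force sums of squared distances and the extrema against the closed-form values:

```python
def dist_m(a, b, m):
    # twice the number of levels up to the lowest common ancestor
    t = 0
    while a != b:
        a //= m
        b //= m
        t += 1
    return 2 * t

def Q_m(m, n, r):
    # recursion eqn-mary-rec-sumpathlengthsquare for the sum of squared distances
    if r >= m ** (n - 1):
        return 4 * n * n * (m ** n - r)
    return m * Q_m(m, n - 1, r) + 4 * (m - 1) * n * n * r

def var_inf(m, r):
    # limiting variance var[L^(m)_infinity](r); n_c in integer arithmetic
    nc, power = 1, m
    while power <= r:
        nc += 1
        power *= m
    return 4 * r * (m ** (nc - 1) * (m + 1) - r) / (m ** (2 * nc - 2) * (m - 1) ** 2)

# the recursion reproduces a brute-force sum of squared distances
for m in (2, 3):
    n = 4
    for r in range(1, m ** n):
        brute = sum(dist_m(a, a + r, m) ** 2 for a in range(m ** n - r))
        assert brute == Q_m(m, n, r)

# local minima at r = m**i; local maxima at r = m**i (m+1)/2 (integral for odd m)
for m in (2, 3, 5):
    for i in range(4):
        assert abs(var_inf(m, m ** i) - 4 * m / (m - 1) ** 2) < 1e-9
for m in (3, 5):
    for i in range(3):
        r = m ** i * (m + 1) // 2
        assert abs(var_inf(m, r) - 4 * m / (m - 1) ** 2 - 1) < 1e-9
```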
General moments of leaf-to-leaf distances in complete $m$-ary trees {#sec-mary-moments}
-------------------------------------------------------------------
The derivation in section \[sec-mary-variance\] suggests that any ${q}$-th raw moment of leaf-to-leaf distances can be calculated similarly to Equation (\[eqn-mary-rec-sumpathlengthsquare\]). Indeed, let us define ${\cal M}^{(m)}_{q,n}(r)$ as the sum of the ${q}$-th powers of the leaf-to-leaf distances of an $m$-ary tree of level $n$ with leaf separation $r$. Then ${\cal M}^{(m)}_{1,n}(r)={\cal S}^{(m)}_{n}(r)$, ${\cal M}^{(m)}_{2,n}(r)={\cal Q}^{(m)}_{n}(r)$ and $$\textrm{var}[{\cal L}^{(m)}_{n}](r)= \frac{{\cal M}^{(m)}_{2,n}(r)}{m^n -r} - \left[\frac{{\cal M}^{(m)}_{1,n}(r)}{(m^n -r)}\right]^2.$$ Following Equation (\[eqn-mary-rec-sumpathlengthsquare\]), we find $${\cal M}^{(m)}_{q,n}(r) = \left\{ \begin{array}{l l}
m{\cal M}^{(m)}_{q,n-1}(r) + 2^{q} n^{q} (m-1) r, & \quad r < m^{n-1}, \\
2^{q} n^{q} (m^{n} - r), & \quad r \geq m^{n-1} .
\end{array} \right.
\label{eqn-mary-rec-sumpathlengthmoment-recursion-a}$$ By expanding, this gives $$\begin{aligned}
{\cal M}^{(m)}_{q,n}(r) &=
m^{n-n_c} {\cal M}^{(m)}_{q,n_c}(r) + \nonumber \\
& \qquad \sum_{k=0}^{n-n_c-1} 2^{q} m^k (m-1) (n-k)^{q} r.
\label{eqn-mary-rec-sumpathlengthmoment-nc}\end{aligned}$$ As before, $n_c$ corresponds to the first $n$ value where, for given $r$, we have to use the second case of the recursion in Equation (\[eqn-mary-rec-sumpathlengthmoment-recursion-a\]). Hence we can substitute the second case of (\[eqn-mary-rec-sumpathlengthmoment-recursion-a\]) for ${\cal M}^{(m)}_{q,n_c}(r)$, giving $$\begin{aligned}
{\cal M}^{(m)}_{q,n}(r) &= m^{n-n_c} 2^{q} n_{c}^{{q}} (m^{n_c}-r) + \nonumber \\ & \qquad \sum_{k=0}^{n-n_c-1} 2^{q} m^k (m-1) (n-k)^{q} r.
\label{eqn-mary-rec-final}\end{aligned}$$ In order to derive an explicit expression for this similar to section \[sec-binary-averagepathlength-expression\], we need again to study the final sum of Equation (\[eqn-mary-rec-final\]). We write
$$\begin{aligned}
\sum_{k=0}^{n-n_{c}-1}2^q m^k (m-1) (n-k)^{q} r &= r(m-1)(-2)^{q} \left[ \sum_{k=0}^{\infty}m^{k} (k-n)^{{q}} - \sum_{k=n-n_{c}}^{\infty} m^{k} (k-n)^{{q}} \right] \\
\quad &= r(m-1)(-2)^{q} \left[ \sum_{k=0}^{\infty}m^{k} (k-n)^{{q}} - m^{n-n_{c}} \sum_{k=0}^{\infty} m^{k} (k - n_{c})^{{q}} \right] \\
\quad &= r(m-1)(-2)^{q} \left[ \Phi \left( m,-{q},-n \right) -m^{n-n_{c}}\Phi \left( m,-{q},-n_{c} \right) \right] ,\end{aligned}$$
where in the last step we have introduced the *Hurwitz-Lerch Zeta function* $\Phi$ [@KanKY00; @Sri13] (also referred to as the *Lerch transcendent* [@GuiS08] or the *Hurwitz-Lerch Transcendent* [@Mathematica9]). It is defined as the sum $$\Phi(z,s,u) = \sum_{k=0}^{\infty} \frac{z^{k}}{(k+u)^{s}}, \quad z\in\mathbb{C}.$$ The properties of $\Phi(z,s,u)$ are [@GuiS08]
$$\begin{aligned}
\Phi(z,s,u+1) & = \frac{1}{z} \left[ \Phi(z,s,u)-\frac{1}{u^{s}} \right] \label{eqn:Phi_up1}, \\
\Phi(z,s-1,u) & = \left(u + z\frac{\partial}{\partial z} \right) \Phi(z,s,u) \label{eqn:Phi_sm1}, \\
\Phi(z,s+1,u) & = - \frac{1}{s} \frac{\partial \Phi}{\partial u} (z,s,u). \label{eqn:Phi_sp1}\end{aligned}$$
Hence we can write $$\begin{aligned}
{\cal M}^{(m)}_{q,n}(r) &= m^{n-n_c} 2^q n_{c}^{{q}} (m^{n_c}-r) + \nonumber \\
& \qquad r(m-1)(-2)^{q} \left[ \Phi \left( m,-{q},-n \right) - \right. \nonumber \\
& \qquad \quad \left. m^{n-n_{c}}\Phi \left( m,-{q},-n_{c} \right) \right].
\label{eqn-mary-kmom}\end{aligned}$$ Averages of ${\cal M}^{(m)}_{q,n}(r)$ can be defined as previously via $${\cal A}^{(m)}_{q,n}(r) = \frac{{\cal M}^{(m)}_{{q},n}(r) }{m^n-r}$$ such that ${\cal L}^{(m)}_{n}(r) = {\cal A}^{(m)}_{1,n}(r)$ and $\textrm{var}[{\cal L}^{(m)}_{n}](r)= {\cal A}^{(m)}_{2,n}(r) - \left[ {\cal A}^{(m)}_{1,n}(r) \right]^2$.
The properties (\[eqn:Phi\_up1\]) – (\[eqn:Phi\_sp1\]) can be used to show that, for a given $m$ and ${q}$, $\Phi \left( m,-{q},-n \right)$ can be expressed as a polynomial in $n$ of degree ${q}$, so the corresponding term in (\[eqn-mary-kmom\]) grows only polynomially with $n$ and vanishes upon division by $m^{n}-r$. Therefore in the $n \to \infty$ limit, we find $$\begin{aligned}
\lim_{n\rightarrow \infty} {\cal A}^{(m)}_{q,n}(r) &\equiv {\cal A}^{(m)}_{{q},\infty}(r) \nonumber \\
&= m^{-n_{c}} \left[ 2^q n_{c}^{{q}} (m^{n_c}-r) - \right. \nonumber \\
& \left. \qquad r(m-1)(-2)^{q} \Phi \left( m,-{q}, - n_{c} \right) \right]. \label{eqn-mary-kmom-limit}\end{aligned}$$
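The underlying recursion (\[eqn-mary-rec-sumpathlengthmoment-recursion-a\]) is straightforward to verify against a brute-force sum of $q$-th powers of leaf-to-leaf distances; a minimal sketch (function names are ours):

```python
def dist_m(a, b, m):
    # twice the number of levels up to the lowest common ancestor
    t = 0
    while a != b:
        a //= m
        b //= m
        t += 1
    return 2 * t

def M_q(m, q, n, r):
    # recursion eqn-mary-rec-sumpathlengthmoment-recursion-a for the
    # sum of q-th powers of leaf-to-leaf distances
    if r >= m ** (n - 1):
        return 2 ** q * n ** q * (m ** n - r)
    return m * M_q(m, q, n - 1, r) + 2 ** q * n ** q * (m - 1) * r

for m in (2, 3):
    for q in (1, 2, 3):
        n = 4
        for r in range(1, m ** n):
            brute = sum(dist_m(a, a + r, m) ** q
                        for a in range(m ** n - r))
            assert brute == M_q(m, q, n, r)
```

In particular $q=1$ reproduces ${\cal S}^{(m)}_{n}(r)$ and $q=2$ reproduces ${\cal Q}^{(m)}_{n}(r)$.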
Complete $m$-ary trees with periodicity
=======================================
Up to now we have always dealt with trees in which the maximum separation $r$ was set by the number of leaves, i.e. $r < m^n$. This is known as a hard-wall or *open* boundary in terms of physical systems. A *periodic* boundary can be realized by having the leaves of the tree form a circle as depicted in Figure \[fig-periodicbinarytree\] for a binary tree.
![A periodic, complete, binary tree with $n=8$ levels. Circles and lines as in Figure \[fig-binarytree\](a).[]{data-label="fig-periodicbinarytree"}](binary_8_PBC.eps){width="0.8\columnwidth"}
For such a binary tree, only separations $r \leq L/2$ are relevant since all cases with $r > L/2$ can be reduced to the smaller separation $r' = L - r$ by going around the periodic tree in the opposite direction. Therefore we can write $${\cal M}^{(m,\circ)}_{1,n}(r) = {\cal M}^{(m)}_{1,n}(r) + {\cal M}^{(m)}_{1,n}(m^{n}-r),
\label{eqn-pbc-opc}$$ where $r < L/2$ and the superscript $\circ$ denotes the periodic case. Note that in the case $r = L/2$ the clockwise and anti-clockwise paths are identical and so need to be counted only once. In the simple binary tree case we can expand this via (\[eqn-ana-SnN\]) as in section \[sec-binary-averagepathlength-expression\] and find $$\begin{aligned}
{\cal M}^{(2,\circ)}_{1,n}(r) & \equiv{\cal S}^{(2,\circ)}_{n}(r) \nonumber \\
&= 2^{n+1}\left[ n_{c} + \tilde{n}_{c} -n -2 + \right. \nonumber \\
& \qquad \left. 2^{1-n_{c}}r + 2^{1-\tilde{n}_{c}}(2^n-r) \right],
\label{eqn-pbc-ana-Sq}\end{aligned}$$ with $n_{c}$ as in Equation (\[eqn-ana-nc\]) and $\tilde{n}_{c}= \lfloor \log_2 (2^n -r) \rfloor + 1$. For every $r$, we have $2^n$ possible starting leaf positions on a periodic binary tree and hence the average leaf-to-leaf distance can be written as $$\begin{aligned}
{\cal A}^{(2,\circ)}_{1,n}(r) &\equiv {\cal L}^{(2,\circ)}_n(r) = \frac{ {\cal S}^{(2,\circ)}_{n}(r) }{2^n} \nonumber \\
& = 2 \left[ n_{c} + \tilde{n}_{c} -n -2 + 2^{1-n_{c}}r + 2^{1-\tilde{n}_{c}}(2^n-r) \right] .
\label{eqn-pbc-ana-Lq}\end{aligned}$$ This expression is the periodic analogue to Equation (\[eqn-ana-Lq\]). Generalizing to $m$-ary trees, with $\tilde{n}_{c} = \lfloor \log_m (m^n -r) \rfloor + 1$, we find $$\begin{aligned}
{\cal M}^{(m,\circ)}_{1,n}(r) & = {\cal M}^{(m)}_{1,n}(r) + {\cal M}^{(m)}_{1,n}(m^{n}-r) \\
& = 2m^{n} \left[ n_{c}+\tilde{n}_{c} -n + \right. \nonumber \\
& \qquad \left. \frac{1}{m-1} \left( m^{1-n_{c}}r + m^{1-\tilde{n}_{c}}(m^{n}-r) -m \right) \right].\end{aligned}$$ The average leaf-to-leaf distance for $m$-ary periodic trees is then given as $$\begin{aligned}
{\cal A}^{(m,\circ)}_{1,n}(r)
&= \frac{{\cal M}^{(m,\circ)}_{1,n}(r) }{m^{n}} \nonumber \\
&= 2\left[ n_{c}+\tilde{n}_{c} -n + \right. \nonumber \\
&\qquad \left. \frac{1}{m-1} \left( m^{1-n_{c}}r + m^{1-\tilde{n}_{c}}(m^{n}-r) -m \right) \right] .
\label{eqn-pbc-av}\end{aligned}$$
To again study the case of $n \to \infty$, it is necessary to observe how $\tilde{n}_{c}$ behaves for large $n$ and fixed $m$, $r$. When $n\gg r$, we have $r<m^{n-1}$ and hence $\lim_{n\rightarrow\infty}\lfloor \log_m (m^n -r) \rfloor = n-1$. This enables us to simply take the limits of Equation (\[eqn-pbc-av\]) to give $$\lim_{n\rightarrow \infty} {\cal A}^{(m,\circ)}_{1,n}(r) \equiv {\cal A}^{(m,\circ)}_{1,\infty}(r) = 2 \left[ n_{c} + \frac{m^{1-n_{c}}r}{(m-1)} \right],
\label{eqn-pbc-av-lim}$$ which is the same as the open boundary case (\[eqn-mary-ana-Lq-inf\]). This is to be expected as a small region of a large circle can be approximated by a straight line.
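The periodic average (\[eqn-pbc-av\]) can be checked directly: for each of the $m^n$ starting leaves $a$, the pair at circular separation $r$ is $(a, (a+r) \bmod m^n)$, and a wrapped pair simply has tree separation $m^n - r$. A short sketch (function names are ours):

```python
def dist_m(a, b, m):
    # twice the number of levels up to the lowest common ancestor
    t = 0
    while a != b:
        a //= m
        b //= m
        t += 1
    return 2 * t

def A_pbc(m, n, r):
    # closed form eqn-pbc-av for the periodic average leaf-to-leaf distance
    def nc_of(s):
        nc, power = 1, m
        while power <= s:
            nc += 1
            power *= m
        return nc
    nc, nct = nc_of(r), nc_of(m ** n - r)
    return 2 * (nc + nct - n
                + (m ** (1 - nc) * r + m ** (1 - nct) * (m ** n - r) - m)
                / (m - 1))

for m in (2, 3):
    n = 3
    L = m ** n
    for r in range(1, L):
        brute = sum(dist_m(a, (a + r) % L, m) for a in range(L)) / L
        assert abs(A_pbc(m, n, r) - brute) < 1e-9
```

For instance, for $m=2$, $n=2$, $r=1$ the four circular pairs have distances $2,4,2,4$, so ${\cal A}^{(2,\circ)}_{1,2}(1)=3$.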
Last, the $q$-moments can be expressed similarly to Equation (\[eqn-mary-kmom\]) via the Lerch transcendent as $$\begin{aligned}
{\cal M}^{(m,\circ)}_{q,n}(r) & = {\cal M}^{(m)}_{q,n}(r) + {\cal M}^{(m)}_{{q},n}(m^{n}-r), \\
& = m^{n-n_c} 2^q n_{c}^{{q}} (m^{n_c}-r) + \nonumber \\
& \qquad m^{n-\tilde{n}_{c}} 2^{q} \tilde{n}_{c}^{{q}} (m^{\tilde{n}_{c}}-m^{n}+r) + \nonumber\\
& \qquad (m-1)(-2)^{q} \Big[ m^{n} \Phi(m,-q,-n) - \nonumber \\
& \qquad rm^{n-n_{c}} \Phi(m,-q,-n_{c}) - \nonumber\\
& \qquad (m^{n}-r)m^{n-\tilde{n}_{c}}\Phi(m,-q,-\tilde{n}_{c}) \Big].\end{aligned}$$ The average $q$-moments in full are therefore $${\cal A}^{(m,\circ)}_{q,n}(r) = \frac{{\cal M}^{(m,\circ)}_{{q},n}(r) }{m^{n}}$$ for a complete, periodic, $m$-ary tree. To take the limit $n \to \infty$, notice that $\tilde{n}_{c} = n$ when $r<m^{n-1}$ for large $n$. Just like with Equation (\[eqn-pbc-av-lim\]), this results in ${\cal A}^{(m,\circ)}_{q,\infty}(r) = {\cal A}^{(m)}_{{q},\infty}(r)$.
Asymptotic scaling of the correlation for a homogeneous tree tensor network {#sec-TTN}
===========================================================================
Tree tensor networks (TTNs) are tensor networks that have the structure of a tree graph and are often used to model critical one-dimensional many-body quantum lattice systems as they can be efficiently updated [@ShiDV06; @TagEV09]. In principle it is possible to start from a tensor network wavefunction and derive a *parent Hamiltonian* for which the wavefunction is a ground state [@VerWPC06; @WolOVC06]. In the case of homogeneous TTNs the procedure to create such a parent Hamiltonian seems likely to be highly non-trivial and not unique. Here we build such a TTN from the *binary* tree structure shown in Fig. \[fig-binarytree\]. At each internal vertex we place an isometric tensor [@EveV09; @GolR14] with initially random entries and so-called bond dimension $\chi=4$. Using as proxy a spin-1/2 Heisenberg model $H = \sum_{i=1}^{L-1} \vec{s}_{i} \cdot \vec{s}_{i+1}$, with $\vec{s}_{i}$ the spin-$1/2$ operator, we perform energy minimisation [@EveV09; @TagEV09] at a bulk site. After each minimization, we replicate the bulk tensor to all other tensors such that every isometry is kept identical [@Sch11]. The process is then repeated until convergence (in energy). A two-point correlation function $\langle \vec{s}_{x_{1}} \cdot \vec{s}_{x_{2}} \rangle$ is calculated [@TagEV09; @GolR14] for all pairs of sites and averaged for all points separated by $|x_{2} - x_{1}|$. The results are given in Fig. \[fig-TTNcorrcomplete\].
![ (Color Online) Two point correlation function for TTNs with $\chi=4$ averaged over all pairs of sites separated by $|x_{2} - x_{1}|$ as discussed in text. The TTNs have $L = 128$ (blue diamonds), $256$ (green squares), $512$ (red circles), $1024$ (black crosses) corresponding to $n = 7$, $8$, $9$, $10$ levels respectively. The vertical dashed lines highlight $|x_{2} - x_{1}| = 16$, $32$, $64$, $128$, $256$, $512$. The orange dashed line corresponds to a fit of $A \text{ exp}[-\alpha \mathcal{L}_{n}^{(2)}(r)]$ with $A = 99 \pm 9$ and $\alpha = 0.742 \pm 0.006$. The grey shaded region is the standard error on the fit. []{data-label="fig-TTNcorrcomplete"}](TTNcorrcomplete.eps){width="\columnwidth"}
For a homogeneous tensor network, a two-point correlation should scale as [@EveV11] $$C(x_{1},x_{2}) \sim \text{exp}[-\alpha D_{TN}(x_{1},x_{2})],$$ where $\alpha$ is a constant and $D_{TN}(x_{1},x_{2})$ is the number of tensors connecting sites $x_{1}$ and $x_{2}$. Hence we expect the asymptotic correlation function to scale as $ \sim \text{exp} \left[ -\alpha \mathcal{L}_{n}^{(2)}(r) \right]$. Figure \[fig-TTNcorrcomplete\] shows that, away from small separations (e.g. $|x_{2} - x_{1}| > 32$ for $L=1024$), the content of the tensors no longer dominates the structural contribution and $\langle \vec{s}_{x_{1}} \cdot \vec{s}_{x_{2}} \rangle$ exhibits many of the properties we find in Fig. \[fig-bin-Lr\](a). The overall form of the long-range correlations is a power law. There are also the characteristic fluctuations from the self-similar structure of the tree, with cusps at $|x_{2} - x_{1}| = 2^{i}$ for integer $i\geq 5$ (corresponding to $|x_{2} - x_{1}| > 32$). When reaching the finite-size dominated regime $|x_{2}-x_{1}| \geq \frac{L}{2}$, we find an approximately constant average correlation. This is smaller than expected from Eq. (\[eqn-ana-Lq-inf\]) because the top tensor of the TTN only has $\chi = 1$ and contributes less to the correlation function than the other tensors. We emphasize that we have chosen a low bond dimension $\chi=4$ so that we can study the asymptotic form of the correlation functions for smaller system sizes.
The form of the correlations expressed in Fig. \[fig-TTNcorrcomplete\] corresponds to those of a suitable parent Hamiltonian, i.e. one that has a ground state implied by this holographic tree structure. In addition, the results may also be useful for those building TTNs as a variational method for the study of critical systems. The appearance of this form of the correlation for models that do not have a natural tree structure in the wavefunction, such as the Heisenberg model, is an indicator that the chosen $\chi$ is too small to capture the physics of the model. This is similar to the erroneous exponential decay of correlation functions found by DMRG for critical systems with power-law correlations in case of small $\chi$ [@Sch11]. In all these situations the structure of the network dominates the value of the correlation rather than the information in the tensors.
Leaf-to-leaf distances for random binary trees
==============================================
In Figure \[fig-randombinarytree\](a) we show a binary tree where the leaves do not all appear at the same level $n$, but rather each node can become a leaf node according to an independent and identically distributed random process. Such trees are no longer complete, but nevertheless have many applications in the sciences [@SedF13; @GolR14].
(a)![(a) A random binary tree. (b) A complete set of random binary trees for $n=$ 1,2 and 3 ($L=2,3,4$). Circles and lines are as in Figure \[fig-binarytree\](a). []{data-label="fig-randombinarytree"}](TTNtree2.eps "fig:"){width="0.8\columnwidth"} (b)![(a) A random binary tree. (b) A complete set of random binary trees for $n=$ 1,2 and 3 ($L=2,3,4$). Circles and lines are as in Figure \[fig-binarytree\](a). []{data-label="fig-randombinarytree"}](random_tree.eps "fig:"){width="0.8\columnwidth"}
Let us again compute the average leaf-to-leaf distance ${\cal L}^{(2,{\cal R})}_{n}(r)$ for a given $r$, when all possible pairs of leaves of separation $r$ and all possible trees of $L-1$ internal nodes are considered. Here ${\cal R}$ denotes the random character of trees under consideration. For each $n=L-1$, there are $n!$ different such random trees as shown in Figure \[fig-randombinarytree\](b). We construct these trees numerically and measure ${\cal L}^{(2,{\cal R})}_{n}(r)$ as shown in Figure \[fig-binary-random\](a) [^2]. For small $n$, we have computed all $(L-1)!$ trees (cp. Figure \[fig-binary-random\](a)) while for large $n$, we have averaged over a finite number $N\ll (L-1)!$ of randomly chosen binary trees among the $(L-1)!$ possible trees (cp. Figure \[fig-binary-random\](b)).
(a)![image](alltrees_9.eps){width="45.00000%"} (b)![image](randomtrees_1000_500.eps){width="45.00000%"}
\[fig:alltrees\_9\] \[fig-binary-random\]
We see in Figure \[fig-binary-random\](a) that, similar to the complete binary trees considered in section \[sec-binary-averagepathlength\], the leaf-to-leaf distances increase with $r$ until they reach a maximal value. Unlike the complete tree in Figure \[fig-bin-Lr\](a), they start to decrease rapidly beyond this point. We also see that for such small trees, we are still far from the infinite complete tree result ${\cal L}^{(2)}_{\infty}(r)$ of Equation (\[eqn-ana-Lq-inf\]). Finally, we also see that when we choose $10,000$ random binary trees from the $10!=3,628,800$ possible such trees at $L=11$, the average leaf-to-leaf distances for each $r$ are still distinguishably different from an exact summation of all leaf-to-leaf distances. This suggests that rare tree structures are quite important. In Figure \[fig-binary-random\](b) we nevertheless show estimates of ${\cal L}^{(2,{\cal R})}_{n}(r)$ for various $n$. As before, the shape of the curves for large $n$ is similar to those for small $n$. Clearly, however, the cusps in ${\cal L}^{(2)}_{n}(r)$ are no longer present in ${\cal L}^{(2,{\cal R})}_{n}(r)$. Also, the values of ${\cal L}^{(2,{\cal R})}_{n}(r)$ are larger than those for ${\cal L}^{(2)}_{n}(r)$ for small $r$.
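An exact enumeration of these averages is feasible for small $L$. One concrete model that reproduces both the $(L-1)!$ count and the double counting of degenerate shapes mentioned in the footnote is to grow an unbalanced binary search tree from a random permutation of $L-1$ keys and read off its $L$ external (leaf) positions in left-to-right order; we stress that this identification is our interpretation, not a construction stated in the text. A sketch (all names are ours):

```python
import itertools

def _insert(root, key):
    # standard unbalanced binary-search-tree insertion; a node is [key, left, right]
    if root is None:
        return [key, None, None]
    if key < root[0]:
        root[1] = _insert(root[1], key)
    else:
        root[2] = _insert(root[2], key)
    return root

def _external_paths(root):
    # root-to-leaf paths ('L'/'R' strings) of the external positions, left to right
    paths = []
    def walk(node, path):
        if node is None:
            paths.append(path)
            return
        walk(node[1], path + 'L')
        walk(node[2], path + 'R')
    walk(root, '')
    return paths

def avg_random(L, r):
    # average leaf-to-leaf distance at separation r over all (L-1)! insertion orders
    total = count = 0
    for perm in itertools.permutations(range(L - 1)):
        root = None
        for k in perm:
            root = _insert(root, k)
        paths = _external_paths(root)
        for a in range(L - r):
            p, q = paths[a], paths[a + r]
            c = 0
            while c < len(p) and c < len(q) and p[c] == q[c]:
                c += 1
            total += len(p) + len(q) - 2 * c  # depth(a) + depth(b) - 2 depth(LCA)
            count += 1
    return total / count
```

For $L=3$ and $r=1$ this gives $5/2$; note that, unlike in complete trees, odd distances occur because the leaves sit at different depths. The full enumeration is only practical for small $L$.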
Conclusions
===========
We have calculated an analytic form for the average distance between two leaves with a given separation — ordered according to the physical distance along a line — in a complete binary tree graph. This result is then generalized to a complete tree where each vertex has any finite number of children. In addition to the mean leaf-to-leaf distance, it is found that the raw moments of the distribution of leaf-to-leaf distances have an analytic form that can be expressed in a concise way in terms of the Hurwitz-Lerch Zeta function. These results are derived for open trees, where the leaves form an open line, for periodic trees, where the leaves form a circle, and for infinite trees, the limit in which the number of levels $n$ goes to infinity. Each of these results has a concise form and characteristic features due to the self-similarity of the trees. We believe that these results provide a useful insight into the structure of the regular tree graphs that are relevant for the field of tensor networks [@SilGMR10; @GerSRF14]. We also note that leaf-to-leaf distances computed here are qualitatively similar, but quantitatively different from those for random-spin chains [@GolR14]. This points to a subtle, yet physically relevant, difference in their Hilbert space properties.
We thank M. Bates, A. Czumaj and G. Rowlands for discussions. We would like to thank the EPSRC for financial support (EP/J003476/1) and provision of computing resources through the MidPlus Regional HPC Center (EP/K000128/1).
Some useful series expressions {#sec-series}
==============================
Series used in section \[sec-binary-averagepathlength-expression\]
------------------------------------------------------------------
When the last sum in Equation (\[eqn-ana-Sn-sum-1\]) is expanded, it splits into a geometric and an arithmetico-geometric series. The geometric part can be simplified using $$\sum_{k=1}^{l} x^{k} = \frac{x(1-x^{l})}{1-x},
\label{eqn:Pgeo1}$$ the second part uses the arithmetico-geometric series $$\sum_{k=1}^{l} kx^{k+1} = \frac{x(1-x^{l+1})}{(1-x)^2} - \frac{x+lx^{l+2}}{1-x}.
\label{eqn:Pgeo2}$$
Series used in section \[sec-mary-variance\]
--------------------------------------------
The explicit expressions for the series terms occurring in Equation (\[eqn-mary-Qn-sum-2\]) are given here. The first part is again a simple geometric series $\sum_{k=0}^{l} x^{k} = \frac{1-x^{l+1}}{1-x}$, similar to (\[eqn:Pgeo1\]). The second part is similar to (\[eqn:Pgeo2\]), $\sum_{k=0}^{l} kx^{k} = \frac{x(1-x^{l})}{(1-x)^2} - \frac{lx^{l+1}}{1-x}$. The final part is also an arithmetico-geometric series and has the form [@GraR94] $$\begin{aligned}
\sum_{k=0}^{l-1} k^{2}x^{k} &= \frac{1}{(1-x)^{3}} \left[(-l^{2}+2l-1)x^{l+2}+ \right. \nonumber \\
& \qquad \left. (2l^{2}-2l-1)x^{l+1}-l^{2}x^{l}+x^{2}+x \right].\end{aligned}$$
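These identities are simple to sanity-check numerically for $x \neq 1$; a short sketch:

```python
# numerical check of the geometric and arithmetico-geometric identities above
for x in (0.3, 1.7, 2.0):
    for l in range(1, 8):
        # geometric series eqn:Pgeo1
        lhs1 = sum(x ** k for k in range(1, l + 1))
        rhs1 = x * (1 - x ** l) / (1 - x)
        assert abs(lhs1 - rhs1) < 1e-8

        # arithmetico-geometric series eqn:Pgeo2
        lhs2 = sum(k * x ** (k + 1) for k in range(1, l + 1))
        rhs2 = (x * (1 - x ** (l + 1)) / (1 - x) ** 2
                - (x + l * x ** (l + 2)) / (1 - x))
        assert abs(lhs2 - rhs2) < 1e-8

        # sum of k^2 x^k used in eqn-mary-Qn-sum-2
        lhs3 = sum(k * k * x ** k for k in range(l))
        rhs3 = ((-l * l + 2 * l - 1) * x ** (l + 2)
                + (2 * l * l - 2 * l - 1) * x ** (l + 1)
                - l * l * x ** l + x ** 2 + x) / (1 - x) ** 3
        assert abs(lhs3 - rhs3) < 1e-8
```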
[27]{}ifxundefined \[1\][ ifx[\#1]{} ]{}ifnum \[1\][ \#1firstoftwo secondoftwo ]{}ifx \[1\][ \#1firstoftwo secondoftwo ]{}““\#1””@noop \[0\][secondoftwo]{}sanitize@url \[0\][‘\
12‘\$12 ‘&12‘\#12‘12‘\_12‘%12]{}@startlink\[1\]@endlink\[0\]@bib@innerbibempty @noop [**]{}, ed. (, , ) [****, ()](\doibase 10.1021/ja01193a005) @noop [**]{} (, , ) @noop [**]{}, ed. (, , ) in @noop [**]{} (, ) @noop [****, ()]{} @noop [****, ()]{} @noop [**]{}, (, ) [****, ()](\doibase
http://dx.doi.org/10.1016/0304-3975(89)90115-1) [****, ()](\doibase 10.1137/S0097539790189368) [****, ()](\doibase 10.1016/j.aop.2010.09.012) [****, ()](\doibase 10.1103/PhysRevB.89.214203) [****, ()](\doibase 10.1007/s10955-011-0237-4) [****, ()](\doibase
10.1103/PhysRevA.81.062335) [****, ()](\doibase
10.1103/PhysRevB.90.125154) [****, ()](\doibase 10.1007/PL00000117) in [**](\doibase 10.1007/978-3-642-28821-0_17), (, ) pp. [****, ()](\doibase 10.1007/s11139-007-9102-0) @noop [**]{}, () [****, ()](\doibase 10.1103/PhysRevA.74.022320) [****, ()](\doibase 10.1103/PhysRevB.80.235127) [****, ()](\doibase 10.1103/PhysRevLett.96.220601) [****, ()](\doibase 10.1103/PhysRevLett.97.110403) [****, ()](\doibase 10.1103/PhysRevB.79.144108) @noop [**]{} (, , )
[^1]: This is the information needed by the holography methods used in Ref. [@GolR14].
[^2]: We emphasize that this definition of random trees is different from the definition of so-called Catalan tree graphs [@SedF13], as the number of unique graphs is given by the Catalan number $C_n$ and does not double count the degenerate graphs as shown in the center of the $n=3$ case of Figure \[fig-randombinarytree\](b).
---
abstract: 'If $G$ is a finite group with subgroup $H$ then the [*Chermak-Delgado measure of $H$ (in $G$)*]{} is defined as $\ord H \ord {C_G(H)}$. The Chermak-Delgado lattice of $G$, denoted $\CD G$, is the set of all subgroups with maximal Chermak-Delgado measure; this set is a sublattice within the subgroup lattice of $G$. In this paper we provide an example of a $p$-group $P$, for any prime $p$, where $\CD P$ is lattice isomorphic to $2$ copies of $\MM {4}$ (a quasiantichain of width $2$) that are adjoined maximum-to-minimum. We introduce terminology to describe this structure, called a $2$-string of $2$-diamonds, and we also give two constructions for generalizing the example. The first generalization results in a $p$-group with Chermak-Delgado lattice that, for any positive integers $n$ and $l$, is a $2l$-string of $n$-dimensional cubes adjoined maximum-to-minimum and the second generalization gives a construction for a $p$-group with Chermak-Delgado lattice that is a $2l$-string of $\MM {p+3}$ (quasiantichains, each of width $p + 1$) adjoined maximum-to-minimum.'
address:
- |
Lijian An, Shanxi Normal University, Department of Mathematics\
Linfen, China 041004
- 'Joseph Brennan, Binghamton University, Department of Mathematical Sciences, Binghamton, New York 13902'
- 'Haipeng Qu, Shanxi Normal University, Department of Mathematics, Linfen, China 041004'
- 'Elizabeth Wilcox, State University of New York at Oswego, Mathematics Department, Oswego, New York 13126-3599'
author:
- Lijian An
- Joseph Brennan
- Haipeng Qu
- Elizabeth Wilcox
bibliography:
- 'references.bib'
title: '**Chermak-Delgado Lattice Extension Theorems**'
---
The Chermak-Delgado measure was originally defined by A. Chermak and A. Delgado as one in a family of functions from the subgroup lattice of a finite group into the positive integers. I. Martin Isaacs re-examined one of these functions, dubbed it the Chermak-Delgado measure, and proved that the subgroups with maximal Chermak-Delgado measure form a sublattice in the subgroup lattice of the group. B. Brewster and E. Wilcox then demonstrated that, for a direct product, the Chermak-Delgado lattice decomposes as the direct product of the Chermak-Delgado lattices of the factors, motivating the attention given to the Chermak-Delgado lattices of $p$-groups (for a prime $p$) in this paper and others.
The variety seen in the structure of the Chermak-Delgado lattices of $p$-groups seems inexhaustible; for example, there are many $p$-groups with a Chermak-Delgado lattice that is a single subgroup, a chain of arbitrary length, or a quasiantichain of width $p + 1$. In this paper we show that for any non-abelian $p$-group $N$ with $N$ in its own Chermak-Delgado lattice and $\Phi (N) \leq \Z N$, there exist two $p$-groups $\mathcal{LE}(m,n)$ and $\mathcal{QE}(n)$ with similar properties and such that the Chermak-Delgado lattices of $\mathcal{LE}(m,n)$ and $\mathcal{QE}(n)$ are the Chermak-Delgado lattice of $N$ with either an $m$-diamond or a quasiantichain of width $p + 1$ (respectively) adjoined at both the maximum and minimum subgroups in the Chermak-Delgado lattice of $N$.
Let $G$ be a finite group and $H \leq G$. The [*Chermak-Delgado measure of $H$ (in $G$)*]{} is $m_G(H) = \ord H \ord {C_G(H)}$. When $G$ is clear from context we write simply $m(H)$. For the maximum Chermak-Delgado measure possible in $G$ we write $m^*(G)$ and we use $\CD G$ to denote the set of all subgroups $H$ with $m(H) = m^*(G)$. The proof that this set is actually a modular sublattice in the lattice of subgroups of $G$, called the [*Chermak-Delgado lattice of $G$*]{}, can be found in [@cd1989] and is also discussed in [@Isaacs Section 1G].
Of particular note regarding $\CD G$ are the properties: If $H, K \in \CD G$ then $\langle H, K \rangle = HK$, $C_G(H) \in \CD G$, and also $C_G(C_G(H)) = H$. This latter property is typically referred to as the “duality property” of the Chermak-Delgado lattice. It is also known that the maximum subgroup in $\CD G$ is characteristic and the minimum subgroup is characteristic, abelian, and contains $\Z G$.
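As a toy illustration of these definitions (an aside, not part of the original argument), the measure and the duality property can be computed by brute force in a small group. The sketch below models the dihedral group of order $8$ in plain Python; there $m^*(G) = \ord G \ord {\Z G} = 16$, and $C_G(C_G(H)) = H$ holds for subgroups in $\CD G$ but fails for the reflection subgroup, which lies outside $\CD G$:

```python
# Dihedral group of order 8: (a, b) represents r^a s^b,
# with relations r^4 = s^2 = 1 and s r = r^{-1} s.
def mul(x, y):
    (a1, b1), (a2, b2) = x, y
    return ((a1 + (-1) ** b1 * a2) % 4, (b1 + b2) % 2)

G = [(a, b) for a in range(4) for b in range(2)]

def centralizer(H):
    return [g for g in G if all(mul(g, h) == mul(h, g) for h in H)]

def measure(H):
    # Chermak-Delgado measure |H| |C_G(H)|
    return len(H) * len(centralizer(H))

Z = centralizer(G)                  # the center {1, r^2}, order 2
R = [(a, 0) for a in range(4)]      # the rotation subgroup <r>, order 4
S = [(0, 0), (0, 1)]                # the reflection subgroup <s>, order 2

assert measure(G) == measure(Z) == measure(R) == 16   # members of CD(G)
assert measure(S) == 8                                # S is not in CD(G)

# duality holds inside CD(G), but fails for S
assert sorted(centralizer(centralizer(R))) == sorted(R)
assert sorted(centralizer(centralizer(S))) != sorted(S)
```

The helper names (`mul`, `centralizer`, `measure`) are ours, chosen for this sketch only; any computer algebra system would serve equally well.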
To describe the lattices constructed herein, we introduce the following terms for a positive integer $n$. A [*quasiantichain of width $n$*]{} will be denoted by $\mathcal{M}_{n+2}$. An [*$n$-diamond*]{} is a lattice with subgroups in the configuration of an $n$-dimensional cube. These structures form the most common [*components*]{} used in this paper, though “component” can refer to a lattice of any configuration. A [*(uniform) $n$-string*]{} is a lattice with $n$ lattice isomorphic components, adjoined end-to-end so that the maximum of one component is identified with the minimum of the next. A [*mixed $n$-string*]{} is a lattice with $n$ components adjoined in the same fashion, though with at least one component not lattice isomorphic to the remaining components.
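A quick sanity check on this terminology (an illustrative aside): components adjoined maximum-to-minimum share one subgroup per junction, so an $n$-string built from components of $c$ subgroups each has $nc - (n-1)$ subgroups in total. The helper `string_size` below is ours, not the paper's notation:

```python
def string_size(n, c):
    # n components of c elements each, sharing one element at each of
    # the n-1 junctions where a maximum is identified with a minimum
    return n * c - (n - 1)

# a 2-diamond (a square) has 4 subgroups, so a 2-string of 2-diamonds
# has 7 subgroups, matching {Z(P), A_1, A_2, A, AB_1, AB_2, P} below
assert string_size(2, 4) == 7

# a quasiantichain M_{p+3} has width p+1, hence p+3 subgroups in all
p = 5
assert string_size(2, p + 3) == 2 * (p + 3) - 1
```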
[Figure: three example Chermak-Delgado lattice diagrams, illustrating strings of diamonds and of quasiantichain components.]
In Section \[DD\] we produce an example of a $2$-string of $2$-diamonds, laying the foundation for the proofs in later sections. In Sections \[nDiamonds\] and \[qac\] we start with a positive integer $n$ and a non-abelian $p$-group $N$ with $N \in \CD N$ and $\Phi(N) \leq \Z N$ and create a group with Chermak-Delgado lattice that is a mixed $(n+1)$-string with $\CD N$ as the center component. In Section \[nDiamonds\] the remaining $n$ components are $m$-diamonds for a fixed $m \geq 2$ and in Section \[qac\] the remaining $n$ components are $\MM {p+3}$. As a corollary we show there exists a $p$-group $P$ with $P \in \CD P$ and $\Phi (P) \leq \Z P$ such that $\CD P$ is a $2l$-string of $m$-diamonds or $\MM {p+3}$ for all positive integers $l$. We additionally describe circumstances under which a $(2l+1)$-string may be constructed.
Example: A $2$-String of $2$-Diamonds {#DD}
=====================================
To give the flavor of the techniques used in later sections, we present the original construction that motivated the main theorems of the paper: the construction of a $p$-group $P$ with $\CD P$ a $2$-string of $2$-diamonds that contains $P$.
\[example\] For any prime $p$ there exists a $p$-group $P$ with $\CD P$ a $2$-string of $2$-diamonds, meaning that $\CD P = \{ \Z P, A_1, A_2, A, AB_1, AB_2, P \}$ where $\Z P < A_i < A < AB_j < P$ for $1 \leq i, j \leq 2$. Moreover, $\Phi(P) \leq \Z P$.
\[doublediamond\] For any integer $m > 1$, let $P$ be the group generated by $\{a_i, b_i \mid 1\leq i \leq 2m\}$ subject to the defining relations: $$\begin{gathered}
[a_i,a_j]^p = [a_i,b_j]^p = a_i^p = b_i^p = 1 \textrm{ for all $1 \leq i, j\leq 2m$},\\
[a_i,b_j] \neq 1 \textrm{ for $i \not\equiv j$ mod $2$, } [b_i,b_j] \neq 1 \textrm{ for all $1\leq i < j \leq 2m$},\\
\textrm{all other commutators between generators equal 1, and}\\
\textrm{all commutators are in $\Z P$.}\\\end{gathered}$$
From the definition it’s clear that $\Phi (P) \leq \Z P$ and $\Z P$ is elementary abelian. Counting the non-trivial commutator relations gives $\ord {\Z P} = p^{4m^2 - m}$ and $\ord P = p^{4m^2 + 3m}$. Define the following subgroups of $P$: $$\begin{array}{lll}
A = \langle a_i \mid 1\leq i \leq 2m \rangle Z(P), && A_1 = \langle a_{2i-1} \mid 1\leq i \leq m \rangle Z(P),\\
A_2 = \langle a_{2i} \mid 1\leq i \leq m \rangle Z(P), && B_1 = \langle b_{2i-1} \mid 1\leq i \leq m \rangle Z(P), \textrm{ and}\\
B_2 = \langle b_{2i} \mid 1\leq i \leq m \rangle Z(P). &&\\
\end{array}$$ It’s straightforward to verify $C_P(A_1) = AB_1$ and $C_P(A_2) = AB_2$. Notice, too, that the maximal abelian subgroups have order less than or equal to $|A|$. If we let $z$ be the integer (dependent upon $m$) such that $\ord {\Z P} = p^z$ then observe the following orders: $$\ord A = p^{2m + z}, \quad \ord {A_1} = \ord {A_2} = p^{m+z}, \quad \textrm{ and } \quad \ord {AB_1} = \ord {AB_2} = p^{3m+z}.$$ Therefore the Chermak-Delgado measures of the above groups are all equal, yielding $\cdm P \geq \ord P \ord {\Z P}$. We show this is exactly $m^*(P)$, thereby establishing the theorem.
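The exponent arithmetic above can be double-checked mechanically. The sketch below (an illustrative aside, with our own helper `check`) works with exponents of $p$ and confirms that $A$, $A_1$, $A_2$, $AB_1$, $AB_2$, and $P$ all attain the measure $\ord P \ord {\Z P}$:

```python
def check(m):
    # exponents of p in the orders of the subgroups of the construction
    z = 4 * m * m - m            # |Z(P)| = p^z
    P = 4 * m + z                # |P| = p^(4m + z)
    assert P == 4 * m * m + 3 * m
    A = 2 * m + z                # |A|
    Ai = m + z                   # |A_1| = |A_2|
    ABj = 3 * m + z              # |AB_1| = |AB_2|
    target = P + z               # m(P) = |P| |Z(P)|
    assert 2 * A == target       # m(A) = |A| |C_P(A)| = |A|^2
    assert Ai + ABj == target    # m(A_i) = |A_i| |AB_j|

for m in range(2, 20):
    check(m)
```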
[*Proof of Theorem \[example\].*]{} To prove the theorem we first establish that the minimal subgroup in $\CD P$ is a subgroup of $A$, then determine it must be exactly one of $\Z P$, $A_1$, $A_2$, or $A$. All of these have the same Chermak-Delgado measure; therefore we establish $m^*(P)$ and use duality to finish the proof.
To begin, let $H \leq P$ be such that $\ord H \leq \ord A$ and $H \in \CD P$. Suppose $x \in H$ can be written $x=ry$ where $r\in \{b_i \mid 1\leq i \leq 2m\}$ and $y \in {\left\langle}\{a_i, b_i\mid 1\leq i \leq 2m\} - \{r\}{\right\rangle}Z(P)$. The center of $P$ is elementary abelian and the non-trivial commutators of the generators are linearly independent generators of $\Z P$, therefore $$\ord{P:C_P(x)} = \ord{x^P} = \ord{[x,P]} \geq \ord{[r,P]} = \ord{r^P} = \ord{P:C_P(r)}.$$ Counting the generators which stabilize $r$ under conjugation gives $$|C_P(x)| \leq |C_P(r)| = \frac{|P|}{p^{3m-1}}.$$ That $H \in \CD P$ and $x\in H$ together imply $$|H||C_P(x)| \geq |H||C_P(H)| \geq |P||Z(P)|.$$ From $m>1$ and as $\ord H \leq \ord A$ we obtain a contradiction: $$p^{2m} = |A:Z(P)| \geq \ord {H : \Z P} \geq |P:C_P(x)| = p^{3m-1};$$ thus $H\leq A$.
Yet the orders of abelian subgroups in $P$ are bounded above by $\ord A$; therefore the minimal subgroup in $\CD P$ must be a subgroup of $A$. To determine this minimal subgroup, let $H \in \CD P$ be such that $\Z P < H \leq A$. If $a_1y_1 \in H$ for $y_1 \in {\left\langle}a_i\mid 2\leq i \leq 2m{\right\rangle}Z(P)$ and $a_{2}y_2 \in H$ for $y_2 \in {\left\langle}a_1, a_i\mid 3\leq i \leq 2m {\right\rangle}Z(P)$ then $C_P(H) \leq A$. Notice that if $z = b_1^{k_1} \cdots b_{2m}^{k_m}a \in C_P(H)$ for integers $k_i$ and $a\in A$ then commutator calculations in a group with nilpotence class 2 give the following implications: $$\begin{gathered}
[a_1y_1,z] = 1 \Rightarrow k_{2i} = 0 \mbox{ for } 1\leq i \leq m \quad \textrm{and}\\
[a_2y_2,z] = 1 \Rightarrow k_{2i-1} = 0 \mbox{ for } 1\leq i \leq m.\end{gathered}$$ This generalizes for any $a_k$ with $k$ odd and $a_j$ with $j$ even. Since $C_P(H)\leq A$ and $H < A$, $$|H||C_P(H)| \leq |A|^2 = |P||Z(P)| \leq m^*(P);$$ equality holds exactly when $H = A$.
We may now conclude that if $ \Z P < H< A$ then $H \leq A_1$ or $H \leq A_2$. If the former then $C_P(H) = AB_1$ and in the latter case $C_P(H) = AB_2$; therefore $m(H) \leq m(A_1) = m^*(P)$, with equality exactly when $H = A_1$ or $H=A_2$.
Since $A$, $A_1$, $A_2$, and $\Z P$ all have the same Chermak-Delgado measure, all of these subgroups are in $\CD P$. From the duality of the Chermak-Delgado lattice the centralizers $AB_1$, $AB_2$, and $P$ are also in $\CD P$. Additionally the duality gives that there can be no subgroups $H \in \CD P$ with $A < H < P$ besides those already described, completing the proof of the theorem.
It is worth noting that $\ord {A_1} = \ord {A_2}$ is not necessary for achieving a $2$-string of $2$-diamonds. If we instead let $A_1 = \langle a_i \mid 1 \leq i \leq m - 1 \rangle$ and $A_2 = \langle a_i \mid m \leq i \leq 2m \rangle$, and similarly adjust the commutativity relations so that $C_P(A_1) = \langle b_i \mid m \leq i \leq 2m \rangle A_1$ and $C_P(A_2) = \langle b_i \mid 1 \leq i \leq m - 1 \rangle A_2$, then the proof still holds. The result is a Chermak-Delgado lattice that is a $2$-string of $2$-diamonds where the subgroups in the diamonds each have distinct order.
$m$-Diamond Lattice Extension Theorem {#nDiamonds}
=====================================
\[DiamondExt\] Let $N$ be a $p$-group such that $N \in \CD N$ and $\Phi(N) \leq \Z N$. For any integers $m \geq 1$ and $n \geq 2$ there exists a $p$-group $P = \mathcal{LE}(m,n)$ and a normal embedding of $N$ into $P$, resulting in $\CD P$ being a mixed $3$-string with center component isomorphic to $\CD N$ and the remaining components being $m$-diamonds.
\[GemExtension\] Choose $m \geq 1$ and $n \geq 2$.
1. For all $i, j$ such that $1 \leq j \leq n$ and $1 \leq i \leq m$, choose distinct $a_{ij}$ with order $p$ and define $A$ to be the direct product of all $\langle a_{ij} \rangle$.
2. Suppose that $N / \Z N = \langle x_1 \rangle \Z N \times \cdots \times \langle x_r \rangle \Z N$ and choose distinct $z_{ijk}$ of order $p$ for $1 \leq i \leq m$, $1 \leq j \leq n$, and $1 \leq k \leq r$. For all $i, j, t$ such that $1 \leq i, j \leq n$ and $1 \leq t \leq m$, choose distinct $\wt z_{ijt}$ with order $p$. For every $u$ and $v$ with $(1,1) \leq u < v \leq (m,n)$ under the lexicographic ordering, choose a distinct $z_{v}^{u}$ of order $p$. From these generators define: $$Z_N = \prod\limits_{\substack{1 \leq i \leq m\\ 1 \leq j \leq n\\ 1 \leq k \leq r}} \langle z_{ijk} \rangle, \qquad
Z_A = \prod\limits_{\substack{1 \leq i \leq n\\ 1 \leq j \leq n\\ 1 \leq t \leq m}} \langle \wt z_{ijt} \rangle, \qquad \textrm{ and } \quad Z_B = \prod\limits_{(1,1) \leq u < v \leq (m,n)} \langle z_{v}^{u} \rangle.$$ Define $Z = Z_A \times Z_N \times Z_B$ and $\wt N = N \times Z \times A$. Notice that $\Z {\wt N} = \Z N \times Z \times A$. Let $\wt A = \Z {\wt N}$.
3. For each $i, j$ such that $1 \leq i \leq m$ and $1 \leq j \leq n$, choose distinct $b_{ij}$ with order $p$. Define $P = \wt N \rtimes \langle b_{ij} \mid 1 \leq i \leq m, 1 \leq j \leq n \rangle$. Under this construction the following conjugation relations are observed:
$$\begin{gathered}
[x_k, b_{ij}] = z_{ijk} \textrm{ for all $i, j, k$}, [z, b_{ij}] = 1 \textrm{ for all $z \in \Z N \times Z$},\\
[b_{u},b_{v}] = z_v^u \textrm{ for all $(1,1) \leq u < v \leq (m,n)$},\\
[a_{i'j'},b_{ij}] = 1 \textrm{ for all $i' \ne i$, and } [a_{ti},b_{tj}] = \wt z_{ijt} \textrm{ for all $i$, $j$, $t$}.\\\end{gathered}$$
We prove that $P$ defined here exactly fits the requirements for the group $\mathcal{LE}(m,n)$ described in Theorem \[DiamondExt\]. From the construction of $P$ it’s clear that $\Phi (P) \leq \Z P = \Z N \times Z$ and $\Phi (P) = \Phi (N) \times Z$, and also that $\Z P$ is elementary abelian. By counting generators and commutators, one can determine that the exponents on $\ord {P / \Z P}$ and $\ord {\Z P}$ are $2mn + r$ and $\frac{1}{2} mn(2n + 2r + mn - 1) + z$, respectively, where $\ord {\Z N} = p^z$. Additionally observe $\ord {\wt A} \ord {\wt N} = \ord P \ord {\Z P}$, where $C_P(\wt A) = \wt N$ and $C_P(\wt N) = \wt A$. This gives that $m(P) = m(\wt N) \leq m^*(P)$.
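The commutator count behind the exponent $\frac{1}{2}mn(2n + 2r + mn - 1)$ can be verified directly (an illustrative aside): $Z_N$ contributes $mnr$ generators, $Z_A$ contributes $mn^2$, and $Z_B$ contributes $\binom{mn}{2}$, one per pair $u < v$:

```python
from math import comb

def z_exponent(m, n, r):
    zn = m * n * r        # generators z_{ijk} of Z_N
    za = m * n * n        # generators ~z_{ijt} of Z_A
    zb = comb(m * n, 2)   # generators z_v^u of Z_B, one per pair u < v
    return zn + za + zb

for m in range(1, 6):
    for n in range(2, 6):
        for r in range(1, 6):
            assert 2 * z_exponent(m, n, r) == m * n * (2 * n + 2 * r + m * n - 1)
```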
To establish the structure of $\CD P$ we define additional subgroups of $\wt A$ and their centralizers in $P$. For $k$ with $0 \leq k \leq m$, let $\Delta_k$ be a $k$-subset of $\Omega = \{1, 2, \dots, m\}$ and let $A_{\Delta_k} = \langle a_{ij} \mid i \in \Delta_k, 1 \leq j \leq n \rangle$. Let $\wt {A}_{\Delta_k} = A_{\Delta_k}\Z P$ and $\wt {A}_k = \{ \wt {A}_{\Delta_k} \mid \Delta_k \textrm{ a $k$-subset of } \Omega \}$. From this definition it is clear that $\wt A_k$ has precisely $\binom{m}{k}$ subgroups. Moreover, any subgroup in $\wt{A}_k$ has a centralizer of the form $\wt{B}_{\Delta_k} = B_{\Delta_k^c} \wt N$ where $B_{\Delta_k^c} = \langle b_{ij} \mid i \not\in \Delta_k, 1 \leq j \leq n \rangle$.
Notice that $m(\wt A_{\Delta}) = m(\wt A)$ for all $\Delta \subseteq \Omega$. Ultimately we will show that $m^*(P) = m(P)$ and, for every $k$ with $1 \leq k \leq m$, the set $\wt {A}_k \subset \CD P$. This gives the bottom component, an $m$-diamond, in the $3$-string. The centralizers $C_P(\wt A_{\Delta})$ where $\Delta \subseteq \Omega$ give the second $m$-diamond that forms the top component, part of $\CD P$ under the duality property. Notice $\CD N {\cong}\CD {\wt N}$, as $\wt N$ is the direct product of $N$ with abelian groups [@bw2012], which will give the center component of $\CD P$.
We begin by examining the subgroups in $\CD P$ with order no greater than $\ord {\wt A}$.
\[lemma1\] If $H \in \CD P$ and $\ord H \leq \ord {\wt A}$ then there exists some $\Delta \subseteq \Omega$ such that $H = \wt A_{\Delta}$.
Let $H \in \CD P$ and suppose $\ord H \leq \ord {\wt A}$; then $m^*(P) = m(H) \geq m(\wt A) = \ord {\wt A} \ord {\wt N}$ yields $\ord {C_P(H)} \geq \ord {\wt N}$. If $C_P(H) = \wt N$ then $H = C_P(\wt N) = \wt A$, using the duality of the Chermak-Delgado lattice. In the following arguments we assume that $C_P(H) \ne \wt N$ and so there exists $x \in C_P(H) - \wt N$.
Suppose that $x = wy$ where $w \in \{b_{ij} \mid 1 \leq i \leq m, 1 \leq j \leq n\}$ and $y \in \langle \{b_{ij}\} - \{w\} \rangle \wt N$. The center of $P$ is elementary abelian and the non-trivial commutators of generators of $P$ are linearly independent generators of $\Z P$; therefore $$\ord {P:C_P(x)} = \ord {x^P} = \ord {[x,P]} \geq \ord {[w,P]}.$$ Equality holds if and only if $y \in \wt N$; this can be verified straightforwardly using the bilinearity of commutators in $P$. Thus $C_P(x) \leq C_P(w)$; we calculate $C_P(w)$ in order to place an upper bound on $\ord {C_P(x)}$.
Fix $u, v$ so that $w = b_{uv}$ and let $\Delta = \Omega - \{u\}$. The only generators of $P$ that commute with $w$ are precisely $w$ itself and those $a_{ij}$ where $i \ne u$. Therefore $C_P(w) = \langle w \rangle \wt A_{\Delta}$. The duality property of the Chermak-Delgado lattice yields: $H = C_P(C_P(H)) \leq C_P(x) \leq \langle w \rangle \wt A_{\Delta}$. Therefore $\ord H \leq p \ord {\wt A_{\Delta}}$. Recalling that $m(H) \geq \ord {\wt A} \ord {\wt N}$, one may observe that $$\ord {C_P(H)} \geq \frac{\ord {\wt A} \ord {\wt N}}{p \ord {\wt A_{\Delta}}} = p^{n-1} \ord {\wt N}.$$ If $\ord {C_P(H)} = p \ord {\wt N}$ then $n = 2$. Thus, since $m(H) \geq \ord {\wt A} \ord {\wt N}$, we can say $p \ord H \geq \ord {\wt A}$. Yet $H \leq \langle w \rangle \wt A_{\Delta}$ and $\ord {\wt A:\wt A_{\Delta}} = p^2$; therefore $H = \langle w \rangle \wt A_{\Delta}$ and $$C_P(H) = C_P(w) \cap C_P(\wt A_{\Delta}) \leq C_P(x) = H.$$ Thus $\ord {C_P(H)} \leq \ord {H} \leq \ord {\wt A}$, contradicting the choice of $H$.
Suppose instead that $\ord {C_P(H)} > p \ord {\wt N}$; there exists an $x' = w'y' \in C_P(H)$ where $w' \in \{ b_{ij} \mid 1 \leq i \leq m, 1 \leq j \leq n\} - \{w\}$ and $y' \in \langle \{b_{ij}\} - \{w'\} \rangle \wt N$. Apply the previous argument to $x'$, arriving at $H \leq \langle w' \rangle \wt A_{\Delta'}$ (where $\wt A_{\Delta'} \in \wt A_{m-1}$, possibly $\Delta = \Delta'$). This gives $H \leq C_P(w) \cap C_P(w') \leq \wt A_{\Delta}$, which implies $H \leq \wt A$.
Therefore if $H \in \CD P$ and $\ord H \leq \ord {\wt A}$ then $H = \wt A$ or $H \leq \wt A_{\Delta}$ where $\ord {\Delta} = m - 1$. We prove by induction that if $H \in \CD P$ and $H \leq \wt A$ then $H = \wt A_{\Delta}$, where $\Delta$ is now any subset of $\Omega$. Assume that $\Delta$ is any subset of $\Omega$ such that $H \in \CD P$ but $H < \wt A_{\Delta}$.
In this case $\ord H \ord {C_P(H)} \geq \ord {\wt A_{\Delta}} \ord {\wt B_{\Delta}}$, since $\wt B_{\Delta} = C_P(\wt A_{\Delta})$. By order considerations, it’s clear that $\ord {C_P(H)} > \ord {\wt B_{\Delta}}$. Yet $\wt B_{\Delta} < C_P(H)$ because $H \leq \wt A_{\Delta}$, so there must exist an element $x \in C_P(H) - \wt B_{\Delta}$. Therefore, without loss of generality, there exists $i' \in \Delta$ and $j$ with $1 \leq j \leq n$ such that $x = b_{i'j}y$ for some $y \in \langle b_{ij} \mid i \ne i', 1 \leq j \leq n \rangle \wt N$. Notice that the bilinearity of the commutator in $P$ gives $$1 = [h, b_{i'j}y] = [h, b_{i'j}][h,y] \textrm{ for all } h \in H \implies 1 = [h, b_{i'j}] \textrm{ for all } h \in H.$$ Thus no element of $H$, written as a product of generators of $\wt A$, contains $a_{i'j}$ as a factor. Therefore $H \leq \wt A_{\Delta-\{i'\}}$.
By induction, we know that if $H \in \CD P$ and $\ord H \leq \ord {\wt A}$ then $H \leq \wt A_{\Delta}$, for some $\Delta \subseteq \Omega$. However, if $\ord {\wt A_{\Delta - \{i\}}} < \ord H \leq \ord {A_{\Delta}}$ for any $i \in \Delta$ then $C_P(H) = \wt B_{\Delta}$. Therefore $m(H) \leq m(\wt A_{\Delta})$ with equality if and only if $H = \wt A_{\Delta}$. This proves that if $H \in \CD P$ and $\ord H \leq \ord {\wt A}$ then $H = \wt A_{\Delta}$ for some $\Delta \subseteq \Omega$.
In the proof of Theorem \[DiamondExt\] we show that there is a subgroup of order less than $\wt A$ in $\CD P$, thereby Proposition \[lemma1\] automatically generates an $m$-diamond above the minimal subgroup in $\CD P$. To prove this, and also to describe the structure of $\CD P$ above the $m$-diamond, we prove:
\[lemma2\]If $H \in \CD P$ and $\ord H \leq \ord {\wt N}$ then $H \leq \wt N$.
Let $H \in \CD P$ with $\ord H \leq \ord {\wt N}$, so that $m^*(P) \geq \ord {\wt A} \ord {\wt N}$ implies $\ord {C_P(H)} \geq \ord {\wt A}$. By way of contradiction, suppose that there exists $x \in H - \wt N$. Then there exists $w \in \{b_{ij} \mid 1 \leq i \leq m, 1 \leq j \leq n\}$ and $y \in \langle \{ b_{ij} \} - \{w\} \rangle \wt N$ such that $x = wy$. As in the proof of Proposition \[lemma1\], we note that $$\ord {P:C_P(x)} =\ord {x^P} = \ord {[x,P]} \geq \ord {[w, P]}$$ and therefore $C_P(H) \leq C_P(w)$. Fix $u, v$ so that $w = b_{uv}$ and let $\Delta = \Omega - \{u\}$. The only generators that commute with $w$ are precisely $w$ itself and those $a_{ij}$ where $i \ne u$, and therefore $C_P(w) = \langle w \rangle \wt A_{\Delta}$. Thus $$\ord {C_P(H)} \leq \ord {C_P(x)} \leq \ord {\langle w \rangle \wt A_{\Delta}} = \frac{\ord {\wt A}}{p^{n-1}}.$$ This contradicts the earlier statement that $\ord {C_P(H)} \geq \ord {\wt A}$; therefore if $H \in \CD P$ and $\ord H \leq \ord {\wt N}$ then $H \leq \wt N$.
These two lemmas are enough to prove that $P$ has the desired Chermak-Delgado lattice.
[*Proof of Theorem \[DiamondExt\].*]{} Let $P$ be as described above; we first consider abelian subgroups in $\CD P$. Let $H\in \CD P$ be abelian and assume, by way of contradiction, that $\ord H > \ord {\wt N}$. There exists an element $b_{i'j}y \in H$ such that $y \in \langle \{b_{ij} \mid 1 \leq i \leq m, 1 \leq j \leq n \} - \{b_{i'j}\} \rangle \wt N$. As $C_P(b_{i'j}) = \langle b_{i'j} \rangle \wt A_{\Delta}$ for $\Delta = \Omega - \{i'\}$, it follows that $H\not\leq C_P(H)$, contradicting the commutativity of $H$. Thus if $H \in \CD P$ is abelian then $H \leq \wt N$.
We show that if $H \in \CD P$ and $H \leq \wt N$ then $m(H) = m(P)$, and since the minimal member of $\CD P$ is a subgroup of $\wt N$ this is enough to establish that $m^*(P) = m(P)$. Proposition \[lemma1\] already shows that if $H \in \CD P$ with $H \leq \wt A$ then $m_P(H) = m_P(P)$, so we consider the case where $\wt A < H < \wt N$.
Let $H \leq P$ with $C_P(H) \not\leq \wt N$; then there exists $b_{i'j}y \in C_P(H)-\wt N$ with $y \in \langle \{ b_{ij} \mid 1 \leq i \leq m, 1 \leq j \leq n \} - \{b_{i'j}\} \rangle \wt N$. It follows that every $h \in H$ must be an element of $\wt A_{\Delta}$ where $\Delta \subseteq \Omega - \{i'\}$. Therefore if $H \in \CD P$ with $\ord {\wt A} < \ord H < \ord {\wt N}$ then $C_P(H) = C_{\wt N}(H)$ and $m^*(P) = m_P(H) = m_{\wt N}(H)$. However, $m_P(H) = \ord {\wt A} \ord {\wt N} = m_{\wt N} (\wt N) = m^*(\wt N)$, by designation of $N \in \CD N$ (and hence $\wt N \in \CD {\wt N})$. This gives $m_P(H) = m_P(\wt N) = m_P(P)$.
Therefore $m^*(P) = \ord {\wt N} \ord {\wt A}$. This implies that if $H \in \CD {\wt N}$ then $m_{\wt N} (H) = m_P(H)$ and hence $H \in \CD P$. Therefore $\{\wt A_{\Delta}, \wt B_{\Delta} \mid \Delta \subseteq \Omega \} \cup \CD {\wt N} \subseteq \CD P$. In view of Proposition \[lemma1\] and Lemma \[lemma2\], we need only show that if $H > \wt N$ and $H \in \CD P$ then $H = C_P(\wt A_{\Delta})$ for some $\Delta \subseteq \Omega$. However, if $H \in \CD P$ with $H > \wt N$ then order considerations and Proposition \[lemma1\] give a $\Delta \subseteq \Omega$ such that $C_P(H) = \wt A_{\Delta}$. The duality property of the Chermak-Delgado lattice yields $H = C_P(\wt A_{\Delta})$ as required.
As $\Phi (P) \leq \Z P$ for the resulting group $P$ we may reiterate the construction of this section $l$ times to produce a group with a Chermak-Delgado lattice that is a $(2l+1)$-string with center component isomorphic to $\CD N$ and all remaining components $m$-diamonds. Notice that for any two subgroups $H, K$ in the resulting $m$-diamonds where there does not exist $M$ in the Chermak-Delgado lattice such that $H < M < K$ we have $\ord {K : H} = p^n$. As a result:
Let $l, m, n$ be integers with $l, m \geq 1$ and $n \geq 2$. There exists a $p$-group $P$ such that $P \in \CD P$ and $\CD P$ is a $2l$-string of $m$-diamonds. Moreover, if $H, K \in \CD P$ and there does not exist $M \in \CD P$ with $H < M < K$ then $\ord {K:H} = p^n$.
If there exists a non-abelian $p$-group $N$ such that $N \in \CD N$ and $\Phi (N) \leq \Z N$ with the appropriate indices between subgroups in $\CD N$ then one can construct a $p$-group $P$ with the same properties such that $\CD P$ is a $(2l + 1)$-string of $m$-diamonds. In particular, if there exists a non-abelian $p$-group $J$ with $\Phi(J) \leq \Z J$ and $\CD J = \{J, \Z J\}$, then the desired $(2l + 1)$-string of $m$-diamonds can be constructed.
For a group $P$ with $\CD P$ being a $2l$-string of $m$-diamonds, let $N = 1$ and reiteratively apply Theorem \[DiamondExt\] $l$ times. The resulting group is exactly as desired.
For a group $P$ with $\CD P$ being a $(2l+1)$-string of $m$-diamonds, one must start with a group $N$ in order to reiteratively apply Theorem \[DiamondExt\]. Suppose that there exists a non-abelian $p$-group $J$ such that $\Phi(J) \leq \Z J$ and $\CD J = \{J, \Z J\}$, with $\ord {J:\Z J} = p^n$. Then $N = J \times J \times \cdots \times J$ ($m$ factors) has an $m$-diamond as its Chermak-Delgado lattice with the desired indices. Moreover, $\Phi (J) \leq \Z J$ implies $\Phi (N) \leq \Z N$; therefore Theorem \[DiamondExt\] may be reiterated $l$ times to give the desired group.
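By the Brewster-Wilcox direct product decomposition cited earlier, $\CD {J \times \cdots \times J}$ is the $m$-fold product of $\CD J$; when $\CD J$ is a two-element chain this product is the Boolean lattice on $m$ coordinates, i.e. an $m$-diamond. A small sketch (illustrative only) of that lattice:

```python
from itertools import product

m = 3
# CD(J) = {Z(J), J} is a 2-element chain; CD(J x ... x J) is its m-fold
# product: the Boolean lattice on m coordinates, an m-dimensional cube
cube = list(product([0, 1], repeat=m))
assert len(cube) == 2 ** m

# covering relations: increase exactly one coordinate from 0 to 1
covers = [(x, y) for x in cube for y in cube
          if all(a <= b for a, b in zip(x, y))
          and sum(b - a for a, b in zip(x, y)) == 1]
assert len(covers) == m * 2 ** (m - 1)
```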
Such a group $J$ was constructed in [@bhw2013 Proposition 3.3] for $n = 3$.
Quasiantichain Lattice Extension Theorem {#qac}
========================================
\[QACExt\] Let $N$ be a $p$-group such that $N \in \CD N$ and $\Phi (N) \leq \Z N$. For any integer $n \geq 2$ there exists a $p$-group $P = \mathcal{QE}(n)$ with $\Phi (P) \leq \Z P$ and a normal embedding of $N$ into $P$, resulting in $P \in \CD P$ and $\CD P$ being a mixed $3$-string with center component isomorphic to $\CD N$ and other components being lattice isomorphic to $\MM {p+3}$.
\[QACExtension\] Choose $n \geq 2$.
1. For all $i, j$ such that $1 \leq j \leq n$ and $i \in \{1,2\}$, choose distinct $a_{ij}$ with order $p$ and define $A$ to be the direct product of all $\langle a_{ij} \rangle$.
2. Suppose $N / \Z N = \langle x_1 \rangle \Z N \times \langle x_2 \rangle \Z N \times \cdots \times \langle x_r \rangle \Z N$ for a positive integer $r$. For $i, j, k$ with $1 \leq i \leq 2$, $1 \leq j \leq n$, and $1 \leq k \leq r$, choose distinct $z_{ijk}$, each of order $p$. For $i, j$ such that $1 \leq i, j \leq n$, choose distinct $z_{ij}$, each with order $p$. For every $u$ and $v$ with $(1,1) \leq u < v \leq (2,n)$ under the lexicographic ordering, choose a distinct $z_{v}^{u}$ of order $p$. Define: $$Z_N = \prod\limits_{\substack{1 \leq i \leq 2\\ 1 \leq j \leq n\\ 1 \leq k \leq r}} \langle z_{ijk} \rangle, \quad
Z_A = \prod\limits_{1 \leq i,j \leq n} \langle z_{ij} \rangle, \quad \textrm{ and } \quad
Z_B = \prod\limits_{(1,1) \leq u < v \leq (2,n)} \langle z_{v}^{u} \rangle.$$ Let $Z = Z_N \times Z_B \times Z_A$ and $\wt N = N \times Z \times A$. Notice that $\Z {\wt N} = \Z N \times Z \times A$. Let $\wt A = \Z {\wt N}$.
3. For each $i, j$ such that $1 \leq i \leq 2$ and $1 \leq j \leq n$, choose distinct $b_{ij}$ with order $p$. Define $P = \wt N \rtimes \langle b_{ij} \mid 1 \leq i \leq 2, 1 \leq j \leq n \rangle$. Under this construction the following conjugation relations are observed:
$$\begin{gathered}
[x_k, b_{ij}] = z_{ijk} \textrm{ for all $i$, $j$, $k$}, [z, b_{ij}] = 1 \textrm{ for all $z \in \Z N \times Z$},\\
[b_{u},b_{v}] = z_v^u \textrm{ for all $(1,1) \leq u < v \leq (2,n)$},\\
[a_{i'j'},b_{ij}] = 1 \textrm{ for all $i' \ne i$, and } [a_{ti},b_{tj}] = z_{ij} \textrm{ for $t = 1$ or $2$.}\\\end{gathered}$$
We show that $P$ satisfies the requirements of Theorem \[QACExt\]. The main difference between this construction and that of Section \[nDiamonds\] is in the generators of $Z_A$. In the latter, $a_{ti}^{b_{tj}}$ resulted in a different central element for each choice of $t$. In the present construction $a_{12}^{b_{11}} = a_{22}^{b_{21}}$, for example. The effect of “gluing” the commutators together in this manner is reminiscent of the construction of a single quasiantichain given in [@bhw2013a].
The construction clearly dictates that $\Phi (P) \leq \Z P$ and shows $\Z P$ is elementary abelian. Counting generators and commutators shows that $\ord {\Z P}$ has exponent $n(3n + 2r - 1) + z$ where $\ord {\Z N} = p^z$ and $\ord {P / \Z P}$ has exponent $4n + r$.
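The central generator count can be checked directly (an illustrative aside): $Z_N$ contributes $2nr$ generators, $Z_A$ contributes $n^2$, and $Z_B$ contributes $\binom{2n}{2} = n(2n-1)$, for a total of $n(3n + 2r - 1)$:

```python
def z_exponent(n, r):
    zn = 2 * n * r                   # generators z_{ijk} of Z_N
    za = n * n                       # generators z_{ij} of Z_A
    zb = (2 * n) * (2 * n - 1) // 2  # generators z_v^u, one per pair u < v
    return zn + za + zb

for n in range(2, 10):
    for r in range(1, 10):
        assert z_exponent(n, r) == n * (3 * n + 2 * r - 1)
```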
It’s straightforward to show that $C_P(\wt N) = \wt A$ and vice versa; this gives $m^*(P) \geq m(P) = \ord P \ord {\Z P} = \ord {\wt N} \ord {\wt A} = m(\wt A)$. Other subgroups of interest include $A_k = \langle a_{1j}a_{2j}^k \mid 1 \leq j \leq n \rangle \Z P$ for $0 \leq k \leq p - 1$ and $A_{p} = \langle a_{2j} \mid 1 \leq j \leq n \rangle \Z P$. Each of these abelian subgroups has index $p^n$ in $\wt A$.
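The $p+1$ subgroups $A_0, \dots, A_p$ mirror the $p+1$ one-dimensional subspaces of a two-dimensional space over $\mathbb{F}_p$, which is exactly why the resulting quasiantichain has width $p+1$. A quick count (an illustrative aside, not part of the original text):

```python
p = 5
lines = set()
for a in range(p):
    for b in range(p):
        if (a, b) != (0, 0):
            # the line spanned by (a, b) in F_p^2
            lines.add(frozenset(((t * a) % p, (t * b) % p) for t in range(p)))
assert len(lines) == p + 1
```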
We show, through a series of lemmas, that $\CD P = \{\Z P, P, A_k, C_P(A_k) \mid 0 \leq k \leq p \} \cup \CD {\wt N}$. One $\MM {p+3}$ is formed by $\{ \Z P, A_k, \wt A \mid 0 \leq k \leq p \}$ and the second by $\{ \wt N, C_P(A_k), P \mid 0 \leq k \leq p\}$. Notice that $\CD {\wt N} {\cong}\CD N$ because ${\wt N}$ is the direct product of $N$ with abelian groups [@bw2012].
We begin by examining $C_P(A_k)$ for $0 \leq k \leq p$.
Let $A_k$ be as described. The centralizer of $A_0$ is $\langle b_{2j} \mid 1 \leq j \leq n \rangle \wt N$ and $C_P(A_p) = \langle b_{1j} \mid 1 \leq j \leq n \rangle \wt N$. For $k$ with $1 \leq k \leq p - 1$, the centralizer of $A_k$ is $C_P(A_k) = \langle b_{1j}^kb_{2j}^{-1} \mid 1 \leq j \leq n \rangle \wt N$.
The centralizers of $A_0$ and $A_p$ follow immediately from the conjugation relations given in the construction of $P$. The structure of $C_P(A_k)$ for $1 \leq k \leq p - 1$ is less obvious; first observe the following: $$[a_{1j}a_{2j}^k, b_{1j}^kb_{2j}^{-1}] = [a_{1j},b_{1j}]^k [a_{2j},b_{2j}]^{-k} = z_{jj}^k z_{jj}^{-k} = 1$$ for $j$ such that $1 \leq j \leq n$. Thus $\langle b_{1j}^kb_{2j}^{-1} \mid 1 \leq j \leq n \rangle \wt N \leq C_P(A_k)$.
Let $x \in C_P(A_k)$. Then $x = b_{11}^{\alpha_{11}}b_{12}^{\alpha_{12}} \cdots b_{2n}^{\alpha_{2n}} y$ where $0 \leq \alpha_{ij} \leq p - 1$ for $1 \leq i \leq 2$, $1 \leq j \leq n$ and $y \in \wt N$. Given the linearity of the commutator, we may assume without loss of generality that $y = 1$ and consider the following commutator: $$[a_{1j'}a_{2j'}^k, b_{11}^{\alpha_{11}}b_{12}^{\alpha_{12}} \cdots b_{2n}^{\alpha_{2n}}] = \prod\limits_{1 \leq j \leq n} [a_{1j'},b_{1j}]^{\alpha_{1j}} [a_{2j'},b_{2j}]^{k \alpha_{2j}} = \prod\limits_{1 \leq j \leq n} z_{j'j}^{\alpha_{1j}} z_{j'j}^{k\alpha_{2j}}.$$ This commutator equals 1 if and only if $\alpha_{1j} + k \alpha_{2j} = 0$ for all $j$. This yields $n$ linear equations, each in two variables with solution space $\langle \big[ \begin{array}{c}
k\\
-1\\
\end{array} \big] \rangle$, as desired. Thus $C_P(A_k) = \langle b_{1j}^kb_{2j}^{-1} \mid 1 \leq j \leq n \rangle \wt N$.
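As a quick numerical sanity check (not part of the proof), one can verify for a small prime $p$ that the solutions of $\alpha_{1j} + k \alpha_{2j} \equiv 0 \pmod p$ are exactly the scalar multiples of $(k, -1)$, the claimed solution space:

```python
# Sanity check: for each k, the solutions (a1, a2) of a1 + k*a2 == 0 (mod p)
# coincide with the span of the vector (k, -1) over the field of p elements.
p = 7
for k in range(1, p):
    solutions = {(a1, a2) for a1 in range(p) for a2 in range(p)
                 if (a1 + k * a2) % p == 0}
    span = {(c * k % p, (-c) % p) for c in range(p)}
    assert solutions == span, (k, solutions, span)
print("solution spaces verified for p =", p)
```

Each solution set has exactly $p$ elements, so $C_P(A_k)$ picks up one independent generator $b_{1j}^k b_{2j}^{-1}$ per $j$, as in the lemma.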
This tells us that $m(P) = m(\wt A) = m(A_k)$ for all $k$ with $0 \leq k \leq p$. Identifying the centralizers of the subgroups $A_k$ is the first step in showing that if $\wt A \in \CD P$ then there is a component with maximum $\wt A$ that is lattice isomorphic to $\MM {p+3}$ and a second mirrored in the structure above $\wt N$. The next step is to establish the quasiantichain structure; in a manner similar to that of Section \[nDiamonds\] we study subgroups in $\CD P$ that have order less than $\ord {\wt A}$.
\[qaclem1\] If $H \in \CD P$ and $\ord H \leq \ord {\wt A}$ then $H \in \{ \wt A, \Z P, A_k \mid 0 \leq k \leq p\}$.
Let $H \in \CD P$ and $\ord H \leq \ord {\wt A}$. Then $\ord {C_P(H)} \geq \ord {\wt N}$ because $m^*(P) = \ord H \ord {C_P(H)} \geq \ord {\wt A} \ord {\wt N}$. If $\ord {C_P(H)} = \ord {\wt N}$ then $H = \wt A$.
Suppose, instead, that $\ord {C_P(H)} > \ord {\wt N}$; there exists $x \in C_P(H) - \wt N$ with $x = b_{11}^{\alpha_{11}}b_{12}^{\alpha_{12}} \cdots b_{2n}^{\alpha_{2n}} y$ for $y \in \wt N$. By counting the non-central generators of $\wt A$ that do not commute with the $b_{ij}$ in $x$, notice: $$\ord {\wt A: C_{\wt A}(x)} = \ord {x^{\wt A}} = \ord {[x, \wt A]} \geq p^n.$$ Thus $\ord {C_{\wt A}(x)} \leq p^{-n} \ord {\wt A}$. Additionally, $C_{\wt A}(x) = C_{\wt N}(x)$ and the $b_{ij}$ do not commute with one another. This gives $C_P(x) = \langle x \rangle C_{\wt A} (x)$. However, $\ord x \leq p^2$ and $\ord { \langle x \rangle \cap C_{\wt A} (x)} \leq p$; therefore $\ord {C_P(x)} \leq p^{1-n} \ord {\wt A}$.
By choice of $H$, we know $m^*(P) = \ord H \ord {C_P(H)} \geq \ord {\wt A} \ord {\wt N}$. The duality property of the Chermak-Delgado lattice gives $H = C_P(C_P(H)) \leq C_P(x)$. These two facts together give $$\ord {C_P(H)} \geq p^{n-1} \ord {\wt N}.$$ If $\ord {C_P(H)} > p \ord {\wt N}$ then there exists an $x' \in C_P(H) - \langle x \rangle \wt N$. Note that $[x, x'] = 1$ and therefore $H \leq C_P(x) \cap C_P(x') \leq \wt A$, as desired.
Suppose instead that $\ord {C_P(H)} = p \ord {\wt N}$. From $\ord {C_P(H)} \geq p^{n-1} \ord {\wt N}$ we know $n = 2$. However $\ord H \leq p \ord {C_{\wt A} (x)}$ and $C_{\wt A} (x) < \wt A$ gives the implication: $$\frac{\ord H}{p} \leq \ord {C_{\wt A} (x)} < \ord {\wt A} \implies \ord H \leq p^2 \ord {\wt A}.$$ Then $\ord H \ord {C_P(H)} < \ord {\wt A} \ord {\wt N}$, contradicting the choice of $H$.
Thus we have shown if $H \in \CD P$ with $\ord H \leq \ord {\wt A}$ then $H \leq \wt A$. Notice that $m(\wt A) = m(\Z P)$ so we consider $\Z P < H < \wt A$, showing that such $H \in \CD P$ must also have the same measure as $\wt A$. Let $x \in H$ and write $x = a_{11}^{\alpha_{11}} a_{12}^{\alpha_{12}} \cdots a_{2n}^{\alpha_{2n}} y$ for $y \in \Z P$ and $0 \leq \alpha_{ij} \leq p-1$. Since $x \in \wt A$ we know that $\wt N \leq C_P(x)$; suppose $z \not\in \wt N$ centralizes $x$. Let $z = b_{11}^{\beta_{11}} \cdots b_{2n}^{\beta_{2n}}w$ where $w \in \wt N$ and $0 \leq \beta_{ij} \leq p-1$. The bilinearity of the commutator allows for the computation of $[x,z]$, resulting in $$\prod\limits_{1 \leq i \leq n} (z_{i1}^{\beta_{11}}z_{i2}^{\beta_{12}} \cdots z_{in}^{\beta_{1n}})^{\alpha_{1i}} (z_{i1}^{\beta_{21}}z_{i2}^{\beta_{22}} \cdots z_{in}^{\beta_{2n}})^{\alpha_{2i}}.$$ If $[x,z] = 1$ then the exponents for each fixed $i$ give a linear equation $$\alpha_{1i} (\beta_{11} + \beta_{12} + \cdots + \beta_{1n}) + \alpha_{2i}(\beta_{21} + \beta_{22} + \cdots + \beta_{2n}) = 0.$$ Each of the $n$ linear equations must be solved simultaneously in order to produce $z \in C_P(x)$ with $z \not\in \wt N$. This corresponds to a $2 \times n$ consistent matrix with a unique solution, requiring that $\alpha_{1i}$ be a negative scalar multiple of $\alpha_{2i}$. Hence there exists $k \in \{ 0, 1, \dots, p \}$ such that $x \in A_k$. Moreover, any other $x' \in H$ must fit this same form in order to commute with $z$ and hence $H \leq A_k$ and $C_P(H) = C_P(A_k)$. If $H < A_k$ then $m(H) < m(A_k)$; because $H \in \CD P$ we know then that $H = A_k$.
As desired, if $H \in \CD P$ with $\ord H \leq \ord {\wt A}$ then $H \in \{ \Z P, \wt A, A_k \mid 0 \leq k \leq p \}$.
Therefore if there is a subgroup of $\wt A$ in $\CD P$ then there is a component that is lattice isomorphic to $\MM {p+3}$ directly above $\Z P$ in $\CD P$ as well as a second quasiantichain in $\CD P$ between $\wt N$ and $P$. We turn now to the structure of $\CD P$ between $\wt A$ and $\wt N$.
\[qaclem2\] If $H \in \CD P$ and $\ord H \leq \ord {\wt N}$ then $H \leq \wt N$. If $H \in \CD P$ and $\wt A \leq H \leq \wt N$ then $m_P(H) = m_{\wt N}(H)$.
Let $H \in \CD P$ and $\ord H \leq \ord {\wt N}$. Suppose, by way of contradiction, that there exists $x \in H - \wt N$. A consideration similar to that in the proof of Lemma \[qaclem1\] gives $$\ord {C_P(H)} \leq \ord {C_P(x)} \leq p^{1-n} \ord {\wt A}.$$ Then $m^*(P) = \ord H \ord {C_P(H)} < \ord {\wt A} \ord {\wt N}$. This contradiction implies that $H \leq \wt N$.
Now suppose that $\wt A \leq H \leq \wt N$ and $H \in \CD P$. The generators of $\wt N$ and the elements $\{b_{ij}\}$ do not commute. Combined with the bilinearity of commutators in $P$, this implies no non-trivial element of $H$ can commute with an element from $P - \wt N$. Thus $C_P(H) = C_{\wt N}(H)$, giving $m^*(P) = \ord H \ord {C_P(H)} = m_{\wt N}(H)$.
We are now prepared to prove that $P$ satisfies the description from Theorem \[QACExt\].
[*Proof of Theorem \[QACExt\].*]{} Let $P$ be described as above. We first determine $m^*(P)$ by showing all abelian subgroups in $\CD P$ have measure equal to $m(P)$. Suppose $H \geq \Z P$ and $H \in \CD P$. Assume, by way of contradiction, that $H \not\leq \wt N$; there exists $x = b_{ij}^{\alpha} y \in H$ where $1 \leq \alpha \leq p$ and $y \in \wt N$. If $i = 1$ then $C_P(b_{1j}) = A_p$ and if $i = 2$ then $C_P(b_{2j}) = A_0$. Since $C_P(H) \leq C_P(b_{ij})$, using the bilinearity of the commutator, we know that $C_P(H) < \wt A$. This directly contradicts the assumption that $H$ is abelian. Therefore $H \leq \wt N$. If $\wt A < H < \wt N$ then $m_P(H) = m_{\wt N}(H)$. This forces $m_P(H) = m_P(\wt N) = m_P(P)$. If $H \leq \wt A$ then Lemma \[qaclem1\] immediately gives $m(H) = m(P)$.
Because the minimum subgroup of $\CD P$ is abelian and contains $\Z P$, we know that $m^*(P) = m(P)$ and that minimum is $\Z P$. This additionally implies that $\{ \Z P, P, A_k, C_P(A_k) \mid 0 \leq k \leq p\} \cup \CD {\wt N}$ is a subset of $\CD P$. Lemmas \[qaclem1\] and \[qaclem2\] give that no subgroups of $\wt N$ other than those listed can be members of $\CD P$. To finish, suppose there exists $H \in \CD P$ such that $\wt N < H < P$. The centralizer calculation of the preceding paragraph holds and shows that $C_P(H) < \wt A$. The duality property of the Chermak-Delgado lattice then allows for applying Lemma \[qaclem1\] to $C_P(H)$ to see that $C_P(H) = A_k$ or $C_P(H) = \Z P$. If the former is the case then $H = C_P(C_P(H)) = C_P(A_k)$ and if the latter is true then $H = P$. Therefore $\CD P$ is as described by the statement of the theorem.
\[string\] Let $l, n$ be integers with $l \geq 1$ and $n \geq 2$. There exists a $p$-group $P$ such that $P \in \CD P$ and $\CD P$ is a $2l$-string with components that are lattice isomorphic to $\MM {p+3}$. Moreover, if $H, K \in \CD P$ are such that there does not exist $M \in \CD P$ with $H < M < K$ then $\ord {K:H} = p^n$.
If there exists a non-abelian $p$-group $N$ with $\Phi (N) \leq \Z N$ and $N \in \CD N$ such that $\CD N$ is $\MM {p+3}$ with indices $p^n$, then there exists a non-abelian $p$-group $P$ with $P \in \CD P$ such that $\CD P$ is a $(2l+1)$-string with components that are lattice isomorphic to $\MM {p+3}$ and having the appropriate indices.
The proof of Corollary \[string\] is a matter of iteratively applying Theorem \[QACExt\]. For an even-length string let $N = 1$. The question of finding a $p$-group $N$ with $N \in \CD N$ such that $\CD N$ is a quasiantichain with the needed indices is still open. Such an $N$ exists when $n = 3$, as described in [@bhw2013a].
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors would like to thank Ben Brewster of Binghamton University for posing the original challenge to find a group $P$ with $\CD P$ a $2$-string of $2$-diamonds. The second and fourth authors would like to thank Qinhai Zhang and Shanxi Normal University for the gracious invitation and support during their visit.
This work was supported by the National Natural Science Foundation of China (grant number 11071150) and the Natural Science Foundation of Shanxi Province (grant numbers 2012011001-3 and 2013011001-1).
**Charmless final states and S–D-wave mixing**
**in the $\ppp$ [^1]**
Jonathan L. Rosner [^2]
*Enrico Fermi Institute and Department of Physics*
*University of Chicago, 5640 S. Ellis Avenue, Chicago, IL 60637*
(Received June 2001)
> The $\ppp = \psi(3770)$ resonance is expected to be mainly $c \bar c(1^3D_1)$, but tensor forces and coupling to charmed particle pairs can mix it with $\pp(2^3S_1)$ and other states. Implications of this mixing for decays of $\ppp$ to non-charmed final states are discussed. (i) The ratio $\Gamma(\ppp \to \gamma + \chi_{c2})/ \Gamma(\ppp \to \gamma + \chi_{c0})$ is expected to be highly suppressed if $\ppp$ is a pure D-wave state, and is enhanced by mixing. (ii) The expected decay $\pp \to \rho \pi$ and other “missing” modes can appear as corresponding $\ppp$ partial widths, enhanced by a factor depending on the mixing angle. General arguments then suggest a branching ratio of about 1%, give or take a factor of 2, for charmless hadronic decays of $\ppp$. (iii) Enhancements can appear in penguin amplitudes in $B$ decays, $B \to K \eta'$ branching ratios, and direct CP-violating asymmetries in $B \to K \pi$ decays.
PACS numbers: 13.25.Gv, 13.20.Gd, 14.40.Gx, 12.39.Jh
Introduction
============
The lowest resonance in electron-positron collisions above charmed particle pair production threshold is the $\ppp = \psi(3770)$, discovered somewhat after the $J/\psi(3097)$ and the $\pp = \psi(3686)$ [@Rap].[^3] It provides a rich source of $D^0 \od$ and $D^+ D^-$ pairs, as anticipated theoretically [@Eetal]. The largest data sample of $\ppp$ decays studied so far, by the Mark III Collaboration at the Stanford electron-positron collider SPEAR [@MkIII], has been $9.56 \pm
0.48$ pb$^{-1}$. Plans are under way to accumulate as much as 3 fb$^{-1}$ at the Cornell Electron Storage Ring (CESR), which will permit much more incisive tests of a number of open questions [@Wkshp]. In the present note we discuss several of these which involve observation of [*non-charmed final states*]{} of the $\ppp$. These have been studied in two previous Ph. D. theses [@Zhu; @Walid] based on the Mark III data.
The $\ppp$ is the only present candidate for a D-wave $(l = 2)$ quarkonium level. (Strategies for finding the corresponding $b \bar b$ levels have been noted in Refs. [@KR; @GRD].) Although it is primarily $c \bar
c(1^3D_1)$, [^4] its leptonic width (quoted in Table I [@MkIII; @PDG]) indicates a contribution from mixing with S-wave states, such as the nearby $\pp(2^3S_1)$ and to a lesser extent with $J/\psi(1^3S_1)$ [@Rich] and $n \ge 3$ S-wave states above 4 GeV/$c^2$. Early calculations of this mixing based on contributions from intermediate real and virtual states of charmed particle pairs [@Eetal] predicted a $\ppp$ contribution to the $e^+ e^- \to D \bar D$ cross section which indicated the utility of this state as a “charm factory” and predicted its leptonic width quite well.[^5] It was later found that mixing due to a tensor force based on perturbative QCD also was adequate to explain the observed leptonic width [@MR]. Probably both perturbative and non-perturbative (e.g., coupled-channel) effects are present.
Mass (MeV/$c^2$) $\Gamma_{\rm tot}$ (MeV) $\Gamma_{ee}$ (keV) ${\cal B}(D^0 \od)$ ${\cal B}(D^+ D^-)$
------------------ -------------------------- --------------------- --------------------- ---------------------
$3769.9 \pm 2.5$ $23.6 \pm 2.7$ $0.26 \pm 0.04$ 58% 42%
: Properties of the $\ppp = \psi(3770)$
The mixing of the $\ppp$ with other states can affect both its decays and those of the other states. In Section II we discuss a simplified model for $\pp$–$\ppp$ mixing and its implications for leptonic and radiative partial decay rates of these states. The ratio $\Gamma(\ppp \to \gamma +
\chi_{c2})/ \Gamma(\ppp \to \gamma + \chi_{c0})$ is expected to be highly suppressed if $\ppp$ is a pure D-wave state, but could be enhanced by mixing [@Zhu; @KR; @YNY; @KL; @B88].
The “missing decay modes” of the $\pp$ [@psip], such as $\rho \pi$ and $K^* \bar K + {\rm~c.c.}$, are a long-standing puzzle [@rhopi; @Suz; @fsi; @GW; @FK]. Recently Suzuki [@Suz01] showed that if a $\pp$ decay amplitude due to coupling to virtual (but nearly on-shell) charmed particle pairs interferes destructively with the standard three-gluon amplitude, the suppression of these (and other) modes in $\pp$ final states can be understood. We pursue this suggestion further in Section III using the $\pp$–$\ppp$ mixing model described earlier. We propose that as a result of coupled-channel effects the expected decay width $\Gamma(\pp \to \rho \pi) \simeq 0.5$ keV and other “missing” modes could show up as corresponding partial widths in $\ppp$ decays, possibly enhanced by a considerable factor depending on the mixing angle. Since the latter state has a total width nearly 100 times that of the $\pp$, each of these partial widths still corresponds to a small branching ratio.
If coupling to charmed particle pairs is responsible for mixing the $\pp$ and the $\ppp$, and for significant effects on non-charmed final states in decays of both particles, it is likely that virtual or real $D^{(*)}
\bar D^{(*)}$ pairs produced in low partial waves in other contexts may undergo significant rescattering into non-charmed final states. Foremost among these cases are the decays of $B$ mesons, which can involve such pairs via the subprocesses $\bar b \to \bar c c \bar s$ or $\bar b \to \bar c c
\bar d$. The re-annihilation of the final $c \bar c$ pair can lead to an effective $\bar b \to \bar s$ or $\bar b \to \bar d$ penguin amplitude [@fsi; @Dun; @Ciu; @KLS], which appears to be needed in understanding large branching ratios for $B \to K \eta'$ [@eta] and $B \to K \pi$. Moreover, Suzuki [@Suz01] has proposed that this reannihilation, at least in $\pp$ decays, is associated with a large final-state phase. We discuss implications of this suggestion for CP violation in $B$ decays in Section IV, while Section V concludes.
Radiative $\ppp$ decays
=======================
The relative branching ratios for radiative decays to $\chi_c$ ($1^3P_1$) states are very different for $2S$ and $1D$ states. The observation of radiative decays $\ppp \to \gamma + \chi_c$ can determine the degree to which the $\ppp$ is mixed with an S-wave state [@Zhu; @KR; @YNY; @KL; @B88].
The rates for electric dipole ($E1$) transitions in quarkonium can be written \[eqn:rate\] $$\Gamma(E1) = \frac{4}{3}\, e_Q^2\, \alpha\, \omega^3\, C\, |\langle r \rangle|^2~~,$$ where $e_Q$ is the quark charge (in units of $|e|$), $\alpha = 1/137.036$ is the fine-structure constant, $\omega$ is the photon energy, and $\langle r \rangle$ is the matrix element of $r$ between initial and final radial wave functions. The coefficients $C$ are summarized in Table II, where we compare relative rates for $E1$ transitions from $\ppp$ to $\chi_c$ states under the two extreme assumptions of a pure S-wave or a pure D-wave. The distinctive pattern associated with the pure $^3D_1$ configuration is a ratio ${\cal B}(\gamma + \chi_{c1})/{\cal B}(\gamma + \chi_{c0}) = 0.3$ and an almost complete suppression of the ratio ${\cal B}(\gamma + \chi_{c2})/{\cal B}(\gamma + \chi_{c0})$.
  --------- ---------- ------------- ------------------------------- -- ------------- -------------------------------
  Final     $\omega$   $C$           $\Gamma(^3P_J)/\Gamma(^3P_0)$      $C$           $\Gamma(^3P_J)/\Gamma(^3P_0)$
  state     (MeV)      ($2^3S_1$)                                       ($1^3D_1$)    
  $^3P_0$   338        1/9           1                                  2/9           1
  $^3P_1$   250        1/3           1.22                               1/6           0.30
  $^3P_2$   208        5/9           1.16                               1/90          0.012
  --------- ---------- ------------- ------------------------------- -- ------------- -------------------------------
: Comparison of transitions $\ppp \to \gamma \chi_c$ under the assumptions of a pure S-wave or D-wave initial state. Coefficients $C$ are those in the expression (\[eqn:rate\]) for electric dipole transitions.
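The ratio columns of Table II follow directly from the $E1$ rate: since $e_Q$, $\alpha$, and $\langle r \rangle$ are common to all three transitions from a given initial state, each ratio reduces to $(C_J/C_0)(\omega_J/\omega_0)^3$. A short numerical sketch, using the coefficients and photon energies from the table:

```python
# Reproduce the Gamma(3P_J)/Gamma(3P_0) columns of Table II:
# each ratio is (C_J / C_0) * (omega_J / omega_0)**3, since e_Q, alpha
# and the radial matrix element cancel between the two transitions.
omega = {0: 338.0, 1: 250.0, 2: 208.0}   # photon energies in MeV
C_S = {0: 1 / 9, 1: 1 / 3, 2: 5 / 9}     # C for a pure 2^3S_1 initial state
C_D = {0: 2 / 9, 1: 1 / 6, 2: 1 / 90}    # C for a pure 1^3D_1 initial state

def ratio(C, J):
    return (C[J] / C[0]) * (omega[J] / omega[0]) ** 3

for J in (1, 2):
    print(f"J={J}: S-wave {ratio(C_S, J):.3g}, D-wave {ratio(C_D, J):.3g}")
# Matches the tabulated entries (1.22, 1.16, 0.30, 0.012) up to rounding.
```

The near-vanishing $J=2$ entry for the D-wave case is driven almost entirely by the tiny angular coefficient $C = 1/90$, not by phase space.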
A more detailed model can be constructed by assuming that the $\ppp$ is a mixture of a $1^3D_1$ and a $2^3S_1$ state [@B88]: \[eqn:mix\] $$\ppp = |2^3S_1 \rangle \sin \phi + |1^3D_1 \rangle \cos \phi~~,\qquad \pp = |2^3S_1 \rangle \cos \phi - |1^3D_1 \rangle \sin \phi~~.$$ The leptonic widths of $\ppp$ and $\pp$ are then [@Nov] $$\Gamma(\ppp \to e^+ e^-) = \frac{4 \alpha^2 e_c^2}{M_{\ppp}^2} \left| \sin \phi\, R_{2S}(0) + \cos \phi\, \frac{5}{2 \sqrt{2} m_c^2} {R''}_{1D}(0) \right|^2~~,$$ $$\Gamma(\pp \to e^+ e^-) = \frac{4 \alpha^2 e_c^2}{M_{\pp}^2} \left| \cos \phi\, R_{2S}(0) - \sin \phi\, \frac{5}{2 \sqrt{2} m_c^2} {R''}_{1D}(0) \right|^2~~,$$ where $e_c = 2/3$, $R_{2S}(0) = (4 \pi)^{1/2} \Psi_{2S}(0)$ is the radial $2S$ wave function at $r=0$, and ${R''}_{1D}(0)$ is the second derivative of the radial $1D$ wave function at the origin. The values $R_{2S}(0) = 0.734$ GeV$^{3/2}$ and $5R_{1D}''(0)/(2 \sqrt{2}m_c^2) = 0.095$ GeV$^{3/2}$ were taken in Ref. [@B88]. Assuming a common QCD correction to $\pp$ and $\ppp$ leptonic widths, we then fit the ratio $$\left| \frac{\sin \phi\, R_{2S}(0) + \cos \phi\, \frac{5}{2 \sqrt{2} m_c^2} {R''}_{1D}(0)}{\cos \phi\, R_{2S}(0) - \sin \phi\, \frac{5}{2 \sqrt{2} m_c^2} {R''}_{1D}(0)} \right|^2 = 0.128 \pm 0.023~~,$$ with solutions $\phi = (12 \pm 2)^{\circ}$ or $\phi = -(27 \pm 2)^{\circ}$. These values agree with those of Kuang and Yan [@KY90], whose $\theta$ is the same as our $- \phi$. As they note, the smaller-$|\phi|$ solution is consistent with coupled-channel estimates [@EGKLY; @CC] and with the ratio of $\pp$ and $\ppp$ partial widths to $J/\psi \pi \pi$.
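The two quoted mixing angles can be checked numerically. A small sketch, assuming only the wave-function values given in the text ($R_{2S}(0) = 0.734$ GeV$^{3/2}$ and the D-wave combination $0.095$ GeV$^{3/2}$; the common QCD correction cancels in the ratio):

```python
import math

# Numerical check of the S-D mixing-angle fit from the leptonic-width ratio.
R2S, K = 0.734, 0.095   # GeV^(3/2); K = 5 R''_1D(0) / (2*sqrt(2)*m_c^2)

def width_ratio(phi_deg):
    """Gamma_ee(psi(3770)) / Gamma_ee(psi(3686)) up to the common prefactor."""
    phi = math.radians(phi_deg)
    num = math.sin(phi) * R2S + math.cos(phi) * K   # psi(3770) amplitude
    den = math.cos(phi) * R2S - math.sin(phi) * K   # psi(3686) amplitude
    return (num / den) ** 2

for phi in (12.0, -27.0):
    print(f"phi = {phi:+.0f} deg: ratio = {width_ratio(phi):.3f}")
# Both angles reproduce the fitted ratio 0.128 within the quoted uncertainty.
```

Both solutions land within the $0.128 \pm 0.023$ band, which is why the leptonic widths alone cannot distinguish the sign of $\phi$.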
A nonrelativistic calculation along the lines of Ref. [@YNY] then yields the following predictions [@B88]: $$\Gamma(\ppp \to \gamma \chi_{c0}) = 145~{\rm keV}~\cos^2 \phi~(1.73 + \tan \phi)^2~~,$$ $$\Gamma(\ppp \to \gamma \chi_{c1}) = 176~{\rm keV}~\cos^2 \phi~(-0.87 + \tan \phi)^2~~,$$ $$\Gamma(\ppp \to \gamma \chi_{c2}) = 167~{\rm keV}~\cos^2 \phi~(0.17 + \tan \phi)^2~~,$$ $$\Gamma(\pp \to \gamma \chi_{c0}) = 67~{\rm keV}~\cos^2 \phi~(1 - 1.73 \tan \phi)^2~~,$$ $$\Gamma(\pp \to \gamma \chi_{c1}) = 56~{\rm keV}~\cos^2 \phi~(1 + 0.87 \tan \phi)^2~~,$$ $$\Gamma(\pp \to \gamma \chi_{c2}) = 39~{\rm keV}~\cos^2 \phi~(1 - 0.17 \tan \phi)^2~~.$$ Other predictions are given, for example, in Ref. [@GZS]. Zhu has apparently neglected to take account of relative signs of S-wave and D-wave contributions in the first three of the above equations when presenting his results for mixed states (Fig. 1.6.2, Ref. [@Zhu]). For small $\phi$, as suggested by the $\pp$ and $\ppp$ leptonic widths, the experimental rates for the $\pp$ radiative decays are about a factor of three below these predictions [@PDG], probably as a result of relativistic corrections [@MR; @MB]. The $\pp$ decays are expected to be particularly sensitive to such corrections as a result of the node in the $2S$ wave function; it is possible that the $\ppp$ predictions could be more reliable, since neither the $1D$ nor the $1P$ radial wave function has a node.
Results for $\ppp$ radiative decays [@Zhu], for $\sigma(e^+ e^- \to \ppp) \equiv \sigma(\ppp) = 5.0 \pm 0.5$ nb, are: $$\Gamma(\ppp \to \gamma \chi_{c0}) = 510 \pm 190~{\rm keV}~~,\quad \Gamma(\ppp \to \gamma \chi_{c1}) = 440 \pm 160~{\rm keV}~~,\quad \Gamma(\ppp \to \gamma \chi_{c2}) \leq 520~{\rm keV}~(90\%~{\rm c.l.})~~.$$ These partial widths scale as $1/\sigma(\ppp)$. So far it does not seem possible to reconcile the central values of these results with the values of $\phi$ suggested earlier.[^6] The model for mixing between $\pp$ and $\ppp$ may be oversimplified, and relativistic corrections undoubtedly play a role. Nevertheless, the above results bear revisiting with improved statistics. The search for a 338 MeV monochromatic photon in the decays of the $\ppp$ would represent a worthwhile first step in the determination of this interesting resonance’s mixing parameters.
Missing modes of the $\pp$
==========================
F. A. Harris [@FH] has summarized a wide class of hadronic decay modes of the $\pp$ which appear to be suppressed relative to expectations. Foremost among these is the $\rho \pi$ final state, with $K^+ K^{*-}(892) + {\rm c.c.}$ in second place. Let us review the expectations and the data for these two modes. (The decay $\pp \to K^0 \overline{K}^{*0}(892) + {\rm c.c.}$ has been observed with a branching ratio of $(8.1 \pm 2.4 \pm 1.6) \times 10^{-5}$, which indicates a significant one-virtual-photon contribution [@Suz; @fsi; @Suz01], and we shall not discuss it further.)
We summarize in Table III the total widths, branching ratios, and derived partial widths for $J/\psi$ and $\pp$ decays into $\rho \pi$ and $K^+ \overline{K}^*(892)^-$, as well as the partial widths predicted for the $\pp$ decays to these final states. Both hadronic and leptonic decay rates are proportional to the square of the wave function at the origin $|\Psi(0)|^2$. Although one might expect an additional factor of $1/M_V^2$, where $M_V$ is the mass of the decaying vector meson, entering into the leptonic width, we shall ignore this effect, since it is probably offset by a (form) factor suppressing the hadronic decay of the higher-mass $\pp$ into low-multiplicity final states such as $\rho \pi$. Then we expect for any hadronic final state $f$ [@rhopi; @Suz01; @FH] $$\Gamma(\pp \to f) = \Gamma(J/\psi \to f)\, \frac{\Gamma(\pp \to e^+ e^-)}{\Gamma(J/\psi \to e^+ e^-)}~~.$$ This relation has been used to predict the quantities ${\Gamma_{\rm pred}}$ in Table III. One sees that $\pp \to \rho \pi$ is suppressed by a factor of at least $\sim 50$ with respect to naïve expectations, while the corresponding factor for $K^+ K^{*-}(892) + {\rm c.c.}$ is at least $\sim 20$.
  ----------------------- --------------------- ------------------ -- ------------------------ --------------- ---------------------------------
  Decay mode              ${\cal B}(J/\psi)$    $\Gamma(J/\psi)$      ${\cal B}(\pp)$          $\Gamma(\pp)$   ${\Gamma_{\rm pred}}^{~a}$ (eV)
                                                (keV)                                          (eV)            
  $\rho \pi$              $(1.27 \pm 0.09)\%$   $1.10 \pm 0.10$       $< 2.8 \times 10^{-5}$   $ < 8.6$        $443 \pm 63$
  ${K^+ K^{*-}(892)}^b$   $(0.50 \pm 0.04)\%$   $0.44 \pm 0.04$       $< 3.0 \times 10^{-5}$   $ < 9.2$        $177 \pm 24$
  ----------------------- --------------------- ------------------ -- ------------------------ --------------- ---------------------------------
: Total widths, branching ratios, and derived partial widths for $J/\psi$ and $\pp$ decays.
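The ${\Gamma_{\rm pred}}$ column of Table III follows from the leptonic-width scaling relation above. A quick sketch; the $J/\psi$ partial widths are taken from the table, while the leptonic widths are assumed PDG-era inputs not quoted explicitly in the text:

```python
# Reproduce the Gamma_pred column of Table III from the scaling relation
# Gamma(psi' -> f) = Gamma(J/psi -> f) * Gamma_ee(psi') / Gamma_ee(J/psi).
Gamma_ee_Jpsi = 5.26   # keV (assumed PDG-era value)
Gamma_ee_psip = 2.12   # keV (assumed PDG-era value)
scale = Gamma_ee_psip / Gamma_ee_Jpsi

Gamma_Jpsi_keV = {"rho pi": 1.10, "K+ K*-(892)": 0.44}   # from Table III
for mode, g in Gamma_Jpsi_keV.items():
    print(f"{mode}: Gamma_pred = {1000 * g * scale:.0f} eV")
# Reproduces the central values 443 eV and 177 eV of the Gamma_pred column.
```

Comparing these with the experimental upper limits ($< 8.6$ eV and $< 9.2$ eV) gives the suppression factors of $\sim 50$ and $\sim 20$ cited in the text.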
Suzuki [@Suz01] has proposed that the coupling of $\pp$ to virtual pairs of charmed particles could provide an amplitude which interferes destructively with the perturbative QCD process $\pp \to 3g$ in the specific cases of $\rho \pi$ and $K \overline{K}^*(892) + {\rm c.c.}$ hadronic decays. If this is the case, and if virtual charmed particle pairs also play a role in mixing $\pp$ and $\ppp$, we would expect a similar amplitude to contribute to $\ppp \to D^{(*)} \overline{D}^{(*)} \to \rho \pi$ or $K
\overline{K}^*(892) + {\rm c.c.}$
In the absence of a detailed coupled-channel analysis, let us assume that the main effect on $\pp$ and $\ppp$ of their mutual coupling to charmed particle pairs is precisely the mixing discussed in the previous section. Let us assume that this mixing and the couplings of $\pp$ and $\ppp$ to $\rho \pi$ and $K \overline{K}^*(892) + {\rm c.c.}$ are such as to cancel the $\pp$ hadronic widths to these final states \[which are related to one another by flavor SU(3)\]. In this case we have $$\langle \rho \pi | \pp \rangle = \langle \rho \pi | 2^3S_1 \rangle \cos \phi - \langle \rho \pi | 1^3D_1 \rangle \sin \phi = 0~~,$$ $$\langle \rho \pi | \ppp \rangle = \langle \rho \pi | 2^3S_1 \rangle \sin \phi + \langle \rho \pi | 1^3D_1 \rangle \cos \phi = \langle \rho \pi | 2^3S_1 \rangle / \sin \phi~~,$$ so that [*the missing $\rho \pi$ (and related) decay modes of $\pp$ show up instead as decay modes of $\ppp$, enhanced by the factor of $1/\sin^2 \phi$*]{}. The possible effects of this enhancement are shown in Table IV for the two solutions for $\phi$. One expects ${\cal B}(\ppp \to \rho \pi) \simeq 10^{-4}$ for $\phi \simeq -27^\circ$ and $\simeq 4 \times 10^{-4}$ for the favored value $\phi \simeq 12^\circ$. Either branching ratio is compatible with the current upper bound ${\cal B}(\ppp \to \rho \pi) < 1.3 \times 10^{-3} \times [5~{\rm nb}/\sigma(\ppp)]$ [@Zhu].
----------------------------------------- --------------- ---------------
$\phi$ ($^\circ$) $- 27 \pm 2$ $12 \pm 2$
$1/\sin^2 \phi$ $4.8 \pm 0.6$ $22 \pm 6$
$\Gamma(\ppp \to \rho \pi)$ (keV) $2.1 \pm 0.4$ $9.8 \pm 3.0$
${\cal B}(\ppp \to \rho \pi)~(10^{-4})$ $0.9 \pm 0.2$ $4.1 \pm 1.4$
----------------------------------------- --------------- ---------------
: Predicted $\ppp \to \rho \pi$ partial widths and branching ratios for two solutions of mixing angle $\phi$.
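The entries of Table IV follow from two inputs already quoted: the predicted $\Gamma(\pp \to \rho \pi) = 443$ eV from Table III and the total width $\Gamma_{\rm tot}(\ppp) = 23.6$ MeV from Table I. A short sketch of the arithmetic:

```python
import math

# Reproduce Table IV: the "missing" psi' -> rho pi width reappears as a
# psi(3770) partial width enhanced by 1/sin^2(phi); dividing by the
# psi(3770) total width gives the branching ratio.
Gamma_pred_eV = 443.0    # predicted psi' -> rho pi width (Table III)
Gamma_tot_eV = 23.6e6    # psi(3770) total width (Table I)

for phi_deg in (-27.0, 12.0):
    enh = 1.0 / math.sin(math.radians(phi_deg)) ** 2
    width_keV = Gamma_pred_eV * enh / 1000.0
    br = width_keV * 1000.0 / Gamma_tot_eV
    print(f"phi = {phi_deg:+.0f}: 1/sin^2 = {enh:.1f}, "
          f"Gamma = {width_keV:.1f} keV, B = {br:.1e}")
# Consistent with Table IV within the quoted errors.
```

Even with the larger enhancement factor, the $\rho \pi$ branching ratio of $\ppp$ stays at the few-$10^{-4}$ level because of the state's large total width.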
An alternative mechanism discussed by Suzuki [@Suz01] for introducing an additional non-perturbative $\pp$ decay amplitude is mixing with a vector glueball state (first discussed in the context of $J/\psi$ decays [@glu]). In this case the $\ppp$ is permitted, but not required, to mix with the vector glueball, so there is no particular reason for the missing partial widths for $\pp$ decays to show up as corresponding $\ppp$ partial decay rates.
Gérard and Weyers [@GW] have proposed that the three-gluon decay of the $\pp$ is absent or suppressed, and that the $\pp$ decays to hadrons instead mainly via a two-step process involving an intermediate $c \bar c(^1P_1)$ state. Feldmann and Kroll [@FK] have proposed that the $J/\psi \to \rho \pi$ decay is [*enhanced*]{} (rather than $\pp \to
\rho \pi$ being suppressed) by mixing of the $J/\psi$ with light-quark states, notably $\omega$ and $\phi$. Neither mechanism implies any special role for $\ppp$ charmless decays. Arguments against them raised in the last of Refs. [@rhopi] and in Ref. [@FH] include the appearance of certain unsuppressed light-quark decay modes of the $\pp$ and the lack of evidence for helicity suppression in $J/\psi$ decays involving a single virtual photon.
As Suzuki has noted, the cases of suppressed hadronic final states of the $\pp$ cannot extend to all its decays; indeed, the total hadronic width of $\pp$ exceeds estimates based on extrapolating from the $J/\psi$ using perturbative QCD by some 60–70% [@Suz01; @GL]. The non-perturbative effect of coupling to virtual charmed particle pairs, followed by the re-annihilation of these pairs into non-charmed final states, must thus be responsible for some tens of keV of the total width of the $\pp$ in Suzuki’s scheme.
A corresponding effect in the decays of the $\ppp$, which is about 85 times as wide as the $\pp$, would contribute at most a percent to its total width. Present searches for non-charmed decays of the $\ppp$ [@Zhu; @Walid] are not sensitive enough to exclude this possibility since they did not compare on-resonance data with data taken off-resonance at a sufficiently close energy [@JTpc].
A related method allows one to estimate the partial decay rate of $\ppp$ to non-charmed final states. The branching ratio ${\cal B}(J/\psi \to \rho \pi)$ is $(1.27 \pm 0.09)\%$. Since about 1/3 of $J/\psi$ decays can be ascribed to non-$3 g$ mechanisms, we expect $\rho \pi$ to account for about 2% of all [*hadronic*]{} $J/\psi$ decays, and thus no more than this percentage of $\ppp$ hadronic charmless decays. (The availability of more final states undoubtedly reduces the $\rho \pi$ fraction in comparison with $J/\psi$ hadronic decays.) We thus estimate for hadronic charmless decays ${\cal B}(\ppp) \gsim 2 \times
10^{-4} /2\% \simeq 1\%$, again give or take a factor of 2 depending on the sign of $\phi$. This is consistent with our previous estimate.
It is even possible that we have seriously underestimated the role of non-charmed final states in hadronic $\ppp$ decays. If so, there is a chance of reconciling the smaller cross section for $e^+ e^- \to \ppp$ measured by the Mark III Collaboration using a comparison of single-charm and double-charm production, $\sigma(\ppp) = 5.0 \pm 0.5$ nb [@MkIII], with higher values obtained by other groups using direct measurement [@LGW; @XB; @MkII; @BES], whose average I find to be $8.0 \pm 0.7$ nb.[^7] This possible discrepancy was a factor motivating the studies in Refs. [@Zhu; @Walid]. Those and related searches need to be performed with greater sensitivity and with off-resonance running in order to determine backgrounds from such processes as $e^+ e^- \to \gamma^* \to {\rm charmless~hadrons}$. In any event, the search for the “missing final states” of the $\pp$ among the decay products of the $\ppp$ is a reasonable goal of foreseen studies [@Wkshp].
Implications for $B$ decays
===========================
A key observation in Ref. [@Suz01] with regard to the additional contribution to $\pp$ hadronic decays is that it is likely to have a large final-state phase, in order to interfere destructively with the perturbative $3g$ contribution in the $\rho \pi$ and $K \bar K^*(892) + {\rm
c.c.}$ channels. If this new contribution is due to rescattering into non-charmed final states through charmed particle pairs, it is exactly the type of contribution proposed in Refs. [@fsi; @Dun; @Ciu; @KLS] in which the decay $\bar b \to \bar c c \bar s$ or $\bar b \to \bar c c \bar d$ contributes to a penguin amplitude with a large strong phase. Several implications of this possibility were reviewed in [@fsi], and others have been pointed out in [@Ciu]. These include the following:
1. The semileptonic branching ratio ${\cal B}(B \to X \ell \nu)$ can be diminished with respect to the theoretical prediction if the penguin amplitude leads to a net enhancement of $\bar b \to \bar s$ and $\bar b \to \bar d$ transitions. The enhancement need not be large enough to conflict with any experimental upper limits on such transitions, which are in the range of a few percent of all $B$ decays [@slims].
2. The number $n_c$ of charmed particles per average $B$ decay can be reduced by the reannihilation of $c \bar c$ to light quarks. The degree to which this improves agreement with experiment is a matter of some debate [@Lenz], since a recent SLD measurement [@SLD] finds $n_c = 1.238
\pm 0.027 \pm 0.048 \pm 0.006$, closer to theoretical expectations than earlier values [@Barker].
3. The enhancement of the inclusive branching ratio ${\cal B}(B \to \eta'
X)$ [@CLEOeta] in comparison with theoretical expectations [@incl] can be explained.
4. The required additional contribution [@eta] to the exclusive branching ratios ${\cal B}(B \to K \eta')$ [@CLEOeta], in comparison with the penguin contribution leading to $B^0 \to K^+ \pi^-$ or $B^+ \to K^0 \pi^+$, can be generated.
5. In any $B \to K \pi$ process in which the dominant penguin amplitude interferes with tree-amplitude contributions, notably in $B^+ \to \pi^0
K^+$ and $B^0 \to K^+ \pi^-$, a CP-violating asymmetry can occur up to the maximum allowed by the ratio of the tree to penguin amplitudes’ magnitudes. This asymmetry, estimated to be about 1/3 in Ref. [@fsi], is not yet excluded by experiment [@CLEOasy]. The enhancement of the penguin amplitude by the intrinsically non-perturbative charm rescattering mechanism seems to fall outside the purview of the essentially perturbative approach of Ref. [@BBNS], so we would not expect to encounter it in that treatment.
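The bookkeeping behind points 1 and 2 above can be sketched numerically. All inputs below (nominal branching ratios, reannihilation fraction, extra penguin width) are illustrative assumptions, not fitted values from the text:

```python
# Toy bookkeeping for points 1 and 2: let a fraction f_reann of the nominal
# b -> c cbar s decays reannihilate into charmless light-quark final states,
# and let the enhanced penguin add an extra charmless nonleptonic width
# dGamma (in units of the nominal total width). All numbers are illustrative.
B_sl0   = 0.125   # nominal semileptonic branching ratio
n_c0    = 1.30    # nominal charm count per B decay
B_ccs   = 0.22    # nominal b -> c cbar s branching fraction
f_reann = 0.30    # assumed reannihilation fraction
dGamma  = 0.03    # assumed extra charmless width / nominal total width

# Each reannihilated decay removes two units of charm; the extra width
# dilutes every nominal branching ratio by 1/(1 + dGamma).
n_c  = (n_c0 - 2 * f_reann * B_ccs) / (1 + dGamma)
B_sl = B_sl0 / (1 + dGamma)
print(round(n_c, 3), round(B_sl, 3))  # 1.134 0.121
```

Both effects push $n_c$ and ${\cal B}(B \to X \ell \nu)$ downward together, which is the qualitative point of the list above.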
The charm rescattering model for suppression of $\pp \to \rho \pi$ and related decays carries no [*a priori*]{} requirement that the final-state phase be large [@Suz01]. Additional evidence for such a large final-state phase in closely related processes would be the presence of large direct CP-violating asymmetries in $B^+ \to \pi^0 K^+$ and $B^0 \to K^+
\pi^-$, with similar expected asymmetries for the two processes [@Ciu; @KLS; @comb; @MN]. Since the process $B^+ \to \pi^+ K^0$ is not expected to have a tree contribution, we expect it to have a much smaller CP-violating asymmetry. Present data [@CLEOasy] are consistent at the level of 10–20% with vanishing asymmetry for all three processes: ${\cal A}(K^+ \pi^-) = -0.04 \pm 0.16$, ${\cal A}(K^+ \pi^0) = -0.29 \pm 0.23$, ${\cal A}(K_S \pi^+) = 0.18 \pm 0.24$.
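The size of the allowed asymmetry can be checked with the standard two-amplitude interference formula for a penguin of magnitude 1 and a tree of relative magnitude $r$ with strong phase $\delta$ and weak phase $\gamma$; the value of $r$ below is an illustrative choice, not taken from the text:

```python
import numpy as np

# Direct CP asymmetry from interference of a penguin amplitude (magnitude 1)
# with a tree amplitude of relative magnitude r carrying a strong phase delta
# and a weak phase gamma (standard two-amplitude formula).
def acp(r, delta, gamma):
    num = 2.0 * r * np.sin(delta) * np.sin(gamma)
    den = 1.0 + r**2 + 2.0 * r * np.cos(delta) * np.cos(gamma)
    return num / den

r = 0.3  # illustrative tree/penguin magnitude ratio
phases = np.linspace(0.0, np.pi, 361)
amax = max(acp(r, d, g) for d in phases for g in phases)

# The scan saturates the bound |A| <= 2r/(1 + r^2) at delta = gamma = pi/2.
print(np.isclose(amax, 2.0 * r / (1.0 + r**2), atol=1e-6))  # True
```

This is the sense in which the asymmetry "can occur up to the maximum allowed by the ratio of the tree to penguin amplitudes' magnitudes" in point 5 above.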
Conclusions
===========
The coupling of $\pp$ and $\ppp$ to charmed particle pairs can lead to S–D-wave mixing, the distortion of the relative branching ratios of the $\ppp$ to $\gamma + \chi_c$ final states, and the suppression of some decay modes of $\pp$ and their appearance instead in products of the $\ppp$. If $\ppp$ to $\gamma + \chi_{c2}$ is observed at a branching ratio level exceeding a couple of parts in $10^4$, this will be evidence for S–D-wave mixing, while the branching ratio for $\ppp$ to $\gamma + \chi_{c0}$ is expected to be a percent, give or take a factor of 2. A similar branching ratio is expected for [*hadronic*]{} charmless decays of $\ppp$. This picture provides a rationale for large observed $\bar b \to \bar s$ penguin amplitudes in $B$ meson decays, and would be further supported by the observation of large direct CP-violating asymmetries in the decays $B^+ \to \pi^0 K^+$ and $B^0 \to K^+ \pi^-$.
Acknowledgments {#acknowledgments .unnumbered}
===============
I thank San Fu Tuan for asking questions which led to this investigation and for useful comments, and Thorsten Feldmann, David Hitlin, Kenneth Lane, and Jon J. Thaler for discussions. This work was supported in part by the United States Department of Energy through Grant No. DE FG02 90ER40560.
[99]{}
P. A. Rapidis , .
E. Eichten , ; ; K. Lane and E. Eichten, .
Mark III , J. Adler , , .
See http://www.lns.cornell.edu/public/CLEO/CLEO-C/ for a description of plans for running of CLEO/CESR at the $\ppp$ and other energies.
Yanong Zhu, Ph. D. Thesis, California Institute of Technology, 1988, Caltech report CALT-68-1513 (unpublished), based on Mark III data [@MkIII].
Walid Abdul Majid, Ph. D. Thesis, University of Illinois, 1993 (unpublished), based on Mark III data [@MkIII].
W. Kwong and J. L. Rosner, .
S. Godfrey and J. L. Rosner, 01-14, hep-ph/0105273, submitted to Phys. Rev. D.
.
J.-M. Richard, .
E. Eichten, .
P. Moxhay and J. L. Rosner, .
H. Yamamoto, A. Nishimura, and Y. Yamaguchi, ; H. Yamamoto and A. Nishimura, .
K. D. Lane, Harvard University preprint HUTP-86/A045 (unpublished).
J. L. Rosner, in [*Particles and Fields 3*]{}, Proceedings of the Banff Summer Institute (CAP) 1988, Banff, Alberta, 14–26 August 1988, edited by A. N. Kamal and F. C. Khanna (World Scientific, Singapore, 1989), p. 395.
M. E. B. Franklin, Ph. D. Thesis, Stanford University, 1982, Stanford Linear Accelerator Center report SLAC-0254 (unpublished); Mark II , M. E. B. Franklin , ; BES , J. Z. Bai , ; ; ; F. A. Harris, hep-ex/9903036, presented at APS Division of Particles and Fields Meeting, UCLA, January, 1999, and hep-ex/9910027, in [*Proceedings of the International Europhysics Conference on High-Energy Physics*]{} (EPS-HEP 99), Tampere, Finland, 15–21 July 1999, edited by K. Huitu, H. Kurki-Suonio, and J. Maalampi (IOP, 2000), p. 859.
S. J. Brodsky, G. P. Lepage, and S. F. Tuan, ; Y.-Q. Chen and E. Braaten, , ; S. F. Tuan, ; Y. F. Gu and S. F. Tuan, invited talk by S. F. Tuan at 8th International Conference on Hadron Spectroscopy (Hadron 99), Beijing, 24–28 August, 1999, edited by W. G. Li, Y. Z. Huang, and B. S. Zou (North-Holland, 2000), .
M. Suzuki, .
J. L. Rosner, .
J.-M. Gérard and J. Weyers, .
T. Feldmann and P. Kroll, .
M. Suzuki, .
I. Dunietz, J. Incandela, F. D. Snider, and H. Yamamoto, ; I. Dunietz, in 97, , .
M. Ciuchini, E. Franco, G. Martinelli, and L. Silvestrini, ; M. Ciuchini, R. Contino, E. Franco, G. Martinelli, and L. Silvestrini, .
Y.-Y. Keum, H.-N. Li, and A. I. Sanda, ; ; Y.-Y. Keum and H.-N. Li, .
M. Gronau and J. L. Rosner, ; A. S. Dighe, M. Gronau, and J. L. Rosner, ; .
V. A. Novikov , .
Y.-P. Kuang and T.-M. Yan, .
E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane, and T. M. Yan, ; ; .
K. Heikkilä, N. A. Törnqvist, and S. Ono, .
H. Grotch, X. Zhang, and K. J. Sebastian, .
R. McClary and N. Byers, .
F. A. Harris, hep-ex/9903036 [@psip].
P. G. O. Freund and Y. Nambu, ; W.-S. Hou and A. Soni, .
Y. F. Gu and X. H. Li, .
J. Thaler, private communication.
Lead-glass Wall , I. Peruzzi , ; D. L. Scharre , : $\sigma(\ppp) = 10.3 \pm 1.6$ nb.
R. Partridge, Ph. D. thesis, California Institute of Technology, report CALT-68-1150, 1974 (unpublished), based on Crystal Ball data: $\sigma(\ppp) = 6.7 \pm 0.9$ nb.
Mark II , R. H. Schindler , : $\sigma(\ppp) = 9.3 \pm 1.4$ nb.
BES , J. Z. Bai , report hep-ex/0102003, submitted to Phys. Rev. Lett.: $\sigma(\ppp) = 8.7 \pm 2.5$ nb (my estimate).
CLEO , T. E. Coan , .
A. Lenz, talk given at UK Phenomenology Workshop on Heavy Flavor and CP Violation, Durham, England, 17-22 Sept. 2000, preprint hep-ph/0011258.
SLD , presented by A. S. Chou at Division of Particles and Fields Meeting, American Physical Society, Columbus, Ohio, August 2000, SLAC report SLAC-PUB-8686; presented by C. S. Lin at 36th Rencontres de Moriond on Electroweak Interactions and Unified Theories, Les Arcs, France, 10–17 March 2001, hep-ex/0105025.
G. J. Barker, Talk No. 07-e-04 at International Conference on High Energy Physics, Osaka, Japan, August 2000 (unpublished).
CLEO , B. H. Behrens , .
I. Halperin and A. Zhitnitsky, ; ; F. Yuan and K.-T. Chao, ; D. Atwood and A. Soni, ; ; H. Fritzsch, ; H.-Y. Cheng and B. Tseng, ; A. L. Kagan and A. A. Petrov, UCHEP-27/UMHEP-443, hep-ph/9707354 (unpublished); A. Ali and C. Greub, ; W.-S. Hou and B. Tseng, ; A. Datta, X.-G. He, and S. Pakvasa, .
CLEO , S. Chen , .
M. Beneke, G. Buchalla, M. Neubert, and C. T. Sachrajda, ; ; CERN report CERN-TH-2001-107, hep-ph/0104110 (unpublished).
M. Gronau and J. L. Rosner, .
M. Neubert, .
[^1]: Enrico Fermi Institute preprint EFI 01-21, hep-ph/0105327. Submitted to Physical Review D.
[^2]: rosner@hep.uchicago.edu
[^3]: The numbers in parentheses indicate the masses of the particles, in MeV/$c^2$.
[^4]: We shall use spectroscopic notation $n^{2S+1}L_J$, where $n = 1, 2, 3, \ldots$ is the radial quantum number; $S = 0$ or 1 is the $Q \bar Q$ spin; $L = S,~P,~D, \ldots$ ($l = 0, 1, 2, \ldots$) is the orbital angular momentum; and $J = 0, 1, 2, \ldots$ is the total spin.
[^5]: For later discussions of mixing due to coupled-channel effects see [@ECC].
[^6]: The solution with $\phi = 12^{\circ}$, favored by coupled-channel calculations [@EGKLY; @CC], predicts $\Gamma(\ppp \to \gamma \chi_{c(0,1,2)}) = (524,~73,~61)$ keV, implying that the $\chi_{c1}$ signal of Ref. [@Zhu] should not be confirmed.
[^7]: The same average was found in [@Zhu] without the data of [@BES].
For two-dimensional electrons in a perpendicular magnetic field $B_{\perp}$, independent electron eigenstates occur in manifolds known as Landau levels with macroscopic degeneracy $AB_{\perp}/\Phi_0$, where $A$ is the sample area and $\Phi_0$ is the magnetic flux quantum. The zero-width energy bands are responsible for a tremendous variety of many-body physics that has been observed in the quantum Hall regime [@prangegirvin; @sarmapinczuk]. Quantum Hall ferromagnetism, of interest here, occurs when two different Landau levels distinguished by the cyclotron energy, spin, or quantum well subband labels of their orbitals are brought into energetic alignment and the Landau level filling factor $\nu$ is close to an integer. Neglecting charge fluctuations, low-energy states of quantum Hall ferromagnets (QHFs) are specified by assigning to each orbital in the Landau level a two-component spinor $(\cos\theta/2,
e^{i\varphi}\sin\theta/2)$ corresponding to a pseudospin oriented along a general unit vector $\hat{m}=(\sin\theta\cos\varphi, \sin\theta\sin\varphi,
\cos\theta)$. (The influence of remote Landau levels can be captured perturbatively as necessary.) While ordered states can occur when any two Landau levels simultaneously approach the chemical potential, the nature of the ground state is sensitive to the microscopic character of the crossing Landau levels [@jungwirthprl98; @jungwirthprb01]. Isotropic [@tycko; @girvinmacd], XY [@girvinmacd; @eisenstein; @spielman] and Ising QHFs [@jungwirthprl98; @piazza; @eom; @giuliani; @daneshvar] are now well established. Our work is motivated by the recent observation [@depoortere] of hysteretic transport and unexplained resistance spikes when Landau levels with different quantized kinetic (cyclotron) energies cross. We argue that the resistance spikes are due to charge transport in the 1D quasiparticle systems of long domain wall loops and establish a correspondence between their occurrence and vanishing domain-wall free-energy density at the Ising transition temperature, $T_c$.
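For concreteness, the Landau-level degeneracy $N_\phi = AB_\perp/\Phi_0$, the filling factor $\nu = n_e \Phi_0/B_\perp$, and the magnetic length $\ell$ (defined below by $2\pi\ell^2 B_\perp = \Phi_0$) can be evaluated numerically; the sample area, field, and density used here are illustrative assumptions, not values from the text:

```python
import math

# Landau-level degeneracy N_phi = A*B_perp/Phi_0, filling factor
# nu = n_e*Phi_0/B_perp, and magnetic length ell = sqrt(Phi_0/(2*pi*B_perp)).
h = 6.626e-34        # Planck constant, J s
e = 1.602e-19        # elementary charge, C
Phi0 = h / e         # magnetic flux quantum, ~4.14e-15 Wb

A = 1e-6             # sample area, m^2 (1 mm^2); illustrative
B_perp = 4.0         # perpendicular field, T; illustrative
n_e = 3.0e15         # 2D electron density, m^-2; illustrative

N_phi = A * B_perp / Phi0                        # ~1e9 orbitals per level
nu = n_e * Phi0 / B_perp                         # filling factor, near 3
ell = math.sqrt(Phi0 / (2 * math.pi * B_perp))   # magnetic length, ~13 nm
print(round(nu, 2), round(ell * 1e9, 1))  # 3.1 12.8
```

The macroscopic degeneracy ($\sim 10^9$ states in a millimeter-scale sample at a few tesla) is what makes the zero-width bands so susceptible to interaction physics.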
The dependence of the uniform QHF state energy per electron on pseudospin orientation has the form [@jungwirthprb01]: $E[\hat{m}]=-bm_z-Jm_z^2$, where $b$ is an effective magnetic field that includes both single-particle Landau level splitting and interaction contributions [@jungwirthprb01] and $J >0$ is an effective Ising interaction parameter. At $b =0$, the $m_z = 1$ (pseudospin $\uparrow$) and $m_z = -1$ (pseudospin $\downarrow$) states are degenerate. In the following we establish an association between $b=0$ and the experimental resistance spikes, and propose an explanation for the spike origin. For tilted magnetic fields and variable 2D electron densities, the $b=0$ condition at a given filling factor is achieved along a continuous line in the two-dimensional $(B_{tot}-B_{\perp})$ space, which can be explored experimentally by tilting the field away from the sample normal. ($B_{tot}$ is the total magnetic field.)
In Fig. \[prlfig1\] we compare our theoretical[@jungwirthprl98; @jungwirthprb99] $b=0$ line for $\nu=3$, based on numerical self-consistent-field calculations for the geometry of De Poortere [*et al.*]{}’s sample and on many-body RPA/Hartree-Fock theory, with the line along which resistance spikes were observed. The coincidence of these two curves strongly suggests that the spikes occur when $b=0$. The same calculations[@jungwirthprb01] yield the estimate $J= 0.018 e^2/\epsilon \ell/k_B \sim
2 {\rm K}$, where $\ell$ is the magnetic length defined by $2 \pi \ell^2 B_{\perp} = \Phi_0$.
The ground state of an Ising QHF has $m_z = 1$ for $b>0$ and $m_z = -1$ for $b <0$. At finite temperatures, non-trivial pseudospin magnetization configurations become important. For Ising QHFs, an elementary calculation shows that spin-wave collective excitations have a gap $\Delta/k_B = 4J/k_B
\sim 8 {\rm K}$. Since the hysteretic resistance spikes occur only for $T < 0.5 {\rm K}$, spin-wave excitations cannot play a role. Instead, as we now explain, the important thermal fluctuations in Ising QHFs involve domain walls between $m_z=1$ and $m_z=-1$ regions of the sample.
In classical 2D Ising models the critical temperature can be understood as a competition between unfavorable near-neighbor-spin interaction energy along a domain wall, $L\gamma$, and the wall configurational entropy, $Ls_c=L/\xi\, k_B\ln (3)$, where $\xi$ is the domain wall persistence length. Both give free-energy contributions proportional to wall length, $L$, with the former effect favoring short walls and the latter contribution, which is proportional to temperature, favoring long walls. For $T > T_c$ the system free energy is lowered when domain walls expand to the sample perimeters, destroying magnetic order. The structure of domain walls is more complicated in Ising QHFs. As the domain wall is traversed, the local pseudospin orientation goes from the north pole ($m_z=1$) to the south pole ($m_z=-1$), at a fixed orientation $\varphi$ of its $\hat x-\hat y$ plane projection. We have evaluated the energy per unit length $\gamma$ of an infinite domain wall by solving self-consistent Hartree-Fock equations: $$\tan\theta(X)=\frac{-2\big(b-H^F_{\uparrow,\downarrow}(X)\big)}
{H^H_{\uparrow,\uparrow}(X)+H^F_{\uparrow,\uparrow}(X)
-H^H_{\downarrow,\downarrow}(X)-H^F_{\downarrow,\downarrow}(X)}\; ,
\label{hf}$$ where the Hartree energy is given by $$\begin{aligned}
& &H^H_{\sigma,\sigma^{\prime}}(X)=\sum_{X^{\prime}}\int d^3\vec{r}_1
\int d^3\vec{r}_2 V(\vec{r}_1-\vec{r}_2)\,\times\nonumber \\
& &\psi^{\ast}_{\sigma,X}(\vec{r}_1)
\psi_{\sigma^{\prime},X}(\vec{r}_1)
\psi^{\ast}_{\hat{m}(X^{\prime}),X^{\prime}}(\vec{r}_2)
\psi_{\hat{m}(X^{\prime}),X^{\prime}}(\vec{r}_2)
\label{hartree}\end{aligned}$$ and the exchange energy by $$\begin{aligned}
& &H^F_{\sigma,\sigma^{\prime}}(X)=-\sum_{X^{\prime}}\int d^3\vec{r}_1
\int d^3\vec{r}_2 V(\vec{r}_1-\vec{r}_2)\,\times\nonumber \\
& &\psi^{\ast}_{\sigma,X}(\vec{r}_1)
\psi_{\sigma^{\prime},X}(\vec{r}_2)
\psi^{\ast}_{\hat{m}(X^{\prime}),X^{\prime}}(\vec{r}_1)
\psi_{\hat{m}(X^{\prime}),X^{\prime}}(\vec{r}_2)\; .
\label{exchange}\end{aligned}$$ In Eqs. (\[hartree\]) and (\[exchange\]), $V(\vec{r}_1-\vec{r}_2)$ is the RPA-screened Coulomb potential and the self-consistent-field one-particle orbitals, $\psi_{\sigma,X}(\vec{r})$, are extended along the domain wall and localized near wavevector $k$ dependent guiding centers $X=k \ell^2$. The energy density $\gamma$ is proportional to the increase in Hartree-Fock quasiparticle energies, integrated across the domain wall. We find that the domain wall width is typically several magnetic lengths and for the $\nu=3$ coincidence we find that $\gamma \ell =
0.009 e^2/\epsilon \ell$.
A unique property of QHFs is the proportionality between electron charge density and pseudospin topological index density. It is this property that is responsible for the fascinating skyrmion physics extensively studied in the isotropic case [@tycko; @girvinmacd]. In the case of Ising QHFs, the proportionality implies a local excess charge per unit length along a domain wall $\rho_{\parallel} = e \nabla \varphi \cdot \hat n/(2 \pi) $ where $\hat n$ specifies the local direction along the domain wall. Single-valuedness of the magnetization requires that the winding number of the angle $\varphi$ around a domain wall loop be quantized in units of $2 \pi$ and hence that the excess charge of a domain wall loop be quantized in units of the electron charge $e$. The free-energy associated with the classical $\varphi$ field fluctuations within a domain wall, $$f_{\varphi}=\frac{1}{L}k_BT\ln Z ;\;
Z=\int{\cal D}\varphi\exp\big(E_c[\varphi]/k_BT\big)\; ,
\label{phifree}$$ is controlled by the Coulomb interaction energy $E_c$ due to the consequent charge fluctuations.
Assuming a domain wall persistence length $\xi\approx\ell$, the free energy density of Ising QHF domain wall loops is given by $f=\gamma-k_BT\ln(3)/\ell+f_{\varphi}$ and equals zero at $T=T_c$. For the $\nu=3$ QHF, these considerations imply that infinitely long domain walls proliferate and order is lost for $T$ larger than the transition temperature $T_c\approx 500$ mK. The close correspondence between this $T_c$ estimate and the maximum temperature ($430$ mK) at which hysteretic resistance spikes are observed[@depoortere] strongly supports our contention that the unusual transport phenomena are a consequence of the existence of long domain wall loops in these materials. In the following we first discuss the $b$-field, Landau level filling factor, and temperature dependence of the system’s domain-wall soup and then demonstrate that this picture can account for many details of the transport observations.
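An order-of-magnitude check of the $T_c$ scale follows from balancing the wall energy against the configurational entropy alone (a rough sketch that neglects the $f_\varphi$ contribution), using the quoted values $J = 0.018\, e^2/\epsilon\ell/k_B \sim 2$ K and $\gamma\ell = 0.009\, e^2/\epsilon\ell$:

```python
import math

# Rough T_c from gamma ~ k_B * T_c * ln(3) / ell, i.e.
# T_c ~ gamma*ell/(k_B*ln 3), neglecting the f_phi term. The Coulomb scale
# is fixed by the quoted J = 0.018 e^2/(eps*ell) ~ 2 K.
coulomb_scale_K = 2.0 / 0.018           # e^2/(eps*ell)/k_B, ~111 K
gamma_ell_K = 0.009 * coulomb_scale_K   # gamma*ell/k_B, ~1 K
Tc_K = gamma_ell_K / math.log(3.0)
print(round(Tc_K, 2))  # 0.91
```

This crude balance already lands at the sub-kelvin scale of the quoted $T_c \approx 500$ mK; the neglected $f_\varphi$ term shifts the precise value.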
Domain wall loops are characterized by their length and by their charge, with infinitely long loops appearing only for $b=0$ and $T > T_c$. For finite size loops with a typical radius larger than the domain wall width we can use our Hartree-Fock self-consistent results for $\theta(X)$ to estimate the Coulomb self-interaction energy. A two-dimensional charge density of a circular loop with excess charge $e$ distributed uniformly along the domain wall is given by $$\rho_{2D}(r)=\frac{e}{4\pi r}\frac{d}{dr}\cos\theta(r)\;.
\label{rho2d}$$ Since the corresponding Coulomb self-interaction energy is proportional to the square of the charge and approximately inversely proportional to the length, charged domain wall loops have a higher energy and, at integer filling factors, will always be less common than neutral domain wall loops. Resistance spikes are generically observed slightly away from integer filling factors, however, and here the situation changes because the domain wall loops can exchange charge with the rest of the 2D electron system.
The lowest energy elementary charged excitations of the $\nu=3$ ground state are ordinary Hartree-Fock electron and hole quasiparticles [@lilliehook], not charged domain wall loops. In systems with no disorder, the chemical potential lies in the middle of the Hartree-Fock gap when $\nu$ is an integer but moves quickly (by $\delta \mu$) toward the electron quasiparticle energy for $\nu >3$ and toward the hole quasiparticle energy for $\nu < 3$. These chemical potential shifts will be reduced by disorder, which broadens the quasiparticle bands. The change in chemical potential favors charge-$Q$ over neutral skyrmions by a large factor $\exp (Q|\delta \mu|/k_B T)$. This factor can be estimated quantitatively using the experimental value of the quasiparticle excitation gap ($\sim 2$ K) [@depoortere].
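The size of this factor is easy to estimate. As an illustrative sketch, take $|\delta\mu| \sim 1$ K (half the $\sim 2$ K quasiparticle gap) at $T = 0.43$ K, the highest temperature at which the spikes are seen, and $Q = 1$:

```python
import math

# Statistical weight favoring a charge-1 loop over a neutral one,
# exp(Q*|dmu|/(k_B*T)); |dmu| ~ 1 K is an assumed illustrative value,
# half the ~2 K quasiparticle gap.
dmu_K = 1.0   # chemical potential shift, in kelvin
T_K = 0.43    # temperature, in kelvin
Q = 1         # loop charge in units of e
weight = math.exp(Q * dmu_K / T_K)
print(round(weight, 1))  # 10.2
```

Even a modest chemical-potential shift thus favors charged loops by an order of magnitude at the relevant temperatures.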
Also important in controlling the domain wall soup is the effective field $b$, which measures the distance from Landau level coincidence. For $b \ne 0$ the energy of a domain wall loop has a contribution proportional to $b$ and the number of condensate electrons contained within the loop: $$E_b=-\frac{b}{2\ell^2}\int dr\, r\,[\cos\theta(r)-1]\; .
\label{eb}$$ This contribution will decrease the number of large domain wall loops enclosing the minority phase and is independent of the charge carried by the loop. Summing up $E_b$, the Coulomb self-interaction and chemical potential contributions and the Hartree-Fock domain wall energy of a circular loop of radius $R$, $2\pi R\gamma$, we can estimate statistical weights of neutral and charged domain wall loops in the sample of De Poortere [*et al*]{}. In Fig. \[prlfig2\] we plot our results for temperature near $T_c$ and $\nu > 3$. For non-zero $b$-fields, small neutral domain wall loops dominate, while typical loops at the coincidence are large and carry an excess charge.
We now address characteristic features of the measured resistive hysteresis loop. Dissipation can occur in Ising QHFs as a result of Hartree-Fock quasiparticle diffusion, charged domain-wall-loop diffusion, or as a result of charge diffusion within domain-wall loops. It is clear that the resistance spikes, which appear only for small $b$ and $T < T_c$, are associated with the appearance in the sample of large domain-wall loops. Even though these loops tend to be charged at the spike maximum, we expect that they will be immobile because of their large size and that dissipation due to their motion is small. Instead we propose that mobile quasiparticles inside domain walls are responsible for the increase of dissipation. In Fig. \[prlfig3\] we plot the Hartree-Fock quasiparticle energies in a cross-section of a domain wall, obtained from Eqs. (\[hf\])-(\[exchange\]). In the center of the domain wall the quasiparticle gap is reduced by nearly 50%. Away from integer filling factors, the bottom of these 1D quasiparticle bands will lie below the chemical potential which is pinned to the bulk quasiparticle energies. We note that, unlike quantum Hall edge states, counter propagating states exist within each loop. At $\nu\approx 3$, for example, the 1D states have a nearly parabolic dispersion characterized by an effective mass $m^*\approx
2 m_e$. The particles can cross the sample by scattering between overlapping loops. For $\nu=3$ and $T\approx
T_c$, it follows from the inset of Fig. \[prlfig1\] and from Fig. \[prlfig2\] that the characteristic loop radius at the spike edge is $3\ell$. At low temperatures or small magnetic lengths (high 2D electron gas densities), domain wall loops become small and dilute, so that loops do not overlap and charge diffusion within domain walls cannot contribute to dissipation. This explains the absence of resistance spikes at Landau level coincidence [@depoortere] under these circumstances.
The above mechanism also explains the different resistance spike heights observed in up and down field sweeps [@depoortere] near $\nu=3$. As shown in the inset of Fig. \[prlfig1\], the majority pseudospin Landau level for the up-sweep is the $n=2$ spin-up level while for the down-sweep it is the $n=0$ spin-down Landau level. This difference in Landau level configurations in the two sweep directions alters remote Landau level screening in the sample, which has a marked effect on the quasiparticle energy spectrum. The reduction of the quasiparticle gap in the domain wall, relative to its bulk value, is stronger in the up-sweep case, leading to more domain-wall quasiparticles and more dissipation, as seen in experiment. Similar agreement between the measured hysteresis loop properties and domain wall quasiparticle spectra applies for $\nu=4$ [@depoortereunpubl]. In Fig. \[prlfig3\] we also plot energy spectra at $\nu=5$, which are nearly identical for the up or down majority pseudospin orientations. This explains the absence of peak-height asymmetry in the hysteresis measurement at this filling factor [@depoortereunpubl].
We thank Etienne De Poortere, Herbert Fertig, and Mansour Shayegan for many important discussions. Our work was supported by the R.A. Welch Foundation, by the Ministry of Education of the Czech Republic under Grant OC P5.10, and by the Grant Agency of the Czech Republic under Grant 202/01/0754.
R.E. Prange and S.M. Girvin (eds), [*The Quantum Hall Effect*]{} (Springer, New York, 1990).
S. Das Sarma and A. Pinczuk (eds), [*Perspectives in Quantum Hall Effects*]{} (Wiley, New York, 1996).
T. Jungwirth, S.P. Shukla, L. Smrčka, M. Shayegan, and A.H. MacDonald, Phys. Rev. Lett. [**81**]{}, 2328 (1998).
T. Jungwirth and A.H. MacDonald, Phys. Rev. B [**63**]{}, 035305 (2001).
R. Tycko, S.E. Barrett, G. Dabbagh, L.N. Pfeiffer, and K.W. West, Science [**268**]{}, 1460 (1995).
S.M. Girvin and A.H. MacDonald, in Ref. [@sarmapinczuk].
J.P. Eisenstein, in Reference ([*5*]{}).
I.B. Spielman, J.P. Eisenstein, L.N. Pfeiffer, and K.W. West, Phys. Rev. Lett. [**84**]{}, 5808 (2000).
V. Piazza, V. Pellegrini, F. Beltram, W. Wegscheider, T. Jungwirth, and A.H. MacDonald, Nature [**402**]{}, 638 (1999).
J. Eom, H. Cho, W. Kang, K.L. Campman, A.C. Gossard, M. Bichler, and W. Wegscheider, Science [**289**]{}, 2320 (2000).
G.F. Giuliani and J.J. Quinn, Phys. Rev. B [**31**]{}, 6228 (1985).
A.J. Daneshvar, C.J.B. Ford, M.Y. Simmons, A.V. Kahetskii, A.R. Hamilton, M. Pepper, and D.A. Ritchie, Phys. Rev. Lett. [**79**]{}, 4449 (1997).
E. P. De Poortere, E. Tutuc, S. J. Papadakis, and M. Shayegan, Science [**290**]{}, 1546 (2000).
T. Jungwirth, A.H. MacDonald, L. Smrčka, and S.M. Girvin, Phys. Rev. B [**60**]{}, 15574 (1999).
Self-consistent local-spin-density approximation calculations were carried out to establish that for the 15 nm wide AlAs quantum wells studied in these experiments, orbital effects of the in-plane field are negligibly small. In the calculation we assumed Landé g-factor $g=1.9$, electron effective mass $m^*=0.41 m_e$, and average heterostructure dielectric constant $\epsilon = 11.5$.
E. P. De Poortere, E. Tutuc, S. J. Papadakis, and M. Shayegan, unpublished data.
D. Lilliehöök, Phys. Rev. B [**62**]{}, 7303 (2000).
---
abstract: 'We show that there exists no CR-regular embedding of the 5-sphere $S^5$ into $\mathbb{C}^4$, and also obtain analogous results for embeddings of higher dimensional spheres into complex space.'
author:
- 'Ali M. Elgindi'
title: 'On the Non-Existence of CR-Regular Embeddings of $S^5$'
---
Introduction
============
The h-principle was developed in the early 1970’s by M. Gromov who applied it to the problem of embeddings of real manifolds into complex space to give necessary and sufficient conditions for the existence of totally real embeddings (see \[5\]). In particular, he demonstrated that the only spheres $S^n$ which admit totally real embeddings to $\mathbb{C}^n$ are in the dimensions $n=1,3$ (the case $n=1$ being trivial).
In the 1980’s, F. Forstneric extended the work of Gromov and proved that every compact, orientable 3-manifold admits a totally real embedding into $\mathbb{C}^3$ (see \[4\]).
In our work in \[1\], we demonstrated that every topological type of knot (or link) in $S^3$ can arise as the set of complex tangents to a $\mathcal{C}^n$-embedding $S^3 \hookrightarrow \mathbb{C}^3$ (for any given integer n). In \[2\], we derived a topological invariant for the local removal of complex tangents to an embedding of a closed oriented 3-manifold into $\mathbb{C}^3$, leaving the embedding unchanged outside a small neighborhood of a chosen set of complex tangents. This led to our work in \[3\], where we demonstrated that any embedding $S^3 \hookrightarrow \mathbb{C}^3$ can be approximated $\mathcal{C}^0$-close by a totally real embedding.
In their paper \[6\], N. Kasuya and M. Takase generalized our work in \[1\] to demonstrate that every knot or link in $S^3$ can be exactly assumed as the set of complex tangents to a smooth embedding of $S^3$. In fact, they generalize further to demonstrate that any 1-dimensional submanifold of a closed orientable 3-manifold that is homologically trivial may arise as the set of complex tangents to an embedding $M \hookrightarrow \mathbb{C}^3$. Their method of proof is based on the theory of stable maps in topology and Saeki’s Theorem.
In more general situations, in particular for embeddings $f: M^n \hookrightarrow \mathbb{C}^q$ ($n \neq q$), the h-principle still holds, although for $n>q$ there can be no totally real embeddings in the literal sense. In the situation $n>q$, every point $x \in M$ must be complex tangent, and the $\textit{complex dimension}$ of $x$ is defined to be:
$dim(x) = dim_{\mathbb{C}} (f_*(T_x M) \bigcap J f_* (T_x M))$ where $J$ is the complex structure. Note that $dim(x) \geq n-q$, by elementary linear algebra.
If $dim(x) = n-q$, we say that $x$ is a $\textit{CR-regular}$ point of the embedding. If $dim(x) > n-q$, we say that $x$ is $\textit{CR-singular}$.
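The complex dimension is straightforward to compute in coordinates: with $J$ the standard complex structure on $\mathbb{R}^{2q}$ and $P$ a $k$-plane, $\dim_{\mathbb{R}}(P \cap JP) = 2\dim P - \operatorname{rank}[P\mid JP]$. The following minimal sketch checks this for two illustrative 5-planes in $\mathbb{R}^8 = \mathbb{C}^4$ (the example planes are chosen for illustration only):

```python
import numpy as np

# Standard complex structure on R^8 = C^4, acting as J(x_k) = y_k,
# J(y_k) = -x_k on each coordinate pair (x_k, y_k).
J = np.kron(np.eye(4), np.array([[0.0, -1.0], [1.0, 0.0]]))

def complex_dim(P):
    """Complex dimension of span(P) ∩ J·span(P) for an 8×k matrix P whose
    columns span a k-plane, via dim(P ∩ JP) = 2*dim(P) - dim(P + JP)."""
    k = np.linalg.matrix_rank(P)
    r = np.linalg.matrix_rank(np.hstack([P, J @ P]))
    return (2 * k - r) // 2

E = np.eye(8)
# CR-regular example: span{e1,e2,e3,e5,e7} contains exactly the complex
# line spanned by (e1,e2), so dim(x) = 1 = n - q for n = 5, q = 4.
print(complex_dim(E[:, [0, 1, 2, 4, 6]]))  # 1
# CR-singular example: span{e1,e2,e3,e4,e5} contains two complex lines.
print(complex_dim(E[:, [0, 1, 2, 3, 4]]))  # 2
```

The first example realizes the generic (CR-regular) case for a 5-plane in $\mathbb{C}^4$; the second shows how an extra complex line makes the plane CR-singular.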
In his paper \[8\], M. Slapar considered the situation of a closed oriented 4-manifold embedded into $\mathbb{C}^3$. In this situation, complex tangents generically are discrete (and finite), and can be classified as either elliptic or hyperbolic by comparing the orientation of the tangent space of the embedded manifold at the complex tangent (which is necessarily complex) with the induced orientation of the tangent space as a complex subspace of the tangent space of the complex ambient manifold. Slapar also demonstrated (in \[8\]) that a 4-manifold admits a CR-regular embedding into $\mathbb{C}^3$ if and only if the manifold is parallelizable.
In this paper, we will focus on embeddings $S^5 \hookrightarrow \mathbb{C}^4$. In this situation, a generic embedding will have CR-singular points along a link in $S^5$, all other points being CR-regular. We will show that there exists no CR-regular embedding $S^5 \hookrightarrow \mathbb{C}^4$, and we will also derive generalizations for embeddings of spheres of higher dimensions.
The Result and Proof
====================
An embedding of $S^5 \hookrightarrow \mathbb{C}^4$ must have a complex line in the tangent space at each point $x \in S^5$, for dimensionality reasons. There are two classes of points in such an embedding of $S^5$. A point may have a complex plane in its tangent space; we say such a point is CR-singular. If the tangent space at the point does not contain a complex plane (only a complex line), we say the point is CR-regular. For a generic embedding $S^5 \hookrightarrow \mathbb{C}^4$, we will have a link of CR-singular points, and all other points will be CR-regular (this readily follows from the dimensions of the relevant spaces and an application of Sard’s theorem).
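For orientation, here is a rough sketch of the dimension count behind this genericity claim (an informal check under standard transversality assumptions, not a substitute for the argument via Sard’s theorem):

```latex
% The Gauss map lands in the Grassmannian of 5-planes in R^8:
\[
\dim \mathbb{G}_{5,8} = 5(8-5) = 15 .
\]
% A CR-singular 5-plane P contains a complex 2-plane L' = P \cap JP,
% and is spanned by L' together with one further real line:
\[
\underbrace{\dim_{\mathbb{R}} \mathbb{G}_{\mathbb{C}}(2,4)}_{=\,2\cdot 2\cdot(4-2)\,=\,8}
\;+\;
\underbrace{\dim \mathbb{RP}\big(\mathbb{R}^8/L'\big)}_{=\,4-1\,=\,3}
\;=\; 11 ,
\]
% so the CR-singular locus in the Grassmannian has codimension
% 15 - 11 = 4. A generic Gauss map G : S^5 -> G_{5,8} is transverse
% to this locus, and its preimage has dimension 5 - 4 = 1: a link.
```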
Now, let $\mathbb{G}_{5,8}$ be the Grassmannian of 5-planes in $\mathbb{R}^8 = \mathbb{C}^4$, and consider its subspace $\mathbb{Y} = \{P \in \mathbb{G}_{5,8} \mid P \cap JP \cong \mathbb{C} \}$ of planes that contain only a complex line. We say a 5-plane $P \in \mathbb{Y}$ is $\textit{totally real}$.
Consider the Gauss map of the embedding, $\textit{G}: S^5 \rightarrow \mathbb{G}_{5,8}$. Then the CR-regular points of the embedding are exactly the points whose image under $\textit{G}$ is contained in $\mathbb{Y}$. In particular:
$\{$CR-regular points$\} = \textit{G}^{-1} (\mathbb{Y}) \subset S^5$
A CR-regular embedding $S^5 \hookrightarrow \mathbb{C}^4$ is thus one whose Gauss map has image contained in $\mathbb{Y}$; that is, the tangent plane at every point is totally real.
We will show that no embedding can satisfy this, in particular:
: There exists no CR-regular embedding $S^5 \hookrightarrow \mathbb{C}^4$.
**Proof:**
First, let us denote by $\mathbb{G} = \mathbb{G}_{3,8}$ the Grassmannian of 3-planes in $\mathbb{R}^8$. Note that by taking orthogonal complements: $\mathbb{G} = \mathbb{G}_{3,8} \cong \mathbb{G}_{5,8}$. Now denote by $\mathbb{V} = \mathbb{V}_{3,8}$ the Stiefel manifold of real 3-frames on $\mathbb{R}^8$. Note that the Stiefel manifold deformation retracts (and hence is homotopy equivalent) to the manifold of orthonormal 3-frames in $\mathbb{R}^8$, which we denote by $\mathbb{V}^o$.
Next consider the Stiefel manifold $V^{tr}_{3,8} \subset \mathbb{V}$ of real 3-frames of $\mathbb{R}^8 \cong \mathbb{C}^4$ for which the 5-plane which is the orthogonal complement of their span (as real vectors) is totally real in $\mathbb{C}^4$ (contains only a complex line).
Let $(v_1,v_2,v_3) \in V^{tr}_{3,8}$ be such a 3-frame. Applying the Gram-Schmidt process to this 3-frame first, we may assume without loss of generality that these vectors are orthonormal (their span does not change). Denote their span by $\mathcal{N}=span\{v_1,v_2,v_3\}$, its orthogonal complement by $\mathcal{P} = \mathcal{N}^{\perp} \subset \mathbb{R}^8$, and let $L \subset \mathcal{P}$ be the (unique) complex line contained in $\mathcal{P}$. In particular, if we denote the complex structure by $J:\mathbb{R}^8 \rightarrow \mathbb{R}^8$, we have $L = \mathcal{P} \cap J(\mathcal{P}) \cong \mathbb{C}$.
Note the following from linear algebra:
$(\mathcal{N}+J\mathcal{N})^\perp = \mathcal{N}^\perp \cap (J\mathcal{N})^\perp = \mathcal{N}^\perp \cap J(\mathcal{N}^\perp) = \mathcal{P} \cap J(\mathcal{P}) = L$.
Hence, by taking dimensions: $dim(\mathcal{N}+J\mathcal{N}) \geq 6$.
But, as each of the subspaces $\mathcal{N}$ and $J\mathcal{N}$ is of dimension 3, it must be that $\mathcal{N} \cap J\mathcal{N} = \{0\}$ and $(\mathcal{N} \oplus J\mathcal{N})^\perp = L$. Note then that $\mathcal{N}$ is a totally real 3-plane.
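Spelling out the dimension count in the last two steps (a routine check):

```latex
% From (N + JN)^\perp \subseteq L and \dim L = 2:
\[
\dim(\mathcal{N}+J\mathcal{N})
= 8 - \dim\big((\mathcal{N}+J\mathcal{N})^{\perp}\big)
\geq 8 - 2 = 6 .
\]
% The Grassmann formula, with \dim\mathcal{N} = \dim J\mathcal{N} = 3, then gives
\[
\dim(\mathcal{N}\cap J\mathcal{N})
= \dim\mathcal{N} + \dim J\mathcal{N} - \dim(\mathcal{N}+J\mathcal{N})
\leq 3 + 3 - 6 = 0 ,
\]
% so \mathcal{N} \cap J\mathcal{N} = \{0\}, \dim(\mathcal{N}\oplus J\mathcal{N}) = 6
% exactly, and (\mathcal{N}\oplus J\mathcal{N})^{\perp} = L.
```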
Now consider the 6-frame of $\mathbb{R}^8$:
$\{v_1,v_2,v_3,Jv_1,Jv_2,Jv_3\}$. These six linearly independent vectors form a basis for $\mathcal{N} \oplus J\mathcal{N} \subset \mathbb{R}^8$, whose orthogonal complement is $L$.
Applying the Gram-Schmidt process to these six vectors and extending by one further vector, we necessarily obtain a unit vector $\theta \in L$.
We can now form a basis of $\mathbb{R}^8$ by adding two vectors to the collection: $\{v_1,v_2,v_3,Jv_1,Jv_2,Jv_3, \theta, J\theta\}$.
Suppose now that we had a CR-regular embedding $F: S^5 \hookrightarrow \mathbb{C}^4$. As the normal bundle of any embedding $S^5 \hookrightarrow \mathbb{R}^8$ is trivial (see Massey \[7\]), a choice of trivialization of the normal bundle yields a normal frame for the embedding $F$; denote this normal frame by $\eta:S^5 \rightarrow V^{tr}_{3,8}$. Let $\eta(m) = \{v_1,v_2,v_3\}$ at a point $m \in S^5$.
It then follows from the above that the 5-frame $\{Jv_1,Jv_2,Jv_3, \theta, J\theta\}$ spans $J\mathcal{N} \oplus L$, a subspace complementary to the normal space $\mathcal{N}$, and hence projects orthogonally to a frame of the tangent space of the embedded $S^5$ at the point $m$. As the Gram-Schmidt process (and orthogonal projection) is continuous, this gives us a global trivialization of the tangent bundle of the 5-sphere. But the 5-sphere is not parallelizable!
With this contradiction, we conclude that there exists no CR-regular embedding of $S^5 \hookrightarrow \mathbb{C}^4$.
$\textbf{\textit{QED}}$
We now recall the following result of Kervaire (see Massey in \[7\]):
The normal bundle to an $n$-sphere embedded in $\mathbb{R}^{n+k}$ with $k>\frac{n+1}{2}$ is necessarily trivial.
We may now directly generalize our above proof to achieve higher dimensional analogues:
: There exists no CR-regular embedding $S^n \hookrightarrow \mathbb{C}^{n-r}$ for $r < \frac{n-1}{4}$, $n \neq 3,7$.
**Proof:**
Applying the theorem of Kervaire with $k=n-2r$, we find that the normal bundle of an $n$-sphere embedded in $\mathbb{R}^{2n-2r}$ is necessarily trivial for $r < \frac{n-1}{4}$. Note that we already established the analogous result above for the special case $n=5$, $r=1$ (which is not in the dimension range of this theorem), and that the $r=0$ cases reduce to Gromov’s work on totally real embeddings $S^k \hookrightarrow \mathbb{C}^k$.
Suppose there exists a CR-regular embedding $F:S^n \hookrightarrow \mathbb{C}^{n-r}$, and let $x \in S^n$. Then the holomorphic tangent space $H := T_x \cap JT_x$ is a complex plane of dimension $r$. As the normal bundle is trivial, we can specify a frame for the normal bundle $\mathcal{N}$ at $x$: $\{v_1,...,v_{n-2r}\}$, which we may assume without loss of generality is orthonormal. In direct analogy with our work in the proof of Theorem 1, we have $\mathcal{N} \cap J\mathcal{N} = \{0\}$ and $TS^n = J\mathcal{N} \oplus H$. We may then extend the frame using the complex structure, $\{v_1,...,v_{n-2r}, Jv_1,...,Jv_{n-2r}\}$, to obtain a $(2n-4r)$-frame of $\mathbb{R}^{2n-2r}$. Now, applying Gram-Schmidt to extend by one vector, we obtain a vector $\theta_1 \in H$.
We then have the following frame tangent to $S^n$: $\{Jv_1,...,Jv_{n-2r}, \theta_1, J \theta_1\}$, to which we apply Gram-Schmidt once more to obtain a vector $\theta_2$. Proceeding inductively, we apply Gram-Schmidt to the extended frame: $\{Jv_1,...,Jv_{n-2r}, \theta_1, J \theta_1, \theta_2, J \theta_2\}$, and we repeat ($r$ times in total) to obtain the frame: $\{Jv_1,...,Jv_{n-2r}, \theta_1, J \theta_1,..., \theta_r, J \theta_r\}$ which will necessarily span the tangent space to $S^n$ at each point. This will then trivialize the tangent bundle globally, but for $n \neq 3,7$, $S^n$ is not parallelizable. Hence, the theorem follows.
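As a sanity check on the proof, the frame bookkeeping works out as follows (our tally, using only the dimensions fixed above):

```latex
% Ambient and bundle dimensions (all real):
%   \dim_{\mathbb{R}} \mathbb{C}^{n-r} = 2n - 2r,
%   rank of the normal bundle: (2n-2r) - n = n - 2r,
%   \dim_{\mathbb{R}} H = 2r.
\[
\underbrace{(n-2r)}_{Jv_1,\dots,Jv_{n-2r}}
\;+\;
\underbrace{2r}_{\theta_1, J\theta_1, \dots, \theta_r, J\theta_r}
\;=\; n \;=\; \dim S^n ,
\]
% so the final frame has exactly n vectors, as a trivialization requires.
% Kervaire's hypothesis k > (n+1)/2 with k = n - 2r reads
\[
n - 2r > \tfrac{n+1}{2}
\iff 2n - 4r > n + 1
\iff r < \tfrac{n-1}{4} .
\]
```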
$\textbf{\textit{QED}}$
A.M. Elgindi, “On the Topological Structure of Complex Tangencies to Embeddings of $S^3$ into $\mathbb{C}^3$,” New York J. of Math. Vol. 18 (2012), 295-313.
A.M. Elgindi, “A topological obstruction to the removal of a degenerate complex tangent and some related homotopy and homology groups,” International Journal of Mathematics Vol. 26 (2015).
A.M. Elgindi, “Totally real perturbations and nondegenerate embeddings of $S^3$,” New York J. of Math. Vol. 21 (2015), 1283-1293.
F. Forstneric, “On totally real embeddings into $\mathbb{C}^n$,” Expositiones Mathematicae, 4 (1986), pp. 243-255.
M.L. Gromov, “Convex Integration of Differential Relations,” Math. USSR Izv. 7 (1973).
N. Kasuya and M. Takase, “Knots and links of complex tangents.” arXiv preprint 1606.03704 (2016).
W.S. Massey, “On the Normal Bundle of a Sphere Imbedded in Euclidean Space,” Proc. of the Amer. Math. Soc., Vol. 10, No. 6 (Dec., 1959), pp. 959-964.
M. Slapar, “Cancelling complex points in codimension two,” Bull. Aust. Math. Soc. 88 (2013), no. 1, 64-69.
---
abstract: 'We present 40 quasar absorption line systems at intermediate redshifts ($z \sim 1$), with focus on one of the most kinematically complex known, as examples of how the unique capabilities of space–based and ground–based facilities can be combined to glean much broader insights into astrophysical systems.'
author:
- 'Chris Churchill$^{1}$, Rick Mellon$^{1}$, Jane Charlton$^{1}$, & Buell Januzzi$^{2}$'
title: Multiphase Gas in Intermediate Redshift Galaxies
---
Hubble and the More Complete Picture
====================================
Within the field of quasar absorption lines, one long–standing question is how the halos and ISM of earlier epoch galaxies compare or relate, in an evolutionary sense, to those of the present epoch. The look–back time to $z=1$ covers well more than half the age of the universe. Furthermore, spectral and morphological properties of absorbing galaxies are accessible with present day ground–based and space–based observatories (Steidel, Dickinson, & Persson 1994; Steidel 1998). Thus, absorption line studies at intermediate redshifts provide an opportunity to examine the gaseous evolution of galaxies.
The ISM and halos of local galaxies are comprised of many ionization phases, including diffuse ionized gas, extended coronae, and denser low ionization regions often located in front of shock fronts (e.g. Dahlem 1998). In absorption, simultaneous study of both the low and high ionization phases in our Galaxy has been required to constrain the ionization mechanisms, chemical abundance variations, and dust properties (e.g. Savage & Sembach 1996).
A significant obstacle to rapid progress in studies employing absorption lines, however, is that the strongest transitions of the cosmologically most abundant elements lie in the far to near ultraviolet (UV) portion of the electromagnetic spectrum. Fortunately, at $z\sim1$, the near UV transitions, which are most often associated with neutral and low ionization ions[^1], are redshifted into the visible. Thus, they can be observed from the ground with large aperture telescopes. However, the far UV transitions, associated with moderate and high ionization ions[^2], are redshifted to the near UV; a study of the high ionization component requires a space–based telescope, i.e. [*HST*]{}. The [*HST*]{} archive is rich with $R=1300$ FOS spectra of quasars, the majority due to the QSO Absorption Line Key Project (Bahcall et al. 1993).
The C IV–Mg II Kinematics Connection: Multiphase Gas
====================================================
We used HIRES/Keck spectra ($R\sim 6$ km s$^{-1}$) and archival FOS/[*HST*]{} spectra ($R\sim 230$ km s$^{-1}$) to place constraints on the ionization and multiphase distribution of absorbing gas at $z=0.4$ to $z=1$. In Figure 1, we present Mg II $\lambda 2796$ and the C IV $\lambda\lambda 1548, 1550$ doublet for each of 40 systems (note that the velocity scale for Mg II is 500 km s$^{-1}$ and for C IV is 3000 km s$^{-1}$). Ticks above the HIRES spectra give the velocities of the Voigt profile sub–components and ticks above the FOS data give the expected location of these components for the C IV doublet. The labels “D”, “L”, and “Bl” denote detection, limit, and blend, respectively. The systems are presented in order of increasing kinematic spread from the upper left to lower right.
Based upon a highly significant correlation between the equivalent widths and the kinematics, it is inferred that most intermediate redshift galaxies have multiphase gaseous structures (Churchill et al. 1999, 2000). The low ionization gas is in multiple, narrow components, $\left< b \right> \simeq 5$ km s$^{-1}$, and the high ionization gas is kinematically spread out with $\left< b \right> \simeq 70$ km s$^{-1}$ (using the doublet ratio method). This is an effective velocity dispersion, for the FOS spectra are of too low resolution to resolve velocity splittings below $\sim 500$ km s$^{-1}$.
Case Study: The Complex Triple System at $z=0.93$
===============================================
The three systems at $z=0.9254$, $0.9276$, and $0.9343$ along the line of sight to PG $1206+459$ exhibit complex kinematics and exceptionally strong absorption in both the low and high ionization transitions. We investigated the ionization and spatial distribution of these systems using detailed photoionization models (Cloudy; Ferland 1996).
In the top panels of Figure 2, the HIRES/Keck spectra of the Fe II $\lambda 2600$ transition and of the Mg II $\lambda\lambda 2796, 2803$ doublet are shown with a Voigt profile model spectrum superimposed; the ticks give the component centers. The systemic redshifts of the three systems, A, B, and C, are labeled. The lower two panels show the normalized FOS/[*HST*]{} spectrum (histogram) with tuned model predictions (not fits) superimposed (see Churchill & Charlton 1999). The dotted–line is a single–phase model, assuming all absorption arises due to ionization balance in the clouds; a single phase of gas fails to account for the high ionization absorption strengths. The solid spectrum is a two–phase model, which allows the higher ionization gas to reside in a separate phase.
Based upon the photoionization modeling, a highly ionized phase, not seen in the low ionization gas, is required to account for the observed high ionization absorption. An “effective” Doppler width of $50 \leq b \leq 100$ km s$^{-1}$ is consistent with the complex, blended data. The physical size of the high ionization component is less than 30 kpc, with the best values between 10 and 20 kpc.
Based upon the sizes and effective Doppler widths, we infer that the highly ionized material is analogous to the Galactic coronae (Savage et al. 1997), material stirred up by energetic mechanical processes, such as galactic fountains. In this scenario, the gas is concentrated around the individual galaxies which presumably provide a source of support, heating, and chemical enrichment.
It seems promising that the answer to the posed question (§ 1) may be forthcoming when [*HST*]{} resolves the FOS profiles with STIS and COS.
Thanks are due to S. Kirhakos, C. Steidel, and D. Schneider for their contributions to the work presented here. I am especially grateful to all who work to make [*HST*]{} a unique platform for astronomy.
Bahcall, J. N. et al. 1993, ApJS, 87, 1
Churchill, C. W., & Charlton, J. C. 1999, AJ, 118, 59
Churchill, C. W., et al. 1999, ApJ, 519, L43
Churchill, C. W., et al. 2000, ApJ, 543, in press
Dahlem, M. 1998, PASP, 109, 1298
Ferland, G. 1996, [*Hazy*]{}, University of Kentucky Internal Report
Savage B. D. & Sembach K. M. 1996, ARA&A, 34, 279
Steidel, C. C., Dickinson, M. & Persson, E. 1994, ApJ, 437, L75
Steidel, C. C. 1998, in Galactic Halos: A UC Santa Cruz Workshop, ASP Conf. Series, V136, ed. D. Zaritsky (San Francisco : PASP), 167
[^1]: Meaning ions with ionization potentials in the range of a few to $\sim
30$ eV.
[^2]: Meaning those with ionization potentials ranging between $\sim 30$ and $\sim 50$ eV and between $\sim 50$ and $140$ eV, respectively.
---
abstract: 'We initiate the theory of graded commutative 2-rings, a categorification of graded commutative rings. The goal is to provide a systematic generalization of Paul Balmer’s comparison maps between the spectrum of tensor-triangulated categories and the Zariski spectra of their central rings. By applying our constructions, we compute the spectrum of the derived category of perfect complexes over any graded commutative ring, and we associate to every scheme with an ample family of line bundles an embedding into the spectrum of an associated graded commutative 2-ring.'
address:
- 'Universität Bielefeld, Fakultät für Mathematik, BIREP Gruppe, Postfach 100131, 33501 Bielefeld, Germany.'
- 'Universität Bielefeld, Fakultät für Mathematik, BIREP Gruppe, Postfach 100131, 33501 Bielefeld, Germany.'
author:
- 'Ivo Dell’Ambrogio'
- Greg Stevenson
title: 'Even more spectra: tensor triangular comparison maps via graded commutative 2-rings'
---
Introduction {#intro}
============
Motivation
----------
Consider a tensor-triangulated category $\mathcal T$, that is, an essentially small triangulated category $\mathcal T$ equipped with a bi-exact symmetric tensor product. Paul Balmer [@balmer:prime] associates to $\mathcal T$ a functorial invariant – a topological space called the *spectrum* of $\mathcal T$ and denoted by ${\mathop{\mathrm{Spc}}}\mathcal T$ – which turns out to be the starting point of a powerful geometric theory known as *tensor triangular geometry*. We refer to Balmer’s 2011 ICM address [@balmer:icm] for an account of this theory and its many applications.
In order to successfully apply the abstract theory to examples it is essential that one provides a relevant description of the spectrum ${\mathop{\mathrm{Spc}}}\mathcal T$. This is in general a difficult task, not least because such a computation is equivalent to providing a classification of the thick tensor-ideals of $\mathcal T$. In [@balmer:spec3], Balmer has introduced, in full generality, a natural and continuous *comparison map* $$\rho_\mathcal T \colon {\mathop{\mathrm{Spc}}}(\mathcal T) \to {\mathop{\mathrm{Spec}}}( {\mathrm{End}}_\mathcal T({\mathbf{1}}))$$ from the spectrum of $\mathcal T$ to the Zariski spectrum of the endomorphism ring of ${\mathbf{1}}$, the tensor unit. There are useful criteria to check that this map is surjective, which is often the case, and therefore $\rho_\mathcal{T}$ displays ${\mathop{\mathrm{Spc}}}\mathcal T$ as being fibered over a more familiar and tractable space. On the other hand, injectivity is more subtle and seems to occur less frequently in examples. This can be remedied somewhat by considering *graded* endomorphism rings of ${\mathbf{1}}$: if $g$ is a tensor-invertible object of $\mathcal T$, one can define a ${\mathbb{Z}}$-graded ring ${\mathrm{End}}^{*,g}_\mathcal T({\mathbf{1}}):= \bigoplus_{n\in{\mathbb{Z}}} {\mathrm{Hom}}_\mathcal T({\mathbf{1}}, g^{\otimes n})$ that is graded commutative by the Eckmann-Hilton argument. One obtains in this way a graded version $$\rho_\mathcal T^{*,g}\colon {\mathop{\mathrm{Spc}}}(\mathcal T) \to {\mathop{\mathrm{Spec^h}}}({\mathrm{End}}^{*,g}_\mathcal T({\mathbf{1}}))$$ of the comparison map, where the space on the right-hand side is now the spectrum of homogeneous prime ideals. Then $\rho_\mathcal T^{*,g}$ is injective – an embedding – if we take $\mathcal T$ to be ${\mathrm{D}}^{\mathrm{perf}}(X)$ for a projective variety $X$, with $g=\mathcal O(1)$ (see [@balmer:spec3]\*[Remark 8.2]{}). 
If we identify $X={\mathop{\mathrm{Proj}}}({\mathrm{End}}^{*,g}_\mathcal T({\mathbf{1}}))$, then $\rho_\mathcal T^{*,g}$ is a homeomorphism of ${\mathop{\mathrm{Spc}}}(\mathcal T)$ onto $X$.
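To indicate where the Eckmann–Hilton argument enters, here is a sketch, with one simplifying assumption: that the symmetry acts on $g\otimes g$ by a sign $\epsilon_g\in\{\pm 1\}$, as it does for the shift object in derived categories.

```latex
% Product of homogeneous elements f : 1 -> g^{\otimes m}, h : 1 -> g^{\otimes n}
% (unit isomorphisms suppressed):
\[
f \cdot h \;:\; \mathbf{1} \,\cong\, \mathbf{1}\otimes\mathbf{1}
\xrightarrow{\;f\otimes h\;} g^{\otimes m}\otimes g^{\otimes n}
\,\cong\, g^{\otimes(m+n)} .
\]
% Naturality of the symmetry gives
% \gamma \circ (f \otimes h) = (h \otimes f) \circ \gamma_{1,1},
% and \gamma on g^{\otimes m}\otimes g^{\otimes n} contributes one factor
% of \epsilon_g for each of the mn transpositions, whence the Koszul rule:
\[
f \cdot h \;=\; \epsilon_g^{\,mn}\; h \cdot f .
\]
```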
It is now tempting to try and produce an endomorphism ring of ${\mathbf{1}}$ which is graded over, say, the Picard group of all tensor-invertible objects of $\mathcal T$; ideally then its homogeneous spectrum would retain sufficient information for the resulting comparison map to be injective in more cases. Unfortunately, it is not at all clear – and probably false in general – that one may produce in this way a graded commutative ring: as soon as one tries to grade over several invertible objects, some hostile coherence issues appear on the scene to spoil the fun.
Results
-------
In the present paper we solve this difficulty by embracing the enemy, as it were. Instead of trying to construct a *ring* out of the data $\{ {\mathrm{Hom}}_\mathcal T({\mathbf{1}}, g)\mid g\in \mathcal T \otimes\textrm{-invertible} \}$ which is graded commutative over the Picard *group*, we instead consider this data as defining a *2-ring* which is graded commutative over its Picard *2-group*. It is actually more natural to consider ${\mathrm{Hom}}_\mathcal T(g,h)$ for all invertible $g$ and $h$, and in order to capture the relevant structure we define a *graded commutative 2-ring* (Def. \[defi:grcomm2ring\]) to be an essentially small ${\mathbb{Z}}$-linear category $\mathcal R$ equipped with an additive symmetric tensor product with respect to which every object is invertible.
This strategy is successful because, as we will show in Section \[sec:grcomm2rings\], the basic toolkit of affine algebraic geometry generalizes painlessly to this new context. Thus every graded commutative 2-ring $\mathcal R$ has a Zariski spectrum of homogeneous prime ideals, ${\mathop{\mathrm{Spec}}}\mathcal R$, and the assignment $\mathcal R\mapsto {\mathop{\mathrm{Spec}}}\mathcal R$ defines a functor to spectral spaces and spectral maps, in the sense of Hochster [@hochster:prime] (see Theorem \[thm:spec\]). The theory of localization at multiplicative subsets works perfectly well and provides in particular localizations of $\mathcal R$ at each prime $\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R$ and away from any morphism $r\in {\mathop{\mathrm{Mor}}}\mathcal R$ (Propositions \[prop:fractions\] and \[prop:loc\_spec\]). All of this is due to the observation that, if $r$ and $s$ are any two composable maps in a graded commutative 2-ring, then $r$ and $s$ “commute with each other” up to isomorphisms and twists (see Proposition \[prop:pseudo\_comm\]).
In Section \[sec:comparison\] we proceed to apply this generalized commutative algebra to tensor triangular geometry. If $\mathcal T$ is a $\otimes$-triangulated category, we call *a central 2-ring of $\mathcal T$* any graded commutative 2-ring occurring as a full tensor subcategory of $\mathcal T$. We can now state our main abstract result (see Theorem \[thm:rho\]).
\[thm:main1\] For every central 2-ring $\mathcal R$ of a $\otimes$-triangulated category $\mathcal T$ there exists a continuous spectral map $$\rho^\mathcal R_\mathcal T\colon {\mathop{\mathrm{Spc}}}(\mathcal T) \to {\mathop{\mathrm{Spec}}}(\mathcal R)$$ defined by the formula $\rho^\mathcal R_\mathcal T (\mathcal P):=\{ r \in {\mathop{\mathrm{Mor}}}\mathcal R \mid {\mathop{\mathrm{cone}}}(r) \not\in \mathcal P \}$ for all $\mathcal P\in {\mathop{\mathrm{Spc}}}\mathcal T$. Moreover $\rho^\mathcal R_\mathcal T$ is natural in an evident sense with respect to pairs $(\mathcal T,\mathcal R)$.
By viewing ordinary (graded) commutative rings as graded commutative 2-rings, it is easy to see that Theorem \[thm:main1\] generalizes the construction of Balmer’s graded and ungraded comparison maps (see Examples \[ex:comm\_2\_rings\] and \[ex:projective\]). In view of the freedom of choice for the central 2-ring $\mathcal R$ that Theorem \[thm:main1\] offers us, we now have in our hands a whole new phylum of candidate spaces – the Zariski spectra ${\mathop{\mathrm{Spec}}}\mathcal R$ – to help us compute ${\mathop{\mathrm{Spc}}}\mathcal T$ in examples. More specifically, the geometry of graded commutative 2-rings bridges the gap between the “mundane” spectra of commutative rings and the “exotic” triangular spectra.
We should mention that, on our way to establishing Theorem \[thm:main1\], we prove that every central 2-ring of a *local* tensor triangulated category ([@balmer:spec3]\*[Def. 4.1]{}) must be a local graded commutative 2-ring (see Theorem \[thm:local\_Rtot\]). Furthermore, we completely extend Balmer’s elegant technique of *central localization* to graded commutative 2-rings (see Theorems \[thm:central\_loc\] and \[thm:fractions\_general\]).
In Section \[sec:examples\] we illustrate our construction with two families of examples, namely (ordinary) graded commutative rings, and schemes with an ample family of line bundles.
Consider a $G$-graded ring $R$, where $G$ is some abelian group, which is graded commutative with respect to some sign rule, i.e. a symmetric form $G\times G\to {\mathbb{Z}}/2$. We want to study the tensor triangulated category $\mathcal T:= {\mathrm{D}}^{\mathrm{perf}}(R)= {\mathrm{D}}(R\textrm-{\mathrm{Gr}}{\mathrm{Mod}})^c$ of perfect complexes of graded $R$-modules. To this end, we note that the companion category $\mathcal C_R$ of $R$ (see Example \[ex:comm\_2\_rings\] (3)) is a graded commutative 2-ring whose spectrum ${\mathop{\mathrm{Spec}}}\mathcal C_R$ is just ${\mathop{\mathrm{Spec^h}}}R$, the homogeneous spectrum of $R$. Moreover $\mathcal C_R$ is equivalent, as a graded commutative 2-ring (essentially via the Yoneda embedding), to the central 2-ring $\mathcal R$ of ${\mathrm{D}}^{\mathrm{perf}}(R)$ generated by the twists $R(g)$, $g\in G$, of the ring itself. With these identifications, we obtain our first application (see Theorem \[thm:grcommrings\]):
\[thm:main2\] The comparison map of Theorem \[thm:main1\] yields a homeomorphism $${\mathop{\mathrm{Spc}}}({\mathrm{D}}^{\mathrm{perf}}(R)) \stackrel{\sim}{\to} {\mathop{\mathrm{Spec^h}}}(R)$$ for every graded commutative ring $R$, identifying Balmer’s universal support data in the triangular spectrum with the homological support data in the Zariski spectrum.
The proof rests on a general abstract criterion for injectivity of $\rho^\mathcal R_\mathcal T$ (see Proposition \[prop:injectivity\]) and the reduction to the case of a *noetherian* graded commutative ring, which had already been established in [@ivo_greg:graded]\*[Theorem 5.1]{}. By general tensor triangular geometry as in *loc. cit.* we can now immediately translate the previous theorem into the following classification result.
For any graded commutative ring $R$ there is an inclusion-preserving bijection between:
1. thick subcategories $\mathcal C$ of ${\mathrm{D}}^{\mathrm{perf}}(R)$ that are closed under twisting by arbitrary elements $g\in G$, and
2. subsets $S$ of the homogeneous spectrum ${\mathop{\mathrm{Spec^h}}}R$ of the form $S=\bigcup_iZ_i$, where each $Z_i$ is closed and has quasi-compact complement in ${\mathop{\mathrm{Spec^h}}}R$.
The correspondence maps a twist-closed thick subcategory $\mathcal C$ to the union of the homological supports of its objects, $\bigcup_{X\in \mathcal C} \operatorname{ssupp}X$, and conversely a given subset $S$ of the required form is mapped to the subcategory $\{X\in {\mathrm{D}}^{\mathrm{perf}}(R)\mid \operatorname{ssupp}(X)\subseteq S\}$.
We note that, by considering a problem about *ordinary* graded commutative rings $R$, we were naturally led to consider graded commutative 2-rings: namely the companion category $\mathcal C_R$ and its non-strict incarnation $\mathcal R$ inside of the derived category.
Next consider a quasi-compact and quasi-separated scheme $X$. Assume that $X$ admits an ample family of line bundles $\underline{\mathcal L}:=\{\mathcal L_\lambda\}_{\lambda\in \Lambda}$, and denote by $\mathcal R(\underline{\mathcal L})$ the central 2-ring of ${\mathrm{D}}^{\mathrm{perf}}(X)$ generated by the ample family. By applying the same abstract injectivity criterion as before, we obtain our second application (see Theorem \[thm:ample\]):
\[thm:main3\] The comparison map associated to the tensor-triangulated category ${\mathrm{D}}^{\mathrm{perf}}(X)$ and its central 2-ring $\mathcal R(\underline{\mathcal L})$ yields an injective map $$\rho^{\underline{\mathcal L}}_X \colon X \hookrightarrow {\mathop{\mathrm{Spec}}}(\mathcal R(\underline{\mathcal L}))$$ which, moreover, is a homeomorphism onto its image.
By all rights, the morphism $\rho^{\underline{\mathcal L}}_X$ should be geometric, provided we know in which sense to associate some geometry with the spectrum of a graded commutative 2-ring. In this case the geometric point of view is explained in work of Brenner and Schröer [@brenner-schroer] and our result recovers, via tensor triangular geometry, the map of topological spaces underlying their construction (see Remark \[rem:ffs\]).
Some recollections and notations {#subsec:inv}
--------------------------------
Throughout, all categories are understood to be locally small.
Let us recall a few facts and fix some notation about closed symmetric monoidal categories (for further details we refer to [@kelly-laplaza] or [@lewis-may-steinberger]\*[III.1]{}). In any monoidal category $\mathcal C$ with tensor $\otimes$ and unit object ${\mathbf{1}}$, we will reserve the letters $\lambda$, $\rho$ and $\alpha$ for the structural coherence natural isomorphisms (left unitor, right unitor, and associator) $$\lambda_x \colon {\mathbf{1}}\otimes x \stackrel{\sim}{\to} x
\quad,\quad
\rho_x \colon x\otimes {\mathbf{1}}\stackrel{\sim}{\to} x
\quad,\quad
\alpha_{x,y,z} \colon x\otimes (y\otimes z) \smash{\stackrel{\sim}{\to}} (x\otimes y)\otimes z$$ for all objects $x,y,z\in \mathcal C$. By *tensor category* we will always mean a symmetric monoidal category, and we will denote the symmetry isomorphism by $$\gamma_{x,y}\colon x\otimes y\stackrel{\sim}{\to} y\otimes x \;.$$ Let $\mathcal C$ be a tensor category. The dual of a dualizable object $x$ of $\mathcal C$ will be denoted by $x^\vee$. It is determined up to isomorphism by a natural bijection $\mathcal C(x^\vee\otimes -, -)\cong \mathcal C(-, x \otimes -)$, or equivalently, by two maps $\eta_{x} \colon {\mathbf{1}}\to x\otimes x^\vee$ and $\varepsilon_{x} \colon x^\vee \otimes x\to {\mathbf{1}}$ making the following two diagrams commute (where one goes backward along the structure isomorphisms where necessary): $$\xymatrix@C=6pt{
(x\otimes x^\vee) \otimes x &&
x \otimes (x^\vee \otimes x) \ar[ll]_-\alpha^-\sim \ar[d]^{{\mathrm{id}}_x \otimes \varepsilon_x} \\
{\mathbf{1}}\otimes x \ar[r]^-\lambda_-\sim \ar[u]^{\eta_x \otimes {\mathrm{id}}_x} &
x &
\ar[l]_-\rho^-\sim x \otimes {\mathbf{1}}}
\quad\;
\xymatrix@C=6pt{
x^\vee \otimes ( x \otimes x^\vee) \ar[rr]^-\alpha_-\sim &&
(x^\vee \otimes x )\otimes x^\vee \ar[d]^{\varepsilon_x \otimes {\mathrm{id}}_{x^\vee}} \\
x^\vee \otimes {\mathbf{1}}\ar[r]_-\sim^-\rho \ar[u]^{{\mathrm{id}}_{x^\vee} \otimes \eta_x} &
x^\vee &
{\mathbf{1}}\otimes x^\vee \ar[l]^-\sim_-\lambda
}$$ These are sometimes called the “zig-zag identities” and are nothing but the two triangle identities for the adjunction between $x\otimes -$ and $x^\vee \otimes -$ (slightly disguised). The assignment $x\mapsto x^\vee$ extends canonically to morphisms to define a self-duality functor (a contravariant autoequivalence) on the full subcategory of dualizable objects in $\mathcal C$.
The evaluation $\varepsilon$ and coevaluation $\eta$ are moreover dinatural, in the sense that for any morphism $f\colon x \to y$, the following two squares are commutative: $$\label{dinat}
\xymatrix{
y^\vee \otimes x \ar[r]^-{{\mathrm{id}}_{y^\vee} \otimes f} \ar[d]_{f^\vee \otimes {\mathrm{id}}_x}
& y^\vee \otimes y \ar[d]^{\varepsilon_{y}} & {\mathbf{1}}\ar[r]^-{\eta_x} \ar[d]_{\eta_y}
& x \otimes x^\vee \ar[d]^{f \otimes {\mathrm{id}}_{x^\vee}} \\
x^\vee \otimes x \ar[r]^-{\varepsilon_{x}} & {\mathbf{1}}& y \otimes y^\vee \ar[r]^-{{\mathrm{id}}_y \otimes f^\vee } & y \otimes x^\vee
}
$$ (In fact by definition of the functor $(-)^\vee$ the isomorphism $\mathcal C(x^\vee \otimes y,z)\cong \mathcal C(y, x\otimes z)$ is natural in $x,y$ and $z$, and thus defines an adjunction with parameter. This can be seen to be equivalent to the unit and counit being dinatural as above, see [@maclane]\*[IX.4]{}.)
An object $x$ is *invertible* if there exists an object $x'$ and an isomorphism $x\otimes x'\cong {\mathbf{1}}$ (and therefore $x'\otimes x\cong {\mathbf{1}}$). If $x$ is invertible it is also dualizable, and indeed, the dual $x^\vee$ provides a canonical choice for an inverse $x'$, since in this case the unit and counit maps are isomorphisms $\eta\colon {\mathbf{1}}\stackrel{\sim}{\to} x\otimes x^\vee$ and $\varepsilon\colon x^\vee\otimes x\stackrel{\sim}{\to} {\mathbf{1}}$.
In order to alleviate our notational burden, we will often omit from displayed diagrams all tensor symbols $\otimes$ and subscripts for natural transformations, and we will often denote an identity map ${\mathrm{id}}_x$ by the object $x$. Thus for instance, if there is no danger of confusion, we will simply write $$\xymatrix{
y^\vee x \ar[r]^-{y^\vee f} \ar[d]_{f^\vee x} & y^\vee y \ar[d]^{\varepsilon} & {\mathbf{1}}\ar[r]^-{\eta} \ar[d]_{\eta}
& x x^\vee \ar[d]^{f x^\vee }\\
x^\vee x \ar[r]^-{\varepsilon} & {\mathbf{1}}& y y^\vee \ar[r]^-{y f^\vee } & y x^\vee
}
$$ for the dinaturality squares \[dinat\]. Occasionally we will also omit the associativity and unit isomorphisms, as justified by Mac Lane’s coherence theorem.
Graded commutative 2-rings {#sec:grcomm2rings}
==========================
The following definition is commonly understood to be a sensible categorification of the concept of abelian group, see [@baez-lauda:higher], [@dupont:thesis] and the many references therein.
A *symmetric 2-group* is a symmetric monoidal groupoid in which every object is invertible for the tensor product.
\[ex:picard\_category\]
1. Every abelian group $G$ can be considered as a discrete symmetric 2-group, i.e.\ as the discrete category with object set ${\mathop{\mathrm{obj}}}G=G$ equipped with the strict symmetric tensor product $g\otimes h = g+h$ and ${\mathbf{1}}=0$.
2. The *Picard 2-group* (or *Picard groupoid*, *Picard category*) of a symmetric monoidal category $\mathcal C$ is the symmetric 2-group obtained from $\mathcal C$ by considering the (non-full) monoidal subcategory of all invertible objects and isomorphisms between them.
Let $\mathcal G$ be an essentially small symmetric 2-group.
\[defi:grcomm2ring\] A *$\mathcal G$-graded commutative 2-ring $\mathcal R$* is a symmetric monoidal ${\mathbb{Z}}$-category $\mathcal R$ equipped with a symmetric monoidal functor $\mathcal G\to \mathcal R$ which is surjective on objects (thus $\mathcal R$ is essentially small and its objects are invertible).
Typically $\mathcal G\to \mathcal R$ will simply be the inclusion of the Picard 2-group of $\mathcal R$, in which case we will simply speak of a *graded commutative 2-ring*. Hence we will not distinguish notationally between the objects of $\mathcal R$ and those of $\mathcal G$; they will be written $g,h,\ell,\ldots\in \mathcal G$ and thought of as “degrees”. We will also think of the morphisms $r$ of $\mathcal R$ as “elements”, so for instance we will tend to write $r\in \mathcal R$ rather than $r\in {\mathop{\mathrm{Mor}}}\mathcal R$.
We make the convention that natural structure maps, for example the left and right unitors, are written in the direction in which they occur; when defining a composite using such maps it is understood that the composite is obtained by going backward (taking the inverse) along any such arrows which are in the “wrong” direction. Given a morphism $r\colon g\to h$ in $\mathcal{R}$ and an object $\ell \in \mathcal{R}$, we will refer to the morphisms $r\otimes \ell =r\otimes {\mathrm{id}}_{\ell} \colon g\otimes \ell \to h \otimes \ell$ and $\ell\otimes r\colon \ell\otimes g\to \ell\otimes h$ as the *right twist*, respectively *left twist*, *of $r$ by $\ell$*.
\[ex:comm\_2\_rings\]
1. The zero category $\{0\}$ is a $\mathcal G$-graded commutative 2-ring for any choice of $\mathcal G$, in a unique way. There is an evident many-objects version of $\{0\}$ for any choice of object class with a distinguished element (which is chosen to be the tensor unit), which is of course monoidally equivalent to $\{0\}$ and is graded over a suitable trivial symmetric 2-group.
2. Let $R$ be a commutative ring. We may consider $R$ as a ${\mathbb{Z}}$-linear category with a single object $*$ whose endomorphism ring is $R$, and we may equip this category with the strict symmetric tensor product $r\otimes s:= rs$, for $r,s\in R$. Thus we can view $R$ as a commutative 2-ring graded by the trivial (symmetric 2-)group.
3. (See [@ivo_greg:graded] for details.) Let $G$ be an abelian group, and let $R$ be a $G$-graded $\epsilon$-commutative ring, for some symmetric bilinear form $\epsilon\colon G\times G\to {\mathbb{Z}}/2$ governing the signs. The *companion category* of $R$, denoted $\mathcal C_R$, is the small ${\mathbb{Z}}$-linear category with object set ${\mathop{\mathrm{obj}}}\mathcal C_R:=G$, with Hom groups $\mathcal C_R(g,h):=R_{h-g}$, and with composition given by the multiplication of $R$. Multiplication also yields a strict symmetric tensor product, which on objects $g,h\in G$ is the sum $g\otimes h:=g+h$ and on two maps $r\colon g\to h$ and $r'\colon g'\to h'$ is given by the formula $r\otimes r':=(-1)^{\epsilon(g,g'-h')}rr'$. Thus the companion category $\mathcal C_R$ is a $G$-graded commutative 2-ring, where $G$ is seen as a discrete symmetric 2-group. Its tensor structure extends to the category ${\mathrm{Ab}}^{\mathcal C_R}$ of additive functors, and this is tensor equivalent to $R$-${\mathop{\mathrm{GrMod}}}$, the tensor category of left graded modules. Thus $R$ and $\mathcal C_R$ are Morita tensor equivalent, and once again we can think of $R$ as being a graded symmetric 2-ring, namely $\mathcal C_R$.
4. Let $X$ be a scheme and suppose that $\{\mathcal{L}_i\; \vert \; i\in I\}$ is a family of line bundles on $X$. Then the full subcategory of quasi-coherent $\mathcal{O}_X$-modules with objects all finite tensor products of the $\mathcal{L}_i$ and their inverses is a symmetric 2-ring. It is graded over the corresponding subcategory of the Picard 2-group of $X$.
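The companion category of Example \[ex:comm\_2\_rings\] (3) can be modelled directly on a computer. The following Python sketch is merely an illustration of ours (it is not part of the cited construction): it takes $R=\Lambda(\theta_1,\theta_2)$, the exterior algebra ${\mathbb{Z}}$-graded with both generators in degree $1$ and $\epsilon(g,h)=gh \bmod 2$, and checks that the sign convention $r\otimes r'=(-1)^{\epsilon(g,g'-h')}rr'$ is compatible with composition: the two middle-interchange factorizations $(r\otimes h')(g\otimes r')$ and $(h\otimes r')(r\otimes g')$ of $r\otimes r'$ agree.

```python
# Toy model of the companion category C_R for R = Λ(θ1, θ2),
# Z-graded with deg θi = 1 and ε(g, h) = g·h mod 2 (Koszul signs).
# Elements of R: dict {sorted tuple of generator indices: coefficient}.

def mul(a, b):
    """Multiply two exterior-algebra elements (θi² = 0, θiθj = -θjθi)."""
    out = {}
    for m1, c1 in a.items():
        for m2, c2 in b.items():
            if set(m1) & set(m2):
                continue  # repeated generator: product is zero
            merged = list(m1 + m2)
            # Koszul sign: (-1)^(number of inversions needed to sort)
            sign = 1
            for i in range(len(merged)):
                for j in range(i + 1, len(merged)):
                    if merged[i] > merged[j]:
                        sign = -sign
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + sign * c1 * c2
    return {k: v for k, v in out.items() if v != 0}

class Mor:
    """A morphism g -> h of C_R, i.e. a homogeneous element of R_{h-g}."""
    def __init__(self, g, h, elt):
        assert all(len(m) == h - g for m in elt)  # homogeneity check
        self.g, self.h, self.elt = g, h, elt

    def __matmul__(self, other):  # composition: s @ r means s ∘ r
        assert self.g == other.h
        return Mor(other.g, self.h, mul(self.elt, other.elt))

def tensor(r, rp):
    """r ⊗ r' := (-1)^{ε(g, g'-h')} r·r' as a map g+g' -> h+h'."""
    sign = -1 if (r.g * (rp.g - rp.h)) % 2 else 1
    elt = {m: sign * c for m, c in mul(r.elt, rp.elt).items()}
    return Mor(r.g + rp.g, r.h + rp.h, elt)

def ident(g):
    return Mor(g, g, {(): 1})

# r: 0 -> 1 given by θ1, and r': 1 -> 2 given by θ2.
r  = Mor(0, 1, {(1,): 1})
rp = Mor(1, 2, {(2,): 1})

direct     = tensor(r, rp)
via_top    = tensor(r, ident(rp.h)) @ tensor(ident(r.g), rp)  # (r⊗h')(g⊗r')
via_bottom = tensor(ident(r.h), rp) @ tensor(r, ident(rp.g))  # (h⊗r')(r⊗g')
assert direct.elt == via_top.elt == via_bottom.elt == {(1, 2): 1}
```

Note how the sign $(-1)^{\epsilon(1,1)}=-1$ picked up by the left twist $h\otimes r'$ exactly cancels the Koszul sign of $\theta_2\theta_1=-\theta_1\theta_2$, as the bilinearity and symmetry of $\epsilon$ guarantee in general.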
Let $\iota\colon \mathcal G\to \mathcal R$ and $\iota' \colon \mathcal G'\to \mathcal R'$ be two graded commutative 2-rings. A *morphism* $F: \mathcal R\to \mathcal R'$ is a square of $\otimes$-functors $$\xymatrix{
\mathcal G \ar[r]^-{\iota} \ar[d]_F &
\mathcal R \ar[d]^F \\
\mathcal G' \ar[r]^-{\iota'} &
\mathcal R'
}$$ such that $F\colon \mathcal R\to \mathcal R'$ is additive and such that there exists an isomorphism $F\iota \cong \iota'F$. If $\iota$ and $\iota'$ are inclusions (i.e.\ when $\mathcal R$ and $\mathcal R'$ are graded by their Picard 2-groups), then we only talk about the tensor functor $F\colon \mathcal R\to \mathcal R'$ and assume that $\mathcal G\to \mathcal G'$ is its restriction.
Among the numerous examples of morphisms that will be used later on, we mention those produced by localization (see §\[subsec:loc\]), and the inclusions between different central 2-rings in a tensor-triangulated category (see §\[subsec:central2rings\]).
Pseudo-commutativity
--------------------
Let $\mathcal R$ be a graded commutative 2-ring.
Let $r\in \mathcal R$. By a *translate* of $r$ we will mean any morphism of $\mathcal R$ obtained from $r$ by the operations of taking twists of morphisms and composing with isomorphisms on either side, in any combination.
\[lemma:translate\] Every translate of $r$ is isomorphic to a left twist and also to a right twist of $r$, that is, has the form $u(g\otimes r) v$ and also the form $u' (r\otimes g') v'$ for some objects $g,g'\in \mathcal G$ and some isomorphisms $u,v,u',v'$. Moreover, translation (the relation “$\,\tilde r$ is a translate of $r$”) is an equivalence relation on the morphisms of $\mathcal R$.
Every right twist $s\otimes h$ of a morphism $s$ by an object $h$ is isomorphic to the corresponding left twist $h\otimes s$, by the symmetry of the tensor product. Since twisting preserves isomorphisms and an iteration of twists is again (up to isomorphism) a twist, we conclude that every translate of $r$ can be brought to either of the two forms. It is similarly easy to check that translation is an equivalence relation.
\[ex:dual\_translate\] For every map $r\colon g\to h$ in $ \mathcal R$, the dinaturality square $$\xymatrix{ h^\vee g \ar[d]_{r^\vee g} \ar[r]^-{h^\vee r} & h^\vee h \ar[d]^{\varepsilon}_\sim \\
g^\vee g \ar[r]^-{\varepsilon}_-\sim & {\mathbf{1}}}$$ shows that $r$ and $r^\vee$ are translates of each other.
The next all-important proposition states that, in a graded commutative 2-ring, composition is commutative *up to translation*. Later on we will also use a slight generalization of the same argument, when the need arises to consider algebras over graded commutative 2-rings (see §\[subsec:loc\_alg\]).
\[prop:pseudo\_comm\] Let $r\colon g \to h$ and $s\colon h\to \ell$ be two composable morphisms of $\mathcal R$. Then there exists in $\mathcal R$ a commutative diagram of the form $$\xymatrix{
g \ar@/_/[d]_u^\sim \ar[r]^-r &
h \ar[r]^-s &
{\ell} \\
hh^\vee g \ar[r]^-{s h^\vee g} &
\ell h^\vee g \ar[r]^-{\ell h^\vee r} & \ell h^\vee h \ar@/_/[u]_v^\sim
}$$ where $u$ and $v$ are isomorphisms. In particular, $sr= r' s'$ for some translates $r'$ of $r$ and $s'$ of $s$.
We may construct the following commutative diagram: $$\label{eq:pseudo_comm}
\xymatrix{
*+[F]{g} \ar[r]^-{r} &
*+[F]{h} \ar[r]^-{s} &
*+[F]{\ell} & \\
g{\mathbf{1}}\ar[u]^\sim_{\rho} \ar[r]^-{r{\mathbf{1}}} &
h {\mathbf{1}}\ar[u]^{\rho} \ar[r]^-{s{\mathbf{1}}} &
\ell {\mathbf{1}}\ar[u]_-\sim^-{\rho} & \\
gg^\vee g \ar[u]_{g \varepsilon_{g}}^\sim \ar[r]^-{rg^\vee g} &
h g^\vee g \ar[u]^{h \varepsilon_{g}} \ar[r]^-{sg^\vee g} &
{ \ell g^\vee g } \ar[u]_\sim^{\ell \varepsilon_{g}} \ar[r]^-\sim_-{\ell \varepsilon_{g}} & {\ell {\mathbf{1}}} \\
{\mathbf{1}}g \ar[u]^\sim_{\eta_{g}\, g} \ar[r]_-\sim^-{\eta_{h}\, g} &
*+[F]{ hh^\vee g } \ar[u]_{hr^\vee g} \ar[r]_-{sh^\vee g} &
*+[F]{\ell h^\vee g} \ar[u]_{\ell r^\vee g} \ar[r]_-{\ell h^\vee r} & *+[F]{\ell h^\vee h} \ar[u]_\sim^{\ell \varepsilon_{h}}
}$$ Indeed, the top two squares commute by the naturality of $\rho$; the bottom-left and bottom-right ones by applying $-\otimes g$, resp. $\ell\otimes -$, to appropriate dinaturality squares; the remaining three by the (bi)functoriality of the tensor product. Note that all the morphisms labeled $\sim$ are isomorphisms, because the objects $g$ and $h$ are $\otimes$-invertible. Now it suffices to compose the maps on its outer frame between the boxed objects, in order to obtain a diagram as claimed.
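In the companion-category model of Example \[ex:comm\_2\_rings\] (3), where the coherence isomorphisms are strict and $\eta,\varepsilon$ are the identity element $1\in R_0$, the diagram of Proposition \[prop:pseudo\_comm\] collapses to an identity of homogeneous elements: $s\circ r$ equals the composite of the right twist $s\otimes h^\vee g$ followed by the left twist $\ell h^\vee\otimes r$. The following sketch, again only an illustration of ours, verifies this numerically for $R=\Lambda(\theta_1,\theta_2)$ with Koszul signs.

```python
# Numerical check of pseudo-commutativity (s∘r = r'∘s' for twists r', s')
# in the strict companion category of R = Λ(θ1, θ2), ε(g, h) = g·h mod 2.

def mul(a, b):  # exterior-algebra product; elements as {monomial: coeff}
    out = {}
    for m1, c1 in a.items():
        for m2, c2 in b.items():
            if set(m1) & set(m2):
                continue  # θi² = 0
            merged = list(m1 + m2)
            sign = 1  # Koszul sign = (-1)^(inversions)
            for i in range(len(merged)):
                for j in range(i + 1, len(merged)):
                    if merged[i] > merged[j]:
                        sign = -sign
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + sign * c1 * c2
    return {k: v for k, v in out.items() if v != 0}

def compose(s, r):      # morphisms are triples (g, h, element of R_{h-g})
    assert r[1] == s[0]
    return (r[0], s[1], mul(s[2], r[2]))

def left_twist(l, r):   # l ⊗ r, with sign (-1)^{ε(l, g-h)}
    g, h, elt = r
    sign = -1 if (l * (g - h)) % 2 else 1
    return (l + g, l + h, {m: sign * c for m, c in elt.items()})

def right_twist(r, l):  # r ⊗ l, no sign since ε(g, l-l) = 0
    g, h, elt = r
    return (g + l, h + l, elt)

r = (0, 1, {(1,): 1})   # θ1 : 0 -> 1
s = (1, 2, {(2,): 1})   # θ2 : 1 -> 2
g, h, l = 0, 1, 2

sp = right_twist(s, -h + g)   # s ⊗ h^∨g : g -> l - h + g
rp = left_twist(l - h, r)     # l h^∨ ⊗ r : l - h + g -> l
assert compose(s, r) == compose(rp, sp)   # s∘r = r'∘s'
```

Here the left twist contributes the sign $(-1)^{\epsilon(\ell-h,\,g-h)}$, which is precisely what graded commutativity $sr=(-1)^{\epsilon(\ell-h,\,h-g)}rs$ requires.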
\[cor:composite\_iso\] In a graded commutative 2-ring, any (right or left) composite of a non-invertible morphism is again non-invertible.
We prove the contrapositive: if $r\colon g\to h$ and $s\colon h \to \ell$ are two composable maps such that their composite $sr$ is invertible, then $s$ and $r$ are invertible. Of course, it suffices to show that $s$ is invertible. As in any category, for this it suffices to show that $s$ has both a right inverse and a left inverse. Since $sr$ is invertible, $r(sr)^{-1}$ is clearly a right inverse of $s$. To find a left inverse, consider a commutative diagram as in Proposition \[prop:pseudo\_comm\]: $$\xymatrix{
g \ar@/_/[d]_u^\sim \ar[r]^-r &
h \ar[r]^-s &
{\ell} \\
\bullet \ar[r]^-{s'} &
\bullet \ar[r]^-{r'} & \bullet \ar@/_/[u]_v^\sim
}$$ Here $s'$ is a twist of $s$, $r'$ is a twist of $r$, and $u$ and $v$ are invertible. Since $sr$ is invertible, so is $r's'$. Hence $(r's')^{-1}r'$ is a left inverse of $s'$. Since twisting is an equivalence of categories, this shows that $s$ also has a left inverse.
Homogeneous ideals
------------------
A *homogeneous ideal* $\mathcal I$ of $\mathcal R$ is a (${\mathbb{Z}}$-linear categorical) ideal of morphisms of $\mathcal R$ which is closed under tensoring with arbitrary morphisms of $\mathcal R$. In other words, it is a collection of subgroups $\mathcal I(g,h)\subseteq \mathcal R(g,h)$ for all $g,h\in \mathcal G$, satisfying the two properties
1. $\mathcal R(h,h') \circ \mathcal I(g,h)\circ \mathcal R(g',g) \subseteq \mathcal I(g',h')$
2. $\mathcal R(g',h')\otimes \mathcal I(g,h) \subseteq \mathcal I(g'\otimes g, h'\otimes h)$ and $\mathcal I(g,h) \otimes \mathcal R(g',h') \subseteq \mathcal I(g\otimes g', h\otimes h')$
for all $g',h'\in \mathcal G$.
We note that, as there are no direct sums in a symmetric 2-ring, all maps are “homogeneous of some degree”, so we do not need to impose a homogeneity condition on generators, as is the case for usual graded rings (cf. Example \[ex:comm\_2\_rings\] (3)). Nonetheless, here as elsewhere we have borrowed the terminology, along with the intuition, from graded rings.
\[rem:ideals\] We collect here some immediate observations on homogeneous ideals.
1. It follows from the essential smallness of $\mathcal R$ that there is only a set of homogeneous ideals of $\mathcal R$.
2. If $\mathcal I$ is a homogeneous ideal of $\mathcal R$, then $\otimes$ descends to a tensor structure on the additive quotient category $\mathcal R/\mathcal I$. The quotient functor $\mathcal R\to \mathcal R/\mathcal I$ is symmetric monoidal and thus preserves invertible objects, so $\mathcal R/\mathcal I$ is again a graded commutative 2-ring.
Moreover, the homogeneous ideals of $\mathcal R/\mathcal I$ are in bijection with the homogeneous ideals of $\mathcal R$ containing $\mathcal I$, in the usual way. Conversely, the kernel (on maps) of any morphism $\mathcal R\to \mathcal R'$ of graded commutative 2-rings is a homogeneous ideal.
3. A homogeneous ideal is proper iff it contains no isomorphism, iff it does not contain any identity map, iff it does not contain the identity map of ${\mathbf{1}}$.
4. If the homogeneous ideal $\mathcal I$ contains a morphism $r$ then it also contains all the translates of $r$.
5. Since duals are translates (Example \[ex:dual\_translate\]), we see in particular that every homogeneous ideal is self-dual: $\mathcal I=\mathcal I^\vee$.
6. It follows from the factorizations $$\xymatrix{
g g' \ar[d]_{r {g'}} \ar[dr]|{r r'} \ar[r]^-{g r'} &
g h' \ar[d]^{r {h'}} \\
h g' \ar[r]_-{h r' \; } & h h'
}$$ of $r\otimes r'$, for any $r\in \mathcal R(g,h)$ and $r'\in \mathcal R(g',h')$, that an ideal $\mathcal I$ of $\mathcal R$ is closed under tensoring with arbitrary maps iff it is closed under tensoring with arbitrary objects (i.e.\ with identity maps). By symmetry, it suffices that this holds on one side, i.e.\ that $g\otimes r \in \mathcal I$ for all $g\in \mathcal G$ and $r\in \mathcal I$.
7. Since all objects of $\mathcal R$ are invertible, we see from the commutative square $$\xymatrix{
g^\vee g h \ar[r]^-{g^\vee g r } \ar[d]_{\varepsilon_{g}\,h}^\sim &
g^\vee g \ell \ar[d]_\sim^{\varepsilon_{g}\, \ell} \\
h \ar[r]^-r &
\ell
}$$ and (6) (and the associativity of $\otimes$) that the second condition in the definition of a homogeneous ideal is equivalent to $$r\in \mathcal I \; \Leftrightarrow \; g\otimes r \in \mathcal I
\quad \textrm{ for all maps } r \textrm{ and objects } g$$ and also to $$r\in \mathcal I \; \Rightarrow \; g\otimes r \in \mathcal I
\quad \textrm{ for all maps } r \textrm{ and objects } g \,.$$
These remarks will be used repeatedly without further mention.
The set of all homogeneous ideals of $\mathcal R$ forms a poset with respect to inclusion. This poset is in fact a complete lattice, where meets $\bigwedge_\lambda \mathcal I_\lambda = \bigcap_\lambda \mathcal I_\lambda$ are just intersections, and therefore joins can be given by $\bigvee_\lambda \mathcal I_\lambda = \bigcap_\lambda \{ \mathcal J \mid \mathcal I_\lambda \subseteq \mathcal J \}$. A much more useful formula is provided by the following lemma.
\[lemma:unions\] The join of any family $\{\mathcal I_\lambda\}_{\lambda\in \Lambda}$ of homogeneous ideals of $\mathcal R$ is given by their Hom-wise sum $\left(\bigvee_\lambda \mathcal I_\lambda\right)(g,h) =\sum_\lambda \mathcal I_\lambda (g,h) $ for all $g,h\in \mathcal G$.
It suffices to prove that the Hom-wise sum of the $\mathcal I_\lambda$’s is a homogeneous ideal, since any other homogeneous ideal containing the $\mathcal I_\lambda$’s will have to contain it as well. Let $g,h\in \mathcal G$. By definition, $\sum_\lambda \mathcal I_\lambda (g,h)$ is a subgroup of $\mathcal R(g,h)$. Moreover, if $r_{\lambda_1}+\ldots +r_{\lambda_n}\in \sum_\lambda \mathcal I_\lambda (g,h)$ is an arbitrary element then its twists $g\otimes (r_{\lambda_1}+\ldots +r_{\lambda_n})= g\otimes r_{\lambda_1}+\ldots +g\otimes r_{\lambda_n}$ and its multiples $s (r_{\lambda_1}+\ldots +r_{\lambda_n})=sr_{\lambda_1}+\ldots +sr_{\lambda_n}$ and $(r_{\lambda_1}+\ldots +r_{\lambda_n})t= r_{\lambda_1}t+\ldots +r_{\lambda_n}t$ are again in $\sum_\lambda \mathcal I_\lambda (g,h)$, since each $\mathcal I_\lambda$ is closed under these operations. Thus $\bigcup_{g,h}\sum_\lambda \mathcal I_\lambda (g,h)$ is a homogeneous ideal as claimed, completing the proof.
Therefore we will also write $\cap$ for meets and $+$ or $\sum$ for joins.
\[prop:principal\] Let $r\colon g\to h$ be any morphism of $\mathcal R$. The principal homogeneous ideal $\langle r\rangle\subseteq \mathcal R$ generated by $r$ admits the following explicit description: for all $g',h'\in \mathcal G$, its component $\langle r\rangle (g',h')$ consists precisely of the morphisms of the form $s(\ell \otimes r)u$ for some object $\ell \in \mathcal G$, some isomorphism $u\colon g'\stackrel{\sim}{\to} \ell \otimes g$ and some morphism $s\colon \ell\otimes h\to h'$. Dually, it also consists precisely of the morphisms of the form $v(r \otimes k)t$ for some object $k\in \mathcal G$, map $t\colon g'\to g \otimes k $ and isomorphism $v\colon h\otimes k\stackrel{\sim}{\to} h'$.
We only prove the first claim, the second one being dual. Let $\mathcal I(g',h')$ be the set of maps $g'\to h'$ of the form $s(\ell \otimes r)u$, as above. It suffices to show that $\mathcal I:= \bigcup_{g',h'}\mathcal I(g',h')$ is a homogeneous ideal containing $r$, since clearly $\mathcal I\subseteq \langle r\rangle$. Note that $r$ has the required form, for instance because of the commutative square $$\xymatrix{
{\mathbf{1}}g \ar[d]^\lambda_\sim \ar[r]^-{{\mathbf{1}}r } & {\mathbf{1}}h \ar[d]_\sim^\lambda \\
g\ar[r]^-r & h\,.
}$$ Moreover $\mathcal I$ is evidently closed under twists and under compositions on the left, hence it remains only to show that it is closed under sums and compositions on the right.
Thus consider $r':= s(\ell\otimes r)u\in \mathcal I (g', h')$, with $u$ invertible, and let $t\in \mathcal R(g'', g')$ be some map. By Proposition \[prop:pseudo\_comm\], their composition $r' t$ is equal to $t' r''$ for some translate $t'$ of $t$ and some translate $r''$ of $r'$. But the latter is immediately seen to be a morphism of the form $\tilde t (\tilde{\ell}\otimes r) \tilde u$ with $\tilde u$ invertible, so it belongs to $\mathcal I(g'', h')$. This shows that $\mathcal I$ is closed under composition on the right.
Finally, let $$r'= \bigg(
\xymatrix{ g' \ar[r]^-u_-\sim & \ell\otimes g \ar[r]^-{\ell\otimes r} & \ell \otimes h \ar[r]^-{s} & h' }
\bigg)$$ and $$r''=\bigg(
\xymatrix{ g' \ar[r]^-{\tilde u}_-\sim & \tilde \ell\otimes g \ar[r]^-{\tilde \ell \otimes r} & \tilde \ell \otimes h \ar[r]^-{\tilde s} & h' } \bigg)$$ be two morphisms in $\mathcal I(g',h')$. Since $u$ and $\tilde u$ are isomorphisms and $g$ is tensor-invertible, we deduce the existence of an isomorphism $\varphi\colon \tilde \ell \cong g'\otimes g^\vee \cong \ell$ between $\tilde \ell$ and $\ell$, which we can use to construct a commutative diagram $$\xymatrix@R=7pt{
& \tilde \ell\otimes g \ar[dd]^{\varphi\otimes g} \ar[r]^-{\tilde \ell \otimes r} &
\tilde \ell \otimes h \ar[dr]^-{\tilde s} \ar[dd]^{\varphi \otimes h} & \\
g' \ar[ru]^-{\tilde u} \ar[rd]_-{v\, :=} &
&
& h' \\
& \ell \otimes g \ar[r]^-{\ell \otimes r} & \ell \otimes h \ar[ur]_-{=:\,t}
&
}$$ where $v$ is again an isomorphism. Thus we may write $r''= t(\ell \otimes r)v$. Since $-\otimes g$ is an endo-equivalence of $\mathcal R$, the automorphism $uv^{-1}\colon \ell \otimes g\stackrel{\sim}{\to} \ell \otimes g$ must have the form $\psi \otimes g$ for some automorphism $\psi$ of $\ell$. We thus obtain a commutative diagram $$\xymatrix@R=7pt{
& \ell\otimes g \ar[dd]^{\psi\otimes g}_\sim \ar[r]^-{ \ell \otimes r} &
\ell \otimes h \ar[r]^-{t} \ar[dd]^{\psi \otimes h}_\sim &
h' \\
g' \ar[ru]^-{v} \ar[rd]_-{u} &
&
& \\
& \ell \otimes g \ar[r]^-{\ell \otimes r} & \ell \otimes h \ar[r]^-{ s}
& h'
}$$ where the upper composition $g'\to h'$ is $r''$ and the lower one is $r'$. Now we may use it to compute $$\begin{aligned}
r'+r''
& = s(\ell \otimes r)u + t(\ell \otimes r)v \\
& = s(\ell \otimes r)u + t (\psi \otimes h)^{-1} (\ell \otimes r)u \\
& = (s+ t(\psi \otimes h)^{-1}) (\ell \otimes r) u \\
& \in \mathcal I(g',h') \,,\end{aligned}$$ which shows that $\mathcal I(g',h')$ is closed under sums. Since clearly it also contains the zero map, this concludes the proof that $\mathcal I$ is a homogeneous ideal.
We now introduce one last family of homogeneous ideals.
\[ex:spectrum\] The *annihilator* of a morphism $s$ of $\mathcal R$, denoted by ${\mathrm{Ann}}_\mathcal R(s)$, is the homogeneous ideal of $\mathcal R$ generated by all the $r\in \mathcal R$ such that $r\circ s=0$.
This definition looks as if it should be called the *left* annihilator of $s$, but the next lemma shows that, as for commutative rings, there is actually no difference between left and right annihilators.
\[prop:ann\] The annihilator of $s\in \mathcal R$ has the explicit descriptions $$\begin{aligned}
{\mathrm{Ann}}_{\mathcal R}(s) &=\{r\in {\mathop{\mathrm{Mor}}}\mathcal R \mid \exists \textrm{ a translate }\tilde r\textrm{ of } r \textrm{ s.t.\ } \tilde rs=0 \} \\
&= \{r\in {\mathop{\mathrm{Mor}}}\mathcal R \mid \exists g\in \mathcal G \textrm{ and } \exists \textrm{ an isomorphism } u \textrm{ s.t.\ } (g\otimes r)us=0 \}\end{aligned}$$ and $$\begin{aligned}
{\mathrm{Ann}}_{\mathcal R}(s) &=\{r\in {\mathop{\mathrm{Mor}}}\mathcal R \mid \exists \textrm{ a translate }\tilde r\textrm{ of } r \textrm{ s.t.\ } s\tilde r=0 \} \\
&= \{r\in {\mathop{\mathrm{Mor}}}\mathcal R \mid \exists g\in \mathcal G \textrm{ and } \exists \textrm{ an isomorphism } u \textrm{ s.t.\ } s u (g\otimes r)=0 \} \,.\end{aligned}$$ In particular, by symmetry, ${\mathrm{Ann}}_\mathcal R(s)$ is also equal to the homogeneous ideal of $\mathcal R$ generated by the maps $r\in \mathcal R$ such that $sr=0$.
Observe that the second equality in both parts of the statement is immediate from Lemma \[lemma:translate\]. Thus it is sufficient to prove the first equality in each statement; as they are similar we only give a proof of the top one.
Let $\mathcal I:=\{r\in {\mathop{\mathrm{Mor}}}\mathcal R \mid \exists \textrm{ a translate }\tilde r\textrm{ of } r \textrm{ s.t.\ } \tilde rs=0 \}$. Since ${\mathrm{Ann}}_\mathcal R(s)$ is a homogeneous ideal it is closed under translates and thus by definition it must contain $\mathcal I$. To prove the reverse inclusion, it suffices to show that $\mathcal I$ is a homogeneous ideal. If $r$ is in $\mathcal{I}$ then $(g\otimes r)us=0$ for some object $g$ and isomorphism $u$. Given $h\in \mathcal{G}$ we see that $h\otimes r\in \mathcal{I}$, i.e.\ that $\mathcal{I}$ is closed under left twists, by considering $$((g\otimes h^\vee) \otimes(h\otimes r))us
\cong (g\otimes r)us =0.$$ Similarly we see that for a morphism $t$ which can be postcomposed with $r$ there is an equality $$(g\otimes (tr))us
= (g\otimes t)(g \otimes r)us
=0
\,,$$ showing that $\mathcal I$ is closed under left compositions.
To show that $\mathcal I$ is closed under sums we use the same reasoning as in the previous proposition. Let $r_1,r_2\in \mathcal I(g,h)$. By definition of $\mathcal I(g,h)$ and by Lemma \[lemma:translate\], this means that there exist objects $\ell_1,\ell_2$ and isomorphisms $v_1,v_2$ such that $(\ell_1\otimes r_1)v_1s=0$ and $(\ell_2\otimes r_2)v_2s=0$. In particular $v_1$ and $v_2$ have the same domain, and since $g$ is tensor-invertible we deduce the existence of an isomorphism $\varphi\colon \ell_2\stackrel{\sim}{\to}\ell_1$. Hence we may define an isomorphism $w$ fitting into the following commutative diagram. $$\xymatrix@R=7pt{
&& \ell_2\otimes g \ar[dd]_\sim^{\varphi\otimes g} \ar[r]^-{ \ell_2 \otimes r_2} &
\ell_2 \otimes h \ar[dd]_\sim^{\varphi \otimes h} \\
\bullet \ar[r]^-s & \bullet \ar[ru]^-{v_2} \ar[rd]_-{w\, :=} &
& \\
&& \ell_1 \otimes g \ar[r]^-{\ell_1 \otimes r_2} & \ell_1 \otimes h
}$$ Since $(\ell_2\otimes r_2)v_2s=0$ by hypothesis, we deduce moreover that $(\ell_1 \otimes r_2)ws=0$. And since $-\otimes g$ is an endo-equivalence of $\mathcal R$, the automorphism $v_1w^{-1}$ of $\ell_1\otimes g$ must have the form $\psi\otimes g$ for some automorphism $\psi$ of $\ell_1$. Thus we obtain the following commutative diagram: $$\xymatrix@R=7pt{
&& \ell_1\otimes g \ar[dd]_\sim^{\psi \otimes g} \ar[r]^-{ \ell_1 \otimes r_2} &
\ell_1 \otimes h \ar[dd]_\sim^{\psi \otimes h} \\
\bullet \ar[r]^-s & \bullet \ar[ru]^-{w} \ar[rd]_-{v_1} &
& \\
&& \ell_1 \otimes g \ar[r]^-{\ell_1 \otimes r_2} & \ell_1 \otimes h
}$$ This allows us to compute $$\begin{aligned}
(\ell_1\otimes (r_1+r_2))v_1s
& = \underbrace{(\ell_1\otimes r_1)v_1s}_{0} + (\ell_1\otimes r_2)v_1s \\
& = (\ell_1 \otimes r_2)(\psi\otimes g)ws \\
& = (\psi\otimes h) \underbrace{(\ell_1 \otimes r_2) ws}_{0} =0 \,,\end{aligned}$$ which shows that $r_1+r_2$ belongs to $\mathcal I$, as desired.
Finally, it remains to verify that $\mathcal I$ is also closed under composition on the right, and this follows easily from Proposition \[prop:pseudo\_comm\]. More precisely, the following claim is an immediate consequence of Proposition \[prop:pseudo\_comm\]:
Claim
: For any two maps $a,b \in \mathcal R$, we have the following equivalence:\
$a' b=0$ for some translate $ a'$ of $a$ $\quad \Leftrightarrow\quad$ $b a''=0$ for some translate $a''$ of $a$.
Therefore, if we assume that $r's=0$ for some translate $ r'$ of $r$, then $sr''=0$ for some (other) translate $r''$ of $r$, say $r''= w(\ell\otimes r)w'$ for an object $\ell$ and isomorphisms $w,w'$. But this implies $sw(\ell \otimes r)=0$ and therefore also $sw(\ell \otimes rt)= sw(\ell \otimes r)(\ell \otimes t) =0$. In other words, we have $sa'=0$ for some translate $a'$ of $a:=rt$. By applying the claim once again, we see that $a''s=0$ for some other translate $a''$ of $rt$. This shows that $\mathcal I$ is closed under composition on the right, as required. Hence ${\mathrm{Ann}}_{\mathcal R}(s)=\mathcal I$.
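As a quick sanity check (an illustration of ours, not part of the text): in the companion category of $\Lambda(\theta_1,\theta_2)$ with Koszul signs, translates of a homogeneous element differ only by signs and invertible twists, so the left/right symmetry of the proposition reduces to the statement that $rs=0$ iff $sr=0$ for homogeneous elements.

```python
# In Λ(θ1, θ2), homogeneous elements r with r·s = 0 coincide with those
# satisfying s·r = 0, illustrating the left/right symmetry of annihilators.

def mul(a, b):  # exterior-algebra product; elements as {monomial: coeff}
    out = {}
    for m1, c1 in a.items():
        for m2, c2 in b.items():
            if set(m1) & set(m2):
                continue  # θi² = 0
            merged = list(m1 + m2)
            sign = 1  # Koszul sign = (-1)^(inversions)
            for i in range(len(merged)):
                for j in range(i + 1, len(merged)):
                    if merged[i] > merged[j]:
                        sign = -sign
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + sign * c1 * c2
    return {k: v for k, v in out.items() if v != 0}

# A basis of homogeneous elements of Λ(θ1, θ2): 1, θ1, θ2, θ1θ2.
basis = [{(): 1}, {(1,): 1}, {(2,): 1}, {(1, 2): 1}]
s = {(1,): 1}                                    # s = θ1
left_ann  = [r for r in basis if not mul(r, s)]  # rs = 0
right_ann = [r for r in basis if not mul(s, r)]  # sr = 0
assert left_ann == right_ann == [{(1,): 1}, {(1, 2): 1}]
```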
Products of ideals
------------------
In this subsection we explain how the lattice of homogeneous ideals in $\mathcal R$ — let us denote it by ${\mathop{\mathrm{Id}}}(\mathcal R)$ — is a *commutative ideal lattice*, in the sense of Buan, Krause and Solberg [@bks:ideal]. This observation provides a quick and conceptual way of defining the Zariski spectrum of $\mathcal R$.
\[lemma:products\] Let $\mathcal I,\mathcal J$ be two homogeneous ideals. Then their categorical product $$\mathcal I\circ \mathcal J = \{ s_1t_1+ \ldots + s_nt_n \mid s_1,\ldots,s_n\in \mathcal I , t_1,\ldots,t_n\in \mathcal J, n\geq0\}
$$ and their tensor product $$\mathcal I\otimes \mathcal J = \langle \{
s\otimes t \mid s\in \mathcal I, t\in \mathcal J \}\rangle
$$ define the same homogeneous ideal, that we will simply denote by $\mathcal I\mathcal J$. It follows in particular that $\mathcal I\mathcal J=\mathcal J\mathcal I$.
Clearly $\mathcal I\circ \mathcal J$ is a homogeneous ideal, and it follows from $s\otimes t= (s\otimes {\mathrm{id}})({\mathrm{id}}\otimes t)$ that it contains $\mathcal I\otimes \mathcal J$. On the other hand, consider the following truncated version of the commutative diagram \[eq:pseudo\_comm\]: $$\xymatrix{
*+[F]{g} \ar[r]^-{t} \ar@/^4ex/[rr]^-{st} &
{h} \ar[r]^-{s} &
*+[F]{\ell} \\
g{\mathbf{1}}\ar[u]^\sim_{\rho} \ar[r]^-{t{\mathbf{1}}} &
h {\mathbf{1}}\ar[u]^{\rho} \ar[r]^-{s{\mathbf{1}}} &
\ell {\mathbf{1}}\ar[u]_-\sim^-{\rho} \\
gg^\vee g \ar[u]_{g \varepsilon}^\sim \ar[r]^-{tg^\vee g} &
h g^\vee g \ar[u]^{h \varepsilon} \ar[r]^-{sg^\vee g} &
*+[F]{ \ell g^\vee g } \ar[u]_\sim^{\ell \varepsilon} \\
{\mathbf{1}}g \ar[u]^\sim_{\eta g} \ar[r]_-\sim^-{\eta g} &
*+[F]{ hh^\vee g } \ar[u]_{h t^\vee g} \ar@/_3ex/[ur]_{s \,\otimes\, t^\vee g} &
}$$ We read off its outer frame that every composition $st$ ($s\in \mathcal I, t\in \mathcal J$) has the form $u(s \otimes t^\vee \otimes g)v$ for some isomorphisms $u,v$ and is therefore contained in the homogeneous ideal generated by the tensor products $s' \otimes t'$ ($s'\in \mathcal I, t'\in \mathcal J$), since $t^\vee \otimes g\in \mathcal J$. Hence $\mathcal I\circ \mathcal J=\mathcal I\otimes \mathcal J$. The symmetry $\mathcal I\otimes \mathcal J=\mathcal J\otimes \mathcal I$ is obvious from $s\otimes t=\gamma( t\otimes s)\gamma$.
\[lemma:cpt\_ideals\] The compact elements $\mathcal I\in {\mathop{\mathrm{Id}}}(\mathcal R)$ (i.e.\ those for which $\mathcal I\subseteq \bigvee_\alpha \mathcal I_\alpha$ always implies $\mathcal I\subseteq \bigvee_{\alpha'} \mathcal I_{\alpha'}$ for some finite subset of indices) are precisely the finitely generated ideals: $\mathcal I=\langle r_1,\ldots, r_n\rangle$.
If $\mathcal I$ is finitely generated, then it is compact by the sum description of joins. Conversely, since $\mathcal I= \sum_{r\in \mathcal I}\langle r\rangle$, if $\mathcal I$ is compact then $\mathcal I=\langle r_1\rangle + \ldots + \langle r_n\rangle = \langle r_1,\ldots,r_n\rangle$ for finitely many $r_1,\ldots,r_n\in \mathcal I$, so it is finitely generated.
The poset ${\mathop{\mathrm{Id}}}(\mathcal R)$ of homogeneous ideals of $\mathcal R$, ordered by inclusion and equipped with the pairing $(\mathcal I,\mathcal J)\mapsto \mathcal I\mathcal J$, is a commutative ideal lattice.
We need to verify the axioms (L1)-(L5) of [@bks:ideal]\*[Definition 1.1]{}, and this is quite straightforward. We have already seen that ${\mathop{\mathrm{Id}}}(\mathcal R)$ is complete, and it is compactly generated since $\mathcal I= \sum_{r\in \mathcal I}\langle r\rangle$ holds for every $\mathcal I$. By Lemma \[lemma:cpt\_ideals\], $1=\langle {\mathrm{id}}_{\mathbf{1}}\rangle$ is compact and the product of two compact elements is compact: writing $\mathcal I=\langle I\rangle$ and $\mathcal J=\langle J\rangle$ for finite sets $I,J$, it follows from Lemma \[lemma:products\] that $\mathcal I\mathcal J=\langle I\otimes J \rangle$ is again compact. Finally, the product is commutative by Lemma \[lemma:products\] and distributes over finite joins: $\mathcal I_1(\mathcal I_2+\mathcal I_3)=\mathcal I_1\mathcal I_2 +\mathcal I_1\mathcal I_3$.
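For $R={\mathbb{Z}}/12$ regarded as a commutative 2-ring with one object as in Example \[ex:comm\_2\_rings\] (2), homogeneous ideals are the ordinary ideals, and the lattice operations above can be checked mechanically. The following toy computation (again only an illustration of ours) verifies commutativity of the product and its distributivity over joins in this case:

```python
# For R = Z/12, viewed as a commutative 2-ring with one object, homogeneous
# ideals are the ordinary ideals.  We check that the product I∘J (sums of
# products st) is commutative and distributes over joins I + J.
N = 12

def ideal(gens):
    """Closure of gens under addition and multiplication by elements of R."""
    elems = {0}
    while True:
        new = {(a + b) % N for a in elems for b in elems}
        new |= {(r * g) % N for r in range(N) for g in gens}
        if new <= elems:
            return frozenset(elems)
        elems |= new

def product(I, J):  # I∘J: ideal generated by the products st, s in I, t in J
    return ideal({(s * t) % N for s in I for t in J})

def join(I, J):     # I + J: Hom-wise sums (the description of joins above)
    return ideal({(s + t) % N for s in I for t in J})

I1, I2, I3 = ideal({2}), ideal({3}), ideal({4})
assert product(I1, I2) == product(I2, I1) == ideal({6})   # commutativity
assert product(I1, join(I2, I3)) == join(product(I1, I2), product(I1, I3))
```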
It follows in particular that $\mathcal R$ has an associated spectrum of prime elements, which by [@bks:ideal] is a spectral space in the sense of Hochster [@hochster:prime]. In the next few subsections we (re)define the spectrum and (re)prove its spectrality in a more traditional way, using localization, as this makes the parallel with usual commutative rings clear, and we will need localization in any case.
The Zariski spectrum
--------------------
Let $\mathcal R$ be a $\mathcal G$-graded commutative 2-ring.
A proper homogeneous ideal $\mathcal I$ of $\mathcal R$ is *prime* if it satisfies the usual condition, that $r\circ s\in \mathcal I$ implies either $r\in \mathcal I$ or $s\in \mathcal I$. The *(homogeneous) spectrum of $\mathcal R$*, denoted ${\mathop{\mathrm{Spec}}}\mathcal R$, is the set of all prime ideals of $\mathcal R$ endowed with the Zariski topology. Thus by definition the closed subsets are those of the form $$V(\mathcal I):=\{ \mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R\mid \mathcal I\subseteq \mathfrak p \}$$ for some homogeneous ideal $\mathcal I$ of $\mathcal R$.
\[lemma:spectop\] We have the familiar computational rules:
1. $V(0)={\mathop{\mathrm{Spec}}}\mathcal R$ and $V(\mathcal R)=\emptyset$.
2. $ V(\mathcal I) \cup V(\mathcal J) = V(\mathcal I \mathcal J) $.
3. $ \bigcap_\lambda V(\mathcal I_\lambda) = V\left(\sum_\lambda \mathcal I_\lambda \right)$.
In particular the Zariski topology is indeed a topology. Moreover, the sets $$D_r := \{ \mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R\mid r\not\in \mathfrak p \}
\quad \quad (r\in \mathcal R)$$ provide a basis of open subsets.
The computational rules are easily verified. We then obtain $$V(\mathcal I)
= V\left( \sum_{r\in \mathcal I} \langle r\rangle \right)
= \bigcap_{r\in \mathcal I} V(\langle r\rangle)$$ which shows that the sets $V(\langle r\rangle)= {\mathop{\mathrm{Spec}}}\mathcal R\smallsetminus D_r $ ($r\in \mathcal R$) are a basis of closed subsets, which is equivalent to the second claim.
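Continuing the toy example $R={\mathbb{Z}}/12$ of Example \[ex:comm\_2\_rings\] (2), one can enumerate the primes and verify the computational rules of Lemma \[lemma:spectop\] directly (our illustration only):

```python
# Toy check of the rules for V(-) with R = Z/12 as a one-object 2-ring.
# Every ideal of Z/12 is principal, and the primes turn out to be <2> and <3>.
N = 12

def ideal(gens):
    """Closure of gens under addition and multiplication by elements of R."""
    elems = {0}
    while True:
        new = {(a + b) % N for a in elems for b in elems}
        new |= {(r * g) % N for r in range(N) for g in gens}
        if new <= elems:
            return frozenset(elems)
        elems |= new

all_ideals = {ideal({d}) for d in range(N)}  # Z/12 is a principal ideal ring
unit_ideal = ideal({1})

def is_prime(P):
    return P != unit_ideal and all(
        r in P or s in P
        for r in range(N) for s in range(N) if (r * s) % N in P)

primes = {P for P in all_ideals if is_prime(P)}
assert primes == {ideal({2}), ideal({3})}

def V(I):
    return {P for P in primes if I <= P}

I, J = ideal({4}), ideal({3})
prod = ideal({(s * t) % N for s in I for t in J})
assert V(prod) == V(I) | V(J)                 # V(IJ) = V(I) ∪ V(J)
assert V(ideal(I | J)) == V(I) & V(J)         # V(I+J) = V(I) ∩ V(J)
assert {P for P in primes if 7 not in P} == primes  # 7 is a unit: D_7 = Spec R
```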
\[remark:spec\_fct\] It is immediate to verify that every morphism $F\colon \mathcal R\to \mathcal R'$ induces a continuous map ${\mathop{\mathrm{Spec}}}(F)\colon {\mathop{\mathrm{Spec}}}\mathcal R'\to {\mathop{\mathrm{Spec}}}\mathcal R$ by ${\mathop{\mathrm{Spec}}}(F)(\mathfrak p):= F^{-1}\mathfrak p$, and that ${\mathop{\mathrm{Spec}}}$ is functorial: ${\mathop{\mathrm{Spec}}}({\mathrm{id}}_\mathcal R)={\mathrm{id}}_{{\mathop{\mathrm{Spec}}}\mathcal R}$ and ${\mathop{\mathrm{Spec}}}( F\circ F' )={\mathop{\mathrm{Spec}}}F' \circ {\mathop{\mathrm{Spec}}}F$.
\[lemma:max\_prime\] Every maximal homogeneous ideal is prime, and every proper homogeneous ideal is contained in a prime ideal.
The second claim follows from the first one by the usual application of Zorn’s lemma. In order to prove the first claim, note that $\mathcal I$ is prime iff $\mathcal R/\mathcal I$ is a *domain*, by which of course we mean that it has no zero divisors: if $rs=0$ then $ r=0$ or $s=0$. Also, $\mathcal I$ is maximal iff $\mathcal R/\mathcal I$ has no homogeneous ideals other than $0$ and itself, and they are distinct. But the latter condition implies the former: if $\mathcal I$ is maximal and if $s\in \mathcal R/\mathcal I$ is a nonzero element, then ${\mathrm{Ann}}_{\mathcal R/\mathcal I}(s)\neq \mathcal R/\mathcal I$ (otherwise $s=0$), so by hypothesis we must have ${\mathrm{Ann}}_{\mathcal R/\mathcal I}(s)=0$, showing that there exists no nonzero $r\in \mathcal R/\mathcal I$ with $rs=0$.
\[cor:nonempty\] The spectrum ${\mathop{\mathrm{Spec}}}\mathcal R$ is empty if and only if $\mathcal R\simeq \{0\}$.
In view of Lemma \[lemma:max\_prime\] it suffices to see that every non-invertible map $r$ in $\mathcal R$ is contained in some proper ideal, i.e., we have to show that in this case $\langle r\rangle$ is proper. If this were not the case, then ${\mathrm{id}}_{\mathbf{1}}\in \langle r\rangle$. Therefore, by Proposition \[prop:principal\], we would be able to write ${\mathrm{id}}_{\mathbf{1}}= urs $ and ${\mathrm{id}}_{\mathbf{1}}= trv$ for some isomorphisms $u$ and $v$. But this would imply that $r$ has both a left and a right inverse and is therefore invertible, in contradiction with the hypothesis.
\[prop:qcpt\] The topological space ${\mathop{\mathrm{Spec}}}\mathcal R$ is quasi-compact.
Consider a cover ${\mathop{\mathrm{Spec}}}\mathcal R= \bigcup_{\lambda\in \Lambda} U_\lambda $ by open subsets, which by Lemma \[lemma:spectop\] we may assume of the form $U_\lambda=D_{r_\lambda}$. Thus $\emptyset = \bigcap_\lambda V(\langle r_\lambda \rangle)=V(\sum_\lambda \langle r_\lambda \rangle)$. This means that the ideal $\mathcal I := \sum_\lambda \langle r_\lambda \rangle $ is equal to $\mathcal R$ (otherwise there would be some prime containing it, by Lemma \[lemma:max\_prime\]). Hence ${\mathrm{id}}_{\mathbf{1}}\in \mathcal I $, and it follows from the explicit descriptions of principal ideals and sums (Proposition \[prop:principal\] and Lemma \[lemma:unions\]) that there exist finitely many indices $\lambda_1,\ldots,\lambda_n$ and maps $s_1,\ldots,s_n$ and $t_1,\ldots, t_n$ such that ${\mathrm{id}}_{\mathbf{1}}= s_1r_{\lambda_1} t_1 +\ldots+ s_nr_{\lambda_n} t_n$. Therefore ${\mathrm{id}}_{\mathbf{1}}\in \langle r_{\lambda_1}\rangle +\ldots + \langle r_{\lambda_n}\rangle$, that is $\mathcal R= \langle r_{\lambda_1}\rangle +\ldots + \langle r_{\lambda_n}\rangle $, that is ${\mathop{\mathrm{Spec}}}\mathcal R = D_{r_{\lambda_1}} \cup \cdots \cup D_{r_{\lambda_n}}$.
Multiplicative systems and localization {#subsec:loc}
---------------------------------------
We define homogeneous multiplicative systems in a graded commutative 2-ring in the natural way and show that they satisfy a two-sided calculus of fractions. In particular this allows us to localize $\mathcal R$ at a prime ideal.
\[defi:mult\_set\] A family $S\subseteq {\mathop{\mathrm{Mor}}}\mathcal R$ of morphisms of $\mathcal R$ is called a *homogeneous multiplicative system* if it contains all isomorphisms and is closed under taking composites and translates.
\[rem:dual\_multiplicative\] If $S$ is a homogeneous multiplicative system in $\mathcal R$ and $r$ is some morphism of $\mathcal R$, then it follows from Example \[ex:dual\_translate\] that $r \in S$ iff $r^\vee \in S$.
\[ex:mult\_r\] For any $r\in \mathcal R$, we denote by $S_r$ the smallest homogeneous multiplicative system of $\mathcal R$ containing $r$. Note that $S_r$ consists precisely of all finite composites of twists of $r$ and isomorphisms, i.e., all finite compositions of translates of $r$.
\[ex:mult\_prime\] The complement $S_{\mathfrak p}:={\mathop{\mathrm{Mor}}}\mathcal R\smallsetminus \mathfrak p$ of every homogeneous prime ideal $\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R$ is a homogeneous multiplicative system of $\mathcal R$.
\[prop:fractions\] Every homogeneous multiplicative system $S$ in a graded commutative 2-ring $\mathcal R$ satisfies both a left and a right calculus of fractions.
We are only going to prove that $S$ satisfies a calculus of left fractions, because the proof for right fractions is dual (using the duality $(-)^\vee : \mathcal R\simeq \mathcal R^{\mathrm{op}}$, which stabilizes $S$ by Remark \[rem:dual\_multiplicative\]). Since by definition $S$ contains the identities of $\mathcal R$ and is closed under composition, it remains to verify the following two assertions:
1. (Ore condition.) Given two morphisms $r$ and $s$ with $s\in S$ as depicted, $$\xymatrix@1{
g\ar[r]^-r \ar[d]_{S \,\ni\, s} & h \ar@{..>}[d]^{s' \,\in\, S} \\
\ell \ar@{..>}[r]^-{r'} & m
}$$ then there exist $s'\in S$ and $r'$ such that $s'r=r's$.
2. (Cancellation.) Given three morphisms $r$, $t$ and $s$ as depicted, $$\xymatrix@1{
m \ar@{..>}[r]^-{s'} &
g \ar@/^/[r]^-{r} \ar@/_/[r]_-t &
h \ar[r]^-s &
\ell
}$$ with $s\in S$ and such that $sr=st$, then there exists a morphism $s'\in S$ such that $rs'=ts'$.
Since $\mathcal R$ is an $\mathsf{Ab}$-category, (2) may be conveniently reformulated as follows:
- Given two morphisms $r\colon g\to h$ and $s\colon h\to \ell$ with $s\in S$ and $sr=0$, then there is an $s'\in S$ with $rs'=0$.
To prove (1), consider the following commutative diagram: $$\label{dinat_mult}
\xymatrix{
*+[F]{h} &
*+[F]{g} \ar[r]^-s \ar[l]_-{r} &
*+[F]{ \ell }
\\
{\mathbf{1}}h \ar[u]^{\sim}_{\lambda} & {\mathbf{1}}g\ar[l]_-{{\mathbf{1}}r} \ar[u]_\lambda \ar[r]^-{{\mathbf{1}}s} &
{\mathbf{1}}\ell \ar[u]^{\sim}_\lambda \\
hh^\vee h \ar[u]^\sim_{\varepsilon_{h^\vee}\, h} \ar[d]_\sim^{h\, \varepsilon_{h}} &
hh^\vee g \ar[l]_-{hh^\vee r} \ar[u]_{\varepsilon_{h^\vee}\, g} \ar[r]^-{hh^\vee s} \ar[d]^{hr^\vee g} &
hh^\vee \ell \ar[u]_{\varepsilon_{h^\vee}\,\ell}^\sim \ar[d]^{hr^\vee \ell} & \\
h{\mathbf{1}}& hg^\vee g \ar[l]^-\sim_-{h\,\varepsilon_{g}} \ar[r]^-{hg^\vee s} &
*+[F] { hg^\vee \ell }
}$$ where the bottom-left square commutes by applying $h\otimes -$ to a dinaturality square for $r$, the two top squares by the naturality of $\lambda$, and all remaining squares by the bifunctoriality of $\otimes$. Therefore it suffices to define $r'$ to be the vertical composite from $\ell$ to $hg^\vee \ell$ on the right hand side, and $s'\in S$ to be the left-and-bottom composite from $h$ to $hg^\vee\ell$.
Let us prove (2). Given such $r$ and $s$, by Proposition \[prop:pseudo\_comm\] there exists a commutative diagram (where the $\bullet$’s denote some unnamed, possibly different objects) $$\xymatrix{
g \ar@/_/[d]_u^\sim \ar[r]^-r &
h \ar[r]^-s &
{\ell} \\
\bullet \ar[r]^-{s''} &
\bullet \ar[r]^-{r'} & \bullet \ar@/_/[u]_v^\sim
}$$ where $r'$ is some twist of $r$ (say $r'=m\otimes r$) and $s''$ some twist of $s$, and where $u$ and $v$ are invertible. Note that $s\in S$ implies $s''u\in S$, and that $sr=0$ implies $$r' \circ (s''u) = v^{-1}sr =0 \;.$$ By applying $m^\vee \otimes-$ to this vanishing composite, and using the naturality of $\lambda$ and $\varepsilon$ and the functoriality of $\otimes$, we obtain the next commutative diagram. $$\xymatrix{
m^\vee g \ar@/_5ex/[rrdd]_-{s' \,:= } \ar@/^5ex/[rrr]^-0 \ar[rr]_-{m^\vee (s''u)}^{\in \, S} &&
m^\vee mg \ar[d]_{\varepsilon g}^\sim \ar[r]_-{m^\vee r' } &
m^\vee m h \ar[d]_{\varepsilon h}^\sim \\
&&
{\mathbf{1}}g \ar[d]_\lambda^\sim \ar[r]_-{{\mathbf{1}}r} &
{\mathbf{1}}h \ar[d]_\lambda^\sim \\
&&
g \ar[r]_-r & h
}$$ The composite map labeled $s'$ satisfies $s'\in S$ and $rs'=0$, completing the proof of (2) and of the statement.
Let $\mathcal R$ be a graded commutative 2-ring, let $S\subseteq {\mathop{\mathrm{Mor}}}\mathcal R$ be a homogeneous multiplicative system, and let ${\mathrm{loc}}: \mathcal R \to S^{-1}\mathcal R$ be the localization of $\mathcal R$ at $S$. Then $S^{-1}\mathcal R$ is a graded commutative 2-ring for a unique symmetric tensor structure $\otimes$ making the localization functor ${\mathrm{loc}}$ symmetric monoidal and making the square $$\xymatrix{
\mathcal R\times \mathcal R \ar[r]^-{\otimes} \ar[d]_{{\mathrm{loc}}\times {\mathrm{loc}}} & \mathcal R \ar[d]^{{\mathrm{loc}}} \\
S^{-1} \mathcal R\times S^{-1} \mathcal R \ar[r]^-{\otimes} & S^{-1}\mathcal R
}$$ strictly commute.
It is a straightforward verification using the calculus of fractions.
In particular, by Examples \[ex:mult\_r\] and \[ex:mult\_prime\] we obtain for every map $r\in \mathcal R$ and every homogeneous prime $\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R$ localization morphisms of graded commutative 2-rings $${\mathrm{loc}}_r\colon \mathcal R\longrightarrow S^{-1}_r \mathcal R =: \mathcal R_r
\quad \textrm{ and } \quad
{\mathrm{loc}}_{\mathfrak p} \colon \mathcal R \longrightarrow S_{\mathfrak p}^{-1}\mathcal R=: \mathcal R_{\mathfrak p}
\;,$$ “away from $r$” and “at $\mathfrak p$”, respectively.
\[rem:loc\_invertible\] Note that (simply because $\mathfrak p$ is an ideal) the multiplicative system $S_\mathfrak p$ is *saturated*, that is, if we are given three composable maps $$\xymatrix{
g \ar[r]^-r & {h \phantom{g}}\!\!\! \ar[r]^-s & {\ell \phantom{g}}\!\!\! \ar[r]^-t & m
}$$ with $ts\in S_\mathfrak p$ and $sr \in S_\mathfrak p$, it must follow that $s\in S_\mathfrak p$. Hence $S_\mathfrak p$ consists precisely of all the morphisms in $\mathcal R$ whose image in $\mathcal R_{\mathfrak p}$ is invertible.
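Spelled out, the parenthetical justification is a one-line contrapositive; note that only one of the two hypotheses, say $ts\in S_{\mathfrak p}$, is actually needed:

```latex
% Only ts \in S_p is used; the middle implication is the ideal property.
s \notin S_{\mathfrak p}
\;\iff\; s \in \mathfrak p
\;\Longrightarrow\; ts \in \mathfrak p
\;\iff\; ts \notin S_{\mathfrak p}\,.
```

Taking the contrapositive, $ts\in S_{\mathfrak p}$ already implies $s\in S_{\mathfrak p}$, which is the saturation property.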
Applications to the spectrum
----------------------------
We now apply the calculus of fractions for homogeneous multiplicative systems in order to prove properties of the spectrum.
\[prop:loc\_spec\] Given a homogeneous multiplicative subset $S\subset \mathcal R$, the localization morphism ${\mathrm{loc}}\colon \mathcal R\to S^{-1}\mathcal R$ induces a homeomorphism $${\mathop{\mathrm{Spec}}}({\mathrm{loc}}) \colon {\mathop{\mathrm{Spec}}}S^{-1}\mathcal R \stackrel{\sim}{\longrightarrow}
\{ \mathfrak p \mid \mathfrak p\cap S =\emptyset \} \;\subseteq\; {\mathop{\mathrm{Spec}}}\mathcal R$$ onto its image. In particular, the morphisms ${\mathrm{loc}}_r$ and ${\mathrm{loc}}_\mathfrak p$ induce homeomorphisms $${\mathop{\mathrm{Spec}}}\mathcal R_r \cong D_r
\quad \textrm{ and } \quad
{\mathop{\mathrm{Spec}}}\mathcal R_\mathfrak p \cong \{\mathfrak q\in {\mathop{\mathrm{Spec}}}\mathcal R \mid \mathfrak q \subseteq \mathfrak p \}$$ for every $r\in \mathcal R$ and every $\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R$.
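Concretely, the second special case follows from the main claim applied to $S=S_{\mathfrak p}$ together with the elementary set-theoretic identification

```latex
\{\, \mathfrak q \mid \mathfrak q \cap S_{\mathfrak p} = \emptyset \,\}
\;=\;
\{\, \mathfrak q \mid
   \mathfrak q \cap (\mathrm{Mor}\,\mathcal R \smallsetminus \mathfrak p)
   = \emptyset \,\}
\;=\;
\{\, \mathfrak q \in \mathrm{Spec}\,\mathcal R \mid
   \mathfrak q \subseteq \mathfrak p \,\} \,.
```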
Let us first explain how to obtain the two special cases of the statement from the main claim. It follows immediately from the definition of a prime ideal $\mathfrak p$ that $r\in \mathfrak p$ iff $s\in \mathfrak p$ for some $s\in S_r$; hence $\mathfrak p\cap S_r=\emptyset $ iff $r\notin \mathfrak p$. Thus ${\mathop{\mathrm{Spec}}}\mathcal R_r \cong D_r $ by the first part of the proposition. The last homeomorphism is proved similarly. Now let $S$ be an arbitrary homogeneous multiplicative subset. For each homogeneous ideal $\mathcal I \subseteq \mathcal R$ we consider the following collection of morphisms of $S^{-1}\mathcal R$ (expressed as left fractions): $$S^{-1}\mathcal I :=\{ s^{-1}r \mid s\in S , r\in \mathcal I \} \,.$$ We now prove the proposition in a series of easy lemmas.
\[lemma:claim1\] The collection $S^{-1}\mathcal I$ forms a homogeneous ideal of $S^{-1}\mathcal R$.
Evidently $S^{-1}\mathcal I$ contains all zero maps and is closed under twists. Closure under left and right multiplication follows from the Ore condition for left fractions: if the fractions $s_1^{-1}r_1$ and $s_2^{-1}r_2$ represent two morphisms in $S^{-1}\mathcal R$ that are right, resp. left, composable with some map $s^{-1}r\in S^{-1}\mathcal I$, then we find in $\mathcal R$ a commutative diagram of the form $$\xymatrix@R=15pt@C=15pt{
&& \bullet && \bullet && \\
& \bullet \ar[ur]^{\tilde r} && \bullet \ar[ul]_{\tilde s_1}^\sim \ar[ur]^{\tilde r_2} &&
\ar[ul]^\sim_{\tilde s} \bullet & \\
\bullet \ar[ur]^{r_1} &&
\bullet \ar[ul]_{s_1}^\sim \ar[ur]^{r} &&
\bullet \ar[ul]_{s}^\sim \ar[ur]^{r_2} && \bullet \ar[ul]^\sim_{s_2}
}$$ with $\tilde s_1, \tilde s\in S$. Since $\mathcal I$ is an ideal and $S$ is closed under composition, we deduce from the equations $(s^{-1}r)(s_1^{-1}r_1)= (\tilde s_1s)^{-1}(\tilde r r_1)$ and $(s_2^{-1}r_2)(s^{-1}r)= (\tilde s s_2)^{-1}(\tilde r_2 r)$ that both composites belong again to $S^{-1}\mathcal I$. Next we prove closure under sums. Consider two summable fractions $s^{-1}_1r_1, s^{-1}_2r_2\in S^{-1}\mathcal I$. By applying Ore’s condition again, we obtain in $\mathcal R$ a diagram as follows, $$\xymatrix{
& \bullet \ar[d]^{\tilde s_1}_\sim & \\
\bullet \ar[ur]^{r_1} \ar[dr]_{r_2} & \bullet & \bullet \ar[ul]_{s_1} \ar[dl]^{s_2} \\
&\bullet \ar[u]_{\tilde s_2}&
}$$ where the right half is commutative and (say) $\tilde s_1\in S$. By the very definition of the sum of morphisms in the localization $S^{-1}\mathcal R$, we have the equation $$s_1^{-1}r_1 + s_2^{-1}r_2 = \underbrace{(\tilde s_1 s_1)}_{\in \, S}\! {}^{-1}\underbrace{(\tilde s_1r_1 + \tilde s_2r_2)}_{\in \, \mathcal I}
\,,$$ from which we deduce that the sum belongs again to $S^{-1}\mathcal I$. Thus $S^{-1}\mathcal I$ is a homogeneous ideal as claimed.
\[lemma:claim2\] If $\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R$ is such that $\mathfrak p\cap S=\emptyset$, then $S^{-1}\mathfrak p\in {\mathop{\mathrm{Spec}}}S^{-1}\mathcal R$.
For this, we may assume without loss of generality that $S$ is saturated, since $\mathcal R\smallsetminus \mathfrak p$ contains $S$ and is saturated (Remark \[rem:loc\_invertible\]). Let us see that $S^{-1}\mathfrak p$ is proper. If not then we may write ${\mathrm{id}}_{\mathbf{1}}= s^{-1}r \in S^{-1}\mathfrak p$, from which we deduce by saturation that $r\in S$ as well, but this would contradict our hypothesis that $S\cap \mathfrak p=\emptyset$. Now assume that $(s^{-1}_1r_1)(s^{-1}_2r_2)\in S^{-1}\mathfrak p$ for two left fractions $s^{-1}_1r_1, s^{-1}_2r_2 \in S^{-1}\mathcal R$. Thus $(s^{-1}_1r_1)(s^{-1}_2r_2)= s^{-1} r$ for some $s\in S$ and $r\in \mathfrak p$. By Ore, we obtain in $\mathcal R$ a commutative diagram $$\xymatrix@R=15pt@C=15pt{
&& \bullet && \\
& \bullet \ar[ur]^{\tilde r_1} && \bullet \ar[ul]_{\tilde s_2}^\sim & \\
\bullet \ar[ur]^{r_2} && \bullet \ar[ul]_{s_2}^\sim \ar[ur]^{r_1} && \bullet \ar[ul]_{s_1}^\sim
with $\tilde s_2\in S$ and therefore an equation $(s^{-1}_1r_1)(s^{-1}_2r_2)= (\tilde s_2s_1)^{-1}(\tilde r_1r_2)$. Hence $s^{-1} r$ and $(\tilde s_2s_1)^{-1}(\tilde r_1r_2)$ are equivalent fractions and thus admit a common amplification, i.e., there exists in $\mathcal R$ a commutative diagram $$\xymatrix{
& \bullet \ar[d]^{t} & \\
\bullet \ar[ur]^{r} \ar[dr]_{\tilde r_1 r_2} & \bullet & \bullet \ar[ul]_{s} \ar[dl]^{\tilde s_2s_1} \\
&\bullet \ar[u]_{u}^\sim &
}$$ with $u\in S$. In particular we see that $u(\tilde r_1r_2)= tr\in \mathfrak p$, but since $u\not\in \mathfrak p$ we must have that $\tilde r_1r_2\in \mathfrak p$, so either $\tilde r_1$ or $r_2$ must belong to $\mathfrak p$. In the latter case $s^{-1}_2r_2\in S^{-1}\mathfrak p$; in the former, the equation $\tilde r_1s_2 =\tilde s_2 r_1$ implies that $r_1\in \mathfrak p$ and thus $s^{-1}_1r_1\in S^{-1}\mathfrak p$. This concludes the proof that $S^{-1}\mathfrak p$ is a prime ideal in $S^{-1}\mathcal R$.
\[lemma:claim3\] The construction $S^{-1}$ is left inverse to ${\mathrm{loc}}^{-1}$, i.e., every homogeneous ideal $\mathcal J$ of $ S^{-1}\mathcal R$ has the form $\mathcal J= S^{-1}({\mathrm{loc}}^{-1} \mathcal J)$.
The inclusion $S^{-1}({\mathrm{loc}}^{-1}\mathcal J)\subseteq \mathcal J$ is obvious. On the other hand, if a fraction $s^{-1}r $ belongs to $ \mathcal J$ then ${\mathrm{loc}}(r) = ss^{-1}r \in \mathcal J$ too, so that $s^{-1}r\in S^{-1}({\mathrm{loc}}^{-1}\mathcal J)$. This proves the other inclusion $\mathcal J\subseteq S^{-1}({\mathrm{loc}}^{-1}\mathcal J)$ and therewith the claim.
\[lemma:claim4\] The two maps ${\mathrm{loc}}^{-1}$ and $S^{-1}$ induce mutually inverse bijections between $ {\mathop{\mathrm{Spec}}}S^{-1}\mathcal R $ and $ \{\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R\mid \mathfrak p\cap S= \emptyset \} $.
We have already seen (in Lemma \[lemma:claim2\] and Remark \[remark:spec\_fct\]) that $S^{-1}$ and ${\mathrm{loc}}^{-1}$ restrict to prime ideals as described (for the latter, note that ${\mathrm{loc}}^{-1}(\mathfrak q)\cap S=\emptyset$ because every $\mathfrak q\in {\mathop{\mathrm{Spec}}}S^{-1}\mathcal R $ is a proper ideal). By Lemma \[lemma:claim3\], we have $S^{-1}({\mathrm{loc}}^{-1}\mathfrak q)=\mathfrak q$ for all $\mathfrak q\in {\mathop{\mathrm{Spec}}}S^{-1}\mathcal R$, and the inclusion $ \mathfrak p \subseteq {\mathrm{loc}}^{-1}(S^{-1}\mathfrak p)$ is obvious for all $\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R$ with $\mathfrak p\cap S=\emptyset$. To prove the reverse inclusion, let $r\in \mathcal R$ be such that ${\mathrm{loc}}(r)\in S^{-1}\mathfrak p$. Then, by the definition of $S^{-1}\mathfrak p$, there exists in $\mathcal R$ a commutative diagram $$\xymatrix{
& \bullet \ar[d]^{v} & \\
\bullet \ar[ur]^{t} \ar[dr]_{r} & \bullet & \bullet \ar[ul]_{s}^<<<<\sim \ar@{=}[dl] \\
&\bullet \ar[u]_{u}^\sim &
}$$ with $s,u\in S$ and $t\in \mathfrak p$, from which we see that $ur=vt\in \mathfrak p$. Since $u\not\in \mathfrak p$ by hypothesis and $\mathfrak p$ is prime, we conclude that $r$ belongs to $\mathfrak p$. Therefore ${\mathrm{loc}}^{-1}(S^{-1}\mathfrak p)\subseteq \mathfrak p$ as well.
\[lemma:claim5\] For any left fraction $s^{-1}r\in S^{-1}\mathcal R$, the preimage of $V(\langle s^{-1}r\rangle )$ under the map $S^{-1}\colon \{\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R\mid \mathfrak p\cap S=\emptyset \}\stackrel{\sim}{\to} {\mathop{\mathrm{Spec}}}S^{-1}\mathcal R$ is $V(\langle r\rangle )$.
Consider some $\mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal R$ with $\mathfrak p\cap S=\emptyset$. Clearly, if $r\in \mathfrak p$, then $s^{-1} r\in S^{-1}\mathfrak p$. Conversely, if $s^{-1}r\in S^{-1}\mathfrak p$ then—by the easy argument already employed twice—we must have $r\in \mathfrak p$. Therefore $(S^{-1})^{-1}V(\langle s^{-1}r\rangle ) = V(\langle r\rangle)$, as claimed.
Finally, in order to complete the proof of the proposition it suffices to note that the bijection in Lemma \[lemma:claim4\] is a homeomorphism for the respective Zariski topologies: the map ${\mathrm{loc}}^{-1}$ was already seen to be continuous, and its inverse $S^{-1}$ is continuous by virtue of Lemma \[lemma:claim5\]. So we are done.
We record an easy but pleasant consequence of the proof: the operations of taking quotients and localizations commute with one another.
\[cor:SvsI\] Let $\mathcal R$ be a graded commutative 2-ring, let $\mathcal I$ be a homogeneous ideal of $\mathcal R$, and let $S$ be a homogeneous multiplicative system in $\mathcal R$. Then there exists a unique isomorphism of graded commutative 2-rings $$S^{-1}\mathcal R /S^{-1}\mathcal I \stackrel{\sim}{\to} (S/\mathcal I)^{-1}(\mathcal R/\mathcal I)$$ which is compatible with the localization and quotient morphisms, where $S^{-1}\mathcal I$ is the homogeneous ideal of Lemma \[lemma:claim1\], and where $S/\mathcal I$ is the homogeneous multiplicative system generated in $\mathcal R/\mathcal I$ by the image of $S$.
The proof is obvious from the universal properties of quotients and localizations.
\[prop:2spectral\] The spectrum ${\mathop{\mathrm{Spec}}}\mathcal R$ of every graded commutative 2-ring $\mathcal R$ is a spectral topological space, i.e.: it is $T_0$, quasi-compact, it has a basis of quasi-compact open subsets closed under finite intersections, and every irreducible closed subset has a unique generic point. Moreover, we may take $\{D_r\mid r\in \mathcal R\}$ as a basis of quasi-compact opens.
We have already proved in Proposition \[prop:qcpt\] that the whole spectrum is quasi-compact. Moreover, the basic open subsets $D_r$, $r\in \mathcal R$, of Lemma \[lemma:spectop\] are also quasi-compact, because of the homeomorphisms $D_r\cong {\mathop{\mathrm{Spec}}}\mathcal R_r$ of Proposition \[prop:loc\_spec\]. As in the case of usual rings, it is clear that they are closed under finite intersections (indeed $D_r\cap D_s = D_{\tilde rs}$ for any translate $\tilde r$ of $r$ that is composable with $s$). Thus it only remains to prove the existence and uniqueness of generic points. Uniqueness and the fact that the spectrum is $T_0$ are immediate from the definition of the Zariski topology, from which we see that the closure of a point has the form $\overline{\{\mathfrak p\}}=V(\mathfrak p)$; accordingly, if $\mathfrak p_1$ and $\mathfrak p_2$ have the same closure then they are contained in one another and hence equal. For the existence, it suffices to show that every nonempty closed subset $Z \subseteq {\mathop{\mathrm{Spec}}}\mathcal R$ contains a minimal point (with respect to inclusion). Writing $Z= V(\mathcal I)$, this is equivalent to showing that if $\mathcal I$ is proper then there is a minimal prime containing it (as $Z\cong {\mathop{\mathrm{Spec}}}\mathcal R/\mathcal I$ and by Corollary \[cor:nonempty\]). This is a standard application of Zorn’s lemma.
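The intersection formula $D_r\cap D_s = D_{\tilde r s}$ invoked in this proof is checked just as for ordinary rings; here is a sketch, where $\tilde r$ denotes a translate of $r$ composable with $s$, and where we use once more that ideals and the complements of primes are stable under translates:

```latex
% Second equivalence: "\Rightarrow" because \mathfrak{p} is an ideal,
%                     "\Leftarrow" because \mathfrak{p} is prime.
\mathfrak p \in D_{\tilde r s}
\;\iff\; \tilde r s \notin \mathfrak p
\;\iff\; \tilde r \notin \mathfrak p \text{ and } s \notin \mathfrak p
\;\iff\; r \notin \mathfrak p \text{ and } s \notin \mathfrak p
\;\iff\; \mathfrak p \in D_r \cap D_s \,.
```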
\[thm:spec\] For every morphism $F\colon \mathcal R\to \mathcal R'$ of graded commutative 2-rings, there is a spectral continuous map ${\mathop{\mathrm{Spec}}}F\colon {\mathop{\mathrm{Spec}}}\mathcal R'\to {\mathop{\mathrm{Spec}}}\mathcal R$ given by $\mathfrak p\mapsto F^{-1}\mathfrak p$. This defines a contravariant functor, ${\mathop{\mathrm{Spec}}}$, from graded commutative 2-rings and their morphisms to spectral topological spaces and spectral continuous maps.
In view of the last result it remains only to verify that ${\mathop{\mathrm{Spec}}}F$ is a spectral continuous map (i.e., that the preimage of a quasi-compact open is again a quasi-compact open). For this it suffices to notice that $$({\mathop{\mathrm{Spec}}}F)^{-1}D_r= \{ \mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal{R}' \mid r\not\in F^{-1}\mathfrak p\}= \{ \mathfrak p\in {\mathop{\mathrm{Spec}}}\mathcal{R}' \mid Fr \not\in \mathfrak p\} = D_{Fr}$$ for all $r\in \mathcal R$.
Localization of $\mathcal R$-algebras {#subsec:loc_alg}
-------------------------------------
For our applications we will need to localize not only 2-rings but also algebras over them, as we now explain. Thus we need to generalize Proposition \[prop:fractions\] accordingly. Let $\mathcal R$ be a graded commutative 2-ring.
\[defi:algebra\] An *algebra over $\mathcal R$* (or *$\mathcal R$-algebra*) is a symmetric monoidal ${\mathbb{Z}}$-category $\mathcal A$ equipped with an additive symmetric monoidal functor $F\colon \mathcal R\to \mathcal A$. (Note that the objects of $\mathcal A$ are not required to be invertible.)
Let $\mathcal R$ be a graded commutative 2-ring, let $S$ be a homogeneous multiplicative system in $\mathcal R$, and let $F\colon \mathcal R\to \mathcal A$ be an $\mathcal R$-algebra. Write $ S_\mathcal A $ for the smallest class of maps in $\mathcal A$ containing $FS$ and all isomorphisms of $\mathcal A$ and which is closed under composition and twisting with objects of $\mathcal{A}$.
\[thm:fractions\_general\] Let $\mathcal R$ be a graded commutative 2-ring, let $S$ be a homogeneous multiplicative system in $\mathcal R$, and let $F\colon \mathcal R\to \mathcal A$ be an $\mathcal R$-algebra. Then $S_\mathcal A$ satisfies in $\mathcal A$ both a left and a right calculus of fractions.
To begin with, notice that $S_\mathcal A$ would remain the same if we replaced $\mathcal R$ by the full subcategory on the replete closure of its image $F\mathcal R$, and $S$ by the homogeneous multiplicative system generated by the images of maps in $S$. Thus without loss of generality we may assume that $\mathcal R$ is a full replete subcategory of $\mathcal A$.
Let us verify that $S_\mathcal A$ satisfies a calculus of left fractions (the proof for right fractions is dual and will be omitted). Since by definition $S_\mathcal A$ contains all identity maps and is closed under composition, it remains to verify conditions (1) and (2) as in the proof of Proposition \[prop:fractions\].
We see that the set $S_\mathcal A$ consists precisely of finite composites of maps in $\mathcal A$ which are either invertible or belong to $\{x\otimes s \mid s\in S, x\in {\mathop{\mathrm{obj}}}\mathcal A\}$ (or alternatively, to $\{s\otimes x\mid s\in S, x\in {\mathop{\mathrm{obj}}}\mathcal A\}$). Thus to prove (1) it evidently suffices to consider diagrams $\smash{\bullet \leftarrow \bullet \rightarrow \bullet}$ where the map $(\bullet \leftarrow \bullet)\in S_\mathcal A$ is a twist in $\mathcal A$ of a map of $S$. Accordingly, assume we are given two maps $$\xymatrix{
{\phantom{g} xh} & {\!\!\! \phantom{h} xg} \ar[l]_-{xs} \ar[r]^-r & {\!\!\! \phantom{h} y}
for some $s\in S\subseteq \mathcal R$ and $x,y\in {\mathop{\mathrm{obj}}}\mathcal A$. In order to complete them to a square as required, draw the following commutative diagram, which is similar to the one in the proof of Proposition \[prop:fractions\]. $$\label{dinat_mult_general}
\xymatrix@R=8pt@C=8pt{
&
*+[F]{xh} &
&
*+[F]{xg} \ar[ll]_-{xs} \ar[rr]^-{r} &&
*+[F]{y} \\
&&&&& \\
&
{\mathbf{1}}xh \ar[uu]^\sim_{\lambda_{xh}} &&
{\mathbf{1}}xg \ar[ll]_-{{\mathbf{1}}xs} \ar[uu]^{\lambda_{xg}} \ar[rr]^-{{\mathbf{1}}r} &&
{\mathbf{1}}y \ar[uu]_-\sim^-{\lambda_y} \\
&&&&& \\
&
hh^\vee xh \ar[uu]_{\varepsilon_{h^\vee} xh}^\sim \ar[dl]^<<<{\gamma h}_-\sim &&
hh^\vee xg \ar[ddrr]|{hs^\vee r} \ar[uu]^{\varepsilon_{h^\vee} xg} \ar[ll]_-{hh^\vee xs} \ar[rr]^-{hh^\vee r} \ar[dl]_<<<<<<{\gamma g} \ar[dd]|{hs^\vee xg \;\; } &&
hh^\vee y \ar[uu]_\sim^{\varepsilon_{h^\vee} y} \ar[dd]^{hs^\vee y} \\
xhh^\vee h \ar[dd]_\sim^{xh \varepsilon_{h}} &&
xhh^\vee g \ar[ll]^-{xhh^\vee s} \ar[dd]_{xh s^\vee g} &&& \\
&&& hg^\vee xg \ar[rr]_{hg^\vee r} \ar[dl]^\sim_{\gamma g} &&
*+[F]{hg^\vee y} \\
xh{\mathbf{1}}&& xhg^\vee g \ar[ll]^-\sim_-{xh \varepsilon_{g}} &&&
}$$ The top two squares commute by naturality of $\lambda$; the middle-bottom skew one by the naturality of $\gamma$; the bottom-left one by applying $xh\otimes-$ to a dinaturality square for $s$, and all the remaining squares by functoriality of the tensor. That each map marked $\sim$ is an isomorphism is either clear or follows from the fact that both $g$ and $h$ are invertible, which is the case since $s\in S$. If we compose maps between the objects in boxes, the outer frame of the diagram becomes a commutative square $$\xymatrix{
xg \ar[d]_{xs} \ar[r]^{r} &
y \ar[d]^{s'} \\
xh \ar[r] & hg^\vee y
}$$ with $s'\in S_\mathcal A$ (by Remark \[rem:dual\_multiplicative\]). This proves (1). It remains to prove condition (2) for $S_\mathcal A$, which reads as follows:
- Given two morphisms $r\colon x\to y$ and $s\colon y\to z$ such that $s\in S_\mathcal A$ and $sr=0$, then there is an $s'\in S_{\mathcal A}$ such that $rs'=0$.
We first prove the following special case of (2).
\[lemma:ind\_reduction\] Consider two composable maps $r\colon x \to g\otimes y$ and $s\otimes y\colon g\otimes y\to h \otimes y$ such that $(s\colon g\to h)\in S$ and $(s\otimes y)r=0$. Then there is an $s'\in S_\mathcal A$ with $rs'=0$.
Given such $r$ and $s$, build the following commutative diagram, whose similarity with the previous ones will not be missed: $$\xymatrix@R=8pt@C=8pt{
x \ar[rr]^-r \ar@/^4ex/[rrrr]^-0 && gy \ar[rr]^-{sy} && hy & \\
&&&&& \\
x{\mathbf{1}}\ar[uu]^\sim_\rho \ar[rr]^-{r{\mathbf{1}}} &&
gy{\mathbf{1}}\ar[uu]_\rho \ar[rr]^-{sy{\mathbf{1}}} &&
hy{\mathbf{1}}\ar[uu]_\sim^\rho & \\
&&&&& \\
*+[F]{xh^\vee h} \ar[uu]^\sim_\varepsilon \ar[rr]^-{rh^\vee h} \ar[dd]_{xs^\vee h} &&
gyh^\vee h \ar[uu]_\varepsilon \ar[rr]^-{syh^\vee h} \ar[rd]_-{g\gamma} \ar[dd]_{gys^\vee h} &&
hyh^\vee h \ar[uu]_\sim^\varepsilon \ar[rd]^\sim_<<{h\gamma} & \\
&&& gh^\vee hy \ar[rr]_-{sh^\vee hy} \ar[dd]^{gs^\vee hy} &&
hh^\vee hy \ar[dd]_{\varepsilon hy}^\sim \\
*+[F]{xg^\vee h} \ar[rr]_-{rg^\vee h} &&
*+[F]{gyg^\vee h} \ar[dr]_\sim^{g\gamma} &&& \\
&&& gg^\vee hy \ar[rr]_-\sim^-{\varepsilon hy} && {\mathbf{1}}hy
}$$ Here, the two top squares commute by the naturality of $\rho$, the middle-bottom skew one by the naturality of $\gamma$, the bottom-right one by applying $-\otimes hy$ to a dinaturality square for $s$, and all the other squares by the functoriality of the tensor. Again the maps marked $\sim$ are all isomorphisms since both $g$ and $h$ are $\otimes$-invertible, which is the case since $s\in S$. The outer frame of the diagram tells us that the two composable arrows between the framed objects, namely $$\xymatrix{
xh^\vee h \ar[r]^-{xs^\vee h} & xg^\vee h \ar[r]^-{rg^\vee h} & gyg^\vee h
} \;,$$ compose to zero. Note also that $s'':= xs^\vee h \in S_\mathcal A$, since $s\in S$. Now it suffices to untwist this composition by the invertible object $g^\vee h$. More precisely, the following commutative diagram $$\xymatrix{
xh^\vee (g^\vee h)^\vee \ar[rr]_-{xs^\vee h(g^\vee h)^\vee} \ar@/^4ex/[rrrr]^-{0} \ar@/_5ex/[ddrr]_{s'\,:=} &&
xg^\vee h (g^\vee h)^\vee \ar[rr]_-{rg^\vee h (g^\vee h)^\vee} \ar[d]_{x\varepsilon} &&
gyg^\vee h (g^\vee h)^\vee \ar[d]_{gy\varepsilon}^\sim \\
&& x {\mathbf{1}}\ar[rr]_-{r{\mathbf{1}}} \ar[d]_\rho && gy{\mathbf{1}}\ar[d]_\rho^\sim \\
&& x \ar[rr]_-r && gy
}$$ defines a morphism $s'\in S_\mathcal A$ such that $r s' = 0$, as required.
\[lemma:S\_A\] Every $(s\colon y\to z)\in S_\mathcal A$ is a finite composition of the form $$\xymatrix{
y \ar[r]^-{u_0}_-\sim &
g_1a \ar[r]^-{s_1a} &
h_1 a \ar[r]^-{u_1}_-\sim &
\cdots
g_ia \ar[r]^-{s_i a} &
h_i a \ar[r]^-{u_i}_-\sim &
g_{i+1} a \cdots \ar[r]^-{ s_na} &
h_n a \ar[r]^-{u_n}_\sim &
z
}$$ where $a$ is some object of $\mathcal A$, each $s_i \colon g_i\to h_i$ belongs to $S$ (so in particular $g_i ,h_i$ are invertible objects in $\mathcal R$), and each $u_i$ is an isomorphism.
If $(s\colon y\to z) \in S_\mathcal A$, then $s$ must be a finite composite of the form $$\xymatrix{
y \ar[r]^-{v_0}_-\sim &
\tilde g_1a_1 \ar[r]^-{\tilde s_1a_1} &
\tilde h_1 a_1 \ar[r]^-{v_1}_-\sim &
\cdots
\tilde g_ia_i \ar[r]^-{\tilde s_i a_i} &
\tilde h_i a_i \ar[r]^-{v_i}_-\sim &
\cdots \ar[r]^-{ \tilde s_na_n} &
\tilde h_n a_n \ar[r]^-{v_n}_-\sim &
z
}$$ for some isomorphisms $v_0, \ldots, v_n$, some $\tilde s_1,\ldots , \tilde s_n\in S$, and some objects $a_1,\ldots a_n$. (To see this, it suffices to note that twists in $\mathcal A$ preserve isomorphisms, and that every left twist $x\otimes r$ of an arrow $r$ may be turned into a right twist composed with two isomorphisms, namely $\gamma (r\otimes x) \gamma$.) We deduce in particular from the isomorphisms $v_i\colon \tilde h_ia_i \cong\tilde g_{i+1}a_{i+1}$ that there exist isomorphisms $a_{i+1}\cong \tilde g_{i+1}^\vee \tilde h_i a_i $ and therefore, recursively, that there exist isomorphisms $$w_i \colon a_i \stackrel{\sim}{\longrightarrow} (\underbrace{ \tilde g_i^\vee\tilde h_{i-1} \tilde g_{i-1}^\vee\tilde h_{i-2} \cdots \tilde g^\vee_2\tilde h_1 }_{=:\; \ell_i}) a_1$$ for $i=1,\ldots, n$ (use $\ell_1={\mathbf{1}}$ when $i=1$). Setting $a:=a_1$, as well as defining $\ell_i$ as above and $u_i$ and $s_i$ as displayed below, we obtain the following commutative diagram, where the top row is the given map $s$. $$\xymatrix@C=22pt{
y \ar[r]^-{v_0}_-\sim &
\tilde g_1a_1 \ar[r]^-{\tilde s_1 a_1} \ar[d]^{\tilde g_1w_1}_\sim &
\tilde h_1 a_1 \ar[r]^-{v_1}_-\sim \ar[d]^{\tilde h_1w_1}_\sim &
\cdots
\tilde g_i a_i \ar[r]^-{\tilde s_i a_i} &
\tilde h_i a_i \ar[r]^-{v_i}_-\sim &
\cdots \ar[r]^-{ \tilde s_n a_n} &
\tilde h_n a_n \ar[r]^-{v_n}_\sim &
z \\
y \ar@{=}[u] \ar@{..>}[r]^-{u_0} &
\tilde g_1\ell_1 a \ar@{..>}[r]^-{s_1 a} &
\tilde h_1\ell_1 a \ar@{..>}[r]^-{u_1} &
\cdots \tilde g_i \ell_i a \ar@{<-}[u]_{\tilde g_i w_i}^\sim \ar@{..>}[r]^-{s_i a} &
\tilde h_i\ell_i a \ar@{<-}[u]_{\tilde h_i w_i}^\sim \ar@{..>}[r]^-{u_i} &
\cdots \ar@{..>}[r]^-{s_n a} &
\tilde h_n\ell_n a \ar@{<-}[u]_{\tilde h_n w_n}^\sim \ar@{..>}[r]^-{u_n} &
z \ar@{=}[u]
}$$ Note that each $s_i:= \tilde s_i \otimes \ell_i $ belongs to $S$ (because $\tilde s_i\in S$ and $\ell_i\in \mathcal R$) and that each $u_i$ is an isomorphism. If we further set $g_i:=\tilde g_i \otimes \ell_i$ and $h_i:= \tilde h_i\otimes \ell_i$ we see from the bottom row of the diagram that $s$ has the claimed form.
We are now ready to verify property (2) for general morphisms of $S_\mathcal A$. The proof is a straightforward recursion. Let $r,s$ be as in (2). Since $s\in S_\mathcal A$, by Lemma \[lemma:S\_A\] we have $$s = u_n (s_n \otimes a) u_{n-1} (s_{n-1}\otimes a) \cdots u_1 (s_1\otimes a) u_0
\colon y \longrightarrow z$$ for some object $a\in \mathcal A$, some isomorphisms $u_i$, and some $s_i\colon g_i\to h_i$ in $S$. By hypothesis we have $sr=0$. Since $u_n$ is an isomorphism, this implies $$\begin{aligned}
0 &= \big( (s_n \otimes a) u_{n-1} (s_{n-1}\otimes a) \cdots u_1 (s_1\otimes a) u_0 \big) \circ r \\
&= (s_n \otimes a)\circ \big( \underbrace{u_{n-1} (s_{n-1}\otimes a) \cdots (s_1\otimes a) u_0r}_{=: \; r_n } \big) \,.\end{aligned}$$ Since $s_n\in S$, we can apply Lemma \[lemma:ind\_reduction\] to deduce the existence of some $s'_n\in S_{\mathcal A}$ with $r_ns_n'=0$. Since $u_{n-1}$ is an isomorphism, we actually have $$\begin{aligned}
0 &= \big( (s_{n-1} \otimes a) u_{n-2} (s_{n-2}\otimes a) \cdots u_1 (s_1\otimes a) u_0 r \big) \circ s'_n \\
&= (s_{n-1} \otimes a)\circ \big(\underbrace{ u_{n-2} (s_{n-2} \otimes a)u_{n-3} (s_{n-3}\otimes a) \cdots (s_1\otimes a)u_0rs'_n}_{=: \; r_{n-1} } \big)\end{aligned}$$ and we may now iterate: by applying the same argument $n-1$ more times we successively produce morphisms $s'_{n-1},s'_{n-2},\ldots, s'_1$ such that the composition $s':= s'_n s'_{n-1} s'_{n-2} \cdots s'_1$ belongs to $S_{\mathcal A}$ and satisfies $rs'=0$, as required.
This concludes the proof of Theorem \[thm:fractions\_general\].
We next compare the localization of 2-rings with that of their algebras.
\[lemma:S\_AvsA\] Let $F\colon \mathcal R\to \mathcal A$ be an $\mathcal R$-algebra, let $S\subseteq {\mathop{\mathrm{Mor}}}\mathcal R$ be a homogeneous multiplicative system, and let $S_\mathcal A\subseteq {\mathop{\mathrm{Mor}}}\mathcal A$ be its extension to $\mathcal A$. Assume that $F$ is a full functor whose image is replete. Then if $s\colon x\to y$ is a morphism of $S_\mathcal A$ such that either $x$ or $y$ belongs to $F\mathcal R$, we must have $s\in FS$.
The statement only concerns the images of $\mathcal R$ and $S$ in $\mathcal A$, hence we may assume $F$ is the inclusion of a full and replete $\otimes$-subcategory $\mathcal R$ of invertible objects of $\mathcal A$. Let $(s \colon x\to g)\in S_\mathcal A$ with $g\in \mathcal R$ (the proof for the other case is dual and is omitted). By Lemma \[lemma:S\_A\] the morphism $s$ is equal to a composite $$\xymatrix{
x \ar[r]^-{u_0}_-\sim &
g_1a \ar[r]^-{s_1a} &
h_1 a \ar[r]^-{u_1}_-\sim &
\cdots
g_ia \ar[r]^-{s_i a} &
h_i a \ar[r]^-{u_i}_-\sim &
g_{i+1} a \cdots \ar[r]^-{ s_na} &
h_n a \ar[r]^-{u_n}_\sim &
g
}$$ where the maps $s_i$ belong to $S$, the maps $u_i$ are isomorphisms, and the objects $g_i,h_i$ belong to $\mathcal R$. We see immediately that $a \cong h_n^\vee g$ lies in $\mathcal R$, and consequently so does $x\cong g_1a$. Also, since $a$ is in $\mathcal R$, each map $s_ia \colon g_ia\to h_ia$ belongs to $S$. By the fullness of $\mathcal R$, the isomorphisms $u_i$ belong to $\mathcal R$ and therefore to $S$. Hence the composition $s$ belongs to $S$, proving the lemma.
\[prop:comparison\_mult\_sys\] Let $F\colon \mathcal R\to \mathcal A $ be an $\mathcal R$-algebra and let $S\subseteq \mathcal R$ be a homogeneous multiplicative system. Then the unique canonical $\otimes$-functor $\overline F$ which makes the following square commute $$\xymatrix{ \mathcal R \ar[d] \ar[r]^-F & \mathcal A \ar[d] \\
S^{-1}\mathcal R \ar[r]^-{\overline F} &
S^{-1}_\mathcal A \mathcal A
}$$ is full (fully faithful) if $F$ is full (fully faithful).
We first assume that $F$ is fully faithful. Note that in this case we may further assume that $F$ is the inclusion of a full replete subcategory of $\mathcal A$, the multiplicative systems arising from $F\mathcal{R}$ and its replete closure being identical. Let $$\xymatrix{
g & \ar[l]_-s^-{\sim} x \ar[r]^-f & h
}$$ be a right fraction representing a morphism in $S^{-1}_\mathcal A\mathcal A$ such that $g,h\in \mathcal R$. Since $g\in \mathcal R$, Lemma \[lemma:S\_AvsA\] says that $s$ belongs to $S$, showing that the fraction $fs^{-1}$ defines a morphism $g\to h$ in $S^{-1}\mathcal R$ as well. This proves that the functor $\overline F$ is full. Next consider a fraction $$\xymatrix{
g & \ar[l]_-t^-\sim \ell \ar[r]^-r & h
}$$ (with $t\in S$) representing a morphism of $S^{-1}\mathcal R$ which is mapped to zero in $S_\mathcal
A^{-1}\mathcal A$. The latter means that there exists in $\mathcal A$ a commutative diagram $$\label{eq:zeros}
\xymatrix@R=5pt{
& \ell \ar[dl]_t \ar[dr]^r & \\
g && h \\
& x \ar[ul]^-s \ar[ur]_0 \ar[uu] &
for some $s\in S_\mathcal A$. By Lemma \[lemma:S\_AvsA\] once again, we must have $s\in S$. But this means precisely that $rt^{-1}=0$ in $S^{-1}\mathcal R$. So $\overline F$ is fully faithful.
The more general case, where $F$ is assumed to be full but possibly not faithful, can be reduced to the previous one as follows. By factoring $F$ through its image, localization induces the commutative diagram $$\xymatrix{
\mathcal R \ar[d] \ar@/^3ex/[rr]^-F \ar[r]_-{\textrm{full}} &
F\mathcal R \ar[r]_-{\textrm{faithful}} \ar[d] &
\mathcal A \ar[d] \\
S^{-1}\mathcal R \ar@/_3ex/[rr]_-{\overline F} \ar[r] &
(FS)^{-1}F\mathcal R \ar[r] &
S^{-1}_\mathcal A \mathcal A
}$$ (here $FS$ denotes the multiplicative system in $F\mathcal R$ generated by $S$). By Corollary \[cor:SvsI\] (with $\mathcal I=\ker(F)$) we have the identification $(FS)^{-1} F\mathcal R \cong S^{-1}\mathcal R/S^{-1}\ker(F)$, from which we see that $S^{-1}\mathcal R\to (FS)^{-1}F\mathcal R$ is full. Thus it remains only to verify that the functor $(FS)^{-1}F\mathcal R\to S^{-1}_\mathcal A\mathcal A$ is full, and this follows from what we have already proved.
Generalized comparison maps {#sec:comparison}
===========================
Central 2-rings of tensor triangulated categories {#subsec:central2rings}
-------------------------------------------------
From now on we work with an essentially small tensor triangulated category $\mathcal T$; thus $\mathcal T$ is triangulated and equipped with a symmetric tensor structure $(\mathcal T,\otimes,{\mathbf{1}},\alpha,\lambda,\rho, \gamma)$ such that $x\otimes-$ (and hence $-\otimes x$) preserves exact triangles for each object $x\in \mathcal T$.
By a *central 2-ring of $\mathcal T$* we mean any full tensor subcategory $\mathcal R$ of invertible objects of $\mathcal T$ which is closed under taking duals. Thus every central 2-ring of $\mathcal T$ is a graded commutative 2-ring, as studied in the previous section.
\[ex:central2rings\] At one extreme we find $\mathcal R=\{{\mathbf{1}}\}$, that is, the commutative endomorphism ring of the tensor unit, ${\mathrm{End}}_\mathcal T({\mathbf{1}})$. In Balmer’s notation this is the *central ring* $\mathrm R_\mathcal T$ of $\mathcal T$. At the other extreme we may choose $\mathcal R$ to be the full subcategory of all invertible objects in $\mathcal T$, which deserves the name of *total central 2-ring of $\mathcal T$*, written $\mathrm R^{\mathrm{tot}}_\mathcal T$. Between $\mathrm R_\mathcal T$ and $\mathrm R^{\mathrm{tot}}_\mathcal T$ we find a poset of central 2-rings, ordered by inclusion, which in fact is a lattice with meet $\mathcal R\wedge \mathcal R'=\mathcal R\cap \mathcal R'$ and join $\mathcal R\vee \mathcal R' =\bigcap \{\mathcal R'' \mid \mathcal R\cup \mathcal R'\subseteq \mathcal R''\}$. Every inclusion $\mathcal R\hookrightarrow \mathcal R'$ is a morphism of graded commutative 2-rings and so it induces a continuous map ${\mathop{\mathrm{Spec}}}\mathcal R'\to {\mathop{\mathrm{Spec}}}\mathcal R$.
The next theorem is the key point in allowing us to relate the geometry of a $\otimes$-triangulated category to that of its central rings.
\[thm:local\_Rtot\] If $\mathcal T$ is a local $\otimes$-triangulated category (i.e., if its spectrum has a unique closed point [@balmer:spec3]), then every central 2-ring $\mathcal R$ of $\mathcal T$ is local as a graded commutative 2-ring, i.e., it has a unique maximal homogeneous ideal. Moreover, this maximal ideal consists precisely of the non-invertible arrows of $\mathcal R$.
The proof is essentially the same as that of [@balmer:spec3]\*[Theorem 4.5]{}, but a few slight adjustments are required. We need a couple of lemmas concerning tensor nilpotent morphisms.
\[lemma:nilp\_iso\] Let $a\colon g\otimes x\to h\otimes x$ be a morphism in a ${\mathbb{Z}}$-linear symmetric $\otimes$-category, where $g$ and $h$ are two $\otimes$-invertible objects. If $a$ is both $\otimes$-nilpotent and an isomorphism, then $x$ is $\otimes$-nilpotent.
Suppose that $a^{\otimes n} = 0$. Then since for any $i\geq 1$ the map $a^{\otimes i}$ is an isomorphism we deduce that the object $g^{\otimes n} \otimes x^{\otimes n}$ is isomorphic to zero. Since $g$ is invertible it follows that $x^{\otimes n} \cong 0$ as claimed.
\[lemma:nilp\_sum\] Let $a,b\colon x\to y$ be two parallel maps in a ${\mathbb{Z}}$-linear symmetric $\otimes$-category. If both $a$ and $b$ are $\otimes$-nilpotent, then so is $a+b$.
Since the $\otimes$-product is ${\mathbb{Z}}$-linear in each variable we can write $(a+b)^{\otimes n}$ as a sum of morphisms of the form $$u_i (a^{\otimes n-i} \otimes b^{\otimes i}) v_i$$ where $u_i$ and $v_i$ are some composites (depending on $i\in \{0,\ldots,n\}$) of instances of $\alpha$, $\gamma$ and identity arrows tensored with each other. Since both $a$ and $b$ are $\otimes$-nilpotent we see that $(a+b)^{\otimes n}$ is zero for $n$ chosen sufficiently large, i.e., $a+b$ is $\otimes$-nilpotent as claimed.
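To make the choice of $n$ explicit (a detail the proof leaves implicit): if $a^{\otimes p}=0$ and $b^{\otimes q}=0$, then $n=p+q-1$ already suffices.

```latex
% Each summand of (a+b)^{\otimes n} has the form u_i (a^{\otimes(n-i)} \otimes b^{\otimes i}) v_i.
% With n = p+q-1: if i \ge q then b^{\otimes i} = 0, while if i \le q-1 then
% n-i \ge (p+q-1)-(q-1) = p, so a^{\otimes(n-i)} = 0. Hence every summand vanishes:
\big( a^{\otimes p} = 0 \ \text{and}\ b^{\otimes q} = 0 \big)
\;\Longrightarrow\;
(a+b)^{\otimes (p+q-1)} = 0 \,.
```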
By Corollary \[cor:composite\_iso\], if we can show that the sum of any two non-invertible (parallel) maps is again non-invertible, then the collection of non-invertible maps in $\mathcal R$ is a homogeneous ideal which will necessarily be the unique maximal one, and we will be done.
Thus assume that $r+s\colon g \to h$ is invertible; we must prove that either $r$ or $s$ is also invertible. To this end, consider the following morphism of $\mathcal T$: $$t := (r+s)\otimes {\mathrm{id}}\otimes {\mathrm{id}}\; \colon \;
g \otimes {\mathop{\mathrm{cone}}}(r) \otimes {\mathop{\mathrm{cone}}}(s) \longrightarrow
h \otimes {\mathop{\mathrm{cone}}}(r) \otimes {\mathop{\mathrm{cone}}}(s)
\;.$$ Since $r+s$ is invertible, so is $t$. We claim that $t$ is also $\otimes$-nilpotent. By Lemma \[lemma:nilp\_sum\] it suffices to show that both $r \otimes {\mathrm{id}}_{{\mathop{\mathrm{cone}}}(r)} \otimes {\mathrm{id}}_{{\mathop{\mathrm{cone}}}(s)}$ and $s \otimes {\mathrm{id}}_{{\mathop{\mathrm{cone}}}(r)} \otimes {\mathrm{id}}_{{\mathop{\mathrm{cone}}}(s)}$ are $\otimes$-nilpotent, and clearly it suffices to show that $r \otimes {\mathrm{id}}_{{\mathop{\mathrm{cone}}}(r)}$ and $s \otimes {\mathrm{id}}_{{\mathop{\mathrm{cone}}}(s)}$ are $\otimes$-nilpotent. Since $g$ and $h$ are invertible objects of $\mathcal T$, this follows immediately from [@balmer:spec3]\*[Proposition 2.13]{}. Thus $t$ is both $\otimes$-nilpotent and invertible, and by Lemma \[lemma:nilp\_iso\] we must have that ${\mathop{\mathrm{cone}}}(r)\otimes {\mathop{\mathrm{cone}}}(s)$ is tensor nilpotent and hence zero, because – as a $\otimes$-product of cones of maps between invertible objects – it is dualizable. But $\mathcal T$ is local by assumption, so that ${\mathop{\mathrm{cone}}}(r)\otimes {\mathop{\mathrm{cone}}}(s)\cong 0$ implies that either ${\mathop{\mathrm{cone}}}(r)$ or ${\mathop{\mathrm{cone}}}(s)$ is $\otimes$-nilpotent. As we have already noted these cones are dualizable so either ${\mathop{\mathrm{cone}}}(r)\simeq 0$ or ${\mathop{\mathrm{cone}}}(s)\simeq 0$. We conclude that either $r$ or $s$ is already an isomorphism.
Central localization
--------------------
The following theorem is a generalization of Balmer’s procedure of central localization (see [@balmer:spec3]\*[§3]{}).
\[thm:central\_loc\] Let $\mathcal R$ be a central 2-ring of a $\otimes$-triangulated category $\mathcal T$, and let $S$ be a homogeneous multiplicative system in $\mathcal R$. Then localization induces a canonical isomorphism $$\xymatrix{
\mathcal T \ar[r]^-q \ar[d]_{\mathrm{loc}}& \mathcal T/\mathcal J_S \\
S_\mathcal T^{-1} \mathcal T \ar[ur]_\cong &
}$$ between $\mathcal T$ localized at $S$ as an $\mathcal R$-algebra (see Theorem \[thm:fractions\_general\]) and the Verdier quotient of $\mathcal T$ by the thick $\otimes$-ideal $\mathcal J_S:=\langle {\mathop{\mathrm{cone}}}(s)\mid s\in S \rangle_\otimes$ generated by the cones of maps in $S$. Moreover, the central 2-ring of these categories spanned by the objects of $\mathcal R$ is canonically isomorphic to the localized graded commutative 2-ring $S^{-1}\mathcal R$.
We see in particular that $S^{-1}_\mathcal T\mathcal T$ inherits a canonical $\otimes$-triangulated structure. In order to prove the theorem we will need a couple of preliminary results.
\[prop:J\_S\] $\mathcal J_S=\{x\in \mathcal T\mid \exists s\in S \textrm{ such that } s\otimes {\mathrm{id}}_x=0 \}$.
The argument is almost precisely as in [@balmer:spec3]\*[Proposition 3.7]{}; we briefly recall it as it is easier and slightly more natural in this context. Write $\mathcal J'$ for the category on the right hand side. We have $\mathcal J'\subseteq \mathcal J_S$ by [@balmer:spec3]\*[Proposition 2.14]{}. For the other inclusion, note that $S\otimes S\subseteq S$ implies that $\mathcal J'$ is equal to $\{x\in \mathcal T\mid \exists s\in S \textrm{ and } n\geq 1 \textrm{ such that } s^{\otimes n}\otimes {\mathrm{id}}_x =0 \}$. The latter is easily seen to be a thick $\otimes$-ideal of $\mathcal T$, where closure under taking cones is a consequence of [@balmer:spec3]\*[Lemma 2.11]{}. By [@balmer:spec3]\*[Proposition 2.13]{} $\mathcal J'$ contains ${\mathop{\mathrm{cone}}}(s)$ for all $s\in S$. Hence we conclude the other inclusion $\smash{\mathcal J_S = \langle {\mathop{\mathrm{cone}}}(s) \mid s\in S \rangle_\otimes \subseteq \mathcal J' }$ as well.
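For orientation, Proposition \[prop:J\_S\] is the direct analogue of the classical description of the kernel of a localization map of commutative rings:

```latex
% Classical analogue: S a multiplicative set in a commutative ring R.
\ker\big( R \longrightarrow S^{-1}R \big)
 \;=\; \{\, x \in R \;\mid\; \exists\, s\in S \ \text{such that}\ s x = 0 \,\} \,.
```

Just as an element dies in $S^{-1}R$ exactly when it is killed by some $s\in S$, an object dies in the quotient $\mathcal T/\mathcal J_S$ exactly when it is annihilated by tensoring with some $s\in S$.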
\[cor:cone\_in\_J\] Let $a\colon x\to y$ be any map of $\mathcal T$. Then ${\mathop{\mathrm{cone}}}(a)$ belongs to $ \mathcal J_S$ if and only if there exist a map $s\in S$ and two maps $b$ and $c$ as in the following square $$\xymatrix{
g\otimes x \ar[r]^-{{\mathrm{id}}_g \otimes a} \ar[d]_{s\otimes {\mathrm{id}}_x}
& g\otimes y \ar[d]^{s\otimes {\mathrm{id}}_y} \ar@<-.5ex>[dl]_b \ar@<.5ex>[dl]^c \\
h\otimes x \ar[r]_-{{\mathrm{id}}_h\otimes a} & h\otimes y
}$$ such that $b({\mathrm{id}}_g\otimes a) = s\otimes {\mathrm{id}}_x$ and $({\mathrm{id}}_h\otimes a)c=s\otimes {\mathrm{id}}_y$.
This proof is the same as [@balmer:spec3]\*[Lemma 3.8]{}, using Proposition \[prop:J\_S\] instead of [@balmer:spec3]\*[Proposition 3.7]{}.
The last assertion in the theorem follows immediately from Proposition \[prop:comparison\_mult\_sys\], with $F$ the fully faithful inclusion $\mathcal R\hookrightarrow \mathcal T$. To see the isomorphism of categories, note that for each $s\in S$ we have ${\mathop{\mathrm{cone}}}(s)\in \mathcal J_S$ by definition, hence the universal property of ${\mathrm{loc}}\colon \mathcal T\to S^{-1}_\mathcal T\mathcal T$ induces a unique functor $\tilde q\colon S^{-1}_\mathcal T\mathcal T\to \mathcal T/\mathcal J_S$ which is the identity on objects. We must show that $\tilde q$ is full and faithful. Let $$\xymatrix{
x \ar[r]^-a & z & \ar[l]_-t^-\sim y
}$$ be a fraction in $\mathcal T$ representing a morphism $x\to y$ in $\mathcal T/\mathcal J_S$. Thus ${\mathop{\mathrm{cone}}}(t)\in \mathcal J_S$ by construction, and by Corollary \[cor:cone\_in\_J\] there exist a map $s\colon g\to h$ in $S$ and some map $b\colon g\otimes z \to h \otimes y$ with $b(g \otimes t)= s\otimes y$ (we will not need the second map $c$). Build the following commutative diagram in $\mathcal T$. $$\xymatrix{
x \ar[r]^-a \ar@{..>}@/_6ex/[ddrr]_{a'\;:= } &
z &
y \ar[l]_-t^-\sim \ar@{..>}@/^8ex/[dd]^{=:\; t'} \\
&
g^\vee g z \ar[u]^\simeq \ar[dr]_{g^\vee b}
& g^\vee g y \ar[u]_\simeq \ar[l]_-{g^\vee g t} \ar[d]^{g^\vee s y} \\
&& g^\vee hy
}$$ Defining $a'$ and $t'$ as pictured, we see that $t^{-1}a=t'^{-1}a'$ in $\mathcal T/\mathcal J_S$. But $t'\in S_\mathcal T$, and therefore $t'^{-1}a'$ lies in the image of $\tilde q$. This shows that $\tilde q$ is full. To prove that it is faithful, consider a fraction $$\varphi \;\colon \;
\xymatrix{
x \ar[r]^-a & w & \ar[l]_-{s'}^-\sim y
}$$ representing a morphism $\varphi\colon x\to y$ in $S^{-1}_\mathcal T\mathcal T$ (thus $s'\in S_\mathcal T$), and assume that $\tilde q(s'^{-1}a)=0$. This means that there exists a commutative diagram $$\label{eq:other_zeros}
\xymatrix@R=5pt{
& w \ar[dd] & \\
x \ar[ur]^a \ar[dr]_0 && y \ar[ul]_{s'} \ar[dl]^t \\
& z &
}$$ with ${\mathop{\mathrm{cone}}}(t)\in \mathcal J_S$. Applying again Corollary \[cor:cone\_in\_J\] to $t$ we obtain $s \colon g\to h$ in $S$ and $b\colon gz\to hy$ such that $b(g\otimes t)=s\otimes y$ (precisely as above), and we may construct a commutative diagram as follows: $$\label{eq:faithful}
\xymatrix{
& w \ar[d] \ar@{..>}@/_12ex/[dddr]_{d \; := } & \\
x \ar[ur]^a \ar[r]^-0 & z &
y \ar[l]_-t \ar[ul]_{s'} \ar@{..>}@/^8ex/[dd]^{=:\; s''} \\
&
g^\vee g z \ar[u]^\simeq \ar[dr]_{g^\vee b}
& g^\vee g y \ar[u]^\simeq \ar[l]_-{g^\vee g t} \ar[d]^{g^\vee s y} \\
&& g^\vee hy
}$$ Note that $s''$, as defined in the diagram, belongs to $S_\mathcal T$. Thus, setting $d$ as indicated, we obtain the following commutative diagram $$\xymatrix@R=5pt{
& w \ar[dd]_d & \\
x \ar[ur]^a \ar[dr]_0 && y \ar[ul]_{s'} \ar[dl]^{s''} \\
& z &
}$$ in $\mathcal T$, showing that $\varphi=0$. Hence $\tilde q$ is faithful, thus completing the proof of Theorem \[thm:central\_loc\].
Generalized comparison maps {#generalized-comparison-maps}
---------------------------
We now extend the definition and the basic properties of Paul Balmer’s comparison map $\rho$ from triangular to Zariski spectra.
As before, let $\mathcal T$ be an essentially small tensor triangulated category.
\[thm:rho\] For every central 2-ring $\mathcal R$ of $\mathcal T$ there is a continuous spectral map $\rho^\mathcal R_\mathcal T\colon {\mathop{\mathrm{Spc}}}\mathcal T\to {\mathop{\mathrm{Spec}}}\mathcal R$ which sends the prime thick $\otimes$-ideal $\mathcal P\subset \mathcal T$ to the prime ideal $$\rho^\mathcal R_\mathcal T (\mathcal P):=\{ r \in {\mathop{\mathrm{Mor}}}\mathcal R \mid {\mathop{\mathrm{cone}}}(r) \not\in \mathcal P \} \, .$$ Moreover, the map $\rho^\mathcal R_\mathcal T$ is natural in the following sense: if $F\colon \mathcal T\to \mathcal T'$ is a tensor-exact functor and $\mathcal R'$ is a central 2-ring of $\mathcal T'$ such that $F\mathcal R\subseteq \mathcal R'$, then the square of spectral continuous maps $$\xymatrix{
{\mathop{\mathrm{Spc}}}\mathcal T' \ar[d]_{\rho^{\mathcal R'}_{\mathcal T'}} \ar[r]^-{{\mathop{\mathrm{Spc}}}F} &
{\mathop{\mathrm{Spc}}}\mathcal T \ar[d]^{\rho^{\mathcal R}_{\mathcal T}} \\
{\mathop{\mathrm{Spec}}}\mathcal R' \ar[r]^-{{\mathop{\mathrm{Spec}}}F} &
{\mathop{\mathrm{Spec}}}\mathcal R
}$$ is commutative.
Let $\mathcal P\in {\mathop{\mathrm{Spc}}}\mathcal T$ and denote by $q_\mathcal P\colon \mathcal T\to \mathcal T/\mathcal P$ the Verdier quotient functor. The functor $q_\mathcal P$ is strong monoidal so the full subcategory $\mathcal R_\mathcal P:= \{ q_\mathcal P(g) \mid g\in \mathcal R\} \subseteq \mathcal T/\mathcal P$ is a central 2-ring of $\mathcal T/\mathcal P$. Since $\mathcal T/\mathcal P$ is a local tensor triangulated category its central 2-ring $\mathcal R_\mathcal P$ has a unique maximal ideal $\mathfrak m_\mathcal P$ consisting of the non-invertible maps by Theorem \[thm:local\_Rtot\]. By thickness of $\mathcal P$ we have $\mathcal P = {\mathop{\mathsf{Ker}}}(q_\mathcal P)$ and therefore an equality of sets $$\rho^\mathcal R_\mathcal T(\mathcal P)= q_\mathcal P^{-1}(\mathfrak m_\mathcal P) \; \subseteq \; {\mathop{\mathrm{Mor}}}\mathcal R \,.$$ Thus $\rho^\mathcal R_\mathcal T(\mathcal P)$ is the preimage of the unique maximal ideal of $\mathcal R_\mathcal P$ under the morphism $q_\mathcal P\colon \mathcal R\to \mathcal R_\mathcal P$ of graded commutative 2-rings, and in particular it is a homogeneous prime. This shows that the resulting function $\rho^\mathcal R_\mathcal T \colon {\mathop{\mathrm{Spc}}}\mathcal T\to {\mathop{\mathrm{Spec}}}\mathcal R$ is well-defined. To see that it is a spectral continuous map it suffices to note that, by definition, $$(\rho^\mathcal R_\mathcal T)^{-1}(D_r) = U({\mathop{\mathrm{cone}}}(r))$$ for every $r\in \mathcal R$, where $D_r=\{ \mathfrak p \mid r\not\in \mathfrak p \}$ is a quasi-compact basic open for the Zariski topology of ${\mathop{\mathrm{Spec}}}\mathcal R$, and $U(x)=\{\mathcal P\mid x\in \mathcal P \}$ (for $x\in \mathcal T$) is a quasi-compact basic open for the Zariski topology of ${\mathop{\mathrm{Spc}}}\mathcal T$.
The naturality of $\rho^\mathcal R_\mathcal T$ in the pair $(\mathcal T,\mathcal R)$ can be checked immediately from the definitions.
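In the minimal case $\mathcal R=\{{\mathbf{1}}\}$ of Example \[ex:central2rings\], the construction specializes to Balmer's original comparison map from the triangular spectrum to the Zariski spectrum of the central ring:

```latex
\rho_{\mathcal T}\colon {\mathop{\mathrm{Spc}}}\mathcal T \longrightarrow
  {\mathop{\mathrm{Spec}}}{\mathrm{End}}_\mathcal T({\mathbf{1}}) \,,
\qquad
\rho_{\mathcal T}(\mathcal P) =
  \{\, f \in {\mathrm{End}}_\mathcal T({\mathbf{1}})
     \mid {\mathop{\mathrm{cone}}}(f) \notin \mathcal P \,\} \,.
```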
A criterion for injectivity
---------------------------
As above let $\mathcal{R}$ be a central 2-ring in an essentially small tensor triangulated category $\mathcal T$, and let $\rho^\mathcal R_\mathcal T$ be the associated continuous map of Theorem \[thm:rho\]. We have the following topological condition which implies the injectivity of $\rho^\mathcal R_\mathcal T$.
\[prop:injectivity\] Suppose the collection of subsets $$\mathcal{B} = \{\operatorname{supp}({\mathop{\mathrm{cone}}}(r)) \; \vert \; r \in {\mathop{\mathrm{Mor}}}\mathcal R\}$$ gives a basis of closed subsets for the Zariski topology on ${\mathop{\mathrm{Spc}}}\mathcal{T}$. Then the comparison map $\rho^\mathcal R_\mathcal T$ is injective, and is furthermore a homeomorphism onto its image.
Suppose first that $\mathcal{B}$ is a basis of closed subsets. Let $\mathcal{P}, \mathcal{Q} \in {\mathop{\mathrm{Spc}}}{\mathcal{T}}$ be such that $\rho^\mathcal R_\mathcal T(\mathcal{P}) = \rho^\mathcal R_\mathcal T(\mathcal{Q})$, i.e., ${\mathop{\mathrm{cone}}}(r)\notin \mathcal{P}$ if and only if ${\mathop{\mathrm{cone}}}(r) \notin \mathcal{Q}$ for every $r\in {\mathop{\mathrm{Mor}}}\mathcal R$. Using our basis $\mathcal{B}$ we see that $$\overline{\{\mathcal{P}\}} = \bigcap_{\substack{r\in {\mathop{\mathrm{Mor}}}\mathcal R \\ {\mathop{\mathrm{cone}}}(r)\notin \mathcal{P}}} \operatorname{supp}{({\mathop{\mathrm{cone}}}(r))} = \bigcap_{\substack{r\in {\mathop{\mathrm{Mor}}}\mathcal R \\ {\mathop{\mathrm{cone}}}(r)\notin \mathcal{Q}}} \operatorname{supp}{({\mathop{\mathrm{cone}}}(r))} = \overline{\{\mathcal{Q}\}}$$ where the middle equality follows from $\rho^\mathcal R_\mathcal T(\mathcal{P}) = \rho^\mathcal R_\mathcal T(\mathcal{Q})$. But then $\mathcal{P} = \mathcal{Q}$ since the space ${\mathop{\mathrm{Spc}}}{\mathcal{T}}$ is $T_0$, proving that $\rho^\mathcal{R}_\mathcal{T}$ is injective.
Recall that ${\mathop{\mathrm{Spec}}}\mathcal{R}$ has a basis of open subsets given by the $D_r$ for $r\in \mathcal{R}$. Now observe that, as was already noted in the proof of Theorem \[thm:rho\], we have by definition (and injectivity of $\rho^\mathcal{R}_\mathcal{T}$) $$\rho^\mathcal{R}_\mathcal{T}({\mathop{\mathrm{Spc}}}\mathcal{T}) \cap D_r = \rho_{\mathcal T}^\mathcal R\, U({\mathop{\mathrm{cone}}}(r)).$$ By hypothesis the $U({\mathop{\mathrm{cone}}}(r))$ are a basis of open subsets for ${\mathop{\mathrm{Spc}}}\mathcal{T}$ and so we see that $\rho^\mathcal{R}_\mathcal{T}$ is a homeomorphism onto its image.
Applications {#sec:examples}
============
Graded commutative rings {#subsec:grcomm}
------------------------
Let $G$ be an abelian group and let $R$ be an $\epsilon$-commutative $G$-graded ring. Let us denote by $R$-${\mathop{\mathrm{GrMod}}}$ the Grothendieck abelian tensor category of graded (left) $R$-modules with degree zero homomorphisms. As usual ${\mathrm{D}}(R)$ denotes the unbounded derived category of $R$-${\mathop{\mathrm{GrMod}}}$ and ${\mathrm{D}}^\mathrm{perf}(R)$ denotes the compact objects of ${\mathrm{D}}(R)$. We recall that the compact objects are precisely those complexes quasi-isomorphic to a bounded complex of finitely generated projective $R$-modules. Also, ${\mathrm{D}}^{\mathrm{perf}}(R)$ is a rigid tensor triangulated category for the derived tensor product $\otimes=\otimes^\mathbf L_R$ (this involves signs, see [@ivo_greg:graded]) with tensor unit $R$.
In [@ivo_greg:graded] we have given a classification of the thick tensor ideals of ${\mathrm{D}}^\mathrm{perf}(R)$ in the case that $R$ is noetherian. The aim of this section is to demonstrate how to use the generalized comparison map $\rho$ we have constructed to remove the noetherian hypothesis from this classification of thick tensor ideals. The result and the argument are similar in spirit to the work of Thomason [@Thomclass] on perfect complexes on quasi-compact and quasi-separated schemes. However, our approach is rather more formal: the main input from graded commutative algebra occurs almost exclusively in the results of [@ivo_greg:graded], and what remains here is an abstract argument belonging essentially to the realm of tensor triangular geometry.
Let us write $R \cong {\mathop{\mathrm{colim}}}_\Lambda R_\lambda$ where $\Lambda$ is a small filtered category and each $R_\lambda$ is a finitely generated $G$-graded subring of $R$ (i.e. consider $R$ as the union of all its finitely generated graded subrings). In particular each $R_\lambda$ is a noetherian $\epsilon$-commutative $G$-graded ring and so our classification theorem ([@ivo_greg:graded]\*[Theorem 5.1]{}) applies to give homeomorphisms $${\mathop{\mathrm{Spec^h}}}R_\lambda \stackrel{\sim}{\to} {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R_\lambda).$$ We will denote by $i_\lambda$ the inclusion $R_\lambda \to R$ and by ${\mathbf{L}}i^*_\lambda$ the associated functor ${\mathrm{D}}^\mathrm{perf}(R_\lambda) \to {\mathrm{D}}^\mathrm{perf}(R)$.
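As an illustration of this presentation (not needed in what follows), consider the prototypical non-noetherian case: for a field $k$, the $\mathbb Z$-graded polynomial ring on countably many variables is the filtered union of noetherian graded subrings.

```latex
R \;=\; k[x_1, x_2, x_3, \ldots\,],
\qquad \deg x_i = 1,
\qquad
R \;\cong\; \operatorname*{colim}_{m\in\mathbb N}\; k[x_1,\ldots,x_m] \,,
```

where each subring $k[x_1,\ldots,x_m]$ is noetherian although $R$ itself is not, so the classification theorem of [@ivo_greg:graded] applies to each stage of the colimit but not directly to $R$.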
We let $\mathcal{R}$ be the graded commutative 2-ring given by taking the replete closure of the full subcategory $\{R(g) \; \vert \; g\in G\}$ of ${\mathrm{D}}^\mathrm{perf}(R)$. Similarly $\mathcal{R}_\lambda$ denotes the replete closure of the full subcategory $\{R_\lambda(g) \; \vert \; g\in G\}$ of ${\mathrm{D}}^\mathrm{perf}(R_\lambda)$. It is clear that ${\mathbf{L}}i^*_\lambda \mathcal{R}_\lambda$ is contained in $\mathcal{R}$ and there are similar containments coming from the left derived functors of extension of scalars along the structure maps in the directed system given by $\Lambda$.
We denote by $\rho$ (resp. $\rho_\lambda$) the comparison map from ${\mathop{\mathrm{Spc}}}{\mathrm{D}}^{\mathrm{perf}}(R)$ to ${\mathop{\mathrm{Spec}}}\mathcal{R}$ (resp. ${\mathop{\mathrm{Spc}}}{\mathrm{D}}^{\mathrm{perf}}(R_\lambda)$ to ${\mathop{\mathrm{Spec}}}\mathcal{R}_\lambda$) of Theorem \[thm:rho\]. We note that due to the naturality of the comparison maps there are commutative squares as follows for each $\lambda$. $$\xymatrix{
{\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R) \ar[rr]^-{{\mathop{\mathrm{Spc}}}{\mathbf{L}}i^*_\lambda} \ar[d]_\rho && {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R_\lambda) \ar[d]^{\rho_\lambda} \\
{\mathop{\mathrm{Spec}}}\mathcal{R} \ar[rr]_-{{\mathop{\mathrm{Spec}}}i^*_\lambda} && {\mathop{\mathrm{Spec}}}\mathcal{R}_\lambda
}$$
Next let us make some observations about compatibilities between the homogeneous spectra of $R$, $\mathcal{R}$, and the companion category $\mathcal{C}_R$ (of course all of these observations are equally valid for the $R_\lambda$ and will be used below, together with the obvious compatibility conditions coming from the directed system of subrings). By construction the companion category $\mathcal{C}_R$ is canonically monoidally equivalent to $\mathcal{R}$ (see [@ivo_greg:graded]\*[Definition 2.1]{} for details on the companion category and *loc*. *cit*. Proposition 2.14 concerning the equivalence being symmetric). Thus, since (prime) ideals in a graded commutative 2-ring are closed under composition with isomorphisms, the topological spaces ${\mathop{\mathrm{Spec}}}\mathcal{C}_R$ and ${\mathop{\mathrm{Spec}}}\mathcal{R}$ are canonically homeomorphic.
There is a canonical homeomorphism $${\mathop{\mathrm{Spec^h}}}R \stackrel{\sim}{\to} {\mathop{\mathrm{Spec}}}\mathcal{C}_R.$$
Given any homogeneous ideal $I$ of $R$ we can associate to it a unique ideal $\mathcal{I}$ of $\mathcal{C}_R$ by closing the elements of $I$, viewed as morphisms out of the tensor unit $0$ in $\mathcal{C}_R$, under tensoring with objects. On the other hand any ideal $\mathcal{I}$ of $\mathcal{C}_R$ gives an ideal of $R$ consisting of all the morphisms of $\mathcal{C}_R(0,g)$ in $\mathcal{I}$ as $g$ varies over $G$. It is not hard to check that these assignments are inverse to one another.
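Concretely, if one uses the description of the companion category with hom-sets $\mathcal C_R(g,h)\cong R_{h-g}$ (writing $G$ additively; we assume this description here), the two assignments of the proof can be written degreewise:

```latex
% homogeneous ideal  I \subseteq R  -->  categorical ideal  \mathcal I \subseteq \mathcal C_R:
\mathcal I(g,h) \;:=\; I \cap R_{h-g} \;\subseteq\; \mathcal C_R(g,h) \,;
\qquad
% categorical ideal  \mathcal I  -->  homogeneous ideal  I \subseteq R:
I \;:=\; \bigoplus_{g\in G} \mathcal I(0,g) \;\subseteq\; R \,.
```

In this form one sees degree by degree that the two constructions are mutually inverse.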
Let us also note that $\mathcal{C}_R$ is, essentially by definition, the union of the subcategories $\mathcal{C}_{R_\lambda}$. Combining this with the observations we have already made shows that we may equally well speak of graded rings, companion categories, or the central 2-rings we have defined inside perfect complexes without introducing any ambiguity. We shall switch between these different points of view whenever it is convenient.
Our aim is to prove that the comparison map $\rho$ with target ${\mathop{\mathrm{Spec}}}\mathcal{R} \cong {\mathop{\mathrm{Spec^h}}}R$ is a homeomorphism. The bulk of the proof will be contained in a string of rather simple lemmas and observations. We begin by showing that it is injective by applying the criterion of Proposition \[prop:injectivity\].
We denote the support on ${\mathrm{D}}^\mathrm{perf}(R)$, in the sense of Balmer, by $\operatorname{supp}_R$ and the homological (or small) support by $\operatorname{ssupp}_R$. Strictly speaking $\operatorname{ssupp}_R$ takes values in ${\mathop{\mathrm{Spec^h}}}R$ but we will abuse notation slightly by considering it also to take values in ${\mathop{\mathrm{Spec}}}\mathcal{R}$ via the canonical identification. We use similar notation for the $R_\lambda$ but note that the distinction for these noetherian rings is not so important due to the following lemma.
\[lem:noeth\_homeo\] Suppose that $R$ is noetherian. Then, with $\mathcal{R}$ as above, $\rho$ is a support preserving homeomorphism.
By [@ivo_greg:graded]\*[Theorem 5.1]{} the morphism $\sigma\colon {\mathop{\mathrm{Spec}}}\mathcal R\to {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R)$ given by $$\sigma(\mathfrak{p}) = \{E\in {\mathrm{D}}^\mathrm{perf}(R)\; \vert \; \mathfrak{p}\notin \operatorname{ssupp}_R E\}$$ is a homeomorphism identifying $\operatorname{supp}_R$ and $\operatorname{ssupp}_R$. It is then easily checked that $\rho$ is inverse to $\sigma$. For instance we have, for $\mathfrak{p}\in {\mathop{\mathrm{Spec}}}\mathcal{R}$ $$\begin{aligned}
\rho\sigma(\mathfrak{p}) &= \rho\{E\in {\mathrm{D}}^\mathrm{perf}(R)\; \vert \; \mathfrak{p}\notin \operatorname{ssupp}_R E\} \\
&= \rho\{E\in {\mathrm{D}}^\mathrm{perf}(R)\; \vert \; \sigma(\mathfrak{p})\notin \operatorname{supp}_R E\}\\
&= \{r\in \mathcal{R} \; \vert \; \sigma(\mathfrak{p})\in \operatorname{supp}_R{\mathop{\mathrm{cone}}}(r)\} \\
&= \{r\in \mathcal{R} \; \vert \; \mathfrak{p}\in \operatorname{ssupp}_R{\mathop{\mathrm{cone}}}(r)\} \\
&= \mathfrak{p} \,.\end{aligned}$$
\[lem:supports\_agree\] For every object $E$ of ${\mathrm{D}}^\mathrm{perf}(R)$ there is an equality $$\rho^{-1} \operatorname{ssupp}_R E = \operatorname{supp}_R E.$$
Let $E$ be a perfect complex over $R$ as in the statement of the lemma. We may assume, by choosing an isomorphic object if necessary, that $E$ is in fact a bounded complex of finitely generated free $R$-modules. As $E$ is determined by finitely many matrices with coefficients in $R$ we can find some $\lambda$ and a perfect complex $E_\lambda \in {\mathrm{D}}^{\mathrm{perf}}(R_\lambda)$ so that $$E \cong R \otimes_{R_\lambda} E_\lambda = {\mathbf{L}}i^*_\lambda E_\lambda.$$ Naturality of the comparison map, together with the canonical isomorphisms we have observed, gives a commutative diagram $$\xymatrix{
{\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R) \ar@{..>}@/_6ex/[dd]_{\rho'\, :=}
\ar[rr]^-{{\mathop{\mathrm{Spc}}}{\mathbf{L}}i^*_\lambda} \ar[d]_\rho &&
\ar@{..>}@/^6ex/[dd]^{=:\, \rho'_\lambda}
{\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R_\lambda) \ar[d]^{\rho_\lambda} \\
{\mathop{\mathrm{Spec}}}\mathcal{R} \ar[rr]_-{{\mathop{\mathrm{Spec}}}i^*_\lambda} \ar[d]_{\simeq} && {\mathop{\mathrm{Spec}}}\mathcal{R}_\lambda \ar[d]^{\simeq} \\
{\mathop{\mathrm{Spec^h}}}R \ar[rr]_{{\mathop{\mathrm{Spec^h}}}i_\lambda} && {\mathop{\mathrm{Spec^h}}}R_\lambda
}$$
Thus we deduce equalities $$\begin{aligned}
(\rho')^{-1}\operatorname{ssupp}_R(E) &= (\rho')^{-1}({\mathop{\mathrm{Spec^h}}}i_\lambda)^{-1} \operatorname{ssupp}_{R_\lambda}(E_\lambda) \\
&= ({\mathop{\mathrm{Spc}}}{\mathbf{L}}i^*_\lambda)^{-1}(\rho_\lambda ')^{-1} \operatorname{ssupp}_{R_\lambda}(E_\lambda) \\
&= ({\mathop{\mathrm{Spc}}}{\mathbf{L}}i^*_\lambda)^{-1} \operatorname{supp}_{R_\lambda}(E_\lambda) \\
&= \operatorname{supp}_R ({\mathbf{L}}i^*_\lambda E_\lambda) \\
&= \operatorname{supp}_R E\end{aligned}$$ where the first equality is standard, the third equality follows from [@ivo_greg:graded]\*[Theorem 5.1]{} which applies as $R_\lambda$ is noetherian, the fourth equality is [@balmer:prime]\*[Proposition 3.6]{}, the final equality holds by the definition of $E_\lambda$, and the other two follow from commutativity of the diagram of comparison maps.
\[lem:graded\_basis\] The collection of subsets $$\mathcal{B} = \{\operatorname{supp}_R ({\mathop{\mathrm{cone}}}(r)) \; \vert \; r \in {\mathop{\mathrm{Mor}}}\mathcal{R}\}$$ is a basis for the Zariski topology on ${\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R)$. Thus $\rho$ is a homeomorphism onto its image.
Let $E$ be an object of ${\mathrm{D}}^\mathrm{perf}(R)$ and let $\mathcal{P}$ be a prime ideal not in $\operatorname{supp}_R E$. We need to show that there exists a map $r$ in $\mathcal{R}$ such that $\operatorname{supp}_R E \subseteq \operatorname{supp}_R {\mathop{\mathrm{cone}}}(r)$ and $\mathcal{P}\notin \operatorname{supp}_R {\mathop{\mathrm{cone}}}(r)$. Since the subsets $\operatorname{ssupp}_R {\mathop{\mathrm{cone}}}(s)$ as $s$ varies over the morphisms in $\mathcal{R}$ form a basis for the Zariski topology on ${\mathop{\mathrm{Spec}}}\mathcal{R}$ we can find an $r$ such that $\operatorname{ssupp}_R E \subseteq \operatorname{ssupp}_R {\mathop{\mathrm{cone}}}(r)$ and $\rho(\mathcal{P})\notin \operatorname{ssupp}_R {\mathop{\mathrm{cone}}}(r)$. Applying $\rho^{-1}$ and using the last lemma shows that $\operatorname{supp}_R {\mathop{\mathrm{cone}}}(r)$ is the desired subset.
The result then follows from Proposition \[prop:injectivity\].
Let us now show that $\rho$ is also surjective. In fact we will show the equivalent statement that the composite $$\rho'\colon {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R) \to {\mathop{\mathrm{Spec}}}\mathcal{R} \stackrel{\sim}{\to} {\mathop{\mathrm{Spec^h}}}R$$ is surjective.
The comparison map $\rho'$ is surjective.
Let ${\mathfrak{p}}$ be a homogeneous prime ideal in ${\mathop{\mathrm{Spec^h}}}R$. Then, setting ${\mathfrak{p}}_\lambda = {\mathfrak{p}}~\cap~R_\lambda$, we can write ${\mathfrak{p}}$ as the filtered colimit ${\mathfrak{p}}= {\mathop{\mathrm{colim}}}_\Lambda {\mathfrak{p}}_\lambda$. As each ${\mathfrak{p}}_\lambda$ is prime in $R_\lambda$ we can find, by Lemma \[lem:noeth\_homeo\], a unique $\mathcal{P}_\lambda \in {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R_\lambda)$ such that $\rho'_\lambda (\mathcal{P}_\lambda) = {\mathfrak{p}}_\lambda$.
We define a full subcategory $\mathcal{P} := \bigcup_\Lambda {\mathbf{L}}i^*_\lambda \mathcal{P}_\lambda$ of ${\mathrm{D}}^\mathrm{perf}(R)$. We claim that $\mathcal{P}$ is a prime $\otimes$-ideal. In order to see this first note that if $R_{\lambda_1} \subseteq R_{\lambda_2}$ then, since $\mathfrak{p}_{\lambda_1} = R_{\lambda_1}\cap \mathfrak{p}_{\lambda_2}$, if $r\in R_{\lambda_1}$ is not in $\mathfrak{p}_{\lambda_1}$ it is not in $\mathfrak{p}_{\lambda_2}$. Thus, letting $j\colon R_{\lambda_1} \to R_{\lambda_2}$ denote the inclusion, this observation together with the fact that the derived pullback sends cones to cones yields $${\mathbf{L}}j^*\mathcal{P}_{\lambda_1} \subseteq \langle {\mathbf{L}}j^* {\mathop{\mathrm{cone}}}(r)\; \vert \; r\in (R_{\lambda_1}\smallsetminus \mathfrak{p}_{\lambda_1})\rangle_\otimes \subseteq \mathcal{P}_{\lambda_2}.$$ So $\mathcal{P}$ is an increasing filtered union and is thus a prime $\otimes$-ideal as any perfect complex over $R$ can be obtained from a perfect complex over some $R_\lambda$ via ${\mathbf{L}}i^*_\lambda$.
We now show that $\rho' (\mathcal{P}) = {\mathfrak{p}}$. Let $r$ be a homogeneous element of $R$ and let $\lambda \in \Lambda$ be such that $r\in R_\lambda$. Then $r$ lies in ${\mathfrak{p}}$ if and only if $r$ lies in ${\mathfrak{p}}_\lambda$, if and only if ${\mathop{\mathrm{cone}}}_{R_\lambda}(r)$ is not in $\mathcal{P}_\lambda$, if and only if ${\mathbf{L}}i^*_\lambda {\mathop{\mathrm{cone}}}_{R_\lambda}(r) \cong {\mathop{\mathrm{cone}}}_R(r)$ is not in $\mathcal{P}$. Thus $\rho' (\mathcal{P}) = {\mathfrak{p}}$ and we see that $\rho'$ is surjective as claimed.
Combining the previous lemmas, we obtain the following theorem.
\[thm:grcommrings\] Let $R$ be a $G$-graded $\epsilon$-commutative ring for some abelian group $G$ and some bilinear form $\epsilon\colon G\times G\to {\mathbb{Z}}/2$. Then there is a (unique) homeomorphism $${\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R) \cong {\mathop{\mathrm{Spec}}}\mathcal{R} \cong {\mathop{\mathrm{Spec^h}}}R$$ which identifies the support in the sense of Balmer with the usual homological support.
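As an illustrative special case (our gloss, not part of the source argument), specializing the theorem to a trivial grading recovers the classical computation of the spectrum:

```latex
% Take G = 0, so that the bilinear form \epsilon is trivial and R is an
% ordinary commutative ring concentrated in degree zero.  Every ideal is
% then homogeneous, \operatorname{Spec^h} R = \operatorname{Spec} R, and
% the theorem specializes to the classical identification
\[
  \operatorname{Spc} \mathrm{D}^{\mathrm{perf}}(R) \;\cong\; \operatorname{Spec} R,
\]
% i.e. the Hopkins--Neeman--Thomason/Balmer description of the spectrum
% of perfect complexes over a (not necessarily noetherian) ring.
```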
Schemes with an ample family of line bundles
--------------------------------------------
Throughout this section all schemes $X$ are quasi-compact and quasi-separated. This is the necessary and sufficient hypothesis for Balmer’s reconstruction theorem $X\cong {\mathop{\mathrm{Spc}}}{\mathrm{D}}^{\mathrm{perf}}(X)$ to apply (see [@balmer:icm]\*[Theorem 54]{}). We will show that every ample family of line bundles on $X$ gives rise to an injective comparison map from the spectrum of ${\mathrm{D}}^\mathrm{perf}(X)$ to that of a symmetric 2-ring associated with the family. Let us begin by recalling what it means for a collection of line bundles on $X$ to be ample.
Let $\{\mathcal{L}_{\lambda}\}_{\lambda\in \Lambda}$ be a non-empty collection of line bundles on $X$. We say that $\{\mathcal{L}_{\lambda}\}_{\lambda\in \Lambda}$ is an *ample family of line bundles* if there is a family of sections $f\in H^0(X,\mathcal{L}_\lambda^n)$ for $\lambda \in \Lambda$ and $n\geq 0$ such that the open sets $$X_f = \{x\in X \; \vert \; f_x\notin \mathfrak{m}_x(\mathcal{L}_\lambda^n)_x\}$$ form a basis for the topology of $X$. Here of course $\mathcal L^n_\lambda$ denotes the $n$th tensor power $(\mathcal L_\lambda)^{\otimes n}$.
Given a section $f$ of a line bundle $\mathcal{L}$ on $X$ we shall denote by $Z(f)$ the closed complement of $X_f$.
\[lemma:bundlesupport\] Let $X$ be a scheme, $\mathcal{L}$ a line bundle on $X$, and $f$ a section of $\mathcal{L}$. Then, via the homeomorphism $X\to {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(X)$, we have $$\operatorname{supp}{\mathop{\mathrm{cone}}}(\mathcal{O}_X \stackrel{f}{\to} \mathcal{L}) = Z(f).$$
This is essentially immediate as the homeomorphism $X\cong {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(X)$ of [@balmer:icm]\*[Theorem 54]{} identifies the support in the sense of Balmer with the homological support.
Let $X$ be a scheme and suppose that we are given some non-empty collection $\underline{\mathcal{L}} = \{\mathcal{L}_{\lambda}\}_{\lambda\in \Lambda}$ of line bundles on $X$. We denote by $\mathcal{R}(\underline{\mathcal{L}})$ the associated central 2-ring of ${\mathrm{D}}^\mathrm{perf}(X)$, i.e., the replete closure of the full subcategory of ${\mathrm{D}}^\mathrm{perf}(X)$ whose objects are $$\{\mathcal{L}_{\lambda_1}^{m_1} \otimes \cdots \otimes \mathcal{L}_{\lambda_n}^{m_n}\; \vert \; n\geq 1, \lambda_i\in \Lambda, m_i\in \mathbb{Z}\}.$$
\[thm:ample\] Let $X$ be a quasi-compact and quasi-separated scheme with an ample family of line bundles $\underline{\mathcal{L}} = \{\mathcal{L}_\lambda\}_{\lambda \in \Lambda}$. Then the comparison map $$\rho \colon {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(X) \to {\mathop{\mathrm{Spec}}}\mathcal{R}(\underline{\mathcal{L}})$$ is a homeomorphism onto its image. In particular, there is an injective morphism $\rho_X^{\underline{\mathcal L}} \colon X\to {\mathop{\mathrm{Spec}}}\mathcal{R}(\underline{\mathcal{L}})$ and $X$ has the subspace topology relative to this injection.
By Proposition \[prop:injectivity\] it is enough to check that the supports of cones on morphisms in $\mathcal{R}(\underline{\mathcal{L}})$ give a basis of closed subsets for the Zariski topology on ${\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(X) \cong X$. From Lemma \[lemma:bundlesupport\] we know that, by taking the support of the cone of the map associated to a global section $f$ of $\mathcal{L}_\lambda^n$, we obtain the subset corresponding to $Z(f)$. Since the family of line bundles is ample these subsets form a basis of closed subsets and so we are done.
\[rem:ffs\] It is natural to compare this result with work of Brenner and Schröer [@brenner-schroer]. They define a notion of ${\mathop{\mathrm{Proj}}}$ for multihomogeneous rings, and in their Theorem 4.4 characterise ampleness of a family of line bundles $\mathcal{L}_1,\ldots,\mathcal{L}_n$ on a quasi-compact and quasi-separated scheme $X$ in terms of the canonical rational map $$\xymatrix{
X \ar@{-->}[r] & {\mathop{\mathrm{Proj}}}\Gamma(X, \bigoplus_{d\in \mathbb{N}^n} \mathcal{L}_1^{d_1} \otimes \cdots \otimes \mathcal{L}_n^{d_n}).
}$$ In the case that $X$ is divisorial one can verify that, identifying ${\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(X)$ with $X$, the map of Theorem \[thm:ample\] can be used to recover the morphism of Brenner and Schröer at the level of topological spaces.
\[ex:projective\] Consider $X= {\mathop{\mathrm{Proj}}}(R)$, where $R$ is a commutative non-negatively ${\mathbb{Z}}$-graded $k$-algebra over a field $k$, such that $R_0=k$ and such that $R_1$ generates $R$ over $k$. Then $\mathcal L := \mathcal O(1)$ is an ample line bundle, so Theorem \[thm:ample\] yields a topological embedding $\rho^{\mathcal L}_X\colon X\hookrightarrow {\mathop{\mathrm{Spec}}}\mathcal R(\mathcal L)$. Now it is not hard to see that we have an isomorphism ${\mathop{\mathrm{Spec}}}\mathcal R(\mathcal L)\cong {\mathop{\mathrm{Spec^h}}}R$ which identifies $\rho^{\mathcal L}_X$ with the defining embedding of $X$ into ${\mathop{\mathrm{Spec^h}}}R$ (and therefore also with Balmer’s graded comparison map $\rho^{*,\mathcal O(1)}\colon X={\mathop{\mathrm{Spc}}}{\mathrm{D}}^{\mathrm{perf}}(X)\to {\mathop{\mathrm{Spec^h}}}{\mathrm{End}}^{*,\mathcal O(1)}({\mathbf{1}})$, [@balmer:spec3]\*[Remark 8.2]{}).
Since ${\mathop{\mathrm{Spec^h}}}R$ consists of $X$ plus a unique closed point, this example shows that, in general, the map $\rho^{\underline{\mathcal L}}_X$ need not be surjective, nor must it have closed image.
Let $R$ be a ring such that the Picard group, ${\mathop{\mathrm{Pic}}}(R)$, is torsion. Denote by $\mathcal{R}\subseteq {\mathrm{D}}^\mathrm{perf}(R)$ the central 2-ring consisting of all line bundles in degree zero. In this case, as in the case where one considers Balmer’s $\rho$, the map $\rho_{{\mathrm{D}}^\mathrm{perf}(R)}^\mathcal{R}$ is a homeomorphism. Let us sketch the argument.
Consider the composite $$f:=({\mathop{\mathrm{Spec}}}R \stackrel{\sim}{\to} {\mathop{\mathrm{Spc}}}{\mathrm{D}}^\mathrm{perf}(R) \to {\mathop{\mathrm{Spec}}}\mathcal{R})$$ which, unwinding the definitions, sends $\mathfrak{p}\in {\mathop{\mathrm{Spec}}}R$ to the prime ideal $\{s\in \mathcal{R} \; \vert \; \mathfrak{p} \in \operatorname{ssupp}{\mathop{\mathrm{cone}}}(s)\}$. There is also a map $g\colon {\mathop{\mathrm{Spec}}}\mathcal{R} \to {\mathop{\mathrm{Spec}}}R$ sending a prime ideal $P$ to $P(R,R)$. It is clear that $gf = {\mathrm{id}}_{{\mathop{\mathrm{Spec}}}R}$.
Now suppose that $P\in {\mathop{\mathrm{Spec}}}\mathcal{R}$ and consider $fg(P)$. An element $s\in \mathcal{R}(\mathcal{L},\mathcal{M})$ is in a given ideal if and only if the translate $\tilde{s}\colon R\to \mathcal{L}^{-1}\otimes \mathcal{M}$ is. Let $n$ be the order of $\mathcal{L}^{-1}\otimes \mathcal{M}$ and consider a composite $$t:=(R \stackrel{\sim}{\to} R^{\otimes n} \stackrel{\tilde{s}^{\otimes n}}{\to} (\mathcal{L}^{-1}\otimes \mathcal{M})^{\otimes n} \stackrel{\sim}{\to} R).$$ Regardless of the choice of isomorphisms we note that $t$ lies in a given prime ideal if and only if $\tilde{s}$ does and hence if and only if $s$ does. The final observation is that the homological supports of the cones on $s$, $\tilde{s}$, $\tilde{s}^{\otimes n}$, and $t$ agree. Thus $s\in fg(P)$ iff $P(R,R)\in \operatorname{ssupp}{\mathop{\mathrm{cone}}}(s)$ iff $P(R,R)\in \operatorname{ssupp}{\mathop{\mathrm{cone}}}(t)$ iff $t\in P(R,R)$ iff $s\in P$. Hence $fg$ is the identity on ${\mathop{\mathrm{Spec}}}\mathcal{R}$ so we have the claimed homeomorphism.
---
abstract: 'The phenomenon of the spin-Hall effect, initially proposed over three decades ago in the context of asymmetric Mott skew scattering, was recently revived by the proposal of a possible intrinsic spin-Hall effect originating from a strongly spin-orbit coupled band structure. This new proposal has generated an extensive debate and controversy over the past two years. The purpose of this workshop, held at the Asian Pacific Center for Theoretical Physics, was to bring together many of the leading groups in this field to resolve such issues and identify future challenges. We offer this short summary to clarify the now settled issues on some of the more controversial aspects of the debate and help refocus the research efforts in new and important avenues. [ I. Adagideli, G. Bauer, M.-S. Choi, Zhong Fang, B. I. Halperin, N. V. Hieu, Jiang-Ping Hu, J. Inoue, H.W. Lee, Minchul Lee, E. Mishchenko, L. Molenkamp, S. Murakami, B. Nikolic, Qian Niu, Junsaku Nitta, M. Onoda, J. Orenstein, C. H. Park, Y.S. Kim, Shun-Qing Shen, D. Sheng, A. Silov, J. Sinova, S. Souma, J. Wunderlich, X. C. Xie, L. P. Zarbo, S.-C. Zhang, Fu-Chun Zhang ]{}'
author:
- Jairo Sinova
- Shuichi Murakami
- 'Shun-Qing Shen'
- 'Mahn-Soo Choi'
title: |
Spin-Hall effect: Back to the Beginning on a Higher Level\
[Summary of the APCTP Workshop on the Spin-Hall Effect and Related Issues]{}\
[Asian Pacific Center for Theoretical Physics, Pohang, South Korea]{}
---
Introduction
============
The spin Hall effect (SHE) is the generation in a paramagnetic system of a spin current perpendicular to an applied charge current leading to a spin accumulation with opposite magnetization at each edge. This effect was first predicted over three decades ago by invoking the phenomenology of the earlier theories of the anomalous Hall effect in ferromagnets, which associated its origin to asymmetric Mott-skew and side-jump scattering from impurities due to spin-orbit coupling.[@Dyakonov:1971_a; @Hirsch:1999_a]
Recently the possibility of an intrinsic (dependent only on the electronic structure) SHE has been put forward [@Murakami:2003_a; @Sinova:2004_a] predicting the presence of a spin current generated perpendicular to an applied electric field in semiconducting systems with strong spin-orbit coupling, with scattering playing a minor role. This proposal has generated an extensive theoretical debate in a very short time motivated by its novel physical concept and potential as a spin injection tool.[@LANL] The interest has also been dramatically enhanced by recent experiments by two groups reporting the first observations of the SHE in n-doped semiconductors[@Kato:2004_d; @Sih:2005_a] and in 2D hole gases (2DHG).[@Wunderlich:2004_a]
These experiments directly measure the spin accumulation induced at the edges of the samples through different optical techniques. On the other hand, most of the early theory has focused on the spin-current generated by an electric field which would drive such spin-accumulation. In most studies this spin current and its associated conductivity have been defined as $j_y^z\equiv\{v_y,s_z\}/2=\sigma^{SHE} E_x$. This choice is a natural one but not a unique one in the presence of spin-orbit coupling since, unlike for the charge density, there is no continuity equation for the spin density. The actual connection between the spin-accumulation and the induced spin-current is [*not*]{} straightforward in situations where spin-orbit coupling is strong, and this relation is the focus of current research and one of the key challenges ahead.
Although two model Hamiltonians with strong spin-orbit coupling were considered initially, the p-doped 3D valence band system[@Murakami:2003_a] and the 2DEG with Rashba coupling,[@Sinova:2004_a] the one that has attracted the most attention, perhaps due to its simplicity, is the latter one which has the form $H_{\rm R-SO}=\lambda(\sigma_x k_y-\sigma_y k_x)$. In such systems, in a clean sample, where the transport scattering rate $\tau^{-1}$ is small compared to the spin-orbit splitting $\lambda k_F /\hbar$, one finds an intrinsic value $e/8\pi$ for the spin Hall conductivity, which is valid at finite frequencies in the range $\tau^{-1} < \omega < \lambda k_F / \hbar $, independent of details of the impurity scattering, in the usual case where both spin-orbit split bands are occupied. The prediction for the dc spin Hall effect in this model has been examined and debated extensively. It was first noticed that contributions to the spin-current from impurity scattering, even in the limit of weak disorder, seemed to exactly cancel the intrinsic contribution.[@Inoue:2004_a; @Mishchenko:2004_a] This led to speculation that this cancellation destroys the effect in other models as well. On the other hand, it is now understood through recent efforts, culminating in this workshop, that such a cancellation only occurs for this *very particular model*, due to the linearity of the spin-orbit coupling and the parabolic dispersion.[@Dimitrova:2004_a; @Chalaev:2004_a]
This motivates the title of this summary: After our initial excitement and our initial worries that such a beautiful effect may not exist, we are back to the original proposal but at a higher level of understanding: that an intrinsic contribution to the SHE in many systems with strong enough spin-orbit coupling is present in general.[@Murakami:2003_a; @Sinova:2004_a] What follows is a summary of the issues agreed upon and debated during the open discussion sessions of the workshop; it is not meant as a summary of all the topics presented in the workshop. Even though feedback from all the speakers in the workshop has been solicited in composing this summary, any omissions or unintentional imbalance are ultimately the responsibility of the organizers. For further information on this workshop and to view the slides of the talks given and other topics discussed which are not mentioned here we encourage the reader to visit the workshop website.[@site]
Agreement and consensus
=======================
Within the open sessions of this workshop, several key points were discussed and agreement was reached on their conclusions. This is an important and intended result of this workshop, to bring together several of the leading researchers in the field to clarify the now extensive debate in the literature which can be overwhelming to a newcomer.
The agreed upon statements are as follows:
- *The dc spin Hall conductivity, defined through $j_y^z\equiv\{v_y,s_z\}/2=\sigma^{SHE} E_x$, does not vanish in general and it includes both intrinsic and non-intrinsic contributions.*
- *The dc spin Hall conductivity for the model Hamiltonian, ${\cal H_{\rm R}}=\hbar^2 k^2/2m+\lambda (\sigma_x k_y-\sigma_y k_x)$, vanishes in the absence of a magnetic field and spin-dependent scattering, even in the limit of weak scattering. This cancellation is due to the particular relation in this model between the spin dynamics $d s_y/dt$ and the induced spin-Hall current, i.e. $d s_y/dt=i[{\cal H_R},s_y] \propto j_y^z$, which in a steady state situation indicates a vanishing spin-Hall current. No such relation exists in more complicated models, where the spin-orbit coupling is not simply linear in the carrier momentum.*
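The operator identity invoked in the second point can be checked by a direct symbolic computation. The sketch below is ours (not from the text): it sets $\hbar=1$ and verifies that $i[{\cal H_{\rm R}},s_y]=-2m\lambda\, j_y^z$, where the explicit proportionality constant $-2m\lambda$ is our own evaluation.

```python
import sympy as sp

kx, ky, lam, m = sp.symbols('k_x k_y lambda m', real=True, positive=True)
I2 = sp.eye(2)
sigx = sp.Matrix([[0, 1], [1, 0]])
sigy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sigz = sp.Matrix([[1, 0], [0, -1]])

# Rashba Hamiltonian with the spin-diagonal kinetic term, hbar = 1
H = (kx**2 + ky**2)/(2*m)*I2 + lam*(sigx*ky - sigy*kx)

# Heisenberg equation of motion for s_y = sigma_y/2
s_y = sigy/2
ds_y_dt = sp.I*(H*s_y - s_y*H)

# Conventional spin current j_y^z = {v_y, s_z}/2, with v_y = dH/dk_y
v_y = H.diff(ky)
j_yz = sp.Rational(1, 2)*(v_y*(sigz/2) + (sigz/2)*v_y)

# d s_y/dt is proportional to j_y^z; the constant works out to -2*m*lam,
# so a steady state (d s_y/dt = 0) forces the spin current to vanish.
assert sp.simplify(ds_y_dt + 2*m*lam*j_yz) == sp.zeros(2, 2)
```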
The effects of disorder on the induced spin-current, within linear response, come in the form of self-energy lifetime corrections and vertex corrections. The lifetime corrections only reduce this induced current through a broadening of the bands without affecting its nature. On the other hand, vertex corrections have been the source of important debate since they make the intrinsic SHE vanish in the Rashba 2DEG system for any arbitrary amount of scattering.[@Inoue:2004_a; @Mishchenko:2004_a; @Chalaev:2004_a] For p-type doping in both 3D and 2D hole gases the vertex corrections vanish in the case of isotropic impurity scattering.[@Murakam:2004_a; @Bernevig:2004_c; @Shytov:2005_a; @Khaetskii:2005_a] This result is now understood in the context of the specific relation of the spin dynamics within this particular model as stated above.[@Dimitrova:2004_a; @Chalaev:2004_a] These spin dynamics are linked to the magneto-electric effect producing a homogeneous in-plane spin polarization by an electric field in a Rashba 2DEG.[@Edelstein:1990_a; @Inoue:2003_a] These results have recently been found to be consistent with numerical treatments of the disorder through exact diagonalization finite size scaling calculations.[@Nomura:2005_b; @Nomura:2005_a; @Sheng:2005_a]
It is important to point out however that in the mesoscopic regime, where spin Hall conductance of finite size systems rather than conductivity of infinite size systems is considered and the finite width can lead to spin-Hall edge states,[@Adagideli:2005_a] the SHE seems to also be present and robust against disorder even in the 2DEG Rashba system although its link to the bulk regime is still unclear. [@Hankiewicz:2004_b; @Nikolic:2004_a; @Sheng:2004_a; @Adagideli:2005_a]
Semantics
=========
Given the extensive literature it was deemed useful to agree upon several semantics and notations in order not to create confusion from a lack of communication. With this in mind it was agreed that:
- The spin Hall effect is the antisymmetric spin accumulation in a finite width system driven by an applied electric field.
- The word *intrinsic* is reserved for the intrinsic contribution to the spin-current generated in the absence of scattering. This contribution can be calculated through the single bubble diagram within the diagrammatic technique and corresponds to the ac-limit of $\omega \tau
\rightarrow \infty$ where scattering does not play a role. For example, the intrinsic spin Hall conductivity of the Rashba model is $e/(8\pi)$ and for the p-doped valence system it is $(e/6\pi^2)(k_F^{h.h}-k_F^{l.h.})(1+\gamma_1/(2\gamma_2))$.
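As a numerical illustration (ours, not taken from the text), the universal intrinsic value $e/(8\pi)$ quoted above for the Rashba model can be reproduced by integrating the interband Kubo formula in the clean limit. The parameter values, grid sizes, and sign convention below are our own choices; with both bands occupied the magnitude should be parameter-independent.

```python
import numpy as np

# Intrinsic spin Hall conductivity of the clean Rashba model from the
# interband Kubo formula, in units with hbar = e = 1.  Parameters are
# arbitrary illustrations; any E_F with both bands occupied works.
m, lam, EF = 1.0, 0.3, 2.0

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def spin_hall_conductivity(nk=400, nth=120, kmax=4.0):
    ks = np.linspace(1e-3, kmax, nk)
    ths = np.linspace(0.0, 2*np.pi, nth, endpoint=False)
    dk, dth = ks[1] - ks[0], ths[1] - ths[0]
    sigma = 0.0
    for k in ks:
        for th in ths:
            kx, ky = k*np.cos(th), k*np.sin(th)
            H = (k*k/(2*m))*I2 + lam*(ky*sx - kx*sy)
            E, U = np.linalg.eigh(H)        # columns of U are the bands |n>
            f = (E < EF).astype(float)      # zero-temperature occupations
            vx = (kx/m)*I2 - lam*sy         # velocity operator dH/dk_x
            jyz = (ky/(2*m))*sz             # {v_y, s_z}/2 for this model
            J = U.conj().T @ jyz @ U        # matrix elements <n|j_y^z|n'>
            V = U.conj().T @ vx @ U         # matrix elements <n|v_x|n'>
            for n in range(2):
                npr = 1 - n
                dE = E[n] - E[npr]
                if abs(dE) > 1e-12:
                    sigma += f[n]*2*np.imag(J[n, npr]*V[npr, n])/dE**2 * k*dk*dth
    return sigma / (2*np.pi)**2

val = spin_hall_conductivity()
assert np.isclose(abs(val), 1/(8*np.pi), rtol=0.05)
```

Only the annulus between the two Fermi momenta contributes: where both bands are occupied the two interband terms cancel identically, which is the discrete counterpart of the analytic result.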
Future challenges
=================
Theoretical
-----------
Although there is wide agreement within the theoretical community that a spin Hall effect similar in magnitude to the predicted intrinsic contribution should occur in p-doped and in mesoscopic samples, there are still many remaining challenges in order to fully understand this novel effect and related effects in spintronics within strongly spin-orbit coupled systems. At the top of the agenda seems to be a need to better understand the spin-accumulation induced by the spin-Hall effect at a more quantitative level and its relation to the spin-current generated. Some of the issues raised during the open sessions were:
- What is the effect of scattering on the induced spin-currents and spin coherence in a strongly spin-orbit coupled system in general and in specific models at a quantitative level (including the sign of the effect in the several experimental set-ups)?
- Can the spin-current density seemingly arising from the Fermi sea lead to spin-accumulation and/or spin transport?
- A clearer understanding of the different contributions and their scaling with respect to disorder (strength, types, range, etc.) to the induced spin current is needed.
- How does spin relax in relation to scattering and to the fact that spin is not a conserved quantity in the strongly spin-orbit coupled regime? How does spin relax near the boundary?
- Is the effect more readily observable at mesoscopic scales and is there a relation between the mesoscopic and bulk regime?
- Are there other spin-current definitions which give a clearer picture and can be more readily connected to spin-accumulation?
- There is a need for a full theory of spin-accumulation (and detection) in strongly spin-orbit coupled systems.
These are some of the key issues and questions raised, but by no means the only ones being considered in current research. It is important to realize that besides the SHE, there is a plethora of effects, linked to spin-transport dynamics in semiconductors, which are important to understand in the context of strongly spin-orbit coupled systems. One in particular is the spin Coulomb drag,[@Damico:2002_a] an intrinsic friction mechanism between opposite spin populations, so far studied in non-spin-orbit coupled systems, which is important in degenerate systems where electron-electron interactions are relevant.
Experimental
------------
One of the clear achievements of spintronics in recent years has been the experimental observation of this novel effect through optical means. Spin transport in spin-orbit coupled systems is governed by characteristic length scales (mean free path $l=v_F \tau$, spin precession length $l_{so}=\hbar v_F/\Delta_{so}$), time scales (lifetime $\tau$, spin coherence time $\tau_s$) and by the relative strength of spin-orbit coupling, $\Delta_{so}$, and disorder. From these scales it is generally believed that the SHE observed by Awschalom et al. [@Kato:2004_d] is in the extrinsic regime and the one observed by Wunderlich et al. [@Wunderlich:2004_a] in 2DHG is in the intrinsic regime.
Some of the experimental issues raised during the open discussion session were:
- A key remaining experimental challenge is the detection of the effect through electrical means which could lead to actual useful devices. This detection has to be done in coordination with careful realistic theoretical modeling of particular devices.
- It is important to understand and model in further detail the effects of edge electric field induced spin-polarization vs. the spin-Hall effect, and the angle dependence of the luminescence induced in the present set-ups and their relation to the spin magnetization.
- Is it possible to measure spin current in the bulk; i.e. not indirectly through spin accumulation?
Outlook
=======
The past two years have seen a tremendous amount of research achievements and advances in the area of spintronics, which continues to generate many novel ideas and phenomena. Besides a good and healthy competitiveness, the field, as demonstrated by the organization of this workshop, moves forward in unison to clarify debates rather than allowing them to linger for many years, freeing it to explore interesting new physics.
As illustrated by the topics debated throughout the workshop, there are many remaining challenges and a very healthy outlook of the field, and not just simply of the spin-Hall effect which is a very small part of the whole of the spintronics field. The organizers are grateful for the sponsorship of the Asian Pacific Center for Theoretical Physics and the National Science Foundation (OISE-0527227) which have made this workshop possible.
---
abstract: |
[The literature analyzes the way in which Newton’s second law can be applied when non-inertial rotating systems are used. However, the treatment of the work and energy theorem in rotating systems is not considered in textbooks. In this paper, we show that the work and energy theorem can still be applied to a closed system of particles in a rotating system, as long as the work of fictitious forces is properly included in the formalism. The Coriolis force does not contribute to the work coming from fictitious forces. It is worth remarking that real forces that do not do work in an inertial reference frame can do work in the rotating reference frame and vice versa. The combined effects of acceleration of the origin and rotation of the non-inertial system are also studied.]{}
[Fundamental theorem of work and energy, fictitious forces, rotating reference frames.]{}
[01.55.+b, 45.20.D-, 45.20.D-, 45.20.dg, 45.50.-j, ]{}
author:
- |
Diego A. Manjarrés[^1], William J. Herrera[^2], Rodolfo A. Díaz[^3].\
Universidad Nacional de Colombia,\
Departamento de Física. Bogotá, Colombia.
title: Work and energy in rotating systems
---
Introduction.
=============
Most of our laboratory reference frames are non-inertial; for instance, the dynamics of air or water clusters is usually studied from a reference frame attached to the Earth, and these dynamics are strongly determined by the presence of the Coriolis term[@Kleppner; @Goldstein]. Meteorological and oceanographic phenomena are also influenced by fictitious forces generated by considering the Earth as a rotating system. Further, the work and energy formalism could have some advantages in rotating systems, similar to the case of inertial reference frames. Consequently, it is important to study the transformation properties of work and energy quantities from an inertial reference frame to a frame in relative rotation with respect to the inertial one.
The transformation properties of work and energy between reference frames in relative translation have been studied [@AJPKap; @EJPCamarca; @American]. On the other hand, the transformation of forces between an inertial and a rotating reference frame is quite well studied in the literature (e.g. Refs. [@Kleppner; @Goldstein]). Nevertheless, the transformation properties of work and energy quantities between reference frames in relative rotation have not been considered. The latter is important since fictitious forces arising in rotating systems can have associated potential energies, and real forces that do not do work in a given inertial frame can do work in the rotating frame.
In Ref. [@American], it was shown that the work and energy theorem is covariant between inertial reference frames, and that the theorem still holds in non-inertial translational frames if the work done by fictitious forces is included appropriately. Further, it was shown that fictitious forces can do work, and that even in transformations between inertial frames forces that do not do work in one inertial frame can do work in another inertial frame, yielding a non-trivial potential energy (see also Ref. [@Wolfram]). In this work we extend the study of the work and energy formalism to non-inertial systems that combine accelerated translation with rotation with respect to an inertial frame. The main results are illustrated with a pedagogical example, where we solve the problem in both an inertial and a non-inertial frame, and we show a force that does do work in the inertial frame, but does not do work in the non-inertial frame. The latter fact simplifies the problem considerably when treated in the non-inertial frame.
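A minimal numerical sketch (our example, not from the text) of the rotating-frame kinematics underlying this discussion: for uniform circular motion the co-rotating observer sees the particle at rest, and the centrifugal term exactly cancels the inertial centripetal acceleration.

```python
import numpy as np

# Uniform circular motion in the inertial frame Sigma:
#   r(t) = R (cos wt, sin wt, 0),
# viewed from a frame rotating with Omega = w e_z about the common origin.
w, R, t = 2.0, 1.5, 0.7
Omega = np.array([0.0, 0.0, w])

r = R*np.array([np.cos(w*t), np.sin(w*t), 0.0])
v = R*w*np.array([-np.sin(w*t), np.cos(w*t), 0.0])      # dr/dt in Sigma
a = -R*w**2*np.array([np.cos(w*t), np.sin(w*t), 0.0])   # d2r/dt2 in Sigma

# Rotating-frame velocity v'' = v - Omega x r:
# the particle is at rest in the co-rotating frame.
v_rot = v - np.cross(Omega, r)
assert np.allclose(v_rot, 0.0)

# Rotating-frame acceleration a'' = a - 2 Omega x v'' - Omega x (Omega x r):
# with v'' = 0 the centrifugal term cancels the centripetal acceleration.
a_rot = a - 2*np.cross(Omega, v_rot) - np.cross(Omega, np.cross(Omega, r))
assert np.allclose(a_rot, 0.0)
```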
Formulation of the problem.
===========================
The system under study consists of $N$ interacting particles. The system is closed, i.e. there is no flux of particles into or out of the system, and the number of particles is conserved, so that no processes of creation or destruction of particles are considered. However, the system may be under the influence of time-dependent (or time-independent) external forces. Our description is non-relativistic, so that time and the masses of the particles are independent of the reference frame.
![*Position of a particle from the point of view of (1) the inertial frame* $\Sigma $*, (2) the non-inertial frame* $\Sigma
^{\prime \prime }$* that rotates with angular velocity* $\Omega $* with respect to* $\Sigma $*. The vector*$\ \Omega $* is defined with a right-hand convention.*[]{data-label="sistemas"}](SIS.eps)
Let us define an inertial system $\Sigma $ and a non-inertial system $\Sigma ^{\prime \prime }$ rotating with constant angular velocity $\mathbf{\Omega }$ with respect to $\Sigma $, with a common origin between them. With respect to $\Sigma $, a given $j$-th particle has position, velocity and acceleration $\mathbf{r}_{j},\mathbf{v}_{j},\mathbf{a}_{j}$. The same variables measured by $\Sigma ^{\prime \prime }$ are denoted by $\mathbf{r}_{j}^{\prime \prime },\mathbf{v}_{j}^{\prime \prime },\mathbf{a}_{j}^{\prime \prime }$. Since $\Sigma $ and $\Sigma ^{\prime \prime }$ have a common origin, we see that $$\mathbf{r}_{j}^{\prime \prime }=\mathbf{r}_{j}.$$Figure \[sistemas\] illustrates these statements. We shall analyze the way in which work and energy transform when we pass from the inertial system $\Sigma $ to the non-inertial system $\Sigma ^{\prime \prime }$. The relationship between velocities and accelerations in $\Sigma $ and $\Sigma ^{\prime \prime }$ is well known from the literature [@Kleppner]
$$\begin{aligned}
\mathbf{v}_{j}^{\prime\prime} &=&\mathbf{v}_{j}-\mathbf{\Omega}\times\mathbf{r}_{j}, \label{vpjsimple} \\
\mathbf{a}_{j}^{\prime\prime} &=&\mathbf{a}_{j}-2\mathbf{\Omega}\times\mathbf{v}_{j}^{\prime\prime}-\mathbf{\Omega}\times\left(\mathbf{\Omega}\times\mathbf{r}_{j}\right), \label{apjsimple}\end{aligned}$$
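These transformation rules can be checked numerically. The following NumPy sketch (ours, with arbitrary values; not part of the original derivation) evaluates Eqs. (\[vpjsimple\]) and (\[apjsimple\]) for a particle that co-rotates with $\Sigma ^{\prime \prime }$, for which both $\mathbf{v}_{j}^{\prime \prime }$ and $\mathbf{a}_{j}^{\prime \prime }$ must vanish:

```python
import numpy as np

# A particle at rest in the rotating frame: in the inertial frame it moves on a
# circle, with v = Omega x r and centripetal acceleration a = -omega^2 r.
omega = 1.7
Omega = np.array([0.0, 0.0, omega])   # angular velocity of Sigma''
R0, t = 2.0, 0.83                     # arbitrary radius and instant
r = R0 * np.array([np.cos(omega * t), np.sin(omega * t), 0.0])
v = np.cross(Omega, r)
a = -omega**2 * r

v_pp = v - np.cross(Omega, r)                                               # Eq. (vpjsimple)
a_pp = a - 2 * np.cross(Omega, v_pp) - np.cross(Omega, np.cross(Omega, r))  # Eq. (apjsimple)

assert np.allclose(v_pp, 0.0)
assert np.allclose(a_pp, 0.0)
```

The centrifugal term exactly cancels the inertial-frame centripetal acceleration, as expected for a co-rotating particle.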
multiplying Eq. (\[apjsimple\]) by the mass $m_{j}$ we find
$$\begin{aligned}
m_{j}\frac{d\mathbf{v}_{j}^{\prime \prime }}{dt} &=&\mathbf{F}_{j}+\mathbf{F}_{j,fict}, \label{mjvppjsimple} \\
\mathbf{F}_{j,fict} &=&\mathbf{F}_{j,cor}+\mathbf{F}_{j,cent},
\label{mjvppj1simple} \\
\mathbf{F}_{j,cor} &\equiv &-2m_{j}\mathbf{\Omega }\times \mathbf{v}_{j}^{\prime \prime }, \label{coriolissimple} \\
\mathbf{F}_{j,cent} &\equiv &-m_{j}\mathbf{\Omega }\times \left( \mathbf{\Omega }\times \mathbf{r}_{j}\right) , \label{mjvppj2simple}\end{aligned}$$
where $\mathbf{F}_{j}$ represents the total real force on the $j$-th particle (the sum of internal and external forces on $j$), and $\mathbf{F}_{j,cor}$ and $\mathbf{F}_{j,cent}$ are the well-known Coriolis and centrifugal forces. Taking the dot product of Eq. (\[mjvppjsimple\]) with $\mathbf{v}_{j}^{\prime \prime }dt$ on the left-hand side and with $d\mathbf{r}_{j}^{\prime \prime }$ on the right-hand side, and summing over all particles of the system, we have
$$\begin{aligned}
\sum_{j}d\left(\frac{1}{2}m_{j}\mathbf{v}_{j}^{\prime\prime2}\right)&=&\sum_{j} \left(\mathbf{F}_{j}+\mathbf{F}_{j,fict}\right)\cdot d\mathbf{r}_{j}^{\prime\prime}, \notag \\
dK^{\prime\prime}&=&dW^{\prime\prime}. \label{WErotsimple}\end{aligned}$$
Here $dK^{\prime \prime }$ and $dW^{\prime \prime }$ denote the differentials of kinetic energy and work when an infinitesimal displacement $d\mathbf{r}_{j}^{\prime \prime }$ is taken for each particle. This equation shows the covariance of the fundamental theorem of work and energy between $\Sigma $ and $\Sigma ^{\prime \prime }$. In relating work and energy observables between $\Sigma $ and $\Sigma ^{\prime \prime }$, it is important to write $dK_{j}^{\prime \prime }$ and $dW_{j}^{\prime \prime }$ (the differentials of kinetic energy and work associated with the $j$-th particle in the system $\Sigma ^{\prime \prime }$) in terms of quantities measured by $\Sigma $. For $dK_{j}^{\prime \prime }$, we use Eq. (\[vpjsimple\]) to get
$$\begin{aligned}
dK_{j}^{\prime \prime } &=&m_{j}\mathbf{v}_{j}^{\prime \prime }\cdot d\mathbf{v}_{j}^{\prime \prime } \notag \\
&=&m_{j}\left\{ \mathbf{v}_{j}-\mathbf{\Omega }\times \mathbf{r}_{j}\right\}
\cdot \left\{ d\mathbf{v}_{j}-\mathbf{\Omega }\times d\mathbf{r}_{j}\right\}
\notag \\
dK_{j}^{\prime \prime } &=&dK_{j}+dZ_{j}, \label{dKppdzsimple}\end{aligned}$$
with$$dZ_{j}\equiv -(\mathbf{\Omega }\times \mathbf{r}_{j})\cdot d\mathbf{P}_{j}-m_{j}[\mathbf{\Omega }\times (\mathbf{\Omega }\times \mathbf{r}_{j})]\cdot d\mathbf{r}_{j},$$where $d\mathbf{P}_{j}$ denotes the differential of linear momentum associated with the $j$-th particle as measured by $\Sigma $. The Coriolis force given by Eq. (\[coriolissimple\]) does not do work with respect to $\Sigma ^{\prime \prime }$. To obtain $dW_{j}^{\prime \prime }$ in terms of variables measured by $\Sigma $, we use Eqs. (\[vpjsimple\]) and (\[mjvppj1simple\]-\[mjvppj2simple\])
$$\begin{aligned}
\mathbf{F}_{j}^{\prime \prime } &=&\mathbf{F}_{j}-m_{j}[2\mathbf{\Omega }\times \mathbf{v}_{j}^{\prime \prime }+\mathbf{\Omega }\times (\mathbf{\Omega }\times \mathbf{r}_{j})], \\
d\mathbf{r}_{j}^{\prime \prime } &=&\mathbf{v}_{j}^{\prime \prime }dt=d\mathbf{r}_{j}-(\mathbf{\Omega }\times \mathbf{r}_{j})dt, \notag \\
dW_{j}^{\prime \prime } &=&\mathbf{F}_{j}^{\prime \prime }\cdot d\mathbf{r}_{j}^{\prime \prime }=\left( \mathbf{F}_{j}+\mathbf{F}_{j,fict}\right) \cdot
d\mathbf{r}_{j}^{\prime \prime } \notag \\
&=&\{\mathbf{F}_{j}-m_{j}\mathbf{\Omega }\times (\mathbf{\Omega }\times
\mathbf{r}_{j})\}\cdot \{d\mathbf{r}_{j}-\mathbf{\Omega }\times \mathbf{r}_{j}~dt\}, \notag \\
dW_{j}^{\prime \prime } &=&dW_{j}+dZ_{j}, \label{dWppdzsimple}\end{aligned}$$
so the covariance of the fundamental theorem of work and energy is expressed by Eq. (\[WErotsimple\]), or equivalently by Eqs. (\[dKppdzsimple\]) and (\[dWppdzsimple\]).
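The identity $dK_{j}^{\prime \prime }=dK_{j}+dZ_{j}$ can be verified numerically in rate form for an arbitrary particle state. The following sketch (ours, purely illustrative; random values stand in for a state measured by $\Sigma $) checks it to machine precision:

```python
import numpy as np

# Rate form of dK'' = dK + dZ for constant Omega:
#   dK''/dt = m v'' . (dv''/dt), with dv''/dt = a - Omega x v  (from Eq. (vpjsimple)),
#   dZ/dt   = -(Omega x r) . (m a) - m [Omega x (Omega x r)] . v.
rng = np.random.default_rng(0)
m = 1.3
Omega = rng.normal(size=3)
r, v, a = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

v_pp = v - np.cross(Omega, r)
dK_pp = m * v_pp @ (a - np.cross(Omega, v))
dK = m * v @ a
dZ = -np.cross(Omega, r) @ (m * a) - m * np.cross(Omega, np.cross(Omega, r)) @ v

assert np.isclose(dK_pp, dK + dZ)
```

The cross term $\mathbf{v}\cdot (\mathbf{\Omega }\times \mathbf{v})$ vanishes identically, which is why the identity holds for any state, not just special trajectories.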
For pedagogical reasons, the covariance of the work and energy theorem for the pure rotation case has been presented first, so that the reader can assimilate the formalism in a simple way. The additional subtleties that arise when translation and rotation are combined are introduced in appendix \[apendicegeneral\], in which we consider a non-inertial system $\Sigma ^{\prime \prime }$ that possesses a relative rotation with time-dependent angular velocity, as well as a translation, with respect to $\Sigma $.
Fictitious work
===============
We shall consider the general case in which $\Sigma ^{\prime \prime }$ rotates and translates with respect to $\Sigma $ (see appendix \[apendicegeneral\]). The work observed by $\Sigma ^{\prime \prime }$ can be separated into the work coming from real forces and the work coming from fictitious forces
$$\begin{aligned}
dW^{\prime\prime}&=&dW_{real}+dW_{fict}, \label{separaciontrabajos2} \\
dW_{fict}&=&-\sum_{j}m_{j}[\boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r}_{j}^{\prime\prime})+\boldsymbol{\dot{\Omega}}\times\mathbf{r}_{j}^{\prime\prime} \notag \\
&&+\mathbf{A}(t) ]\cdot d\mathbf{r}_{j}^{\prime\prime},
\label{trabajoficticio2} \\
dW_{real}&=&\sum_{j}[\mathbf{F}_{j}^{\prime\prime}+m_{j}\{\mathbf{\Omega}\times(\mathbf{\Omega}\times\mathbf{r}_{j}^{\prime\prime})+\dot{\mathbf{\Omega}}\times\mathbf{r}_{j}^{\prime\prime} \notag \\
&&+\mathbf{A}(t)\}]\cdot d\mathbf{r}_{j}^{\prime\prime}.
\label{trabajoreal22}\end{aligned}$$
Eqs. (\[trabajoficticio2\], \[trabajoreal22\]) are written in terms of observables measured by $\Sigma ^{\prime \prime }$ (except for $\mathbf{\Omega }$ and $\mathbf{\dot{\Omega}}$, which are measured with respect to $\Sigma $). This is because in most problems involving non-inertial systems, experiments are carried out on the non-inertial system and measured with respect to it. In particular, real forces that do not do work in $\Sigma $ can do work in $\Sigma ^{\prime \prime }$, and real forces that do work in $\Sigma $ may do no work in $\Sigma ^{\prime \prime }$.
For the particular case $\boldsymbol{\Omega }=0$ it can be proved that if $\Sigma^{\prime\prime}$ is attached to the center of mass of the system of particles (CM), the total fictitious work is null [@American],
$$dW_{fict}^{CM}=-M\mathbf{A}_{C}(t)\cdot d\left( \frac{\sum_{j}m_{j}\mathbf{r}_{j}^{\prime \prime }}{M}\right) =0, \label{WfictCM}$$
where $\mathbf{A}_{C}$ is the acceleration of the CM. In the most general case with $\boldsymbol{\Omega }\neq 0$, the total fictitious work measured by $\Sigma ^{\prime \prime }$ can be different from zero even if its origin is attached to the CM.
These results are in close analogy with the case of torques analyzed from systems attached to the CM. For non-rotating systems (with respect to $\Sigma $) attached to the CM, the total fictitious torque and the total fictitious work are null [@American; @Mexicana]. On the other hand, for rotating systems attached to the CM, both the total fictitious torque and the total fictitious work are in general non-vanishing [@Mexicana]. It is worth noting that even in non-rotating systems, the fictitious torque and work on each individual particle are in general different from zero.
In addition, it is also possible to find non-inertial systems for which the fictitious work is null while the fictitious torque is non-null, or vice versa. However, these features depend not only on the reference frame, but also on the system of particles involved.
Pedagogical example.\[sec:example\]
===================================
![*A block of mass* $m$* slides on a table along a groove, which rotates with constant angular velocity* $\boldsymbol{\protect\omega }$*. The block is attached to another block of mass* $M$*. The normal forces* $N_{1}$* and* $N_{2}$* on the block are illustrated.*[]{data-label="ejemplo"}](EJ2.eps)
Let us consider a block of mass $m$ sliding without friction on a horizontal table which rotates with constant angular velocity $\boldsymbol{\omega }$. The block is constrained to move along a groove that is radially directed. The block is attached through a rope of negligible mass to another block of mass $M$, which hangs below the table through a hole at its center (see Fig. \[ejemplo\]). Our physical system of interest consists of the two blocks, and their description will be made from the point of view of an inertial reference frame $\Sigma $ whose origin is located at the center of the table, and of a non-inertial system $\Sigma ^{\prime \prime }$ fixed to the table, sharing a common origin with $\Sigma $ and rotating with constant angular velocity $\boldsymbol{\omega }$. The dimensions of the two blocks are neglected, so that we regard them as point-like particles.
Our aim is to apply the fundamental work-energy theorem, so we calculate the work done by all the forces applied on the system. We bear in mind that the block of mass $m$ is constrained to move along the groove. $\mathbf{N}_{2}$ is the normal force exerted by the wall of the groove on the mass $m$, corresponding to the reaction of the table on the block due to the rotation. This force is tangential, and it is different from the normal force $\mathbf{N}_{1}$, which acts on $m$ through the vertical contact with the table. It is clear that $\mathbf{N}_{1}$ does not do work with respect to either $\Sigma $ or $\Sigma ^{\prime \prime }$.
We remark that, from the point of view of $\Sigma $, the normal force $\mathbf{N}_{2}$ does work; this is worth highlighting, since there are few examples in the literature in which normal forces do work [@American]. For an observer in $\Sigma $, the work done by $\mathbf{N}_{2}$ is given by
$$W_{N}=\int_{\theta _{0}}^{\theta _{f}}N_{2}rd\theta .$$
From Newton’s second law, the normal force is given by
$$\mathbf{N}_{2}=m(r\ddot{\theta}+2\dot{r}\dot{\theta})\mathbf{u}_{\theta
}=2m\omega \dot{r}\mathbf{u}_{\theta }\ ,$$
so that,
$$\begin{aligned}
W_{N} &=&m\omega \int_{\theta _{0}}^{\theta _{f}}2r\frac{dr}{dt}d\theta
=m\omega \int_{r_{0}}^{r_{f}}2r~dr\frac{d\theta }{dt} \\
W_{N} &=&m\omega ^{2}\int_{r_{0}}^{r_{f}}2r~dr\end{aligned}$$
hence, the work done by $\mathbf{N}_{2}$ and the total external work (with respect to $\Sigma $) are given by
$$W_{N}=m\omega ^{2}(r_{f}^{2}-r_{0}^{2})\ \ ,\ \ W=m\omega ^{2}({r_{f}^{2}}-{r_{0}^{2}})-Mg(z_{f}-z_{0}).$$
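The closed-form result $W_{N}=m\omega ^{2}(r_{f}^{2}-r_{0}^{2})$ can be confirmed by direct quadrature of $\int N_{2}\,r\,d\theta $ along the trajectory. The sketch below (ours; all parameter values are arbitrary) uses the closed-form solution $r(t)$ derived in appendix \[ap:dynamics\] with $v_{r,0}=0$:

```python
import numpy as np

# Arbitrary parameters with omega > omega_c, so the block m moves outward.
m, M, g, omega, r0 = 0.5, 1.0, 9.8, 10.0, 0.4
kappa = omega * np.sqrt(m / (m + M))
C = M * g / (m * omega**2)

t = np.linspace(0.0, 0.3, 200001)
r = C + (r0 - C) * np.cosh(kappa * t)        # appendix solution, v_r0 = 0
rdot = (r0 - C) * kappa * np.sinh(kappa * t)

N2 = 2 * m * omega * rdot                    # tangential force on m
f = N2 * r * omega                           # integrand N2 * r * dtheta/dt
W_N = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))   # trapezoid rule

assert np.isclose(W_N, m * omega**2 * (r[-1]**2 - r0**2), rtol=1e-8)
```

The quadrature agrees with the analytic expression to high precision, independently of the particular parameter values chosen.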
From the point of view of $\Sigma ^{\prime \prime }$, the centrifugal force on the block of mass $M$ is zero, since $\boldsymbol{\omega }$ is parallel to the position of $M$. Furthermore, the Coriolis force does not do work. Therefore, the total fictitious force on $M$ does no work. In $\Sigma ^{\prime \prime }$ the displacement of the block of mass $m$ is radial, so that we have$$\boldsymbol{\omega }=\omega ~\mathbf{u}_{z}\ \ ,\ \ \mathbf{r}^{\prime
\prime }=\mathbf{r}=r\mathbf{u}_{r}\ ,\ \ d\mathbf{r}^{\prime \prime
}=dr^{\prime \prime }\mathbf{u}_{r}=dr\ \mathbf{u}_{r}$$hence, since $\mathbf{N}_{2}$ is tangential, the work done by $\mathbf{N}_{2}$ is zero in $\Sigma ^{\prime \prime }$. Therefore, $\mathbf{N}_{2}$ does work in $\Sigma $ but does not do work in $\Sigma ^{\prime \prime }$. On the other hand, the work done by the fictitious forces in $\Sigma ^{\prime \prime }$ is
$$dW_{fict}^{\prime \prime }=\left[ -m\boldsymbol{\omega }\times \left( \boldsymbol{\omega }\times \mathbf{r}^{\prime \prime }\right) \right] \cdot d\mathbf{r}^{\prime \prime }=m\omega ^{2}r^{\prime \prime }dr^{\prime \prime }.$$
The total fictitious work seen by $\Sigma ^{\prime \prime }$ is
$$W_{fict}^{\prime \prime }=\frac{m\omega ^{2}}{2}({r_{f}^{\prime \prime }}^{2}-{r_{0}^{\prime \prime }}^{2})=\frac{m\omega ^{2}}{2}({r_{f}^{2}}-{r_{0}^{2}}).$$
The work-energy theorem applied in both $\Sigma $ and $\Sigma ^{\prime \prime }$ yields
$$\begin{aligned}
W &=&m\omega ^{2}({r_{f}^{2}}-{r_{0}^{2}})-Mg(z_{f}-z_{0}) \notag \\
&=&\frac{m\omega ^{2}}{2}({r_{f}^{2}}-{r_{0}^{2}})+\frac{\left( m+M\right) }{2}({v_{r,f}^{2}}-{v_{r,0}^{2}}), \label{trabajoinercialej} \\
W^{\prime \prime } &=&\frac{m\omega ^{2}}{2}({r_{f}^{2}}-{r_{0}^{2}})-Mg(z_{f}-z_{0}) \notag \\
&=&\frac{\left( m+M\right) }{2}({v_{r,f}^{2}}-{v_{r,0}^{2}}).
\label{trabajorotanteej}\end{aligned}$$
We have taken into account that $\mathbf{v}_{M}^{2}=\mathbf{v}_{r}^{2}$, where $\mathbf{v}_{M}$ and $\mathbf{v}_{r}$ are the velocity of $M$ and the radial velocity of $m$, respectively. The change of kinetic energy of the system has been separated into tangential and radial parts. For convenience, we define the radial kinetic energy of the system as the sum of the radial kinetic energy of $m$ plus the kinetic energy of $M$. We can see from Eqs. (\[trabajoinercialej\], \[trabajorotanteej\]) that the change in radial kinetic energy is the same in both reference frames. As a consequence, it could be useful to define an effective work in $\Sigma $, such that the transversal component of the kinetic energy is absorbed into the work, so that an effective work-energy theorem can be established.
$$\begin{aligned}
W &=&\frac{m\omega ^{2}}{2}({r_{f}^{2}}-{r_{0}^{2}})+W_{ef}=\Delta K_{\theta
}+W_{ef} \notag \\
W_{ef} &=&\frac{m\omega ^{2}}{2}({r_{f}^{2}}-{r_{0}^{2}})-Mg(z_{f}-z_{0})
\notag \\
&=&\frac{\left( m+M\right) }{2}({v_{r,f}^{2}}-{v_{r,0}^{2}})
\label{trabajoeffej}\end{aligned}$$
where $\Delta K_{\theta }$ denotes the change in transversal kinetic energy. The effective work $W_{ef}$ defined in $\Sigma $ in this way is equal to the work $W^{\prime \prime }$ seen by $\Sigma ^{\prime \prime }$, and both are equal to the change in radial kinetic energy seen by either frame. From $W_{ef}$ we can define an effective potential, as is done in the central-force problem [@Kleppner],
$$V_{ef}=-\frac{m\omega ^{2}}{2}{r}^{2}+Mgz$$
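The balance expressed by Eq. (\[trabajorotanteej\]) can also be checked numerically with the closed-form solution of appendix \[ap:dynamics\]. The following sketch (ours; arbitrary parameters, nonzero $v_{r,0}$, and $z_{f}-z_{0}=r_{f}-r_{0}$ from the constraint) verifies that $W^{\prime \prime }$ equals the change in radial kinetic energy:

```python
import numpy as np

m, M, g, omega, r0, vr0 = 0.5, 1.0, 9.8, 10.0, 0.4, 0.2
kappa = omega * np.sqrt(m / (m + M))
C = M * g / (m * omega**2)

# Closed-form r(t) and its derivative at an arbitrary final time.
tf = 0.3
rf = C + (r0 - C) * np.cosh(kappa * tf) + (vr0 / kappa) * np.sinh(kappa * tf)
vrf = (r0 - C) * kappa * np.sinh(kappa * tf) + vr0 * np.cosh(kappa * tf)

# W'' = (m w^2 / 2)(r_f^2 - r_0^2) - M g (z_f - z_0), with z_f - z_0 = r_f - r_0.
W_pp = 0.5 * m * omega**2 * (rf**2 - r0**2) - M * g * (rf - r0)
dK_radial = 0.5 * (m + M) * (vrf**2 - vr0**2)

assert np.isclose(W_pp, dK_radial)
```

The agreement is exact (up to floating-point rounding) because the underlying equation of motion conserves the corresponding effective energy.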
Equations (\[trabajorotanteej\], \[trabajoeffej\]) can be rewritten by taking into account that $z_{f}-z_{0}=r_{f}-r_{0}$
$$\begin{aligned}
W_{ef} &=&W^{\prime \prime }=m\omega _{c}^{2}\left[ \left( \frac{\omega }{\omega _{c}}\right) ^{2}\overline{r}-r_{0}\right] (z_{f}-z_{0}),
\label{trabajointer} \\
\omega _{c} &\equiv &\left( \frac{Mg}{mr_{0}}\right) ^{1/2}\ \ ,\ \
\overline{r}\equiv \frac{r_{f}+r_{0}}{2}. \notag\end{aligned}$$
From Eq. (\[trabajointer\]) we can give a physical interpretation of the problem, where $\omega _{c}$ is the critical frequency that determines the sense of motion[^4]. For the particular case in which the initial radial velocity of the block $m$ vanishes ($v_{r,0}=0$), there are three possible situations: i) for $\omega <\omega _{c}$, the block $m$ moves toward the center of the table and the block $M$ descends, hence $\overline{r}<r_{0}$ and $z_{f}-z_{0}<0$, so that a positive work $W^{\prime \prime }$ is done on the system; ii) for $\omega >\omega _{c}$, the block $m$ moves away from the center of the table and the block $M$ ascends, hence $\overline{r}>r_{0}$ and $z_{f}-z_{0}>0$, and again a positive work $W^{\prime \prime }$ is done on the system; iii) if $\omega =\omega _{c}$, the block $M$ remains at rest and the radial velocity of $m$ remains null, so that $\overline{r}=r_{0}$, $z_{f}-z_{0}=0$, and the work done is null. We invite the reader to interpret Eq. (\[trabajointer\]) for the case in which the block of mass $m$ possesses an initial radial velocity different from zero.
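The three regimes around $\omega _{c}$ are easy to reproduce numerically with the appendix solution. In the sketch below (ours), the parameter values are chosen so that $\omega _{c}=\sqrt{Mg/(mr_{0})}=7$ rad/s, and $v_{r,0}=0$:

```python
import numpy as np

m, M, g, r0 = 0.5, 1.0, 9.8, 0.4   # arbitrary values giving omega_c = 7 rad/s

def r_of_t(omega, t):
    # Closed-form solution of the appendix with v_r0 = 0.
    kappa = omega * np.sqrt(m / (m + M))
    C = M * g / (m * omega**2)
    return C + (r0 - C) * np.cosh(kappa * t)

omega_c = np.sqrt(M * g / (m * r0))

assert r_of_t(0.5 * omega_c, 0.2) < r0        # i)   omega < omega_c: m moves inward
assert r_of_t(2.0 * omega_c, 0.2) > r0        # ii)  omega > omega_c: m moves outward
assert np.isclose(r_of_t(omega_c, 0.2), r0)   # iii) omega = omega_c: m stays at r0
```

Case iii) follows because $\omega =\omega _{c}$ makes the constant $C=Mg/(m\omega ^{2})$ coincide with $r_{0}$, so the hyperbolic terms drop out.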
Conclusions.
============
We have shown the covariance of the work and energy theorem for a non-uniformly rotating frame, including the effects of the acceleration of the origin and the rotation of the non-inertial system. This covariance holds when the work done by the fictitious forces is properly included. The Coriolis force does not contribute to the fictitious work. We also generalized the properties of frames attached to the center of mass: the fictitious work on each particle is in general different from zero, but the total fictitious work vanishes for non-rotating non-inertial systems attached to the CM.
The fact that the work done by a force depends on the reference frame is illustrated by means of an example, which we solve both in an inertial frame and in a rotating non-inertial frame. By defining an effective work in the inertial frame, we can equate this effective work with the one measured in the non-inertial rotating frame.
**Acknowledgments**: This work was supported by División de Investigaciones de la Universidad Nacional de Colombia (DIB), sede Bogotá.
General case. {#apendicegeneral}
=============
![*Position of a particle from the point of view of three reference frames (1) the inertial frame* $\Sigma $*, (2) the non-inertial frame* $\Sigma ^{\prime }$* that is in pure translation with respect to* $\Sigma $*, and (3) The system* $\Sigma ^{\prime
\prime }$* that rotates with angular velocity* $\Omega $* with respect to* $\Sigma ^{\prime }$*.*[]{data-label="sistemasap"}](SIS2.eps)
Let us define an inertial system $\Sigma $, a non-inertial translational and non-rotating system $\Sigma ^{\prime }$ (with respect to $\Sigma $), and a rotating system $\Sigma ^{\prime \prime }$ with origin common with $\Sigma
^{\prime }$ and with angular velocity $\mathbf{\Omega }$. The position, velocity and acceleration of the origin of $\Sigma ^{\prime }$ and $\Sigma
^{\prime \prime }$ with respect to $\Sigma $ are called $\mathbf{R}\left(
t\right) ,\mathbf{V}\left( t\right) ,\mathbf{A}\left( t\right) $. With respect to $\Sigma $, a given $j-th\ $particle has a position, velocity and acceleration $\mathbf{r}_{j},\mathbf{v}_{j},\mathbf{a}_{j}$. These variables measured by $\Sigma ^{\prime }$ are denoted by $\mathbf{r}_{j}^{\prime },\mathbf{v}_{j}^{\prime },\mathbf{a}_{j}^{\prime }$ and measured by $\Sigma
^{\prime \prime }$ are denoted by $\mathbf{r}_{j}^{\prime \prime },\mathbf{v}_{j}^{\prime \prime },\mathbf{a}_{j}^{\prime \prime }$. Since $\Sigma
^{\prime }$ and $\Sigma ^{\prime \prime }$ have a common origin, we see that $\mathbf{r}_{j}^{\prime }=\mathbf{r}_{j}^{\prime \prime }$. The axes of $\Sigma $ and $\Sigma ^{\prime }$ are parallel to each other at all times. Figure \[sistemasap\] illustrates these statements; it also shows that
$$\begin{aligned}
\mathbf{r}_{j}^{\prime\prime}&=&\mathbf{r}_{j}^{\prime}=\mathbf{r}_{j}-\mathbf{R}\left(t\right) , \label{rpp=rp} \\
\mathbf{v}_{j}^{\prime}&=&\mathbf{v}_{j}-\mathbf{V}\left(t\right),
\label{vpjvj}\end{aligned}$$
the relationship between velocities and accelerations in $\Sigma^{\prime}$ and $\Sigma^{\prime\prime}$ is well known from the literature
$$\begin{aligned}
\mathbf{v}_{j}^{\prime\prime} &=&\mathbf{v}_{j}^{\prime}-\mathbf{\Omega}\times \mathbf{r}_{j}^{\prime}, \label{vpj} \\
\mathbf{a}_{j}^{\prime\prime}&=&\mathbf{a}_{j}^{\prime}-2\mathbf{\Omega}\times\mathbf{v}_{j}^{\prime\prime}-\mathbf{\Omega} \times \left(\mathbf{\Omega} \times \mathbf{r}_{j}^{\prime}\right)-\dot{\mathbf{\Omega}}\times
\mathbf{r}_{j}^{\prime}, \label{apj}\end{aligned}$$
where we have included the term corresponding to the time variation of $\mathbf{\Omega}$. Combining Eqs. (\[vpjvj\]) and (\[vpj\]) we have
$$\mathbf{v}_{j}^{\prime \prime }=\mathbf{v}_{j}-\mathbf{\Omega }\times
\mathbf{r}_{j}^{\prime}-\mathbf{V}\left( t\right), \label{vppj}$$
differentiating Eq. (\[vpjvj\]) with respect to time we find
$$\mathbf{a}_{j}^{\prime }=\mathbf{a}_{j}-\mathbf{A}\left( t\right),
\label{apj2}$$
substituting Eq. (\[apj2\]) in Eq. (\[apj\]), we obtain
$$\mathbf{a}_{j}^{\prime \prime }=\mathbf{a}_{j}-2\mathbf{\Omega} \times
\mathbf{v}_{j}^{\prime \prime }-\mathbf{\Omega} \times \left(\mathbf{\Omega}
\times \mathbf{r}_{j}^{\prime }\right) -\dot{\mathbf{\Omega}}\times \mathbf{r}_{j}^{\prime }-\mathbf{A}\left(t\right), \label{appj3}$$
multiplying Eq. (\[appj3\]) by the mass $m_{j}$ we find
$$\begin{aligned}
m_{j}\frac{d\mathbf{v}_{j}^{\prime\prime}}{dt}&=&\mathbf{F}_{j}+\mathbf{F}_{j,fict}, \label{mjvppj} \\
\mathbf{F}_{j,fict}&=&\mathbf{F}_{j,cor}+\mathbf{F}_{j,cent}+\mathbf{F}_{j,azim}+\mathbf{F}_{j,tras}, \label{mjvppj1} \\
\mathbf{F}_{j,cor}&\equiv&-2m_{j}\mathbf{\Omega}\times\mathbf{v}_{j}^{\prime\prime}, \label{coriolis} \\
\mathbf{F}_{j,cent}&\equiv&-m_{j}\mathbf{\Omega}\times\left(\mathbf{\Omega}\times\mathbf{r}_{j}^{\prime}\right), \\
\mathbf{F}_{j,azim}&\equiv&-m_{j}\dot{\mathbf{\Omega}}\times\mathbf{r}_{j}^{\prime}, \\
\mathbf{F}_{j,tras}&\equiv&-m_{j}\mathbf{A}\left(t\right). \label{mjvppj2}\end{aligned}$$
In comparison with Eqs. (\[mjvppjsimple\]-\[mjvppj2simple\]), two additional terms appear: $\mathbf{F}_{j,azim}$ is the fictitious force coming from the angular acceleration of $\Sigma ^{\prime \prime }$ with respect to $\Sigma $, and $\mathbf{F}_{j,tras}$ is the term coming from the linear acceleration of the origin of $\Sigma ^{\prime \prime }$ with respect to $\Sigma $. Taking the dot product of Eq. (\[mjvppj\]) with $\mathbf{v}_{j}^{\prime \prime }dt$ on the left-hand side and with $d\mathbf{r}_{j}^{\prime \prime }$ on the right-hand side, and summing over all particles of the system, we obtain the covariance of the fundamental work-energy theorem, expressed by Eq. (\[WErotsimple\]), in the general case.
The Coriolis force given by Eq. (\[coriolis\]) does not do work with respect to $\Sigma ^{\prime \prime }$. The differentials of kinetic energy and work can be written in terms of quantities measured by $\Sigma $, so
$$\begin{aligned}
dK_{j}^{\prime\prime}=m_{j}\left\{\mathbf{v}_{j}-\mathbf{\Omega}\times\left[\mathbf{r}_{j}-\mathbf{R}\left(t\right)\right] -\mathbf{V}\left(t\right)\right\}\cdot \notag \\
\left\{d\mathbf{v}_{j}-\mathbf{\Omega}\times\left[d\mathbf{r}_{j}-\mathbf{V}\left(t\right)dt\right]-d\mathbf{\Omega} \times\left[\mathbf{r}_{j}-\mathbf{R}\left(t\right)\right]-\mathbf{A} \left(t\right)dt\right\},\end{aligned}$$
$$\begin{aligned}
dW_{j}^{\prime\prime}=\mathbf{F}_{j}^{\prime\prime}\cdot d\mathbf{r}_{j}^{\prime\prime}=(\mathbf{F}_{j}+\mathbf{F}_{j,fict})\cdot\mathbf{v}_{j}^{\prime\prime}dt, \\
=\{\mathbf{F}_{j}-m_{j}[\mathbf{\Omega}\times[\mathbf{\Omega}\times( \mathbf{r}_{j}-\mathbf{R}(t))]+\dot{\mathbf{\Omega}}\times(\mathbf{r}_{j}-\mathbf{R}(t)) \\
+\mathbf{A}(t)]\}\cdot \{d\mathbf{r}_{j}-[\mathbf{\Omega}\times(\mathbf{r}_{j}-\mathbf{R}(t))+\mathbf{V}(t)]dt\}.\end{aligned}$$
The covariance of the fundamental theorem of work and energy can be expressed as
$$\begin{aligned}
dK_{j}^{\prime\prime} &=&dK_{j}+dZ_{j}, \label{dKppdz} \\
dW_{j}^{\prime\prime} &=&dW_{j}+dZ_{j}, \label{dWppdz}\end{aligned}$$
$$\begin{aligned}
&&dZ_{j} \equiv -[\mathbf{\Omega }\times (\mathbf{r}_{j}-\mathbf{R}(t))+\mathbf{V}(t)]\cdot d\mathbf{P}_{j} \\
&&-m_{j}\left\{\mathbf{\Omega}\times\lbrack \mathbf{\Omega}\times (\mathbf{r}_{j}-\mathbf{R}(t))]+\dot{\mathbf{\Omega }}\times (\mathbf{r}_{j}-\mathbf{R}(t))+\mathbf{A}(t)\right\} \\
&&\cdot \left\{d\mathbf{r}_{j}-[\mathbf{\Omega }\times (\mathbf{r}_{j}-\mathbf{R}(t))+\mathbf{V}(t)]dt\right\} .\end{aligned}$$
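As in the pure-rotation case, the general identity $dK_{j}^{\prime \prime }=dK_{j}+dZ_{j}$ can be checked numerically in rate form, now including the $\dot{\mathbf{\Omega}}$ and $\mathbf{A}(t)$ terms. The sketch below (ours; random values stand in for the instantaneous state and frame kinematics) verifies it exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 0.7
Omega, Omega_dot, R_t, V_t, A_t = (rng.normal(size=3) for _ in range(5))
r, v, a = (rng.normal(size=3) for _ in range(3))

r_p = r - R_t                                  # r' = r - R(t)
w = np.cross(Omega, r_p) + V_t                 # Omega x r' + V(t)
v_pp = v - w                                   # Eq. (vppj), generalized
G = np.cross(Omega, np.cross(Omega, r_p)) + np.cross(Omega_dot, r_p) + A_t
a_pp = a - 2 * np.cross(Omega, v_pp) - G       # Eq. (appj3)

dK_pp = m * v_pp @ a_pp                        # dK''/dt (Coriolis term drops out)
dK = m * v @ a
dZ = -w @ (m * a) - m * G @ v_pp               # dZ_j of the text, per unit time

assert np.isclose(dK_pp, dK + dZ)
```

The Coriolis contribution drops out of $dK^{\prime \prime }/dt$ because $\mathbf{v}^{\prime \prime }\cdot (\mathbf{\Omega }\times \mathbf{v}^{\prime \prime })=0$, which is the algebraic statement that the Coriolis force does no work in $\Sigma ^{\prime \prime }$.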
Dynamics.\[ap:dynamics\]
========================
In the example presented in Sec. \[sec:example\], we can determine the condition on the angular velocity such that, if the block $m$ starts with null radial velocity ($v_{r,0}=0$), its radial velocity remains null at all times. From the point of view of $\Sigma $, we analyze the dynamics of the system composed of the two blocks. In polar coordinates, the equations of motion for the block of mass $m$ are
$$\begin{aligned}
-T=m(\ddot{r}-r\omega^{2}), \label{eqradialm} \\
N_{2}=2m\omega\dot{r}. \label{eqtangencialm}\end{aligned}$$
For the block of mass $M$ we have
$$\begin{aligned}
T-Mg=M\ddot{z}. \label{eqM}\end{aligned}$$
The motion of the system is constrained by
$$r-z=\text{constant} \label{ligadura}$$
We combine Eqs. (\[eqradialm\]) and (\[eqM\]) and employ the constraint (\[ligadura\]), which leads to
$$\ddot{r}-\frac{m\omega ^{2}}{M+m}r=-\frac{Mg}{M+m} \label{eqmov}$$
The solution of Eq. (\[eqmov\]) is the sum of the general solution of the homogeneous equation and a particular (constant) solution, hence
$$\begin{aligned}
r &=&C+\left( r_{0}-C\right) \cosh {(\kappa t)}+\frac{v_{r,0}}{\kappa }\sinh
{(\kappa t)}, \\
\kappa &=&\omega \sqrt{\frac{m}{m+M}}\ \ \ \ ,\ \ \ C=\frac{Mg}{m\omega ^{2}}.\end{aligned}$$
For $v_{r,0}=0$, we can see that if $\omega =\omega _{c}=\left( \frac{Mg}{mr_{0}}\right) ^{1/2}$, then the block remains at rest.
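As a cross-check (ours; all parameter values are arbitrary, with nonzero $v_{r,0}$), the closed form above can be substituted back into Eq. (\[eqmov\]) numerically:

```python
import numpy as np

m, M, g, omega, r0, vr0 = 0.5, 1.0, 9.8, 10.0, 0.4, 0.2
kappa = omega * np.sqrt(m / (m + M))
C = M * g / (m * omega**2)

t = np.linspace(0.0, 0.5, 7)
h = (r0 - C) * np.cosh(kappa * t) + (vr0 / kappa) * np.sinh(kappa * t)
r = C + h
rddot = kappa**2 * h                       # second time derivative of r(t)

# Residual of Eq. (eqmov): r'' - (m w^2 / (M+m)) r + M g / (M+m) must vanish.
residual = rddot - (m * omega**2 / (M + m)) * r + M * g / (M + m)
assert np.allclose(residual, 0.0)
```

The residual vanishes identically because $\kappa ^{2}=m\omega ^{2}/(M+m)$ and $\kappa ^{2}C=Mg/(M+m)$.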
[^1]: damanjarrnsg@unal.edu.co
[^2]: jherreraw@unal.edu.co
[^3]: radiazs@unal.edu.co
[^4]: The critical frequency is obtained from the dynamical point of view in appendix \[ap:dynamics\].
---
abstract: |
Calcium imaging has revolutionized systems neuroscience, providing the ability to image large neural populations with single-cell resolution. The resulting datasets are quite large (with scales of TB/hour in some cases), which has presented a barrier to routine open sharing of this data, slowing progress in reproducible research. State of the art methods for analyzing this data are based on non-negative matrix factorization (NMF); these approaches solve a non-convex optimization problem, and are highly effective when good initializations are available, but can break down e.g. in low-SNR settings where common initialization approaches fail.
Here we introduce an improved approach to compressing and denoising functional imaging data. The method is based on a spatially-localized penalized matrix decomposition (PMD) of the data to separate (low-dimensional) signal from (temporally-uncorrelated) noise. This approach can be applied in parallel on local spatial patches and is therefore highly scalable, does not impose non-negativity constraints or require stringent identifiability assumptions (leading to significantly more robust results compared to NMF), and estimates all parameters directly from the data, so no hand-tuning is required. We have applied the method to a wide range of functional imaging data (including one-photon, two-photon, three-photon, widefield, somatic, axonal, dendritic, calcium, and voltage imaging datasets): in all cases, we observe $\sim$2-4x increases in SNR and compression rates of 20-300x with minimal visible loss of signal, with no adjustment of hyperparameters; this in turn facilitates the process of demixing the observed activity into contributions from individual neurons. We focus on two challenging applications: dendritic calcium imaging data and voltage imaging data in the context of optogenetic stimulation. In both cases, we show that our new approach leads to faster and much more robust extraction of activity from the video data.
author:
- |
E. Kelly Buchanan[^1],$^,$ Ian Kinsella,$^,$ Ding Zhou,$^,$ Rong Zhu[^2], Pengcheng Zhou,\
Felipe Gerhard[^3], John Ferrante,\
Ying Ma[^4], Sharon H. Kim, Mohammed A Shaik,\
Yajie Liang[^5], Rongwen Lu,\
Jacob Reimer[^6], Paul G Fahey, Taliah N Muhammad,\
Graham Dempsey, Elizabeth Hillman, Na Ji, Andreas S Tolias, Liam Paninski
bibliography:
- 'axon\_pipeline.bib'
title: 'Penalized matrix decomposition for denoising, compression, and improved demixing of functional imaging data'
---
Introduction {#introduction .unnumbered}
============
Functional imaging is a critical tool in neuroscience. For example, calcium imaging methods are used routinely in hundreds of labs, generating large-scale video datasets whose characteristics (cell shapes, signal-to-noise levels, background activity, signal timescales, etc.) can vary widely depending on the imaging modality and the details of the brain region and cell types being imaged. To handle this data, scientists must solve two basic tasks: extracting signals from the raw video data with minimal noise, and storing (and sharing) the data. A number of papers have focused on the first task [@mukamel2009automated; @maruyama2014detecting; @pnevmatikakis2016simultaneous; @pachitariu2016suite2p; @friedrich2017multi; @inan2017robust; @Reynolds2017; @petersen2017scalpel; @zhou2018efficient; @Mishne2018]; however, somewhat surprisingly, very little work has focused on the second task.
For both of these tasks, it is critical to denoise and compress the data as much as possible. Boosting the signal-to-noise ratio (SNR) is obviously important for detecting weak signals, performing single-trial analyses (where noise cannot be averaged over multiple trials), and for real-time experiments (where we may need to make decisions based on limited data - i.e., averaging over time is not an option). The benefits of compression are perhaps less obvious but are just as numerous: compression would facilitate much more widespread, routine open data sharing, enhancing reproducible neuroscience research. Compression will also be critical for in vivo imaging experiments in untethered animals, where data needs to be transmitted wirelessly, making data bandwidth a critical constraint. Finally, many signal extraction methods based on matrix factorization can be sped up significantly if run on suitably compressed data.
Previous methods for denoising and compressing functional data have several drawbacks. Generic video compression approaches do not take advantage of the special structure of functional imaging data and produce visible artifacts at high compression rates; more importantly, these approaches do not denoise the data, since they focus on compressing the full data, including noise, whereas our goal here is to discard the noise. Conversely, generic image denoising approaches do not offer any compression (and also fail to take advantage of strong structured correlations in the video data). Constrained nonnegative matrix factorization (CNMF) [@pnevmatikakis2016simultaneous] approaches provide state of the art denoising and demixing of calcium imaging data, but these methods can leave significant visible signal behind in the residual (discarding potentially valuable signal) and are highly dependent on the initialization of the matrix factorization; thus it would be dangerous to keep only the matrix factorization output and discard the raw data. Principal components analysis (PCA) is often employed as a compression and denoising method [@mukamel2009automated; @pachitariu2016suite2p], but PCA is based on a rather unstructured signal model and therefore provides a suboptimal encoder of functional data (we will discuss this point in further depth below). In addition, the computation time of PCA scales quadratically with the number of pixels (assuming a long video dataset) and therefore naive applications of PCA are rather slow [@friedrich2017multi]. Finally, importantly, it is difficult to automatically choose the number of principal components that should be retained in a given video (and the “correct” number of components can vary widely across different datasets).
Here we introduce a new simple approach to denoising and compressing functional video data. We apply a variant of penalized matrix decomposition [@Witten2009pmd] that operates locally in space, and encourages smoothness in both the spatial and temporal dimensions. This method offers multiple advantages over previous approaches. It is based on a signal model that is well-matched to the structure of the data: cells are local in space, there aren’t too many of them compared to the number of pixels (leading to a low-rank signal model), and cellular activity is smoother than the dominant noise sources, which are spatially and temporally uncorrelated. The approach is scalable (scaling linearly in the number of frames and pixels), and has modest memory requirements (because all processing is only performed in local spatial patches). All parameters (including the local matrix rank and the degree of smoothness of the output) are chosen automatically. Empirically we find that the method is highly effective, leaving behind minimal visible structure in the residual, while achieving 20-300x compression rates and 2-4x improvements in SNR. We demonstrate the method’s effectiveness on a wide variety of functional imaging datasets (both calcium and voltage imaging; one-, two- and three-photon imaging; and data including somas and dendrites) and show that the method is also effective on wide-field imaging data, where single-cell resolution is not available. Finally, we develop a new constrained NMF approach based on the denoised and compressed representation of the data, and apply this new demixing method to two challenging applications: dendritic calcium imaging data and voltage imaging data in the context of optogenetic stimulation. In both cases, we show that our new approach leads to faster and much more robust extraction of activity from the video data.
Methods {#methods .unnumbered}
=======
We begin by defining notation. Our starting point is an imaging dataset that has been motion-corrected (i.e., we assume that there is no motion of visible cellular components from frame to frame of the movie) and then “unfolded” into a $d \times T$ matrix $\mathbf{Y}$, where $T$ is the number of frames in the movie and $d$ is the number of pixels per frame (or voxels per frame if we are performing imaging in three dimensions). The typical approach is to model the data $\mathbf{Y}$ as $\mathbf{Y} = \mathbf{AC} + \mathbf{B} + \mathbf{E}$, where the columns of $\mathbf{A} \in \mathbb{R}^{d \times K}$ model the locations of each source (with $K$ sources total), the rows of $\mathbf{C} \in \mathbb{R}^{K \times T}$ model the time-varying fluorescence of each source, $\mathbf{B} \in \mathbb{R}^{d \times T}$ is a “background” term to handle signals that cannot easily be split into single-neuronal components, and $\mathbf{E} \in \mathbb{R}^{d \times T}$ denotes temporally and spatially uncorrelated noise.
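As a toy illustration of this generative model, the following sketch builds a synthetic movie from the factors above; the sizes, the random footprints, and the noise scale are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, K = 400, 500, 3                      # pixels, frames, sources (toy sizes)

A = np.abs(rng.normal(size=(d, K)))        # nonnegative spatial footprints
C = np.abs(rng.normal(size=(K, T)))        # nonnegative temporal fluorescence
B = np.outer(np.full(d, 0.1), np.ones(T))  # simple static background term
E = rng.normal(scale=0.5, size=(d, T))     # spatially/temporally uncorrelated noise

Y = A @ C + B + E                          # observed movie, unfolded to d x T
```

In real data the columns of $\mathbf{A}$ are sparse and spatially localized; here they are dense only to keep the sketch short.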
It is useful to break the processing pipeline into three sub-problems:
1. **Denoising**: separation of neural signal $\mathbf{Y}^{*} = \mathbf{A}\mathbf{C} + \mathbf{B}$ from noise $\mathbf{E}$;
2. **Compression** of signal $\mathbf{Y}^{*}$;
3. **Demixing**: factorization of $\mathbf{Y}^{*}$ into its constituent components $\mathbf{A},\mathbf{C}$, and $\mathbf{B}$.
Most prior work has attempted to solve these sub-problems simultaneously, e.g., to recover $\mathbf{A}$ and $\mathbf{C}$ directly from the raw data $\mathbf{Y}$. As emphasized above, this direct approach involves a challenging non-convex optimization problem; the solution to this problem typically misses some structure in $\mathbf{Y}$, is highly sensitive to initialization and hyperparameter settings, and can be particularly unstable in low-SNR regimes. We have found empirically that a sequential approach is more robust and effective. First we compute the compressed and denoised estimate $\hat{\mathbf{Y}} = \mathbf{UV}$; here $\mathbf{U}$ and $\mathbf{V}$ are chosen so that $\hat{\mathbf{Y}}$ captures all of the signal in $\mathbf{Y}$ while retaining minimal noise (i.e., $\hat{\mathbf{Y}} \approx \mathbf{Y}^{*}$) and also $\mathbf{U}$ and $\mathbf{V}$ are highly-structured, compressible matrices, but we do not enforce any constraints between $(\mathbf{U}, \mathbf{V})$ and $(\mathbf{A}, \mathbf{C}, \mathbf{B})$. The computation of $\mathbf{U}$ and $\mathbf{V}$ essentially solves sub-problems 1 and 2 simultaneously. Second, we exploit $\mathbf{U}$, $\mathbf{V}$, and the resulting denoised $\hat{\mathbf{Y}}$ to facilitate the solution of problem 3. We discuss each of these steps in turn below.
Denoising & Compression {#denoising-compression .unnumbered}
-----------------------
To achieve good compression and denoising we need to take advantage of three key properties of functional imaging data:
1. Signal sources are (mostly) spatially local;
2. Signal is structured both temporally and spatially, whereas noise is temporally and spatially uncorrelated;
3. Signal is (mostly) low-rank.
Given these structural assumptions, it is natural to construct $\mathbf{U}$ and $\mathbf{V}$ via a local penalized matrix decomposition approach[^7]: we break the original data matrix $\mathbf{Y}$ into a collection of overlapping spatial patches, then decompose each of these matrix patches (in parallel) using a factorization method that enforces smoothness in the estimated spatial and temporal factors, then combine the resulting collection of spatial and temporal factors over all the patches into a final estimate of $\mathbf{U}$ and $\mathbf{V}$. (See [CaImAn](https://github.com/flatironinstitute/CaImAn) for a similar patch-wise approach to the demixing problem.)
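The patch-unfolding step (breaking $\mathbf{Y}$ into spatial patches, each unfolded into its own pixels-by-time matrix ready for independent factorization) can be sketched as follows; this hypothetical helper uses a fixed non-overlapping grid for simplicity, whereas the recombination scheme described later uses overlapping half-offset grids.

```python
import numpy as np

def split_into_patches(movie, patch=(20, 20)):
    """Split an (H, W, T) movie into a list of (patch_pixels x T) matrices.

    Each unfolded patch can then be factorized independently (and in
    parallel) by the local decomposition.
    """
    H, W, T = movie.shape
    ph, pw = patch
    patches = []
    for i in range(0, H, ph):
        for j in range(0, W, pw):
            block = movie[i:i + ph, j:j + pw, :]
            patches.append(block.reshape(-1, T))
    return patches

movie = np.zeros((40, 60, 100))
patches = split_into_patches(movie)   # 2 x 3 grid of 20 x 20 patches
```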
We have experimented with several approaches to penalized matrix decomposition (PMD), and found that an iterative rank-one deflation approach similar to the method described in [@Witten2009pmd] works well. We begin by standardizing the data within a patch: for each pixel, we subtract the mean and normalize by an estimate of the noise variance within each pixel; the noise variance is estimated using the frequency-domain method described in [@pnevmatikakis2016simultaneous], which exploits the fact that the signal and noise power are spectrally separated in movies with sufficiently high frame rates. After this normalization we can model the noise $\mathbf{E}$ as roughly spatially and temporally homogeneous. Denote this standardized data matrix within a patch as $\mathbf{Y_0}$, and the Frobenius norm as $\|\cdot\|_F$. Then at the $k^{th}$ iteration PMD extracts the best rank-one approximation $\mathbf{u}_k\mathbf{v}_k^T$ to the current residual $\mathbf{R}_k = \mathbf{Y_0} - \sum_{n=1}^{k-1} \mathbf{u}_n\mathbf{v}_n^T$, as determined by the objective $$(\mathbf{u}_k, \mathbf{v}_k) = \underset{\mathbf{u}, \mathbf{v}}{\arg\min} ~ || \mathbf{R}_k - \mathbf{u} \mathbf{v}^T ||_F \hspace{1em} \text{subject to} \hspace{1em}
P_{spatial}(\mathbf{u}) \leq c_{1}^k,\ P_{temporal}(\mathbf{v}) \leq c_{2}^{k}, \label{eqn:CPMD}$$ followed by a temporal debiasing update $\mathbf{v}_k = \mathbf{R}_k^T\mathbf{u}_k$. The objective (\[eqn:CPMD\]) can be minimized via alternating optimization over $\mathbf{u}_k$ and $\mathbf{v}_k$.
Note that if we drop the $P_{spatial}(\mathbf{u})$ and $P_{temporal}(\mathbf{v})$ constraints above then we can solve for $\mathbf{u}_k$ and $\mathbf{v}_k$ directly by computing the rank-1 singular value decomposition (SVD) of $\mathbf{R}_k$; in other words, by performing PCA within the patch. Since we have normalized the noise scale within each pixel, PCA should identify the signal subspace within the patch, given enough data (because the normalized projected data variance in any direction will be equal to one plus the signal variance in this direction; since PCA searches for signal directions that maximize variance, PCA will choose exactly the signal subspace in the limit of infinite data). Indeed, as discussed in the results section, simple patch-wise PCA (with an appropriate adaptive method for choosing the rank) often performs well, but incorporating spatial and temporal penalties in the optimization can push $\mathbf{u}_k$ and $\mathbf{v}_k$ closer to the signal subspace, resulting in improved compression and SNR.
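The frequency-domain noise estimate used for the pixelwise standardization can be approximated with a plain periodogram; this is a simplified numpy stand-in for the Welch-based estimator of [@pnevmatikakis2016simultaneous], and the 0.25 cycles/sample cutoff is an illustrative choice.

```python
import numpy as np

def noise_std(trace, freq_cut=0.25):
    """Estimate the noise std of a single-pixel trace from the
    high-frequency part of its periodogram; assumes signal power lives at
    low frequencies while the noise floor is spectrally flat (white).
    """
    T = len(trace)
    pxx = np.abs(np.fft.rfft(trace - trace.mean())) ** 2 / T  # periodogram
    freqs = np.fft.rfftfreq(T)        # cycles/sample, in [0, 0.5]
    return np.sqrt(pxx[freqs >= freq_cut].mean())
```

Because a slow signal concentrates its power below the cutoff, adding it to a trace barely changes the estimate, which is exactly the separation property the standardization relies on.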
How should we define the penalties $P_{spatial}(\mathbf{u})$ and $P_{temporal}(\mathbf{v})$, along with the corresponding constraints $c_{1}^k$ and $c_{2}^{k}$? The simplest option would be to use quadratic smoothing penalties; this would lead to a simple closed-form linear smoothing update for each $\mathbf{u_k}$ and $\mathbf{v_k}$. However, the signals of interest here have inhomogeneous smoothness levels — an apical dendrite might be spatially smooth in the apical direction but highly non-smooth in the orthogonal direction, and similarly a calcium signal might be very smooth except at the times at which a spike occurs. Therefore simple linear smoothing is typically highly suboptimal, often resulting in both undersmoothing and oversmoothing in different signal regions. We have found total variation (TV) [@Rudin:1992:NTV:142273.142312] and trend filtering (TF) [@Kim2009tf] penalties to be much more empirically effective. We let $$\begin{aligned}
P_{temporal}(\mathbf{v}) = \| \mathbf{D}^{(2)} \mathbf{v}\|_1 = \sum_{t=2}^{T-1} |\mathbf{v}_{t-1} - 2 \mathbf{v}_{t} + \mathbf{v}_{t+1}| \end{aligned}$$ and $$\begin{aligned}
P_{spatial}(\mathbf{u}) =
\|\mathbf{\nabla}_{\mathcal{G}}\mathbf{u}\|_1 = \sum_{(i,j) \in \mathcal{E}} | \mathbf{u}_i - \mathbf{u}_j |.\end{aligned}$$ Here $\mathbf{D}^{(2)}$ denotes the one-dimensional discrete second order difference operator and $\mathbf{\nabla}_{\mathcal{G}}$ the incidence matrix of the nearest-neighbor pixel-adjacency graph (pixels $(i,j)$ are in the edge set $\mathcal{E}$ if the pixels are nearest neighbors).
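Both penalties are simple to evaluate directly; a small numpy sketch (TF on a 1D temporal trace, TV on a 2D spatial image with 4-nearest-neighbor edges):

```python
import numpy as np

def tf_penalty(v):
    """Temporal trend-filtering penalty: l1 norm of second differences."""
    return np.abs(np.diff(v, n=2)).sum()

def tv_penalty(u_img):
    """Spatial total-variation penalty on an image patch: sum of absolute
    differences over nearest-neighbor (4-connected) pixel pairs."""
    dx = np.abs(np.diff(u_img, axis=0)).sum()
    dy = np.abs(np.diff(u_img, axis=1)).sum()
    return dx + dy

v = np.array([0., 1., 2., 3., 2., 1.])  # piecewise linear, one kink
# only the kink at the peak contributes to the TF penalty
```

Note how the TF penalty is zero on perfectly linear segments, which is what lets it adapt to inhomogeneous smoothness: kinks (e.g., spike onsets) are penalized, flat or linear trends are free.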
Similarly to [@pnevmatikakis2016simultaneous], we define the smoothing constraints $c_1^k$ and $c_2^k$ implicitly within the alternating updates by the simple reformulation $$\mathbf{u}_k = \underset{\mathbf{u}}{\arg\min} \| \mathbf{R}_k \mathbf{v}_k - \mathbf{u}\|_2^2\ s.t.\ \| \mathbf{\nabla}_{\mathcal{G}} \mathbf{u}\|_1 \leq c_1^k
\iff
\mathbf{u}_k = \underset{\mathbf{u}}{\arg\min} \| \mathbf{\nabla}_{\mathcal{G}} \mathbf{u}\|_1\ s.t.\ \| \mathbf{R}_k \mathbf{v}_k - \mathbf{u}\|_2^2 \leq \hat{\sigma}^2_{\tilde{\mathbf{u}}} d \label{eqn:spatial_update}$$ and $$\mathbf{v}_k = \underset{\mathbf{v}}{\arg \min} \| \mathbf{R}_k ^T \mathbf{u}_k - \mathbf{v}\|_2^2 \ s.t.\ \| \mathbf{D}^{(2)} \mathbf{v}\|_1 \leq c_2^k
\iff
\mathbf{v}_k = \underset{\mathbf{v}}{\arg\min} \| \mathbf{D}^{(2)} \mathbf{v}\|_1 \ s.t.\ \| \mathbf{R}_k ^T \mathbf{u}_k - \mathbf{v}\|_2^2 \leq \hat{\sigma}^2_{\tilde{\mathbf{v}}} T \label{eqn:temporal_update}$$ where $\hat{\sigma}^2_{\tilde{\mathbf{u}}}$ (resp. $\hat{\sigma}^2_{\tilde{\mathbf{v}}}$) estimates the noise level of the unregularized update $\tilde{\mathbf{u}}_k = \mathbf{R}_k \mathbf{v}_k$ (resp. $\tilde{\mathbf{v}}_k = \mathbf{R}_k^T \mathbf{u}_k$), and we are using the fact that if the residual $\mathbf{R}_k \mathbf{v}_k - \mathbf{u}$ contains just noise then its squared norm should be close to $\hat{\sigma}^2_{\tilde{\mathbf{u}}} d$, by the law of large numbers (and similarly for equation \[eqn:temporal\_update\]). See Algorithm \[alg:ROD\] for a summary.
To solve the constrained problems on the right-hand side we use the line search approach described in [@Langer2017cps]. We solve the primal form of the TV optimization problem (\[eqn:spatial\_update\]) using the proxTV package [@barberoTV14], and of the TF optimization problem (\[eqn:temporal\_update\]) using the Primal-Dual Active Set method in [@Han2016pdas]. Both of these methods can exploit warm starts, leading to major speedups after a good initial estimate is found. Empirically the TF optimization scales linearly with the movie length $T$; since the scale of the TV problem is bounded (because we work in local spatial patches) we have not explored the scaling of the TV problem in depth.
Figure \[fig:TrendFiltering\] illustrates the effect of trend filtering on example $\mathbf{v}$ components. One important difference compared to previous denoising approaches [@haeffele2014structured; @pnevmatikakis2016simultaneous] is that the TF model is more flexible than the sparse autoregressive model that is typically used to denoise calcium imaging data: the TF model does not require the estimation of any sparsity penalties or autoregressive coefficients, and can handle a mixture of positive and negative fluctuations, while the sparse nonnegative autoregressive model cannot (by construction). This is important in this context since each component in $\mathbf{V}$ can include multiple cellular components (potentially with different timescales), mixed with both negative and positive weights.
![Illustration of trend filtering. Each row shows a component $\mathbf{v}$ extracted from the voltage imaging dataset (see Results section for details). Red indicates simple projected signal $\tilde{\mathbf{v}} = \mathbf{R}^T \mathbf{u}$; blue indicates $\mathbf{v}$ after trend filtering. Errorbars on left indicate $2 \times$ estimated noise scale; right panels show zoomed region indicated by dashed lines in left panel.[]{data-label="fig:TrendFiltering"}](./methods/Effect_Of_TF.png){width="17cm"}
To complete the description of the algorithm on a single patch we need an initialization and a stopping criterion to adaptively choose the rank of $\mathbf{U}$ and $\mathbf{V}$. For the latter, the basic idea is that we want to stop adding components $k$ as soon as the residual looks like uncorrelated noise. To make this precise, we define a pair of spatial and temporal “roughness” test statistics $$\begin{aligned}
&T_{temporal}(\mathbf{v}) = \|\mathbf{D}^{(2)} \mathbf{v}\|_1 / \| \mathbf{v} \|_1
&T_{spatial}(\mathbf{u}) = \|\mathbf{\nabla}_{\mathcal{G}}\mathbf{u}\|_1 / \| \mathbf{u} \|_1\end{aligned}$$ and compute these statistics on each extracted $\mathbf{u}_k$ and $\mathbf{v}_k$. We accept or reject each component according to a one-sided hypothesis test under the null hypothesis that $\mathbf{R}_k$ consists of uncorrelated Gaussian noise of variance one. (We compute the critical region for this test numerically.) In the compression stage we are aiming to be rather conservative (we are willing to accept a bit of extra noise or a slightly higher-rank $\mathbf{U}$ and $\mathbf{V}$ in order to ensure that we are capturing the large majority of the signal), so we terminate the outer loop (i.e., stop adding more components $k$) after we reject a couple components $k$ in a row. See Algorithm \[alg-full-PMD\] for a summary.
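A minimal sketch of this accept/reject test for the temporal statistic, with the critical region computed by simulating the Gaussian null (the spatial statistic is handled analogously); `n_null` and `alpha` are illustrative choices:

```python
import numpy as np

def temporal_roughness(v):
    """Temporal roughness statistic: TF penalty normalized by the l1 norm."""
    return np.abs(np.diff(v, n=2)).sum() / np.abs(v).sum()

def accept_component(v, n_null=1000, alpha=0.05, seed=0):
    """Accept v as signal if it is significantly smoother than i.i.d.
    Gaussian noise; the critical value (the lower alpha-quantile of the
    null roughness distribution) is computed numerically by simulation.
    """
    rng = np.random.default_rng(seed)
    null = [temporal_roughness(rng.standard_normal(len(v)))
            for _ in range(n_null)]
    return temporal_roughness(v) < np.quantile(null, alpha)
```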
To initialize, we have found that setting $\mathbf{u}_0 \propto \mathbf{1}$ works well. To speed up early iterations, it is natural to iterate the projections while skipping the denoising steps; this corresponds to initializing with an approximate rank-1 SVD as computed by power iterations. Initializing in this manner can reduce the total number of iterations needed for $\mathbf{u}_k, \mathbf{v}_k$ to converge. Matrix-vector multiplications are a rate-limiting step here; thus, these initial iterations can be sped up using spatial and temporal decimation on $\mathbf{R}_k$. Empirically, decimation has the added benefit of boosting signal (by averaging out noise in neighboring timepoints and pixels) and can be useful for extracting weak components in low-SNR regimes; see [@friedrich2017multi] for a related discussion.
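The power-iteration initialization (without the decimation speedup) can be sketched as:

```python
import numpy as np

def rank1_power_init(R, n_iter=5):
    """Approximate leading singular pair of R via power iterations,
    starting from the flat vector u proportional to 1; the denoising
    steps are skipped during these cheap early iterations.
    """
    u = np.ones(R.shape[0]) / np.sqrt(R.shape[0])
    for _ in range(n_iter):
        v = R.T @ u
        v /= np.linalg.norm(v)
        u = R @ v
        u /= np.linalg.norm(u)
    return u, R.T @ u   # temporal factor left unnormalized (debiasing step)
```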
The method described so far handles a single spatial patch of data. We can process patches in parallel; a multi-core implementation of this method (assigning different patches to different cores) achieves nearly linear speedups. We have found that for some datasets edge artifacts can appear near patch boundaries if the patches do not overlap spatially. These boundary artifacts can be eliminated by performing a $4 \times$ over-complete block-wise decomposition of $\mathbf{Y}$ using half-offset grids for the partitions (so that each pixel $x$ lies within the interior of at least one patch). Then we combine the overlapping patches together via linear interpolation (see [@pnevmatikakis2017normcorre] for a similar approach): set $$\hat{\mathbf{Y}}(x,t) = \frac{\sum_{p} \mathbf{a}_p(x) \hat{\mathbf{Y}}_p(x,t)} {\sum_p \mathbf{a}_p(x)},$$ where $p$ indexes the patches (so $\hat{\mathbf{Y}}_p$ denotes the denoiser output in the $p$-th patch) and $0 \leq \mathbf{a}_p(x) \leq 1$ is a “pyramid” function composed of piecewise linear functions that start at $0$ at the patch boundaries and increase linearly to $1$ at the center of the patch.
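A 1D sketch of the pyramid-weighted recombination; real data uses 2D pyramid functions over the $4\times$ over-complete half-offset grids described above:

```python
import numpy as np

def pyramid_weight(n):
    """Piecewise-linear weight: 0 at the patch borders, 1 at the center."""
    c = (n - 1) / 2.0
    return 1.0 - np.abs(np.arange(n) - c) / c

def blend(patch_outputs, starts, length):
    """Combine overlapping 1D patch outputs via the weighted average
    in the text: sum of (weight x output) over the sum of weights."""
    num = np.zeros(length)
    den = np.zeros(length)
    for y, s in zip(patch_outputs, starts):
        w = pyramid_weight(len(y))
        num[s:s + len(y)] += w * y
        den[s:s + len(y)] += w
    den[den == 0] = 1.0   # guard global borders, where all weights vanish
    return num / den
```

Because the weights sum in the denominator, a pixel covered by several patches gets a convex combination of the patch outputs, so agreeing patches pass through unchanged.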
The above is equivalent to starting with a collection of overlapping sparse local factorizations $\mathbf{U}_p \mathbf{V}_p$, forming element-wise products between the individual spatial components $\mathbf{U}_{ip}$ and the pyramid functions $\mathbf{a}_p$, and then forming the union of the result to obtain a new factorization $\mathbf{UV}$. Typically this will result in some redundancy due to the overlapping spatial components; we remove this redundancy in a final backwards model selection step that tests whether each temporal component can be explained as a weighted sum of its neighbors. More precisely, we sort the components in ascending order according to the $L_2$ norms of $\mathbf{U}_{ip} \cdot \mathbf{a}_p$. For each $i$ in this order we then regress $\mathbf{V}_i$ onto the collection of temporal components $\mathbf{V}_j$ whose corresponding spatial components $\mathbf{U}_j$ overlap with $\mathbf{U}_i$, i.e., approximate $ \hat{\mathbf{V}_i} = \sum_j \beta_j \mathbf{V}_j$. We then test the signal strength of the residual $\mathbf{V}_i - \hat{\mathbf{V}_i}$ (using the temporal test statistic defined previously); the component is rejected if the residual is indistinguishable from noise according to this test statistic. If component $i$ is rejected then we distribute its energy to the remaining spatial components according to the regression weights: $\mathbf{U}_{j} = \mathbf{U}_{j} + \beta_{j} \mathbf{U}_{i}$.
We conclude with a few final implementation notes. First, the results do not depend strongly on the precise patch size, as long as the patch size is comparable to the spatial correlation scale of the data: if the patches are chosen to be much smaller than this then the $\mathbf{V}$ components in neighboring patches are highly correlated, leading to excessive redundancy and suboptimal compression. (Conversely, if the patch size is too big then the sparsity of $\mathbf{U}$ is reduced, and we lose the benefits of patch-wise processing.)
Second, in some datasets (e.g., widefield imaging, or microendoscopic imaging data), large background signals are present across large portions of the field of view. These background signals can be highly correlated across multiple spatial patches, leading to a suboptimal compression of the data if we use the simple independent-patch approach detailed above. Thus in some cases it is preferable to run a couple iterations of PMD(TV, TF) on the full $\mathbf{Y}$ and then subtract the resulting components away before moving on to the independent block processing scheme. We have found that this effectively subtracts away dominant background signals; these can then be encoded as a small number of dense columns in the matrix $\mathbf{U}$, to be followed by a larger number of sparse columns (corresponding to the small patches), resulting in an overall improvement in the compression rate. See the Results section below for an example.
The patch-wise PMD(TV,TF) approach results in an algorithm that scales linearly in three critical parameters: $T$ (due to the sparse nature of the second-difference operator in the TF step), $d$ (due to the patch-wise approach), and the rank of $\mathbf{U}$ and $\mathbf{V}$. We obtain further speedups by exploiting warm starts and parallel processing over patches. Additional speedups can be obtained for very long datasets by computing $\mathbf{U}$ on a subset of the data and then updating $\mathbf{V}$ on the remainder of the movie; the latter step does not require any PMD iterations (since the spatial signal subspace has already been identified) and is therefore very fast, just requiring a single temporal update call per element of $\mathbf{V}$.
Demixing {#demixing .unnumbered}
--------
The methods described above provide a compressed and denoised representation of the original data $\mathbf{Y}$: the output matrices $\mathbf{U}$ and $\mathbf{V}$ are low-rank compared to $\mathbf{Y}$, and $\mathbf{U}$ is additionally highly sparse (since $\mathbf{U}$ is formed by appending spatial components $\mathbf{u}$ from multiple local spatial patches, and each $\mathbf{u}_k$ is zero outside of its corresponding patch). How can we exploit this representation to improve the demixing step?
It is useful to first take a step back to consider the strengths and weaknesses of current state of the art demixing methods, most of which are based on NMF. The NMF model is very natural in calcium imaging applications, since each neuron has a shape that is fixed over the timescale of a typical imaging experiment (and these shapes can be represented as non-negative images, i.e., an element of the $\mathbf{A}$ matrix), and a corresponding time-varying calcium concentration that can be represented as a non-negative vector (an element of $\mathbf{C}$): to form a movie we simply take a product of each of these terms and add them together with noise and background, i.e., form $\mathbf{Y}= \mathbf{AC} + \mathbf{B} + \mathbf{E}$.
However, current NMF-based approaches leave room for improvement in several key directions. First, since NMF is a non-convex problem, good initializations are critical to obtain good results via the standard alternating optimization approaches (similar points are made in [@petersen2017scalpel]). Good initialization approaches have been developed for somatic or nuclear calcium imaging, where simple Gaussian shape models are useful crude approximations to the elements of $\mathbf{A}$ [@pnevmatikakis2016simultaneous], but these approaches do not apply to dendritic or axonal imaging. Second, and relatedly, it can be hard to separate weak components from noise using current NMF-based approaches. Finally, voltage imaging data does not neatly fit in the NMF framework, since voltage traces typically display both positive and negative fluctuations around the baseline resting potential.
To improve the robustness of NMF approaches for demixing functional data, we make use of the growing literature on “guaranteed NMF” approaches — methods for computing a non-negative matrix factorization that are guaranteed to output the “correct” answer under suitable conditions and assumptions [@donoho2004does; @recht2012factoring; @arora2012computing; @li2016recovery]. In practice, these methods work well on clean data of sufficiently small dimensionality, but are not robust to noise and scale poorly to high-dimensional data. We can solve both of these issues by “superpixelizing" the denoised version of $\mathbf{Y}$; the resulting NMF initialization method improves significantly on state of the art methods for processing dendritic and axonal data. We also take advantage of the sparse, low-rank structure of $\mathbf{U}$ and $\mathbf{V}$ to speed up the NMF iterations.
### Initialization via pure superpixels {#initialization-via-pure-superpixels .unnumbered}
The first step of the initialization procedure is to identify groups of highly correlated spatially connected pixels, called “superpixels.” The idea is that a pixel within a neuron should be highly correlated with its neighbors, while a pixel containing mostly noise should have a much lower neighbor correlation. These neighbor correlations, in turn, can be estimated much more accurately from the denoised compared to the raw data. The superpixelization procedure results in a set of non-overlapping groups of pixels which are likely to be contained in good neural components. Then we want to extract “pure” superpixels, i.e., the subset of superpixels dominated by signal from just one neural component. We will use the temporal signals extracted from these pure superpixels to seed $\mathbf{C}$ in the NMF decomposition.
To identify superpixels, we begin with the denoised data $\hat{\mathbf{Y}} = \mathbf{UV}$. Since the compression process discussed in the previous section is rather conservative (aiming to preserve the full signal, at the expense of retaining a modest amount of noise), there is room to apply a more aggressive lossy denoiser in the initialization stage to further reduce any remaining noise in $\hat{\mathbf{Y}}$. We soft-threshold signals in each pixel that are not sufficiently large — less than the median plus $\delta \times$ the median absolute deviation (MAD) within each pixel, with $\delta \approx 1$ or $2$. (This thresholding serves to extract mostly spiking activity from functional imaging data.) We identify two neighboring pixels to be from the same superpixel if their resulting denoised, soft-thresholded temporal signals have a correlation larger than a threshold $\epsilon$, with $\epsilon \approx 0.9$. Superpixels that contain fewer than $\tau$ pixels are discarded to further reduce noise and the total number of superpixels. We then apply rank 1 NMF on the signals from each superpixel to extract their (thresholded) temporal activities.
To extract pure superpixels, we apply the Successive Projection Algorithm (SPA) [@gillis2014fast] to the temporal activities of superpixels. This algorithm removes “mixed” superpixels whose temporal activity can be modeled as a nonnegative linear combination of activity in other superpixels (up to some R-squared level larger than $ 1-\kappa$, where we use $\kappa \approx 0.2$) and outputs the remaining “pure" superpixels. See Algorithm \[alg1\] for pseudocode.
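A bare-bones SPA sketch; note that this version selects a fixed number `r` of pure columns, whereas the algorithm of [@gillis2014fast] used in the text stops adaptively via the R-squared threshold $\kappa$.

```python
import numpy as np

def spa(M, r):
    """Successive Projection Algorithm: greedily select r columns of M
    that best explain the remaining columns (the 'pure' columns under a
    separability assumption). Returns the selected column indices.
    """
    R = M.astype(float).copy()
    chosen = []
    for _ in range(r):
        j = int(np.argmax((R * R).sum(axis=0)))   # largest residual norm
        chosen.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)                   # project that direction out
    return chosen
```

On a toy matrix whose third column is a mixture of the first two, SPA picks the two pure columns and never the mixture, which is exactly the behavior used to discard mixed superpixels.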
Note that running SPA on superpixels rather than raw pixels improves performance significantly here, since averaging signals within superpixels boosts SNR (making it easier to separate signal from noise and isolate pure from mixed pixels) and also greatly reduces the dimensionality of the non-negative regression problem SPA has to solve at each iteration. (To keep the problem size small we also run SPA just on small local spatial patches, as in the previous section.) Finally, while we have obtained good results with SPA, other approaches are available [@gillis2018fast] and could be worth further exploration in the future. See Figure \[fig:vi\_superpixels\] for a visual summary of the full procedure.
### Local NMF {#local-nmf .unnumbered}
Next we run NMF, using the temporal signals extracted from the “pure” superpixels to initialize $\mathbf{C}$. Given the initial $\mathbf{C}$, the typical next step is to regress onto the data to initialize $\mathbf{A}$. (Note that pure superpixels typically capture just a subset of pixels within the corresponding neuron, so it is not efficient to initialize $\mathbf{A}$ with the pure superpixels.) However, given the large number of pixels in a typical functional imaging video, direct regression of $\mathbf{C}$ onto $\mathbf{Y}$ is slow and overfits, providing poor estimates of $\mathbf{A}$.
This issue is well-understood [@pnevmatikakis2016simultaneous], and several potential solutions have been proposed. For somatic imaging it makes sense to restrict the support of $\mathbf{A}$ to remain close to their initial values (we could use a dilation of the superpixel support for this). But for data with large dendritic or axonal components this approach would cut off large fractions of these components. Sparse regression updates are an option here, but these do not enforce spatial structure in the resulting $\mathbf{A}$ directly; this often results in “speckle" noise in the estimated spatial components (c.f. Figure \[realcompare\] below).
We have found the following approach to be more effective. We initialize the support set $\Omega_k$ as the support of the $k$-th “pure” superpixel. Given $\mathbf{C}$, we compute the correlation image for each component $k$ as the correlation between the denoised data $\hat{\mathbf{Y}}$ and the $k$-th temporal component, $\mathbf{C}_k$. We truncate this correlation image below a certain threshold $\epsilon_1$ to zero, then update $\Omega_k$ as the connected component of the truncated correlation image which overlaps spatially with the previous $\Omega_k$. We use the modified fastHALS algorithm in [@friedrich2017multi] to update $\mathbf{A}$, $\mathbf{C}$, and $\mathbf{B}$ to locally optimize the objective $$\label{objnmf}
\min_{\mathbf{A},\mathbf{C},{\textbf{\textit{b}}}} \|\hat{\mathbf{Y}}-\mathbf{AC} -\mathbf{B}\|_{F}^2,\ \mathrm{s.t.}\ \mathbf{A}_k^x = 0 ~ \forall x \not \in \Omega_k , \mathbf{A}\geqslant 0, \mathbf{C}\geqslant 0, \mathbf{B}={\textbf{\textit{b}}}\mathbf{1}^T, {\textbf{\textit{b}}}\geqslant 0.$$ Here we have modeled the background $\mathbf{B}$ as a simple temporally-constant vector; we discuss generalizations to time-varying backgrounds below. Also note that we are approximating $\hat{\mathbf{Y}}$ directly here, not the thresholded version we used to extract the superpixels above.
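A bare-bones sketch of one pass of support-constrained HALS updates for $\hat{\mathbf{Y}} \approx \mathbf{AC}$; the background term and the modified-fastHALS refinements of [@friedrich2017multi] are omitted for brevity.

```python
import numpy as np

def hals_step(Y, A, C, support):
    """One pass of support-constrained HALS updates for Y ~ A C (nonneg).
    support: list of boolean masks over pixels, one per component."""
    K = A.shape[1]
    YC, CC = Y @ C.T, C @ C.T
    for k in range(K):                       # spatial updates
        ak = A[:, k] + (YC[:, k] - A @ CC[:, k]) / max(CC[k, k], 1e-12)
        ak = np.clip(ak, 0.0, None)
        ak[~support[k]] = 0.0                # enforce A_k^x = 0 outside Omega_k
        A[:, k] = ak
    AY, AA = A.T @ Y, A.T @ A
    for k in range(K):                       # temporal updates
        ck = C[k] + (AY[k] - AA[k] @ C) / max(AA[k, k], 1e-12)
        C[k] = np.clip(ck, 0.0, None)
    return A, C
```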
Finally, we incorporate a merge step: we truncate the correlation image below a certain threshold $\epsilon_2$ to zero, and automatically merge neurons whose truncated correlation images strongly overlap. The full algorithm is shown in Algorithm \[alg2\].
### Further implementation details {#further-implementation-details .unnumbered}
*Multi pass strategy:* As in [@zhou2018efficient], we find it effective to take a couple passes over the data; particularly in datasets with high neuron density, the first NMF pass might miss some dim neurons. We decrease the MAD threshold $\delta$ and re-run Algorithm \[alg1\] on the residual to find additional components, and then run a final merge and NMF update to complete the pipeline.
*Improvements from denoising and compression:* Compressed data leads to faster NMF updates, since we can represent $\hat{\mathbf{Y}}$ as $\mathbf{UV}$; in fastHALS, we can regress each ${\textbf{\textit{a}}}_k$ on $\mathbf{U}$ or ${\textbf{\textit{c}}}_{k}$ on $\mathbf{V}$ first instead of directly onto $\mathbf{Y}$. Similarly, when calculating the correlation image, we can compute the correlation between the low rank $\mathbf{V}$ and ${\textbf{\textit{c}}}_k$ first. As emphasized above, denoising also improves the estimation of the correlation images, which in turn improves the estimation of the support sets $\Omega_k$.
*Time-varying background:* It is straightforward to generalize the objective \[objnmf\] to include a time-varying background, using either a low-rank model (as in [@pnevmatikakis2016simultaneous]) or a ring-structured model (as in [@zhou2018efficient]). For the low-rank background model, we have found that performing an SVD on the data excluding the support of the superpixels provides an efficient initialization for the background temporal components.
*Incorporating temporal penalties*: Note that we are only imposing nonnegativity in $\mathbf{C}$ here; after denoising to obtain $\hat{\mathbf{Y}}$, we have found that this simple nonnegative constraint is sufficient for the datasets examined here. However, it is certainly possible to incorporate temporal penalties or constraints on $\mathbf{C}$ (e.g., a TF penalty or a non-negative auto-regressive penalty as in [@pnevmatikakis2016simultaneous]), either within each iteration or as a final denoising step.
*Post-processing*: We find that sorting the extracted components by their “brightness,” computed as $\max {\textbf{\textit{a}}}_k\cdot\max{\textbf{\textit{c}}}_k$, serves to separate dim background components from bright single-neuronal components. We also find it useful to drop components whose temporal trace has skewness less than 0.5; traces with high skewness correspond to components with significant spiking activity, while low-skewness traces correspond to noise.
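The brightness sort and skewness filter can be sketched as below (function name hypothetical; `scipy.stats.skew` computes the sample skewness):

```python
import numpy as np
from scipy.stats import skew

def postprocess(A, C, skew_min=0.5):
    """Sort components by brightness (max a_k * max c_k, descending) and
    drop those whose temporal trace has skewness below skew_min."""
    bright = A.max(axis=0) * C.max(axis=1)
    order = np.argsort(bright)[::-1]
    keep = [k for k in order if skew(C[k]) >= skew_min]
    return A[:, keep], C[keep]
```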
**Algorithm: superpixel-based demixing pipeline.**

**Input:** motion-corrected data $\mathbf{Y}\in \mathbb{R}^{d\times T}$; MAD threshold $\delta$; minimum superpixel size $\tau$; correlation threshold for superpixels $\epsilon$; $R^2$ threshold in SPA $\kappa$.

1.  $\sigma({\textbf{\textit{x}}})\leftarrow$ estimated noise for each pixel ${\textbf{\textit{x}}}$ of $\mathbf{Y}$; $\mu({\textbf{\textit{x}}})\leftarrow$ mean for each pixel of $\mathbf{Y}$; $\mathbf{Y} \leftarrow \left(\mathbf{Y}-\mu({\textbf{\textit{x}}})\right) / \sigma({\textbf{\textit{x}}})$;
2.  $(\hat{\mathbf{Y}},\mathbf{U},\mathbf{V}) \leftarrow$ PMD($\mathbf{Y}$); $n \leftarrow 0$; $\mathbf{A} \leftarrow [\ ]$; $\mathbf{C}\leftarrow [\ ]$; ${\textbf{\textit{b}}}\leftarrow\mathrm{median}$ for each pixel of $\hat{\mathbf{Y}}$;
3.  $\mathbf{R} \leftarrow \hat{\mathbf{Y}} -\mathbf{AC} - {\textbf{\textit{b}}}$; $\sigma_{med}({\textbf{\textit{x}}})\leftarrow$ median absolute deviation for each pixel of $\mathbf{R}$; $\mu_{med}({\textbf{\textit{x}}})\leftarrow$ median for each pixel of $\mathbf{R}$;
4.  $\tilde{\mathbf{Y}}\leftarrow \max\left(0, \mathbf{R} - \mu_{med}({\textbf{\textit{x}}}) - \delta\cdot\sigma_{med}({\textbf{\textit{x}}})\right)$;
5.  $\mathrm{corr}({\textbf{\textit{x}}},{\textbf{\textit{x}}}^*)\leftarrow \mathrm{corr}\left(\tilde{\mathbf{Y}}({\textbf{\textit{x}}},t),\tilde{\mathbf{Y}}({\textbf{\textit{x}}}^*,t)\right)$ for all neighbouring pixel pairs $({\textbf{\textit{x}}},{\textbf{\textit{x}}}^*)$;
6.  Extract superpixels: connect ${\textbf{\textit{x}}}$ and ${\textbf{\textit{x}}}^*$ if $\mathrm{corr}({\textbf{\textit{x}}},{\textbf{\textit{x}}}^*)\geqslant \epsilon$ to construct connected components, and discard components smaller than $\tau$, yielding superpixels $\Omega_k$, $k=1,\cdots,K$;
7.  $({\textbf{\textit{a}}}_{k}, {\textbf{\textit{c}}}_{k})\leftarrow$ rank-1 NMF of $\tilde{\mathbf{Y}}$ on support $\Omega_k$, $k= 1,\cdots, K$;
8.  $[i_1,i_2,\cdots,i_S]\leftarrow\mathrm{SPA}([{\textbf{\textit{c}}}_{1},{\textbf{\textit{c}}}_{2},\cdots,{\textbf{\textit{c}}}_{K}], \kappa)$, where $i_1,i_2,\cdots,i_S$ are the indices of the pure superpixels;
9.  $\mathbf{A}_0\leftarrow[\mathbf{A}, {\textbf{\textit{a}}}_{i_1},{\textbf{\textit{a}}}_{i_2},\cdots,{\textbf{\textit{a}}}_{i_S}]$; $\mathbf{C}_0\leftarrow[\mathbf{C}^T, {\textbf{\textit{c}}}_{i_1},{\textbf{\textit{c}}}_{i_2},\cdots,{\textbf{\textit{c}}}_{i_S}]^T$; ${\textbf{\textit{b}}}_0\leftarrow {\textbf{\textit{b}}}$;
10. $(\mathbf{A}, \mathbf{C}, {\textbf{\textit{b}}})\leftarrow\mathrm{LocalNMF}(\mathbf{U}, \mathbf{V}, \mathbf{A}_0, \mathbf{C}_0, {\textbf{\textit{b}}}_0)$; $\delta \leftarrow \delta-1$; $n \leftarrow n+1$; return to step 3 for further passes if desired;
11. $\eta(k)\leftarrow$ estimated noise for ${\textbf{\textit{c}}}_k$, using the average of the high-frequency portion of its PSD;
12. (Optional) Denoise temporal components, e.g. by the $\ell_1$ trend filter: ${\textbf{\textit{c}}}_k\leftarrow \arg\min\limits_{\tilde{{\textbf{\textit{c}}}}_k} \|D^{(2)}\tilde{{\textbf{\textit{c}}}}_k\|_1,\ \mathrm{s.t.}\ \|\tilde{{\textbf{\textit{c}}}}_k-{\textbf{\textit{c}}}_k\|_{F}\leqslant \eta(k)\sqrt{T}$, $k=1,\cdots,K$, where $D^{(2)}$ denotes the second-order difference operator.

**Output:** $\mathbf{A},\mathbf{C},{\textbf{\textit{b}}}$.
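The pure-superpixel selection step relies on SPA. A minimal sketch of the greedy select-and-project idea follows; the stopping rule here (a norm-ratio threshold) is a simplified stand-in for the $R^2$ criterion $\kappa$ used in the pipeline, and the toy data are ours:

```python
import numpy as np

# Successive projection algorithm (SPA), minimal form: repeatedly take
# the column of largest norm, then project that direction out of every
# column; stop when the largest remaining norm falls below a fraction
# kappa of the largest original norm.
def spa(C, kappa=0.2, max_pure=None):
    C = C.astype(float).copy()
    norm0 = np.linalg.norm(C, axis=0).max()
    selected = []
    for _ in range(max_pure or C.shape[1]):
        norms = np.linalg.norm(C, axis=0)
        j = int(np.argmax(norms))
        if norms[j] < kappa * norm0:
            break
        selected.append(j)
        u = C[:, j] / norms[j]
        C -= np.outer(u, u @ C)      # remove the selected direction
    return selected

# Two pure (nonnegative) sources plus a mixture: SPA should pick the
# two pure columns and skip the mixed one.
rng = np.random.default_rng(2)
s1, s2 = np.abs(rng.standard_normal((2, 300)))
C = np.stack([3.0 * s1, 3.0 * s2, s1 + s2], axis=1)
selected = spa(C, kappa=0.5, max_pure=2)
```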
**Algorithm: LocalNMF.**

**Input:** compressed factors $\mathbf{U} \in \mathbb{R}^{d\times r}, \mathbf{V} \in \mathbb{R}^{T\times r}$ ($r = \mathrm{rank}(\hat{\mathbf{Y}})$); initial constant background ${\textbf{\textit{b}}}_0$, spatial components $\mathbf{A}_0=[{\textbf{\textit{a}}}_{1,0},\cdots,{\textbf{\textit{a}}}_{K,0}]\in\mathbb{R}^{d\times K}$, and temporal components $\mathbf{C}_0=[{\textbf{\textit{c}}}_{1,0},\cdots,{\textbf{\textit{c}}}_{K,0}]^T \in\mathbb{R}^{K\times T}$; truncation threshold for support updates $\epsilon_1$; truncation threshold for merging $\epsilon_2$; overlap threshold for merging $\epsilon_3$.

1.  $\Omega_k \leftarrow \mathrm{supp}({\textbf{\textit{a}}}_{k,0})$, the spatial support of the $k$-th component, $k=1,\cdots,K$; $\hat{\mathbf{A}} \leftarrow \mathbf{A}_0$; $\hat{\mathbf{C}}\leftarrow \mathbf{C}_0$; $\hat{{\textbf{\textit{b}}}}\leftarrow{\textbf{\textit{b}}}_0$;
2.  $\nu({\textbf{\textit{x}}})\leftarrow$ standard deviation for each pixel of $\hat{\mathbf{Y}} = \mathbf{UV}$; $\bar{\mathbf{V}}\leftarrow$ mean for each column of $\mathbf{V}$;
3.  $\mathbf{P} \leftarrow \left[\mathbf{U},-{\textbf{\textit{b}}}\right]\left(\begin{bmatrix}\mathbf{V}\\ \mathbf{1}^T\end{bmatrix}\hat{\mathbf{C}}^T\right)$; $\mathbf{Q} \leftarrow \hat{\mathbf{C}}\hat{\mathbf{C}}^{T}$;
4.  Update spatial components: $\hat{{\textbf{\textit{a}}}}_{k}(\Omega_k) \leftarrow \max\left(0, \hat{{\textbf{\textit{a}}}}_{k}(\Omega_k) + \frac{\mathbf{P}(\Omega_k,k)-\hat{\mathbf{A}}(\Omega_k)\mathbf{Q}(:,k)}{\mathbf{Q}(k,k)}\right)$;
5.  Update constant background: $\hat{{\textbf{\textit{b}}}} \leftarrow \max\left(0, \frac{1}{T}(\mathbf{UV}-\hat{\mathbf{A}}\hat{\mathbf{C}})\mathbf{1}\right)$;
6.  $\mathbf{P} \leftarrow \left[\mathbf{V}^T,\mathbf{1}\right]\left(\left[\mathbf{U},-{\textbf{\textit{b}}}\right]^T\hat{\mathbf{A}}\right)$; $\mathbf{Q} \leftarrow \hat{\mathbf{A}}^{T}\hat{\mathbf{A}}$;
7.  Update temporal components: $\hat{{\textbf{\textit{c}}}}_{k} \leftarrow \max\left(0, \hat{{\textbf{\textit{c}}}}_{k} + \frac{\mathbf{P}(:,k)-\hat{\mathbf{C}}^T\mathbf{Q}(:,k)}{\mathbf{Q}(k,k)}\right)$;
8.  $\mathrm{corr}(k,{\textbf{\textit{x}}})\leftarrow \frac{1}{T\cdot\nu({\textbf{\textit{x}}})\cdot\mathrm{sd}({\textbf{\textit{c}}}_k)}\mathbf{U}({\textbf{\textit{x}}},:)\left((\mathbf{V} - \bar{\mathbf{V}})({\textbf{\textit{c}}}_k - \bar{{\textbf{\textit{c}}}}_k)\right)$;
9.  Update spatial supports: $\Omega_k \leftarrow$ the largest connected component of $\{{\textbf{\textit{x}}}\mid\mathrm{corr}(k,{\textbf{\textit{x}}})\geqslant\epsilon_1\}$ that spatially overlaps with $\{{\textbf{\textit{a}}}_k>0\}$; $\hat{{\textbf{\textit{a}}}}_k(\Omega_k^{c}) \leftarrow 0$;
10. $\rho(k,{\textbf{\textit{x}}})\leftarrow\left(\mathrm{corr}(k,{\textbf{\textit{x}}})\geqslant\epsilon_2\right)$; merge overlapping components $k_1,k_2$ if $\sum_{{\textbf{\textit{x}}}} \left(\rho(k_1,{\textbf{\textit{x}}}) \, \rho(k_2,{\textbf{\textit{x}}})\right) / \sum_{{\textbf{\textit{x}}}}\rho(k_i,{\textbf{\textit{x}}}) \geqslant \epsilon_3$, $i=1,2$;
11. $(\tilde{{\textbf{\textit{a}}}},\tilde{{\textbf{\textit{c}}}}) \leftarrow$ rank-1 NMF on $[\hat{{\textbf{\textit{a}}}}_{k_1},\cdots,\hat{{\textbf{\textit{a}}}}_{k_r}][\hat{{\textbf{\textit{c}}}}_{k_1},\cdots,\hat{{\textbf{\textit{c}}}}_{k_r}]$ for merged components $k_1,\cdots,k_r$; $\hat{\mathbf{A}}\leftarrow \left[\hat{\mathbf{A}}\backslash \{{\textbf{\textit{a}}}_{k_1},\cdots,{\textbf{\textit{a}}}_{k_r}\},\tilde{{\textbf{\textit{a}}}}\right]$; $\hat{\mathbf{C}}\leftarrow \left[\hat{\mathbf{C}}^T\backslash \{{\textbf{\textit{c}}}_{k_1},\cdots,{\textbf{\textit{c}}}_{k_r}\},\tilde{{\textbf{\textit{c}}}}\right]^T$; update the number of components $K$.

**Output:** $\hat{\mathbf{A}},\hat{\mathbf{C}},\hat{{\textbf{\textit{b}}}}$.
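The spatial update above is a standard fastHALS step. A minimal sketch on toy data, with the constant background omitted and a dense product standing in for the compressed movie, for brevity (in practice $\mathbf{P}$ would be computed through $\mathbf{U}$ and $\mathbf{V}$, as in the pseudocode):

```python
import numpy as np

# fastHALS spatial update: precompute P = Y @ C.T and Q = C @ C.T once
# per pass, then update each column a_k in closed form, clipping at 0.
rng = np.random.default_rng(3)
d, T, K = 200, 300, 3
A_true = np.abs(rng.standard_normal((d, K)))
C = np.abs(rng.standard_normal((K, T)))
Y = A_true @ C                       # noiseless toy movie

A = np.abs(rng.standard_normal((d, K)))     # warm start
P = Y @ C.T
Q = C @ C.T
for _ in range(100):                         # HALS sweeps
    for k in range(K):
        A[:, k] = np.maximum(0.0, A[:, k] + (P[:, k] - A @ Q[:, k]) / Q[k, k])
```

With $\mathbf{C}$ held at its true value and no noise, these sweeps drive the residual $\|\mathbf{Y}-\mathbf{AC}\|_F$ to zero.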
Results {#results .unnumbered}
=======
Denoising {#denoising .unnumbered}
---------
| Dataset      | Frames | FOV     | Patch | Method           | Compression ratio | Total runtime (s) | SNR metric |
|--------------|--------|---------|-------|------------------|-------------------|-------------------|------------|
| Endoscopic   | 6000   | 256x256 | 16x16 | Patch-wise PMD   | 23                | 220.4             | 2.3        |
|              |        |         | 16x16 | Patch-wise PCA\* | X                 | X                 | X          |
|              |        |         | NA    | Standard PCA     | 2                 | 595.5             | 1.3        |
| Dendritic    | 1000   | 192x192 | 16x16 | Patch-wise PMD   | 52                | 3.2               | 3.7        |
|              |        |         | 16x16 | Patch-wise PCA   | 32                | 1.2               | 2.5        |
|              |        |         | NA    | Standard PCA     | 2                 | 18.3              | 1.1        |
| Three-photon | 3650   | 160x240 | 20x20 | Patch-wise PMD   | 94                | 12.4              | 1.8        |
|              |        |         | 20x20 | Patch-wise PCA   | 44                | 3.5               | 1.4        |
|              |        |         | NA    | Standard PCA     | 2                 | 187.2             | 1.0        |
| Widefield    | 1872   | 512x512 | 32x32 | Patch-wise PMD   | 298               | 12.5              | 3.5        |
|              |        |         | 32x32 | Patch-wise PCA   | 265               | 10.1              | 3.4        |
|              |        |         | NA    | Standard PCA     | 10                | 80.1              | 1.6        |
| Voltage      | 6834   | 80x800  | 40x40 | Patch-wise PMD   | 180               | 30.5              | 2.8        |
|              |        |         | 40x40 | Patch-wise PCA   | 213               | 8.7               | 2.7        |
|              |        |         | NA    | Standard PCA     | 8                 | 185.1             | 1.0        |
: Summary of performance for PCA vs. PMD(TV,TF). SNR metric: average ratio of denoised vs raw SNR, with average restricted to top 10% of pixels with highest raw SNR (to avoid division by small numbers when calculating SNR ratios); an SNR metric of 1 indicates no improvement compared to raw data. Compression ratio defined in the main text. \* denotes that the patch-wise PCA method left a significant amount of visible signal in the residual for this dataset, and therefore we did not pursue further comparisons of timing or the other statistics shown here. To obtain optimistic results for the standard PCA baseline, runtimes are reported for a truncated SVD with prior knowledge of the number of components to select for each dataset (i.e., runtimes did not include any model selection steps for standard PCA). Results for patch-wise methods are reported for a single (non-overlapping) tiling of the FOV; note that total runtimes are reported (not runtimes per patch). All experiments were run using an Intel Core i7-6850K 6-core processor.[]{data-label="tab:pro_pro"}
![Illustration of the compression approach applied to microendoscopic imaging data. Top: individual frame extracted from the raw movie $\mathbf{Y}$ (left), denoised movie $\hat{\mathbf{Y}}$ (middle), and residual $\mathbf{Y} - \hat{\mathbf{Y}}$ (right). Bottom: example single-pixel traces from the movie (locations of pixels are circled in the top plots; first trace indicated by the black circle and second trace indicated by the gray circle). Note that the denoiser increases SNR significantly, and minimal signal is left behind in the residual. These results are best viewed in video form; see [microendoscopic imaging video](\VideoEndoscopeURL) for details.[]{data-label="fig:denoised_endoscope_1"}](./pmd_results/Endoscope_PMD.pdf){width="18cm" height="14cm"}
![Further analysis of microendoscopic imaging data. Top: per-pixel SNR estimated from the raw movie $\mathbf{Y}$ (left), denoised movie $\hat{\mathbf{Y}}$ (middle), and residual $\mathbf{Y} - \hat{\mathbf{Y}}$ (right). Red box indicates zoomed-in region shown in the previous figure. Bottom left panel: ratio of denoised vs. raw SNR; compression boosts SNR by roughly a factor of two here. Bottom middle and right: “correlation images" quantifying the average correlation of the temporal signals in each pixel vs. those in the nearest neighbor pixels [@Smith_2010], computed on raw and residual data, indicating that minimal signal is left behind in the residual. All results here and in the previous figure are based on background-subtracted data, for better visibility. []{data-label="fig:denoised_endoscope_2"}](./pmd_results/Endoscope_PMD.pdf){width="18cm" height="14cm"}
![Example frames and traces from Bessel dendritic imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_1\]. See [Bessel dendritic imaging demixing video](\VideoDemixDendriticURL) for details.[]{data-label="fig:denoised_dendritic_1"}](./pmd_results/Dendritic_PMD.pdf){width="18cm" height="14cm"}
![Summary quantification for denoising of Bessel dendritic imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_2\]. []{data-label="fig:denoised_dendritic_2"}](./pmd_results/Dendritic_PMD.pdf){width="18cm" height="14cm"}
![Example frames and traces from three-photon imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_1\]. See [three-photon imaging video](\VideoThreePURL) for details. []{data-label="fig:denoised_3p_1"}](./pmd_results/3P_PMD.pdf){width="18cm" height="14cm"}
![Summary quantification for denoising of three-photon imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_2\]. []{data-label="fig:denoised_3p_2"}](./pmd_results/3P_PMD.pdf){width="18cm" height="14cm"}
![Example frames and traces from widefield imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_1\]. See [widefield imaging video](\VideoWidefieldURL) for details. []{data-label="fig:denoised_widefield_1"}](./pmd_results/Widefield_PMD.pdf){width="18cm" height="14cm"}
![Summary quantification for denoising of widefield imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_2\].[]{data-label="fig:denoised_widefield_2"}](./pmd_results/Widefield_PMD.pdf){width="18cm" height="14cm"}
![Example frames and traces from voltage imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_1\]. See [voltage imaging demixing video](\VideoDemixVoltageURL) for details.[]{data-label="fig:denoised_voltage_1"}](./pmd_results/QState_PMD.pdf){width="18cm" height="14cm"}
![Summary quantification for denoising of voltage imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_2\]. []{data-label="fig:denoised_voltage_2"}](./pmd_results/QState_PMD.pdf){width="18cm" height="14cm"}
We have applied the denoising and compression approach described above to a wide variety of functional imaging datasets (see the Appendix for full details):
- **Endoscopic**: one-photon microendoscopic calcium imaging in dorsal striatum of behaving mouse
- **Dendritic**: two-photon Bessel-beam calcium imaging of dendrites in somatosensory cortex of mouse in vivo
- **Three-photon**: three-photon calcium imaging of visual cortex of mouse in vivo
- **Widefield**: one-photon widefield whole-cortex calcium imaging in behaving mouse
- **Voltage**: one-photon in vitro voltage imaging under optogenetic stimulation.
The proposed methods perform well in all cases with no parameter tuning. We obtain compression ratios (defined as $nnz(\mathbf{Y}) / [nnz(\mathbf{U})+nnz(\mathbf{V})]$, where $nnz(\mathbf{A})$ counts the number of nonzero elements of the matrix $\mathbf{A}$) of 20x-200x, and SNR improvements of roughly 2x on average, ranging up to 10x, depending on the dataset and the region of interest (we find that SNR improvements are often largest in regions of strongest activity, so SNR improvements vary significantly from pixel to pixel). See Table \[tab:pro\_pro\] and Figures \[fig:denoised\_endoscope\_1\]-\[fig:denoised\_voltage\_2\] for details.
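For concreteness, the two summary statistics in Table \[tab:pro\_pro\] can be computed as follows. The per-pixel SNR estimator here (signal std over a robust noise level taken from temporal first differences) is a simplified stand-in for the estimator used in the paper; the compression ratio matches the definition in the text, and the toy data are ours:

```python
import numpy as np

def noise_std(X):
    # robust per-pixel noise level from temporal first differences (MAD)
    d = np.diff(X, axis=1)
    mad = np.median(np.abs(d - np.median(d, axis=1, keepdims=True)), axis=1)
    return mad / 0.6745 / np.sqrt(2.0) + 1e-12

def compression_ratio(Y, U, V):
    nnz = np.count_nonzero
    return nnz(Y) / (nnz(U) + nnz(V))

def snr_metric(Y, Yhat, top_frac=0.1):
    # average denoised/raw SNR ratio over the top 10% highest-SNR pixels
    raw = Y.std(axis=1) / noise_std(Y)
    den = Yhat.std(axis=1) / noise_std(Yhat)
    top = np.argsort(raw)[-max(1, int(top_frac * len(raw))):]
    return float(np.mean(den[top] / raw[top]))

# toy movie: smooth signal plus white noise; block-sparse toy factors
rng = np.random.default_rng(4)
d, T = 50, 100
amp = np.linspace(1.0, 2.0, d)
signal = np.outer(amp, np.sin(np.linspace(0.0, 10.0, T)))
Y = signal + 0.2 * rng.standard_normal((d, T))
U = np.zeros((d, 5)); U[:10, :] = 1.0
V = rng.standard_normal((5, T))
```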
In terms of runtime, we observed the expected scaling: the proposed method scales linearly in $T$, $d$, and the number of extracted components. In turn, the number of estimated components scales roughly proportionally to the number of neurons visible in each movie (in the datasets with single-cell resolution). Total runtimes ranged from a few seconds to a few minutes (for the “Endoscope" dataset, which had the largest number of extracted components); these runtimes are fast enough for the proposed method to be useful as a pre-processing step to be run prior to demixing.
We also performed comparisons against two simpler baselines: standard PCA run on the full dataset, and “patch-wise PCA" run on the same patches as used by PMD. For patch-wise PCA, we used the same stopping rule for choosing the rank of $\hat{\mathbf{Y}}$ as described above for PMD, but did not apply the TV or TF penalty. We find that using the same rank selection criterion for PCA applied to the full dataset performs relatively poorly: in each of the five datasets examined, this approach left significant visible signal behind in the residual. Thus, to make the comparisons as favorable as possible for standard PCA, we chose the rank manually, to retain as much visible signal as possible while keeping the rank as low as possible. Nonetheless, we found that the PMD approach outperformed standard PCA significantly on all three metrics examined here (compression ratio, SNR improvement, and runtime), largely because PCA on the full image outputs dense $\mathbf{U}$ matrices (leading to slower computation and worse noise suppression) whereas the $\mathbf{U}$ matrices output by the patch-wise approaches are highly sparse.
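A minimal sketch of the patch-wise PCA baseline, which also makes the sparsity argument above concrete: each assembled spatial factor is supported on a single tile, so $\mathbf{U}$ is block-sparse. (The per-tile rank is fixed here for simplicity; the paper instead reuses the PMD stopping rule per patch.)

```python
import numpy as np

def patchwise_pca(Y, shape, patch=16, rank=2):
    # tile the FOV, run a truncated SVD per tile, embed each tile's
    # spatial factors back into the full frame
    h, w = shape
    T = Y.shape[1]
    movie = Y.reshape(h, w, T)
    U_blocks, V_blocks = [], []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tile = movie[i:i + patch, j:j + patch].reshape(-1, T)
            u, s, vt = np.linalg.svd(tile, full_matrices=False)
            u, s, vt = u[:, :rank], s[:rank], vt[:rank]
            full = np.zeros((h, w, rank))
            full[i:i + patch, j:j + patch] = u.reshape(patch, patch, rank)
            U_blocks.append(full.reshape(h * w, rank))
            V_blocks.append(s[:, None] * vt)
    return np.hstack(U_blocks), np.vstack(V_blocks)

# rank-1 toy movie: per-tile truncated SVD reconstructs it exactly
rng = np.random.default_rng(7)
h = w = 32; T = 40
Y = np.outer(rng.standard_normal(h * w), rng.standard_normal(T))
U, V = patchwise_pca(Y, (h, w), patch=16, rank=2)
```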
The patch-wise PCA approach has much stronger performance than standard PCA applied to the full data. In four out of five datasets (the “Endoscope" dataset was the exception) patch-wise PCA captured all the visible signal in the dataset and did not leave any visible signal behind in the residual. In these four datasets PMD performed comparably or significantly better than patch-wise PCA in terms of SNR improvement and compression score, but patch-wise PCA was faster. Thus there may be some room to combine these two approaches, e.g., to use PCA as a fast initial method and then PMD to provide further denoising and compression. We leave this direction for future work.
Demixing {#demixing-1 .unnumbered}
--------
### Voltage imaging data {#voltage-imaging-data .unnumbered}
Next we turn to the problem of demixing. We begin with an analysis of a challenging voltage imaging dataset. Voltage imaging (VI) data presents a few important challenges compared to calcium imaging (CI) data: currently-available VI data typically has much lower SNR and displays much stronger bleaching effects than CI data. The dataset we focus on here has another challenging feature: the preparation was driven with time-varying full-field optogenetic stimulation, resulting in highly correlated subthreshold activity in the visible cells, which are highly overlapping spatially. In preliminary analyses of this data we applied variants of CNMF-E [@zhou2018efficient] but did not obtain good results (data not shown), due to the strong bleaching and optogenetic stimulation-induced correlations present in this data.
Thus we pre-processed this data by applying a spline-based detrending to each pixel (see Appendix for full details). This served to attenuate the highly-correlated bleaching signals and subthreshold fluctuations in the raw data, leaving behind spiking signals (which were not perfectly correlated at the millisecond resolution of the video data here) along with uncorrelated noise as the dominant visible signals in the data. Figure \[fig:vi\_superpixels\] shows that the denoiser (followed by soft-thresholding) serves to significantly improve the separability of neural signals from noise in this data: the superpixels obtained after denoising and soft-thresholding provide excellent seeds for the constrained NMF analysis. Figures \[fig:vi\_demixing\] (and the corresponding video) and \[fig:vi\_components\] demonstrate that the full demixing pipeline achieves good performance, extracting components with high spatial and temporal SNR and leaving relatively little residual signal behind despite the limited SNR and the multiple overlapping signals visible in the original (detrended) data. Note that in the final step we project the estimated spatial components back onto the original data, recovering the (highly correlated) temporal components including strong bleaching components (panel D of Figure \[fig:vi\_components\]). Finally, we achieved a speedup in the NMF iterations here that was roughly proportional to the ratio of the rank of $\mathbf{Y}$ compared to the rank of $\mathbf{U}$.
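A hedged sketch of the per-pixel spline detrending idea, on a synthetic trace: the smoothing level and the particular spline implementation here are illustrative choices, not the settings used for the voltage data in the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def detrend(trace, smooth=None):
    # fit a heavily smoothed cubic spline to the trace and subtract it,
    # removing slow bleaching trends while leaving fast spikes behind
    t = np.arange(trace.size, dtype=float)
    s = 2.0 * trace.size if smooth is None else smooth
    spline = UnivariateSpline(t, trace, k=3, s=s)
    return trace - spline(t)

rng = np.random.default_rng(5)
T = 1000
bleach = 5.0 * np.exp(-np.arange(T) / 300.0)    # slow bleaching trend
spikes = np.zeros(T); spikes[::97] = 1.0        # fast spiking events
trace = bleach + spikes + 0.05 * rng.standard_normal(T)
out = detrend(trace)
```

After detrending, the slow bleaching component is largely removed while the spike events survive, which is what makes the subsequent soft-thresholding and superpixelization effective.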
### Bessel dendritic imaging data {#bessel-dendritic-imaging-data .unnumbered}
(Figure panel columns: Proposed pipeline; NMF on $\hat{\mathbf{Y}}$; NMF on $\mathbf{Y}$.)

(Figure panel columns: Ground truth; Proposed pipeline; NMF on $\hat{\mathbf{Y}}$; NMF on $\mathbf{Y}$.)

(Figure panel columns: Spatial components; Spatial components support; Temporal components.)
The VI dataset analyzed in the preceding subsection contained a number of large visible axonal and dendritic components, but also displayed strong somatic components. For our next example we focus on a CI dataset dominated by dendritic components, where the simple Gaussian spatial filter approach introduced in [@pnevmatikakis2016simultaneous] for initializing somatic components is ineffective. (Indeed, in dendritic or axonal imaging datasets, a search for “hotspots" in the images is biased towards pixels summing activity from multiple neurons — and these “non-pure" pixels are exactly those we wish to avoid in the demixing initialization strategy proposed here.)
Figure \[realcompare\] illustrates several of the spatial components extracted by our pipeline (again, see the corresponding video for a more detailed illustration of the demixing performance); these components visually appear to be dendritic segments and match well with the signals visible in the data movie. Notably, no parameter tuning was necessary to obtain good demixing performance on both the VI and CI datasets, despite the many differences between these data types. Additionally, as a baseline comparison we applied a simple sparse NMF approach with random initialization (similar to the method described in [@pnevmatikakis2016simultaneous]) to both the denoised and raw data ($\hat{\mathbf{Y}}$ and $\mathbf{Y}$, respectively). As shown in the right columns of Figure \[realcompare\], this baseline approach extracted components that were much more mixed and noisy than the components extracted by our proposed demixing pipeline; we also found that the baseline approach was more prone to missing weaker, dimmer components than was the proposed pipeline (data not shown).
The above analyses depended on qualitative visual examinations of the obtained components and demixing video. We also generated simulated data with characteristics closely matched to the raw data, in order to more quantitatively test the demixing performance against a known (albeit simulated) ground truth. To generate simulated data $\mathbf{Y}$, we used the $\mathbf{A}$ and $\mathbf{C}$ estimated from the raw data, and further estimated the conditional distribution of the residual as a function of the denoised data $\mathbf{A} \mathbf{C}$ in the corresponding pixel $x$ and time bin $t$; then we added independent noise samples from this signal-dependent conditional distribution (but with the noise scale multiplied 2x, to make the simulation more challenging) to $\mathbf{AC}$. See the [simulated Bessel dendritic imaging video](\VideoSimulateDendriticURL) for comparison of real and simulated data. We ran the three demixing pipelines on this simulated data. Typical results of these simulations are shown in Figure \[simcompare\]: again we see that the proposed pipeline captures the ground truth components much more accurately than do the baseline methods, similar to the results shown in Figure \[realcompare\]. Quantitatively, components extracted by proposed pipeline have higher correlation with ground truth components than do those extracted by sparse NMF approaches, as shown in Figure \[corr\_sim\_comp\].
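The simulation recipe above can be sketched as follows. Binning the denoised signal and drawing Gaussian noise within each bin is a simplifying assumption of this sketch; the paper resamples from the empirical conditional distribution of the residual:

```python
import numpy as np

def simulate(AC, residual, n_bins=20, noise_scale=2.0, rng=None):
    # estimate residual scale as a function of denoised signal level
    # (quantile bins of AC), then add independent signal-dependent
    # noise, scaled 2x to make the simulation more challenging
    if rng is None:
        rng = np.random.default_rng()
    edges = np.quantile(AC, np.linspace(0.0, 1.0, n_bins + 1))
    which = np.clip(np.searchsorted(edges, AC, side="right") - 1, 0, n_bins - 1)
    stds = np.array([residual[which == b].std() for b in range(n_bins)])
    return AC + noise_scale * stds[which] * rng.standard_normal(AC.shape)

# toy example with noise whose scale grows with the signal level
rng = np.random.default_rng(6)
AC = np.abs(rng.standard_normal((40, 50)))
residual = 0.1 * (1.0 + AC) * rng.standard_normal(AC.shape)
Y_sim = simulate(AC, residual, rng=rng)
```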
Discussion {#discussion .unnumbered}
==========
We have presented new scalable approaches for compressing, denoising, and demixing functional imaging data. The compression and denoising methods presented are generally applicable and can serve as a useful generic step in any functional video processing pipeline, following motion correction and artifact removal. The new demixing methods proposed here are particularly useful for data with many dendritic and axonal processes, where methods based on simple sparse NMF are less effective.
Related work {#related-work .unnumbered}
------------
Other work [@haeffele2014structured; @pnevmatikakis2016simultaneous; @de2018structured] has explored penalized matrix decomposition incorporating sparsity or total variation penalties in related contexts. An important strength of our proposed approach is the focus on highly scalable patch-wise computations (similar to [CaImAn](https://github.com/flatironinstitute/CaImAn)); this leads to fast computations and avoids overfitting by (implicitly) imposing strong sparsity constraints on the spatial matrix $\mathbf{U}$. We also employ a constrained optimization approach using the trend-filtering (TF) penalty, which is more flexible e.g. than the sparse convolutional temporal penalty used in [@haeffele2014structured], since the constrained TF approach doesn’t require us to fit a specific convolutional model or to estimate any Lagrange multipliers for the sparsity penalty.
There are also some interesting connections between the demixing approach proposed in [@petersen2017scalpel] and our approach to initializing NMF, which is based on the successive projection algorithm (SPA). [@fu2015self; @gillis2018fast] discuss the relationships between SPA and group-sparse dictionary selection methods related to the approach used in [@petersen2017scalpel]; thus the methods we use to compute “pure" superpixels and the methods used in [@petersen2017scalpel] to select neural dictionary elements are closely related. However, our denoise-then-superpixelize approach to seeding the dictionary of neural temporal components is in a sense converse to the clustering approach developed in [@petersen2017scalpel] for seeding the dictionary of neural spatial components. There may be room to fruitfully combine these two approaches in the future.
Future work {#future-work .unnumbered}
-----------
Real-time online updates for $\mathbf{U}$ and $\mathbf{V}$ should be possible, which would enable the incorporation of the compression and denoising approach into [@giovannucci2017onacid] for improved online demixing of neural activity. We are also continuing to explore alternative methods for spatial and temporal denoising of $\mathbf{u}_k$ and $\mathbf{v}_k$, e.g. artificial neural network denoisers.
In the near future we plan to incorporate our code into the [CaImAn](https://github.com/flatironinstitute/CaImAn) and [CNMF-E](https://github.com/zhoupc/CNMF_E) packages for calcium imaging analysis. We hope that the proposed compression methods will help facilitate more widespread and routine public sharing of these valuable datasets and lead to more open and reproducible neuroscience.
Code {#code .unnumbered}
----
Open source code is available at [https://github.com/paninski-lab/funimag](https://github.com/paninski-lab/funimag).
Video captions {#video-captions .unnumbered}
--------------
1. \
(left) Raw movie $\mathbf{Y}$; (middle) background $\mathbf{Y}_{BG}$ estimated via rank-5 PMD; (right) estimated foreground $\mathbf{Y} - \mathbf{Y}_{BG}$. Ticks along the horizontal and vertical axis (in this video and in the videos below) indicate patch borders; note that no edge artifacts are visible at these borders.
2. \
(left) Foreground; (middle) denoised foreground $\hat{\mathbf{Y}}$; (right) residual $\mathbf{Y} - \hat{\mathbf{Y}}$.
3. \
(left) Raw movie $\mathbf{Y}$; (middle) denoised movie $\hat{\mathbf{Y}}$; (right) residual $\mathbf{Y} - \hat{\mathbf{Y}}$.
4. \
Same format as previous video.
5. \
Panels from top to bottom: (1) detrended movie $\mathbf{Y}$; (2) denoised movie $\hat{\mathbf{Y}}$; (3) MAD soft-thresholded movie; (4) rank-1 NMF approximation within superpixels; (5) superpixels; (6) pure superpixels.
6. \
Panels from top to bottom: (1) detrended movie $\mathbf{Y}$; (2) denoised movie $\hat{\mathbf{Y}}$; (3) estimated signal $\mathbf{AC}$; (4) background $\mathbf{B}$; (5) residual $\hat{\mathbf{Y}} - \mathbf{AC} - \mathbf{B}$; (6) estimated noise $\mathbf{Y} - \hat{\mathbf{Y}}$.
7. \
Top: (left) motion corrected movie $\mathbf{Y}$; (middle) denoised movie $\hat{\mathbf{Y}}$; (right) estimated signal $\mathbf{AC}$; Bottom: (left) background $\mathbf{B}$, (middle) residual $\hat{\mathbf{Y}} - \mathbf{AC} - \mathbf{B}$, and (right) estimated noise $\mathbf{Y} - \hat{\mathbf{Y}}$.
8. \
Top: (left) Motion corrected real movie; (right) simulated movie. Bottom: (left) estimated noise from real movie; (right) simulated noise.
Acknowledgments {#acknowledgments .unnumbered}
---------------
We thank Shay Neufeld and Bernardo Sabatini for generously sharing their micro-endoscopic data with us, and Andrea Giovannucci, Eftychios Pnevmatikakis, Ziqiang Wei, Darcy Peterka, Jack Bowler, and Uygar Sumbul for helpful conversations. We also thank our colleagues in the International Brain Laboratory for motivating our efforts towards compressing functional imaging data. This work was funded by Army Research Office W911NF-12-1-0594 (MURI; EH and LP), the Simons Foundation Collaboration on the Global Brain (LP), National Institutes of Health R01EB22913 (LP), R21EY027592 (LP), 1U01NS103489-01 (NJ and LP), R01NS063226 (EH), R01NS076628 (EH), RF1MH114276 (EH), and U19NS104649-01 (EH and LP); in addition, this work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior Business Center (DoI/IBC) contract number D16PC00003 (LP). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Author contributions {#author-contributions .unnumbered}
--------------------
EKB and LP conceived the project. EKB led development of the local PCA compression and denoising approach, including the 4x overcomplete approach for avoiding block artifacts. IK led development of the PMD(TF,TV) approach. DZ led development of the superpixelization and local NMF demixing approach. RZ developed a preliminary version of the PMD approach. PZ contributed to the development of the demixing approach. FG, JF, and GD contributed the voltage imaging dataset. JR, PF, TM, and AT contributed the three-photon imaging dataset. YL, RL, and NJ contributed the Bessel dendritic dataset. YM, SK, MS, and EH contributed the widefield dataset. EKB, IK, DZ, and LP wrote the paper, with input from PZ. LP supervised the project.
Appendix: dataset details {#appendix-dataset-details .unnumbered}
=========================
Microendoscopic imaging data {#microendoscopic-imaging-data .unnumbered}
----------------------------
This dataset was analyzed previously in [@zhou2018efficient]; see the “Dorsal Striatum Data" subsection of the Methods section of that paper for full experimental details. Briefly, a 1 mm gradient index of refraction (GRIN) lens was implanted into dorsal striatum of a mouse expressing AAV1-Syn-GCaMP6f; imaging was performed using a miniature one-photon microscope with an integrated 475 nm LED (Inscopix) while the mouse was freely moving in an open-field arena. Images were acquired at 30 Hz and then downsampled to 10 Hz.
Bessel dendritic imaging data {#bessel-dendritic-imaging-data-1 .unnumbered}
-----------------------------
All surgical procedures were in accordance with protocols approved by the Howard Hughes Medical Institute Janelia Research Campus Institutional Animal Care and Use Committee. C57BL/6J mice over 8 weeks old at the time of surgery were anesthetized with isoflurane anesthesia (1–2%). A craniotomy over nearly the entire left dorsal cortex (from Bregma +3 mm to Bregma -4.0 mm) was performed with the dura left intact, with the procedure described in detail previously in [@sofroniew2016large]. AAV2/9-synapsin-flex-GCaMP6s (2.5$\times 10^{13}$ GC/ml) was mixed with AAV2/1-synapsin-Cre (1.5$\times 10^{13}$ GC/ml, 1000$\times$dilution with PBS) at 1:1 to make the working viral solution for intracerebral injections. 30 nl viral solution was slowly injected into exposed cortex at 0.5 mm below dura. Injection sites were evenly spaced (at 0.7-0.9 mm separation) along two lines at 2.3 mm and 3.3 mm parallel to the midline. A custom-made glass coverslip (450 $\mu$m thick) was embedded in the craniotomy and sealed in place with dental acrylic. A titanium head bar was attached to the skull surrounding the coverslip. After recovery from surgery, the mice were habituated to head fixation. Four weeks after surgery, the head-fixed mouse was placed on a floating ball in the dark. The spontaneous neural activity as indicated by GCaMP6s fluorescence signal was recorded in the somatosensory cortex.
Volumetric imaging of dendrites was achieved by scanning an axially extended Bessel focus, as described in [@lu201850] and [@lu2017video]. An axicon-based Bessel beam module was incorporated into a two-photon random access mesoscope (2p-RAM) [@lu201850]. Details of the 2p-RAM have been described previously in [@sofroniew2016large]. Briefly, the system was equipped with a 12 kHz resonant scanner (24 kHz line rate) and a remote focusing unit that enabled fast axial movements of the focal plane. The system has an excitation numerical aperture (NA) of 0.6 and a collection NA of 1.0. The measured lateral full width at half maximum (FWHM) of the Gaussian focus at the center of the field of view was 0.65 $\mu$m. The lateral and axial FWHMs of the Bessel focus were 0.60 $\mu$m and 71 $\mu$m, respectively. Scanning the Bessel focus in two dimensions, therefore, probed brain volumes within a 100 $\mu$m axial range. The volumetric dendritic data presented in this paper were obtained by placing the center of the Bessel focus at 62 $\mu$m below dura to probe structures at 12 $\mu$m to 112 $\mu$m below dura (figure \[gaussian\_bessel\]). Dendrites within this volume were imaged at an effective volume rate of 3.7 Hz, with each image having 1924$\times$2104 pixels at 0.33 $\mu$m/pixel in the x-y plane. The wavelength of the excitation light was 970 nm and the post-objective excitation power was 120 mW. Images were spatially decimated and cropped for the analyses shown here.
![In vivo volumetric imaging of dendrites in the mouse brain. (a) Maximum intensity projection of a 3D volume (635 $\mu$m x 694 $\mu$m x 100 $\mu$m) of dendrites. The sampling size was 0.33 $\mu$m/pixel. Post-objective power: 24 mW. (b) Image of the same volume collected by scanning a Bessel focus with 0.60 $\mu$m lateral FWHM and 71 $\mu$m axial FWHM. The effective volume rate was 3.7 Hz. Post-objective power: 120 mW. Excitation wavelength: 970 nm. Scale bar: 100 $\mu$m. []{data-label="gaussian_bessel"}](./plots/experiment/gaussian_bessel.png){width="100.00000%"}
Three-photon imaging data {#three-photon-imaging-data .unnumbered}
-------------------------
All procedures were carried out in accordance with the ethical guidelines of the National Institutes of Health and were approved by the Institutional Animal Care and Use Committee (IACUC) of Baylor College of Medicine. Cranial window surgeries over visual cortex were performed as described previously [@reimer2014pupil]. Briefly, a 4 mm cranial window was opened under isoflurane anesthesia and sealed with a 4 mm glass coverslip and surgical glue. The dura was removed before applying the coverslip to increase optical access to the cortex. Imaging was performed in a triple-transgenic mouse (Slc17a7-Cre x Dlx5-CreER x Ai148) expressing GCaMP6f pan-neuronally throughout cortex. Three-photon imaging data was collected as described previously [@ouzounov2017vivo]. Three-photon excitation of GCaMP6 was at 1320 nm, which also enabled visualization of unlabeled vasculature and white matter via third harmonic generation (THG). Power was calibrated prior to each day of scanning and carefully maintained below 1.5 nJ at the focal plane. For this study, scans were collected at 680 microns and 710 microns below the cortical surface with a 540 x 540 micron field of view at 0.59 pixels/micron spatial resolution and a frame rate of 5 Hz. Imaging was performed at the border of V1 and LM during presentation of oriented noise stimuli.
Widefield imaging data {#widefield-imaging-data .unnumbered}
----------------------
See [@ma2016resting; @ma2016wide] for full details.
Voltage imaging data {#voltage-imaging-data-1 .unnumbered}
--------------------
Q-State’s proprietary Optopatch all-optical electrophysiology platform was used to obtain fluorescence recordings from induced pluripotent stem (iPS) cell-derived NGN2 excitatory neurons from a cohort of human subjects [@werley2017all]. Stimulation of action potentials was achieved with a blue light-activated channelrhodopsin (CheRiff). Fluorescent readout of voltage was enabled by an Archaerhodopsin variant (QuasAr). NGN2 neurons were produced at Q-State using a transcriptional programming approach. Recordings were performed with an ultra-widefield instrument with a resolution of 800x80 pixels (corresponding field of view of 2 mm$^2$) at a frame rate of 987 Hz.
The obtained data displayed breaks during stimulus resets and photobleaching. To remove these effects from the raw data, we removed frames during stimulus resets, extracted slow trends with a robust B-spline regression (with knots chosen to allow for non-differentiability at stimulus change-points and discontinuity at stimulus resets), and then fit a quadratic regression against frames with no stimuli to capture and remove photobleaching effects.
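For illustration, the photobleaching step can be sketched as follows. This is a minimal sketch on synthetic data, not the authors' actual pipeline; `trace` and `no_stim` are hypothetical stand-ins for the recorded fluorescence and the no-stimulus frame mask, and the B-spline detrending around change-points is omitted:

```python
import numpy as np

# Synthetic stand-in for a fluorescence trace with slow quadratic photobleaching.
rng = np.random.default_rng(1)
t = np.arange(2000, dtype=float)
bleach = 5.0 - 1e-3 * t + 2e-7 * t ** 2          # slow quadratic decay
trace = bleach + rng.normal(scale=0.01, size=t.size)
no_stim = t % 200 < 100                          # hypothetical no-stimulus mask

# Fit the quadratic trend only on frames with no stimulus, then subtract it
# from every frame to remove the photobleaching component.
coef = np.polyfit(t[no_stim], trace[no_stim], deg=2)
corrected = trace - np.polyval(coef, t)
```

Restricting the fit to no-stimulus frames keeps stimulus-evoked fluorescence from biasing the estimated bleaching trend.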
[^1]: Equal contribution, arranged alphabetically; ekb2154, iak2119, dz2336@columbia.edu
[^2]: Departments of Statistics and Neuroscience, Grossman Center for the Statistics of Mind, Center for Theoretical Neuroscience, and Zuckerman Mind Brain Behavior Institute, Columbia University
[^3]: Q-State Biosciences, Inc., Cambridge, MA
[^4]: Department of Biomedical Engineering and Zuckerman Mind Brain Behavior Institute, Columbia University
[^5]: Departments of Physics and Molecular and Cell Biology, UC Berkeley
[^6]: Department of Neuroscience and Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine
[^7]: One important note: many matrix factorizations are possible here to obtain a compressed representation $(\mathbf{U},\mathbf{V})$. This non-uniqueness does not pose an issue for either compression or denoising. This makes these problems inherently easier than the demixing problem, where the identifiability of $\mathbf{A}$, $\mathbf{C}$, and $\mathbf{B}$ (perhaps up to permutations of the rows and columns of $\mathbf{A}$ and $\mathbf{C}$) is critical.
---
abstract: '**Artificial neural networks (ANNs) suffer from catastrophic forgetting when trained on a sequence of tasks. While this phenomenon was studied in the past, there is only very limited recent research on it. We propose a method for determining the contribution of individual parameters in an ANN to catastrophic forgetting. The method is used to analyze an ANN''s response to three different continual learning scenarios.**'
address: 'Institute of Signal Processing and System Theory, University of Stuttgart, Germany'
bibliography:
- 'refs.bib'
title: Localizing Catastrophic Forgetting in Neural Networks
---
Continual Learning, Catastrophic Forgetting, Path Integral, Localization
Introduction {#sec:intro}
============
Artificial neural networks suffer from a phenomenon called catastrophic forgetting, which is characterized by a rapid decrease in performance on a learned task when trained on a new task [@ratcliff1990connectionist; @mccloskey1989catastrophic]. For example, an ANN trained on machine translation between English and German will essentially “forget” everything it has learned when it is trained on translating between German and French. This is in contrast to human learning, where a human typically retains at least some of what he or she has learned on a past task. Solving this problem of catastrophic forgetting, i.e. enabling continual learning in ANNs, is of great interest, because it would enable the accumulation of knowledge over possibly long periods of time without requiring training examples from all but the most recent task. This comes with a number of benefits compared with the current standard of jointly training an ANN on all tasks simultaneously. First: Since training would not require examples from all previously learned tasks, data for a task that has already been learned is no longer needed and can be discarded, reducing the memory required for training. Second: After an ANN has been trained to solve some tasks, it is not static but can be adjusted to solve new and potentially unforeseen tasks. Third: The overall time required for training an ANN on a sequence of tasks could be reduced, since it only needs to be trained on the new task without retraining on the data of previously learned tasks.\
\
While catastrophic forgetting has been known in the literature since 1989 and was studied quite intensively in the past [@robins1996consolidation; @yamaguchi2004reassessment; @french1999catastrophic; @hetherington1993catastrophic], interest in this phenomenon has decayed over the years. Only recently has there been a renewed interest in solving this problem. Many new methods for overcoming catastrophic forgetting, like *Elastic Weight Consolidation* (EWC), *Synaptic Intelligence* (SI), *Deep Generative Replay* (DGR), *Variational Continual Learning* (VCL) and more, have been proposed [@kirkpatrick2017overcoming; @zenke2017continual; @shin2017continual; @v.2018variational]. Although these works propose new ways of mitigating catastrophic forgetting, there is only very limited research on the phenomenon itself. One example of this is an empirical study on catastrophic forgetting by Goodfellow et al. [@goodfellow2013empirical]. In their work the authors compare different activation functions and their effect on mitigating catastrophic forgetting. The choice of activation function is a vital part of designing a neural architecture and its resilience to catastrophic forgetting is important, but it does not give an insight into the internal mechanisms of an ANN.\
\
In this paper, we study catastrophic forgetting in ANNs by quantifying to what extent each part of the network contributes to forgetting a previously learned task. While catastrophic forgetting in previous works is measured as a scalar value, e.g. the increase of loss or decrease of accuracy on a previously learned task, we propose a method to quantify catastrophic forgetting separately for every parameter in an ANN. This not only allows for a coarse analysis, i.e. whether an ANN experiences catastrophic forgetting or not, but it also localizes which parts of a neural architecture contribute to what extent to forgetting a previously learned task. We think that a deeper understanding of catastrophic forgetting in ANNs, enabled through this work, can lead to better methods for overcoming it.
Methods {#sec:metho}
=======
In this section we describe the notation, define different scenarios and methods used in this work.
Notation {#ssec:notat}
--------
In order to clearly define a sequence of tasks on which an ANN is trained, we borrow the notation used in the remainder of this paper from the closely related field of transfer learning with a slight modification[^1] in order to avoid confusion [@weiss2016survey]. We start by defining a domain $\mathcal{D}$ which consists of two parts, a feature space $\mathcal{X}$ and a marginal data generating distribution $P(\mathbf{X})$, where $\mathbf{X}=\lbrace\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\rbrace\in\mathcal{X}$ is a set of training examples. In image classification, the feature space is given by $\mathcal{X}=\lbrace 0,1,\ldots,255\rbrace^{N\times C}$, where $N$ and $C$ are the number of pixels and channels an image contains. An assignment $\mathcal{A}$ for a given domain $\mathcal{D}$ is again defined by two parts, a label space $\mathcal{Y}$ and a function $f:\mathcal{X}\rightarrow\mathcal{Y}$, which represents the mapping from feature to label space. The function $f$ is learned from pairs $\lbrace x_{i},y_{i}\rbrace$, where $x_{i}\in\mathcal{X}$ is a training example and $y_{i}\in\mathcal{Y}$ is the corresponding label. In image classification, this function maps an image to its label. With this notation we can define the phenomenon of catastrophic forgetting more precisely as a rapid decrease in performance of an ANN on Task $A$, defined by $\mathcal{D}_{A}$ and $\mathcal{A}_{A}$, as it is trained on Task $B$, defined by $\mathcal{D}_{B}$ and $\mathcal{A}_{B}$, if $\mathcal{D}_{B}\neq\mathcal{D}_{A}$ and/or $\mathcal{A}_{B}\neq\mathcal{A}_{A}$.
Continual Learning Scenarios {#ssec:CLsce}
----------------------------
Although the recent work on continual learning shares the same goal of mitigating catastrophic forgetting, different experimental setups are used to evaluate the proposed methods. These differ significantly and pose different challenges to the algorithms and methods which are evaluated on them. In order to make the research in this area more comparable, three different scenarios of continual learning were recently proposed [@Hsu2018ReevaluatingCL; @Ven2018GenerativeRW]. These categories are defined in the following subsections.
### Incremental Domain Learning {#sssec:IncDo}
Incremental domain learning (IDL) is characterized by a change in at least one part of the domain $\mathcal{D}$: either the feature space $\mathcal{X}$, the data generating distribution $P(\mathbf{X})$, or both change. This scenario is similar but not identical to domain adaptation in the field of transfer learning [@weiss2016survey]. The difference between domain adaptation and IDL is that in domain adaptation one is only interested in transferring an ANN's knowledge from domain $\mathcal{D}_{A}$ to $\mathcal{D}_{B}$. After this transfer the ANN's performance on domain $\mathcal{D}_{A}$ is typically irrelevant. In IDL this is not the case. Here one is interested in learning to solve a task on domain $\mathcal{D}_{A}$ and on $\mathcal{D}_{B}$ without catastrophic forgetting, and possibly in a transfer of knowledge between the two domains. Another important property of IDL is that the assignment remains unchanged, i.e. the label space $\mathcal{Y}$ and the function $f$ are the same across tasks. This can be represented more formally with $f(\mathbf{x}_{i})=f(\hat{\mathbf{x}}_{i})=\mathbf{y}_{i}$, where $\hat{\mathbf{x}}_{i}$ is the representation of $\mathbf{x}_{i}$ in a different domain. In practice this means that we can share the same output layer of an ANN over different domains. A widely used example for this scenario is permutation MNIST [@kirkpatrick2017overcoming; @zenke2017continual; @shin2017continual; @v.2018variational]. In order to generate different domains, random pixel permutations of images in the MNIST data set are used, where one realization of a permutation is applied to all images. Although this does not change the feature space $\mathcal{X}$, it changes the data generating distribution $P(\mathbf{X})$ and hence the domain $\mathcal{D}$. The random permutations used in this example generate uncorrelated domains, which is not very realistic and has caused some criticism [@zenke2017continual; @Hsu2018ReevaluatingCL].
A very simple example of IDL with highly correlated, and therefore more realistic, domains based on the MNIST data set can be generated by just inverting the pixel intensities.
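Both domain constructions can be sketched in a few lines. The following is an illustrative sketch (using a random stand-in array rather than actual MNIST data); the labels are untouched, only the input distribution $P(\mathbf{X})$ changes between tasks:

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for a batch of grayscale images (examples x 28 x 28, values 0..255).
images = rng.integers(0, 256, size=(5, 28, 28))

# Permutation MNIST: one fixed random pixel permutation defines a new domain,
# and the same permutation is applied to every image.
perm = rng.permutation(28 * 28)
permuted = images.reshape(len(images), -1)[:, perm].reshape(images.shape)

# Inverted MNIST: a highly correlated (and hence more realistic) new domain,
# obtained by just inverting the pixel intensities.
inverted = 255 - images
```

Note that the permuted domain preserves each image's pixel multiset while destroying its spatial structure, whereas the inverted domain preserves spatial structure entirely.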
### Incremental Class Learning {#sssec:IncCl}
In incremental class learning (ICL) each task adds one or more new classes for an ANN to classify. In each task the ANN is presented with a data set containing only examples of at least one new class to learn. This means that not only the domain $\mathcal{D}$ but also the assignment $\mathcal{A}$ changes between tasks. Formally, the feature space $\mathcal{X}$ and/or the data generating distribution $P(\mathbf{X})$ and the function $f$ change. The label space $\mathcal{Y}$ remains unchanged and therefore the output layer of the ANN can also be shared between tasks. A widely used example for this scenario is split MNIST [@shin2017continual]. In order to generate a sequence of tasks, the MNIST data set is split in such a way that each split contains all the examples from at least one class. A typical way to split the MNIST data set is to separate it into $5$ disjoint sets each containing two classes, e.g. $(0,1)$, $(2,3)$, $(4,5)$, $(6,7)$ and $(8,9)$. In practice this means that in the first task only examples of the classes $0$ and $1$ are used to train an ANN while its output layer contains $10$ neurons and is therefore able to distinguish between $10$ different classes.
### Incremental Task Learning {#sssec:IncTa}
The last scenario, incremental task learning (ITL), also allows for changes in both the domain $\mathcal{D}$ and the assignment $\mathcal{A}$ between tasks. In contrast to ICL the label space $\mathcal{Y}$ also changes in ITL, i.e. an ANN can first learn a classification and then a regression task. Since these tasks typically require different activation functions for the last layer of an ANN, a new output layer is used for every task. Such an ANN is known as a multi-headed ANN in the continual learning literature [@v.2018variational; @Hsu2018ReevaluatingCL]. This implies that during inference the identity of a task, which needs to be solved by an ANN, is known in order to select the corresponding head. Requiring such prior knowledge about the task identity is in stark contrast to ICL, where the ANN not only solves the task at hand but also infers which task is to be solved. A typical example for a sequence of tasks for ITL can be generated from split MNIST or a set of different data sets, where each split or data set constitutes a task, by using a different output layer for each task.\
\
These three scenarios for continual learning differ not only in their setup but also in the challenges they pose to an ANN. ICL requires the model not only to solve a task but also to recognize which of all the previously learned tasks it has to solve. IDL, on the other hand, requires an ANN to solve the same task across different domains. The last scenario, ITL, requires a model to solve individual tasks while sharing only a subset of the network across them.
Proposed Method {#ssec:PropM}
---------------
![Proposed method for localizing catastrophic forgetting[]{data-label="fig:method"}](method){width="1.0\linewidth"}
In order to quantify catastrophic forgetting we need a measure for the performance of an ANN on a given task. Since ANNs are typically trained using stochastic gradient descent, or some variant of it, to minimize a defined loss function $\mathcal{L}(\boldsymbol{\theta}, \mathcal{D}, \mathcal{A})$ with respect to its parameters $\boldsymbol{\theta}$, it is a natural choice to use this loss function for quantifying catastrophic forgetting. To simplify the notation and reduce clutter we will use $\mathcal{L}_{A}(\boldsymbol{\theta})$ for a loss function $\mathcal{L}(\boldsymbol{\theta}, \mathcal{D}_{A}, \mathcal{A}_{A})$, which is defined on domain $\mathcal{D}_{A}$ with assignment $\mathcal{A}_{A}$.\
\
Similar to the inspiring work of Zenke et al. [@zenke2017continual] we interpret the training process of an ANN as a trajectory in parameter space defined by $\boldsymbol{\theta}(n)$, where $n\in\mathbb{N}^{+}$ is the current training step. Moving the parameters $\boldsymbol{\theta}$ along this trajectory causes a change in the loss function $\Delta\mathcal{L}$. If we compute the gradient $\boldsymbol{\nabla}_{\boldsymbol{\theta}}\mathcal{L}$ at each point in parameter space, we can compute $\Delta\mathcal{L}$ either through the difference in loss between the start- and endpoint or through the path integral of $\boldsymbol{\nabla}_{\boldsymbol{\theta}}\mathcal{L}$ along the trajectory as $$\begin{aligned}
\label{eq:deltaL}
\Delta\mathcal{L}=\mathcal{L}(\boldsymbol{\theta}(N))-\mathcal{L}(\boldsymbol{\theta}(1))=\int_{\mathcal{C}}\boldsymbol{\nabla}_{\boldsymbol{\theta}}\mathcal{L}(\boldsymbol{\theta})\mathrm{d}\boldsymbol{\theta},\end{aligned}$$ where $N$ is the number of training steps and $\mathcal{C}$ is the trajectory of $\boldsymbol{\theta}$ through parameter space during training. This equivalence holds, since the gradient vector field is a conservative field. Although both methods for computing this change yield the same result, they differ in their complexity and the insight they can provide. While evaluating $\Delta\mathcal{L}$ via the difference in loss at the start- and endpoint is fast and simple, it can only provide information about the ANN as a whole. Using the path integral is computationally more expensive and therefore slower, but it enables us to determine the contribution of individual parameters. In order to calculate a parameter specific contribution to the change in loss, we decompose the path integral and approximate it with a sum as $$\begin{aligned}
\Delta\mathcal{L}&=\int_{\mathcal{C}}\sum_{i}\boldsymbol{\nabla}_{\theta_{i}}\mathcal{L}(\boldsymbol{\theta})\mathrm{d}\theta_{i}\nonumber\\&\approx\sum_{j=1}^{N}\sum_{i}\boldsymbol{\nabla}_{\theta_{i}}\mathcal{L}(\boldsymbol{\theta}_{j})\Delta\theta_{ij}=\sum_{i}\Delta\mathcal{L}_{i},\end{aligned}$$ where $\Delta\theta_{ij}$ is a small change in the $i$th parameter at training step $j$. With this approximation we can determine the individual contribution of the $i$th parameter $\Delta\mathcal{L}_{i}$ to the overall change in loss $\Delta\mathcal{L}$. In order to check if the approximation is accurate, we can use equation \[eq:deltaL\] to compute $\Delta\mathcal{L}$ exactly and compare it with our approximation. In general the accuracy will depend on the curvature of the loss surface and the step size used for parameter updates. If there is an unacceptable difference between the exact change and the proposed approximation, one can improve the accuracy by inserting some intermediate steps for evaluation of the path integral between two parameter updates.\
\
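The decomposition above can be illustrated numerically with plain gradient descent on a toy least-squares problem (all names here are illustrative, not taken from the paper's code). Accumulating $\boldsymbol{\nabla}_{\theta_{i}}\mathcal{L}(\boldsymbol{\theta}_{j})\Delta\theta_{ij}$ along the training trajectory recovers the exact change in loss up to the curvature-dependent error discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.5, -0.7])          # noiseless linear target

def loss(theta):
    r = X @ theta - y
    return 0.5 * np.mean(r ** 2)

def grad(theta):
    r = X @ theta - y
    return X.T @ r / len(y)

theta = np.zeros(2)
lr = 0.01
contrib = np.zeros(2)                  # per-parameter Delta L_i
L_start = loss(theta)
for _ in range(200):
    g = grad(theta)
    step = -lr * g                     # delta theta_ij for this update
    contrib += g * step                # accumulate grad_i * delta theta_ij
    theta = theta + step
L_end = loss(theta)

# contrib.sum() approximates the exact change L_end - L_start, and contrib[i]
# attributes a share of that change to the i-th parameter.
```

For this quadratic loss and small step size the approximation error (the neglected curvature term per step) is tiny; in general it grows with the curvature of the loss surface and the step size, which is why intermediate evaluation points may be needed.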
Catastrophic forgetting occurs when we transition from training an ANN to minimize $\mathcal{L}_{A}(\boldsymbol{\theta})$ to training it to minimize $\mathcal{L}_{B}(\boldsymbol{\theta})$, and is characterized by a rapid increase of the former right after the transition. This period of rapid change is of particular interest, since it represents the period over which the ANN forgets a previously learned task. Determining the contributions of individual parameters is therefore most useful right after the transition and has to be done for the loss function $\mathcal{L}_{A}(\boldsymbol{\theta})$. This process is illustrated in Fig. \[fig:method\].\
\
Although we limit our study of catastrophic forgetting to the scenarios described in section \[ssec:CLsce\], where every task represents a supervised classification problem, the proposed method can be applied to many other settings. For any continual learning task which involves training an ANN to minimize a loss function over a sequence of tasks, the change in loss can be approximated as described above. Examples for such sequences of tasks include, but are not limited to, learning representations over different domains, a sequence of regression tasks, or training generative models continually to capture different data generating distributions.
Experiments {#sec:exper}
===========
In this section we will introduce the model used for the following experiments and describe the exact realization of the different continual learning scenarios introduced in section \[ssec:CLsce\]. We use the same architecture on all three scenarios in order to allow for a comparison of the results. Since an ANN's architecture can have a significant influence on its resilience to catastrophic forgetting [@goodfellow2013empirical] and we are interested in analyzing what challenges the different scenarios pose to an ANN, changing the model's structure between evaluations is avoided. The architecture used in this work is a small convolutional neural network (CNN) with four hidden layers according to table \[tbl:CNN\]. Dropout is applied to the input of the respective layers and a stride of $2$ is used in all convolutions.
[cccc]{}\
Layer & Act. Size & Act. Func. & Dropout\
Input & $28\times 28\times 1$ & - & -\
Conv. $32\times 3\times 3$& $14\times 14\times 32$ & ReLU & -\
Conv. $32\times 3\times 3$& $7\times 7\times 32$ & ReLU & -\
Dense & $64$ & ReLU & $0.2$\
Dense & $32$ & ReLU & $0.2$\
Dense & $10$ & Soft max & $0.2$\
\[tbl:CNN\]
Incremental Task Learning {#ssec:IncTa}
-------------------------
As described in section \[ssec:CLsce\], ITL is characterized by a change in the domain $\mathcal{D}$ and the assignment $\mathcal{A}$. In order to generate two tasks, which can be used to localize catastrophic forgetting in this scenario, we utilize two popular data sets, MNIST [@lecun-mnisthandwrittendigit-2010] and FashionMNIST [@xiao2017/online]. While MNIST is a data set for handwritten digit classification with $60000$ training and $10000$ test samples of size $28\times 28\times 1$, FashionMNIST is a drop-in replacement for MNIST containing images of fashion items from 10 different categories.\
\
The sequence of tasks is created by first training the ANN on classifying the handwritten digits of MNIST and then the different fashion categories of FashionMNIST. During this sequence the domain and assignment change. Considering the domains of both tasks, $\mathcal{D}_{M}$ and $\mathcal{D}_{F}$, the feature space $\mathcal{X}_{M}=\mathcal{X}_{F}=\lbrace 0,1,\ldots,255\rbrace^{28\times 28\times 1}$ remains unchanged, while the data generating distributions change, i.e. $\mathcal{P}_{M}(\mathbf{X})\neq\mathcal{P}_{F}(\mathbf{X})$. The assignments, $\mathcal{A}_{M}$ and $\mathcal{A}_{F}$, differ in both the label space and the function learned from training examples, i.e. $\mathcal{Y}_{M}\neq\mathcal{Y}_{F}$ and $f_{M}\neq f_{F}$. We realize this by utilizing a separate output layer, with $10$ neurons, of the ANN for each task.
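The multi-headed arrangement can be sketched as follows; this is a minimal illustration with random weights, where the layer sizes and names (`W_trunk`, `heads`) are made up for the sketch and do not correspond to the CNN from table \[tbl:CNN\]:

```python
import numpy as np

rng = np.random.default_rng(0)
# Shared trunk: one hidden layer mapping a flattened 28x28 input to 32 features.
W_trunk = rng.normal(size=(784, 32))
# One output layer ("head") with 10 neurons per task.
heads = {"mnist": rng.normal(size=(32, 10)),
         "fashion": rng.normal(size=(32, 10))}

def forward(x, task):
    h = np.maximum(x @ W_trunk, 0.0)   # shared hidden representation (ReLU)
    return h @ heads[task]             # task identity selects the output layer

x = rng.normal(size=(1, 784))
logits_mnist = forward(x, "mnist")
logits_fashion = forward(x, "fashion")
```

The task identity must be supplied at inference time to select the head, which is exactly the prior knowledge requirement that distinguishes ITL from ICL.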
Incremental Domain Learning {#ssec:IncDo}
---------------------------
In IDL the domain $\mathcal{D}$ changes between tasks while the assignment $\mathcal{A}$ is unchanged. This means, although the feature space $\mathcal{X}$ and/or the corresponding data generating distribution $P(\mathbf{X})$ change, the ANN has to solve the same task but based on different inputs. For experiments on IDL we again utilize the MNIST data set.\
\
A common way to generate a sequence of tasks for IDL based on the MNIST data set is to apply a different random permutation of pixels to each image in the data set for every new task. This is known as permutation MNIST in the literature [@kirkpatrick2017overcoming; @zenke2017continual; @shin2017continual; @v.2018variational; @Ven2018GenerativeRW]. Formally we change the domain $\mathcal{D}$ between tasks by changing the data generating distribution $P(\mathbf{X})$, while the feature space $\mathcal{X}=\lbrace 0,1,\ldots,255\rbrace^{28\times 28\times 1}$ remains unchanged. It is possible to create a very large number of different tasks using this method, but most of the generated domains are uncorrelated and do not resemble natural images.
Incremental Class Learning {#ssec:IncCl}
--------------------------
Experiments for ICL are commonly based on learning the classes in one specific data set in an incremental way. This typically corresponds to a change in both the domain $\mathcal{D}$ and the assignment $\mathcal{A}$ between tasks. But, in contrast to ITL, the feature space $\mathcal{X}$ and the label space $\mathcal{Y}$ are shared between the tasks.\
\
In order to generate a sequence of tasks for ICL, a data set is commonly split into disjoint subsets, where each subset contains only examples of one or more classes. A widely used example for this is split MNIST [@zenke2017continual; @shin2017continual; @v.2018variational; @Ven2018GenerativeRW]. In this case the MNIST data set is commonly split into 5 subsets containing two classes each, e.g. $(0,1)$, $(2,3)$, $(4,5)$, $(6,7)$ and $(8,9)$. Learning to classify the classes in each of these subsets is considered a task.
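Generating such a task sequence amounts to partitioning the data set by label. A minimal sketch (with a random stand-in label vector instead of the real MNIST labels):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)   # stand-in for the MNIST label vector

# Five disjoint tasks with two classes each, as in split MNIST.
class_pairs = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
task_indices = [np.flatnonzero(np.isin(labels, pair)) for pair in class_pairs]

# Each task sees only examples of its own two classes; together the five
# tasks partition the full data set.
```

During training on task $k$, only the examples at `task_indices[k]` would be presented to the network, while its shared 10-neuron output layer stays fixed across tasks.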
Results {#sec:resul}
=======
![Results of the experiments described in section \[sec:exper\]. This plot shows the absolute contribution of the weight matrices and bias vectors of individual layers. The ordering from left to right mirrors the CNN’s structure from table \[tbl:CNN\].[]{data-label="fig:combinedresultssum"}](combined_results_sum){width="1.0\linewidth"}
![Results of the experiments described in section \[sec:exper\]. This plot shows the average contribution of an element in the weight matrices and bias vectors of individual layers. The ordering from left to right mirrors the CNN’s structure from table \[tbl:CNN\].[]{data-label="fig:combinedresultsmean"}](combined_results_mean){width="1.0\linewidth"}
In this section we present the results obtained during the experiments described in section \[sec:exper\]. We use the CNN architecture depicted in table \[tbl:CNN\] and train with the Adam optimizer with a learning rate of $0.001$ and a batch size of $128$ for $10$ epochs on every task. No learning rate schedules or early stopping were used. Since the extent of catastrophic forgetting depends not only on the architecture used but also on the random initialization of the model, we run every experiment $10$ times and report average values with their standard deviation. Although an interpretation of the results is difficult, since it is highly dependent on many different factors like the model architecture, weight initialization, the optimizer and many other hyperparameters, we can at least compare the same configuration across the three different continual learning scenarios introduced in section \[ssec:CLsce\].\
\
Figure \[fig:combinedresultssum\] shows the absolute change in loss aggregated over the weight matrices/tensors and bias vectors of every layer. The ordering from left to right corresponds with the architecture shown in table \[tbl:CNN\] from top to bottom. Comparing the overall distribution of change in loss over the different continual learning scenarios, we can observe distinct patterns for each. The absolute contributions of the convolutional layers in all scenarios are lower than those of the dense layers. Also the absolute contribution of the bias vectors is small when compared to the weight matrices. We can even identify an average decrease in loss for the weight tensor of the second convolutional layer. On ICL, however, the variance of this change in loss is very high compared to the other scenarios. While the convolutional layers show a more or less homogeneous change over the different continual learning scenarios, we can observe an interesting difference in the dense layers across them. On ITL and IDL the absolute contribution of the layers decreases from left to right. This is expected since the overall number of neurons in these layers also decreases from left to right. On ICL, however, we observe the opposite behavior. Although the number of neurons decreases, the overall contribution to the change in loss increases. This observation is in line with Farquhar & Gal [@Farquhar2018TowardsRE], who observe more catastrophic forgetting on split MNIST than on permutation MNIST and reason that this is caused by gradients with higher magnitude while training the last layer due to more similar-looking images in ICL when compared with IDL. Although our observations also indicate that in ICL the last layers are mostly responsible for the change in loss and therefore catastrophic forgetting, we cannot give a general explanation for this given the limited scope of our experiments.
But we can at least support Farquhar & Gal's observation that the last layers are mostly responsible for catastrophic forgetting in ICL. This becomes even more evident when we consider the average contribution of a neuron/filter over the layers, as shown in figure \[fig:combinedresultsmean\]. Here we have averaged the contributions of neurons and filters without their bias elements, which are plotted separately. Comparing ITL, IDL and ICL we can again observe that the average contribution of a neuron/filter increases when going from ITL to IDL and reaches its maximum for ICL. We can also observe that while on ITL and IDL the average contribution of a neuron/filter is approximately constant over the dense layers, on ICL it increases from the first dense layer to the output layer.\
\
Overall we can observe different responses of the studied architecture when exposed to the three continual learning scenarios. While ITL causes the least catastrophic forgetting, the evaluation of IDL shows very similar behavior but increased catastrophic forgetting. Our evaluation on ICL not only shows the overall highest change in loss but also a very different pattern than the other two scenarios.
Conclusion {#sec:concl}
==========
Catastrophic forgetting is a fundamental problem in the training process of ANNs. Although it was studied in the past, surprisingly little research on the phenomenon itself has been published in recent years. We proposed a method for determining the contribution of individual parameters in an ANN to a change in loss, which can be linked to catastrophic forgetting. This method allows a more detailed analysis of the phenomenon by localizing the parts of an ANN that contribute the most to such a change in loss. We evaluated our method on three different continual learning scenarios using common data sets in the field. We were not only able to support claims from other researchers based on a different experimental evaluation, but also found similarities and differences in the response of a specific ANN exposed to these different scenarios.
[^1]: We use the term “assignment” for what is referred to as a “task” in transfer learning.
---
bibliography:
- 'disorderbib.bib'
---
$$\begin{tikzpicture}
\draw (\textwidth, 0) node[text width = \textwidth, right] {\color{white} easter egg};
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw (0.5\textwidth, -3) node[text width = \textwidth] {\huge \textsf{\textbf{Hydrodynamic transport in strongly coupled \\ \vspace{0.07in} disordered quantum field theories}} };
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw (0.5\textwidth, 0.1) node[text width=\textwidth] {\large \color{black} $\text{\textsf{Andrew Lucas}}$};
\draw (0.5\textwidth, -0.5) node[text width=\textwidth] {\small \textsf{Department of Physics, Harvard University, Cambridge, MA 02138, USA}};
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw (0, -13.1) node[right, text width=0.5\paperwidth] {\texttt{lucas@fas.harvard.edu}};
\draw (\textwidth, -13.1) node[left] {\textsf{\today}};
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw[very thick, color={violet}] (0.0\textwidth, -5.75) -- (0.99\textwidth, -5.75);
\draw (0.12\textwidth, -6.25) node[left] {\color{{violet}} \textsf{\textbf{Abstract:}}};
\draw (0.53\textwidth, -6) node[below, text width=0.8\textwidth, text justified] {\small We compute direct current (dc) thermoelectric transport coefficients in strongly coupled quantum field theories without long lived quasiparticles, at finite temperature and charge density, and disordered on long wavelengths compared to the length scale of local thermalization. Many previous transport computations in strongly coupled systems are interpretable hydrodynamically, despite formally going beyond the hydrodynamic regime. This includes momentum relaxation times previously derived by the memory matrix formalism, and non-perturbative holographic results; in the latter case, this is subject to some important subtleties. Our formalism may extend some memory matrix computations to higher orders in the perturbative disorder strength, as well as give valuable insight into non-perturbative regimes. Strongly coupled metals with quantum critical contributions to transport generically transition between coherent and incoherent metals as disorder strength is increased at fixed temperature, analogous to mean field holographic treatments of disorder. From a condensed matter perspective, our theory generalizes the resistor network approximation, and associated variational techniques, to strongly interacting systems where momentum is long lived.};
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw[very thick, color={violet}] (0.0\textwidth, -5.75) -- (0.99\textwidth, -5.75);
\end{tikzpicture}$$
Introduction
============
One of the most exotic and mysterious systems in condensed matter physics is the strange metal, non-Fermi liquid phase of the high $T_{\mathrm{c}}$ superconductors [@taillefer; @keimer]. The transport data in these materials – including, most famously, the linear in temperature dc electrical resistivity – defies clear explanation by a theory of long lived quasiparticles [@kasahara]. Alternatively, the effectively relativistic plasma in graphene may provide an experimental realization of a strongly interacting quantum fluid [@muller]. Finally, recent advances in ultracold atomic gases have paved the way to realizing strongly interacting fluids [@adams]. In all of the above systems, the absence of long lived quasiparticles on experimentally appropriate time scales (e.g., in the computation of dc transport coefficients) poses a challenge for traditional, quasiparticle-based approaches to condensed matter physics.
From a theoretical perspective, a generic strongly interacting quantum field theory (QFT) in more than one spatial dimension has only a few quantities (energy, charge and momentum) that are long lived, and so hydrodynamics may be a sensible description of the low energy physics at finite temperature and density of all of the above systems. Though hydrodynamics is an old theory [@landau; @kadanoff], its implications for transport have only been understood comparatively recently [@hkms], in weakly disordered systems near quantum criticality, in external magnetic fields [@hkms; @bhaseen; @bhaseen2], and in some simple examples of disordered, non-relativistic electron fluids [@andreev]. This is because “textbook" hydrodynamics is utterly inappropriate for most metals, where the electron-impurity scattering length is short compared to the electron-electron scattering length. Momentum and energy rapidly decay, and the only hydrodynamic variable is the charge density. Note that in contrast to this canonical lore, [@zaanen] proposed that observing viscous hydrodynamics in some metals may be possible.
In most ways, hydrodynamics is a far simpler theory to understand (and perform computations in) than quasiparticle based approaches, such as kinetic theory. The difficulty in studying these systems theoretically lies in the fact that hydrodynamics does not completely solve the transport problem: the coefficients in the hydrodynamic equations must be related to Green’s functions in a microscopic model. Nonetheless, if hydrodynamics is valid, it does provide universal constraints on transport, and a transparent physical picture to interpret the results. There are two tractable approaches that can compute the requisite microscopic Green’s functions, without reference to quasiparticles. The first is methods from (perturbative) QFT, combined with the memory matrix approach [@zwanzig; @mori; @forster], which has recently been used in many microscopic models reasonable for describing cuprate strange metals [@raghu1; @raghu2; @patel; @debanjan]. These approaches rely on properly resumming certain families of Feynman diagrams to all orders. The second approach is holography [@review1; @review2; @review3], which reduces the computation of Green’s functions to solving classical differential equations in a black hole background. This can be done numerically [@santoslat1; @santoslat2; @santoslat3; @chesler; @ling; @donos1409; @rangamani], though in some cases analytic insight can be obtained [@ugajin; @chesler; @btv; @lss; @donos1409; @lucas1411; @peet; @rangamani; @grozdanov], sometimes by employing the memory matrix method [@hkms; @hartnollimpure; @hartnollhofman].
Surprisingly, many of the above transport theories from recent years completely match hydrodynamic predictions, at least superficially, despite being formally beyond the regime of validity of hydrodynamics. We take this as an indication that a thorough understanding of hydrodynamic implications for transport in disordered theories is worthwhile, though we will also carefully describe the regime of validity of the approach. In addition, while almost every citation above aims to address the strange metal phase [@taillefer; @keimer; @kasahara], the “hydrodynamic insight" gained from these methods may be applicable to a much broader set of experimentally realized interacting quantum systems.
Motivation: Incoherent Metals and Holography
--------------------------------------------
Let us begin with the main quantitative motivation for the present paper, which is the physical interpretation of a large body of recent holographic work on transport in QFTs without translational symmetry.
[@hkms] proposed a simple hydrodynamic framework for dc transport which has been quite predictive of both holographic and memory function results in subsequent works, at weak disorder. As we previously mentioned, this framework has been surprisingly good at describing many holographic models that treat disorder at a mean field level. A natural conjecture is that disordered hydrodynamic dc transport can describe holographic systems with explicitly broken translational symmetry, and so it is worthwhile to fully flesh out the disordered hydrodynamic formalism.
We begin with a generic hydrodynamic framework for zero frequency transport calculations in Section \[sec2\]. Our emphasis is on a clear presentation of the assumptions and regime of validity of a hydrodynamic description of transport.
We exactly solve the transport problem in Section \[sec3\], at leading order in the strength of disorder, in the limit where translational symmetry is only broken weakly. These systems describe coherent metals, in the language of [@hartnoll1] – henceforth we will also adopt this terminology. We show that our resulting computations exactly equal the results found by the memory function formalism, under the assumption that the momentum is long-lived, justifying that our approach is sensible, as well as providing a physically transparent derivation of memory function based formulas for conductivities (at least, in the mutual regime of validity of the two methods).
We further show in Section \[sec4\] that the hydrodynamic framework can be used to interpret exact, non-perturbative analytic results for dc transport found using holography. This is subject to the important subtlety that the transport coefficients are computed in terms of an emergent horizon fluid with distinct (but somewhat sensible) equations of state.
We then proceed to study hydrodynamic transport in non-perturbatively disordered QFTs. Though not amenable to analytic techniques, we develop a combination of rigorous variational approaches and heuristic approximations, outlined in Section \[sec5\], to calculate dc transport in this regime. One might expect that transport becomes dominated by dissipative hydrodynamics, as momentum may become a “bad" conserved quantity; such a state is an incoherent metal, in the language of [@hartnoll1]. We find further evidence for this qualitative picture.
To date, all models of incoherent metals are holographic massive gravity models [@vegh; @davison; @blake1; @dsz; @thermoel1; @thermoel2], or similarly inspired holographic approaches [@hartnolldonos; @donos1; @andrade; @donos2; @gouteraux; @thermoel3; @donos1406; @kim; @gouteraux2; @davison15; @blake2] which we will henceforth lump together under the label of mean-field disorder. These models break translational symmetry phenomenologically, but not explicitly.[^1] These models always predict dc transport which, at all disorder strengths, can be interpreted in terms of the hydrodynamic results of [@hkms], or the slight generalization of [@lucasMM]. A simple example of this is the exact formula for dc electrical conductivity in an isotropic system [@blake1]: $$\sigma = \sigma_{\textsc{q}} + \frac{\mathcal{Q}^2 \tau}{\mathcal{M}}, \label{drude}$$ with $\sigma_{\textsc{q}}$ a transport coefficient independent of disorder strength, $\mathcal{Q}$ a charge density, and $\mathcal{M}$ an analogue of the mass density. The parameter $\tau$ is analogous to a momentum relaxation time, and related to the phenomenological graviton mass. This formula was already known from quantum critical hydrodynamics [@hkms], using computations valid as $\tau \rightarrow \infty$. Indeed, the latter term is nothing more than the Drude formula, valid in a system without quasiparticles, and the former is a quantum effect that can be important close to a particle-hole symmetric point [@patel2]. Mean field models always predict that (\[drude\]) holds even as $\tau\rightarrow 0$, or in the non-perturbative, strong disorder regime. 
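Concretely, the behavior of (\[drude\]) can be sketched numerically; the values of $\sigma_{\textsc{q}}$, $\mathcal{Q}$ and $\mathcal{M}$ below are illustrative placeholders, not taken from any model above:

```python
# Hedged numerical sketch of the mean-field dc conductivity formula
# sigma = sigma_Q + Q^2 * tau / M, with illustrative placeholder values.
sigma_Q = 1.0   # "quantum critical" contribution, disorder independent
Q = 2.0         # charge density
M = 5.0         # analogue of the mass density

def sigma_dc(tau: float) -> float:
    """Mean-field dc conductivity: Drude-like term plus sigma_Q."""
    return sigma_Q + Q ** 2 * tau / M

# The Drude-like term dominates at large tau; as tau -> 0 the conductivity
# remains bounded below by sigma_Q.
sigmas = [sigma_dc(tau) for tau in (10.0, 1.0, 0.1, 0.0)]
```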
In this limit $\tau$ cannot be interpreted as the momentum relaxation time directly, but importantly, $\sigma$ stays larger than $\sigma_{\textsc{q}}$, which is the conductivity when $\mathcal{Q}=0$ (an uncharged theory).[^2] And while mean field models do agree with approaches that explicitly break translational symmetry weakly [@btv; @lss; @lucas1411], this is simply a consequence of the perturbative equivalence between holographic and memory function computations of transport, proven in many cases in [@lucas].
One might suspect that the fact that (\[drude\]) holds as $\tau\rightarrow 0$ is a sign that mean field physics is a poor description of a strongly disordered QFT, even in holography. For example, it is well known that mean field descriptions of disorder can completely fail to capture even basic thermodynamics of strongly disordered spin models in classical statistical physics – instead, the emergent phases are spin glasses and must be treated using much more delicate technologies [@spinglass].
Our work in Section \[sec5\] demonstrates that our hydrodynamic framework gives an independent framework in which the qualitative picture of dc transport given in (\[drude\]) *is correct at all disorder strengths* until the hydrodynamic description fails. As an important example, we argue that for an isotropic quantum critical system where viscous transport may be neglected, $$\sigma_{\textsc{q}1}(u) + \sigma_{\textsc{q}2}(u)\frac{\mathcal{Q}_0^2}{u^2} \le \sigma \lesssim \sigma_{\textsc{q}3}(u) + \sigma_{\textsc{q}4}(u) \frac{\mathcal{Q}_0^2}{u^2}, \label{eq2}$$ with $\sigma$ the dc electrical conductivity, $\mathcal{Q}_0$ the spatial average of the charge density $\mathcal{Q}$, $u$ the typical size of fluctuations in $\mathcal{Q}$ about this average, and $\sigma_{\textsc{q}1,2,3,4}$ are related to the “quantum critical" diffusive conductivity, $\sigma_{\textsc{q}}$.[^3] As $u\rightarrow 0$, they are all proportional to the constant $\sigma_{\textsc{q}}$ associated with the translationally invariant QFT. At stronger $u$, $\sigma_{\textsc{q}1,2,3,4}$ may be complicated (spatially-averaged) correlations between $\mathcal{Q}$ and $\sigma_{\textsc{q}}$: see (\[saa\]), (\[sq2\]) and (\[eq125\]).
We have written (\[eq2\]) in the manner we did to emphasize asymptotic behavior. When $u\ll \mathcal{Q}_0$, $\sigma \sim u^{-2}$ if $\mathcal{Q}_0 \ne 0$; this is a direct consequence of the fact that currents can only decay due to the very slow relaxation of momentum: in a translationally invariant (momentum conserving) system at finite $\mathcal{Q}_0$, Galilean invariance imposes $\sigma=\infty$.[^4] In contrast, when $u\gg \mathcal{Q}_0$, $\sigma$ is sensitive to the typical behavior of the local $\sigma_{\textsc{q}}$ and is not parametrically larger. Remarkably, $\sigma_{\textsc{q}1}>0$ in any system where the local quantum critical conductivity never vanishes, so any such system is provably a conductor. The physical intuition behind this is that a current can always flow locally due to finite $\sigma_{\textsc{q}}$, and so if these local currents can always flow, a global current flow can necessarily be established. The upper bound is simply a statement that (up to subtleties involving conservation laws) we can bound the power dissipated (and accordingly the conductance), with the average electric field fixed, by assuming that the electric field within the system is uniform. When $u\ll \mathcal{Q}_0$, we can make contact with the memory matrix formalism and exactly compute $\sigma$ perturbatively, as we discuss in Section \[sec3\], and so in this regime we can do better than the bounds in (\[eq2\]). Indeed, in this regime, we can identify $\tau/\mathcal{M} \approx \sigma_{\textsc{q}}/u^2$, and so our bounds justify (\[drude\]) in our class of hydrodynamic models. Analogous phenomena may be responsible for the finite conductivity of mean field holographic models at all “disorder strengths".
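The coherent-to-incoherent crossover implied by these asymptotics can be sketched numerically; collapsing every prefactor in (\[eq2\]) to a single constant $\sigma_{\textsc{q}}$ is an illustrative assumption, not a result derived above:

```python
# Illustrative sketch of the asymptotics of (eq2): model both bounds by
# sigma(u) ~ sigma_Q * (1 + Q0^2 / u^2), with every prefactor collapsed
# to a single constant sigma_Q purely for illustration.
sigma_Q, Q0 = 1.0, 1.0

def sigma_of_u(u: float) -> float:
    """Coherent (u << Q0, sigma ~ u^-2) to incoherent (u >> Q0) crossover."""
    return sigma_Q * (1.0 + Q0 ** 2 / u ** 2)

sigma_coherent = sigma_of_u(1e-3)    # diverges as u -> 0 at nonzero Q0
sigma_incoherent = sigma_of_u(1e3)   # saturates near sigma_Q
```

The crossover scale sits at $u \sim \mathcal{Q}_0$, matching the sketch in Figure \[fig1\].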
A pictorial summary of (\[eq2\]) is shown in Figure \[fig1\], and the main quantitative result of this paper is the justification for Figure \[fig1\] without any mean-field treatment of disorder, and the development of new techniques to address the strongly disordered regime.
![A qualitative sketch of the coherent-incoherent transition realizable in our framework. $\sigma$ denotes the value of a transport coefficient, such as electrical conductivity, and $u$ denotes the “strength of randomness". The solid black line shows our perturbative analytic computation of $\sigma \sim u^{-2}$ as $u\rightarrow 0$. The dashed red line is the qualitative prediction of mean field models that $\sigma$ saturates at a finite value at strong disorder in a theory with quantum critical transport; in particular, $\sigma\ge \sigma^*$. The gray shaded region corresponds to the region of $\sigma$ allowed by variational bounds on $\sigma$, in generic agreement with mean field models. $u\sim \mathcal{Q}_0$ is the scale of the crossover between a coherent and incoherent metal. []{data-label="fig1"}](fig1hydro.pdf){width="3.5in"}
After we had completed this work, [@donos1506; @donos1507] appeared, which have some overlap with ideas in Section \[sec4\].
Motivation: Beyond Resistor Lattices
------------------------------------
Though the main quantitative focus of this work is a set of computational tools to study hydrodynamic transport in relativistic fluids, such as in holography, we also emphasize that the framework we are developing (with suitable generalizations) is sensible for a description of transport in strongly interacting condensed matter systems, without any reference to holography. A common approximation made in condensed matter is what we will refer to below as the “resistor lattice" approximation, which in physical terms is the statement that the slow, hydrodynamic sector of the theory consists of only a conserved charge. One may then model the emergent hydrodynamics – a simple diffusion equation for charge – as a local resistor network: see e.g. [@ruzin; @halperin]. As mentioned before, this is sensible if electrons scatter more frequently from impurities than they do from each other.
However, we will point out in Section \[sec32\] that this approximation fails in a clean hydrodynamic system: the necessary resistor lattice becomes nonlocal. This is not a surprise. What this paper clarifies is the technique that unifies the computation of transport in a weakly disordered (memory matrix) regime and a strongly disordered regime. In doing so, we generalize well-known variational techniques from resistor networks to account for convective transport. In very special cases, [@andreev] performed similar calculations, though without elucidating the connections with the memory matrix formalism, or with the resistor lattice technologies, which we generalize directly in the continuum in this paper. Such resistor lattice methods – commonly with an additional approximation called effective medium theory [@landauer; @kirkpatrick] – have been used recently to study transport in a variety of experimentally realizable systems [@meir; @sarma; @demler]. Our approach can generalize these computations to the regime when disorder is weak, and may result in interesting new experimental predictions.
We emphasize that the calculations in [@meir; @sarma; @demler] typically include non-relativistic effects such as Coulomb screening, or additionally approximate that electron-hole recombination is slow enough that both the electron and hole densities are hydrodynamic quantities. We will not make either assumption in this paper, but the general framework and many computational methods we develop almost certainly extend quite naturally to account for these effects.
Steady-State Hydrodynamics {#sec2}
==========================
We consider a strongly coupled QFT in $d$ spatial dimensions at finite temperature and density, on a flat spacetime. It is necessary to generalize to curved spaces to connect with the results of Section \[sec4\], but every result in this paper generalizes in the obvious way (replacing partial derivatives with covariant derivatives, $\int \mathrm{d}^d\mathbf{x} \rightarrow \int \mathrm{d}^d\mathbf{x} \sqrt{g}$, etc.), and so we will not do so explicitly for ease of presentation. Without quasiparticles, the long time dynamics are that of charge, energy and momentum. In this section, we will work with relativistic notation, though the techniques work for non-relativistic theories as well. We focus on theories with a single conserved charge, but the techniques straightforwardly generalize to theories with multiple conserved charges. Note that we will work in units with $\hbar=1$.
We deform the microscopic Hamiltonian $H$ by an external chemical potential: $$H \rightarrow H - \int \mathrm{d}^{d}\mathbf{x} \; \bar A_\mu J^\mu,$$ with $J^\mu$ a conserved electrical current, $\bar F=\mathrm{d}\bar A$, and $$\bar A = \bar \mu(\mathbf{x}) \mathrm{d}t,$$ so if $\mathcal{Q}(\mathbf{x})$ is the local charge density operator, $$H \rightarrow H - \int \mathrm{d}^d\mathbf{x} \; \bar\mu(\mathbf{x}) \mathcal{Q}(\mathbf{x}).$$ The chemical potential in the fluid is thus $\bar \mu$. We also assume that the temperature is uniformly $T$, and that there is no fluid velocity, in our background state. This forms the basis of a consistent solution to the hydrodynamic equations, driven by the coupling $\bar\mu$ to an external bath, as we will derive below. The steady-state hydrodynamic equations read (in relativistic notation) [@hkms]
\[hydroeq\]$$\begin{aligned}
\partial_i T^{i\mu} &= \bar F^{\mu\nu}J_\nu, \\
\partial_i J^i &= 0, \end{aligned}$$
where Greek indices denote spacetime indices, Latin indices denote spatial indices, and $T^{\mu\nu}$ is the energy-momentum tensor. We have implicitly taken expectation values of all operators in (\[hydroeq\]) and will do so for the remainder of the paper. Because we have sourced disorder in our fluid entirely through $\bar\mu(\mathbf{x})$, we do not need to couple any other dynamical sectors to the theory, though we will point out how this may be done perturbatively in Section \[sec:scalar\], when additional scalars contribute to disorder. The coupling of the fluid to an external chemical potential means that both energy and momentum may be exchanged with the external bath.
In order for hydrodynamics to be valid, it is necessary that $\bar\mu$ vary slowly in space, on a length scale $\xi$ which is large compared to the (possibly position-dependent) mean free path of the fluid $l$. In our strongly interacting fluid, $l$ is the analogue of the electron-electron scattering length in traditional solid-state physics. Without quasiparticles, it is best interpreted as the minimal length scale at which a hydrodynamic description is sensible. The requirement that $\bar\mu$ vary slowly is often written as $$\left|\frac{\partial_x \bar\mu}{\bar\mu}\right| \ll \frac{1}{l}, \label{eq5}$$ though this should not be taken literally ($\bar \mu$ may vary slowly through $\bar \mu=0$). The requirement we will assume henceforth in calculations is that, in Fourier space, $\bar\mu(\mathbf{k})$ is only non-negligible for $|\mathbf{k}|\xi \lesssim 1$. It is not necessary that $\bar \mu$ be approximately the same at all points in space:[^5] disorder can be non-perturbative, with hydrodynamic coefficients such as viscosity and charge density, contained within $T^{\mu\nu}$ and $J^\mu$ in (\[hydroeq\]), varying substantially over distances large compared to $l$; see Figure \[figfluid\]. This was noted in [@andreev] as well.
![We employ a separation of 3 length scales in this paper. $\bar\mu$, and the local fluid properties such as entropy density $\mathcal{S}$, may vary substantially over the distance scale $\xi$. We require $l \ll \xi$ for a hydrodynamic description to be sensible. We will often put our fluids in a large but finite box of length $L\gg \xi$ as well.[]{data-label="figfluid"}](fig2fluid.pdf){width="5in"}
In a quantum critical theory of dynamical exponent $z$, one finds $$l \sim T^{-1/z} \label{lT1z}$$ by dimensional analysis [@sachdev]. Note that (\[eq5\]) does not hold as $T\rightarrow 0$ – it is thus important that we are considering the finite temperature response of the QFT. (\[lT1z\]) may be modified in models where the hydrodynamic limit can persist in regimes where $\bar\mu \gg T$, such as in holography [@davisoncold] (in this particular model one seems to find $l\sim \bar\mu^{-1}$), or when the expectation value of neutral scalar fields is large. Explicit computations of $l$ are beyond the scope of this paper but are necessary to properly understand the regime of validity of hydrodynamics. A conservative requirement is certainly to fix the background temperature $T$ to be uniform and large enough that $\xi \gg T^{-1/z}$ at all points in space, but this may be too strict, as we will see in holographic models.
More liberally, one could only require that $\xi \gg l$ hold locally, with short wavelength disorder in “hot" regions of space with small $l$, and long wavelength disorder in “cold" regions of space with large $l$. So long as the solution to the hydrodynamic equations of motion itself varies slowly in the cold regions with large $l$, then our hydrodynamic formalism should be an acceptable description of transport.
When (\[eq5\]) holds, it is a sensible approximation (and standard in condensed matter physics) to assume that thermodynamic and hydrodynamic coefficients, such as viscosity $\eta$ or charge density $\mathcal{Q}$, are *local* and depend only on $\bar \mu(\mathbf{x})$. We then – very crudely speaking – put together pieces of homogeneous fluid of width $\xi$, whose equations of state are translation invariant, and smooth over the fluctuations from piece to piece. Our approach to transport is to focus on the response of the low energy, hydrodynamic degrees of freedom, exactly treating their evolution across the slowly varying background fluid, as we now detail.
For holographic theories, we do *not* need to make the assumption $\xi \gg l$, or the assumption that all transport coefficients are functions of $\bar\mu$ alone. It is remarkable that the mathematical framework we develop in this paper is nonetheless applicable to so many holographic computations.
Linear Response: A Warm-Up
--------------------------
Let us begin with some simple calculations to get an intuitive feeling for hydrodynamic transport. We work with first order hydrodynamics, and will justify this later in the section. We will also assume that our theory is isotropic, another assumption which we relax later.
A first simple case to consider is when the only slow dynamics in the system are of charge. As we mentioned previously, in this case the dynamics reduce to the solution of a diffusion equation: $$\partial_i \delta J_i = \partial_i \left(\sigma^{\mathrm{loc}}(\delta E_i-\partial_i \delta\tilde\mu)\right) = 0, \label{eqjsimple}$$ where $\delta E_i$ is the infinitesimal, constant, externally applied electric field, $\delta J_i$ is the infinitesimal electric current, and $\delta\tilde\mu$ is the infinitesimal local chemical potential, excited in response to $\delta E_j$. $\sigma^{\mathrm{loc}}(\mathbf{x})$ is a transport coefficient which is inhomogeneous in space, and can be interpreted as the local conductivity of the theory. This approximation is well known in condensed matter physics, and we mentioned it in the introduction. Note that chemical potential gradients are equivalent to electric fields in the hydrodynamic equations of motion. The electrical conductivity of such a system is defined as $$\mathbb{E}[\delta J_i] = \sigma_{ij}\delta E_j,$$ where $\delta J_i$ is evaluated on the unique solution to (\[eqjsimple\]) where $\delta\mu$ obeys sensible boundary conditions (e.g., periodicity when disorder is periodic with period $L$), and we have denoted with $\mathbb{E}[\cdots]$ a uniform spatial average. If the disorder is isotropic, then $\sigma_{ij} = \sigma \delta_{ij}$. Note that $\sigma$ is not equivalent to $\mathbb{E}[\sigma^{\mathrm{loc}}]$. Henceforth, we will define $$\delta\mu \equiv \delta\tilde\mu - x_j \delta E_j,$$ so that (\[eqjsimple\]) can be written compactly as $$\partial_i \delta J_i = -\partial_i \left(\sigma^{\mathrm{loc}}\partial_i \delta\mu\right).$$
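This purely diffusive transport problem can be solved numerically; the following is an illustrative one-dimensional, finite-difference sketch (not from the paper), which recovers the fact that the effective conductivity is the harmonic mean of $\sigma^{\mathrm{loc}}$ in one dimension, manifestly different from $\mathbb{E}[\sigma^{\mathrm{loc}}]$:

```python
import numpy as np

# Hedged numerical sketch: solve d/dx( sigma_loc(x) (E - d(mu)/dx) ) = 0 on
# a periodic 1d lattice and extract the effective dc conductivity. In 1d
# the exact answer is the harmonic mean of the local conductivities.
rng = np.random.default_rng(1)
N, h, E = 64, 1.0, 1.0
sigma_loc = rng.uniform(0.5, 2.0, size=N)   # face conductivities, periodic

# Current conservation at each node: J_{i+1/2} - J_{i-1/2} = 0, with
# J_{i+1/2} = sigma_i (E - (mu_{i+1} - mu_i)/h). Build A mu = b for the
# periodic potential mu, pinning mu_0 = 0 to fix the gauge freedom.
A = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    ip, im = (i + 1) % N, (i - 1) % N
    s_r, s_l = sigma_loc[i], sigma_loc[im]
    A[i, i] += (s_r + s_l) / h
    A[i, ip] -= s_r / h
    A[i, im] -= s_l / h
    b[i] = (s_l - s_r) * E
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0    # gauge fixing
mu = np.linalg.solve(A, b)

# The current is uniform on the solution, and sigma_eff is the harmonic mean.
J = sigma_loc * (E - (np.roll(mu, -1) - mu) / h)
sigma_eff = J.mean() / E
sigma_harm = 1.0 / np.mean(1.0 / sigma_loc)
```

The arithmetic mean $\mathbb{E}[\sigma^{\mathrm{loc}}]$ overestimates $\sigma_{\mathrm{eff}}$, which is the uniform-electric-field variational bound discussed in the introduction.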
Let us now account for convective transport – this means that momentum is a long lived quantity and must be included in hydrodynamics. If we neglect thermal transport, then we must modify (\[eqjsimple\]) to account for the convective contributions to charge: $$\partial_i \delta J_i = \partial_i \left(\mathcal{Q}\delta v_i-\sigma^{\textsc{q}}\partial_i \delta\mu\right) = 0,$$ where $\mathcal{Q}$ is the charge density, and $\delta v_i$ is the velocity field in the fluid. $\sigma^{\textsc{q}} \ne \sigma^{\mathrm{loc}}$ is a “quantum critical" transport coefficient corresponding to the flow of a current in the absence of any velocity field [@hkms]. The momentum conservation equation allows us to determine $\delta v_i$, and we will show more carefully below that this equation is the following analogue of the Navier-Stokes equation: $$\mathcal{Q} \partial_i \delta\mu = \partial_j \left( \eta \partial_j \delta v_i + \eta \partial_i\delta v_j+ \delta_{ij}\left(\zeta - \frac{2\eta}{d}\right)\partial_k \delta v_k \right)$$ with $\eta$ the shear viscosity and $\zeta$ the bulk viscosity. As before, $\mathcal{Q}$, $\sigma^{\textsc{q}}$, $\zeta$ and $\eta$ can all depend on position $\mathbf{x}$, though not in an arbitrary way. As we discussed previously, the $\mathbf{x}$-dependence of all these coefficients is fixed by their dependence on $\bar\mu(\mathbf{x})$, as determined in a locally homogeneous fluid.
Linear Response: Complete Theory
--------------------------------
Let us now describe the complete linear response theory which includes the response of temperature, chemical potential and velocity to external fields.
We perturb the system about our background fluid with an infinitesimal electric field $$\delta E_i = \mathbb{E}\left[- \partial_i \delta\mu\right],$$ and an infinitesimal temperature gradient $$\delta\zeta_i = \mathbb{E}\left[-\frac{1}{T} \partial_i \delta T\right].$$ Defining the heat current $$Q^i \equiv T^{it}- \bar\mu J^i,$$ we find that the heat current is conserved (divergenceless): $$\partial_i Q^i = \partial_i T^{it} - \partial_i (\bar \mu J^i) = - \bar F_{ti}J_i -J_i \partial_i \bar \mu = 0.$$ The thermoelectric response of the theory is given by the matrices: $$\left(\begin{array}{c} \mathbb{E}[\delta J_i] \\ \mathbb{E}[\delta Q_i] \end{array}\right) = \left(\begin{array}{cc} \sigma_{ij} &\ \alpha_{ij} T \\ \bar\alpha_{ij} T &\ \bar\kappa_{ij} T\end{array}\right) \left(\begin{array}{c} \delta E_j \\ \delta\zeta_j \end{array}\right). \label{jq}$$
Let us define
$$\begin{aligned}
\delta\Phi^\alpha &\equiv \left(\begin{array}{c} \delta\mu \\ T^{-1} \delta T\end{array}\right), \\
\delta F^\alpha_i &\equiv \left(\begin{array}{c} \delta E_i \\ \delta\zeta_i \end{array}\right), \\
\delta\mathcal{J}^\alpha_i &\equiv \left(\begin{array}{c} \delta J_i \\ \delta Q_i \end{array}\right), \end{aligned}$$
where the $\alpha$ vector index denotes charge (q) or heat (h). Note that we may write bold-face vectors below, but this always refers to spatial indices only – we will always write out the $\alpha$ index explicitly in equations. We then may write (\[jq\]) as $$\mathbb{E}[\delta\mathcal{J}^\alpha_i] = \sigma^{\alpha\beta}_{ij} \delta F^\beta_j.$$
We write down the gradient expansion of hydrodynamics to first order in derivatives acting on $\delta T$, $\delta\mu$ and $\delta v_i$, by expanding the stress tensor and charge current in terms of the linear response $\delta T$, $\delta\mu$ and $\delta v_i$ of the fluid. The charge and heat conservation equations of the fluid may be written as
$$\begin{aligned}
0 &= \partial_i \left(\mathcal{Q}\delta v_i - \sigma^{\textsc{q}}_{ij}\partial_j \delta\mu - \alpha^{\textsc{q}}_{ij}\partial_j \delta T\right), \\
0 &= \partial_i \left(T\mathcal{S}\delta v_i - T \bar\alpha^{\textsc{q}}_{ij}\partial_j \delta\mu - \bar\kappa^{\textsc{q}}_{ij}\partial_j \delta T\right),\end{aligned}$$
which we henceforth package into the more compact form $$0 = \partial_i \delta\mathcal{J}^\alpha_i = \partial_i \left[\rho^\alpha \delta v_i - \Sigma^{\alpha\beta}_{ij} \partial_j \delta\Phi^\beta\right], \label{maineq1}$$ where $$\rho^\alpha \equiv \left(\begin{array}{c} \mathcal{Q} \\ T\mathcal{S} \end{array}\right),$$ where $\mathcal{Q}$ is the electric charge density and $\mathcal{S}$ is the entropy density. $\Sigma^{\alpha\beta}_{ij}$ correspond to diffusive transport coefficients that couple charge and heat flows to gradients in $\delta\mu$ and $\delta T$, even in the absence of any convective (non-vanishing $v_i$) fluid motion. In particular, $\Sigma^{\text{qq}}_{ij} = \sigma^{\textsc{q}}_{ij}$ corresponds to the “quantum critical" conductivity and is typically assumed to vanish in a non-relativistic theory without particle-hole symmetry, as in [@andreev]. $\Sigma^{\mathrm{qh}}_{ij} = \alpha^{\textsc{q}}_{ij}$ corresponds to an intrinsic diffusive conductivity that couples charge and heat flows. In standard non-relativistic theories, only $\Sigma^{\mathrm{hh}}_{ij} = \bar\kappa^{\textsc{q}}_{ij}$ is non-vanishing, as in [@andreev]. All three are non-zero in relativistic systems [@hkms]. We assume that $$\Sigma^{\alpha\beta}_{ij} = \Sigma^{\beta\alpha}_{ji}, \label{diffsym1}$$ and that locally $\Sigma$ is a positive definite matrix (note that $\alpha i$ and $\beta j$ group together for the purposes of matrix inversion). This is sensible from the point of view of the second law of thermodynamics. Indeed, in isotropic theories, the second law provides more constraints on these transport coefficients than (\[diffsym1\]) alone [@hkms], but we can and will relax these constraints in the technical formalism we develop without substantially altering any physical content. This loose treatment of entropic constraints proves useful in Section \[sec4\].
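To illustrate why positive definiteness of $\Sigma$ is the natural entropic requirement, here is a toy numerical check (a sketch of ours, not part of the derivation): grouping the $(\alpha, i)$ indices into a single index, any positive definite $\Sigma$ yields a non-negative dissipation quadratic form $\partial\Phi \cdot \Sigma \cdot \partial\Phi$. The matrix and gradients below are randomly generated, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Group (alpha, i) into one index of size 2*d (two currents, d = 2 spatial
# dimensions). A positive definite Sigma obeying the symmetry (diffsym1) makes
# the local dissipation quadratic form grad(Phi) . Sigma . grad(Phi) >= 0.
n = 4
A = rng.normal(size=(n, n))
Sigma = A @ A.T + 0.1 * np.eye(n)   # generic symmetric positive definite matrix
assert np.all(np.linalg.eigvalsh(Sigma) > 0)
for _ in range(100):
    grad_phi = rng.normal(size=n)
    assert grad_phi @ Sigma @ grad_phi >= 0
```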
The momentum conservation equation becomes[^6] $$\begin{aligned}
\partial_i \delta P + \partial_j \delta\mathcal{T}_{ij} &= \delta\left[\mathcal{S}\partial_i T + \mathcal{Q}\partial_i \mu \right]+ \partial_j \delta\mathcal{T}_{ij}= \partial_i \delta\left( \Phi^\alpha \rho^\alpha \right) - \partial_j \left[ \eta_{ijkl} \partial_l \delta v_k\right] = \delta\mathcal{Q} \partial_i \bar\mu,\end{aligned}$$ where $\mathcal{T}_{ij}$ is the viscous stress tensor, $P$ is the pressure, and $\eta_{ijkl}$ is the viscosity tensor with symmetries $$\eta_{ijkl}= \eta_{jikl} = \eta_{ijlk} = \eta_{klij}, \label{diffsym2}$$ and we have used the fact that thermodynamic relations imply $$\partial_i P = \mathcal{S} \partial_i T + \mathcal{Q} \partial_i \mu. \label{dip}$$ Now since $T$ is constant on the background, and as the background $\mu$ is simply given by $\bar \mu$, we cancel the two $\delta\mathcal{Q}$ terms, and we are left with $$0 = \rho^\alpha\partial_i \delta\Phi^\alpha - \partial_j \left[ \eta_{ijkl} \partial_k \delta v_l\right]. \label{maineq2}$$ In the above equations, $\rho$, $\Sigma$ and $\eta$ are all smooth functions of $\bar\mu(\mathbf{x})$, varying on large length scales compared to $l$.
(\[dip\]), along with (\[hydroeq\]) and the fact that $\mathcal{J}^\alpha_i = \mathcal{T}_{ij}=0$ on the background solution, demonstrates that the background solution to the hydrodynamic equations indeed exists. $\mathcal{J}^\alpha_i=0$ on the background because the hydrodynamic equations only depend on $\bar\mu - \mu$, which identically vanishes [@hkms]. Though we expect that disorder implies that $\rho^\alpha$, $\Sigma^{\alpha\beta}_{ij}$ and $\eta_{ijkl}$ are all functions of $\bar\mu$ alone, we will not comment further on the precise nature of this dependence; a microscopic computation is necessary in general.
(\[maineq1\]) and (\[maineq2\]) are linear and have a unique solution when subject to appropriate boundary conditions. These boundary conditions will be periodic boundary conditions in a large box of size $L$ in all directions, up to non-trivial gradients $\delta F^\alpha_i = \mathbb{E}[-\partial_i \delta\Phi^\alpha]$. We also stress that $\delta\Phi^\alpha$ only enters the equations of motion through derivatives – this is crucial in order for the linear response problem to be well posed on spaces that are periodic or compact. Henceforth we will drop the $\delta$ so as to avoid clutter, with a few exceptions.
Having imposed these boundary conditions, we prove in Appendix \[apponsager\] that, for any hydrodynamic transport computation, $$\sigma^{\alpha\beta}_{ij} = \sigma^{\beta\alpha}_{ji}.$$ This is referred to as Onsager reciprocity, and is a non-trivial consistency check on this framework. Note that this condition is violated when time-reversal symmetry (in the microscopic Hamiltonian $H$) is broken, e.g. by a background magnetic field [@hkms]. We do not consider this possibility in this paper.
As mentioned previously, we have truncated the hydrodynamic gradient expansion at first order. Let us give some sensible, though non-rigorous, justifications for this. The hydrodynamic gradient expansion can be organized as follows:
$$\begin{aligned}
\mathcal{T}_{ij} &\sim l \mathcal{T}^{(1)}_{ij} + l^2 \mathcal{T}^{(2)}_{ij} + \cdots, \\
\mathcal{J}^\alpha_i - \rho^\alpha v_i &\sim l \mathcal{J}^{(1)\alpha}_i+ l^2 \mathcal{J}^{(2)\alpha}_i + \cdots
\end{aligned}$$
$\mathcal{T}^{(n)}_{ij}$ corresponds to the coefficient of the stress tensor carrying $n$ spatial derivatives; similarly for $\mathcal{J}^{(n)\alpha}_i$. This is a qualitative statement – the basic idea is that the coefficients of $\mathcal{T}_{ij}$ scale as $l^n \epsilon/v$ at $n^{\mathrm{th}}$ order in derivatives, e.g., with $\epsilon$ the energy density and $v$ a velocity scale such as the speed of sound, and so we have extracted out the overall scaling in $l$ above. Assuming that the solution $\Phi$ and $v_i$ varies over the length scale $\xi \gg l$, we see that higher derivative corrections to the charge, heat and momentum currents are suppressed by powers of $l/\xi$, and thus can be neglected. In the special case where diffusive charge and heat transport dominates, this argument can be made rigorous. When the convective contributions cannot be ignored, this argument is not rigorous – not all coefficients $\rho$, $\Sigma$ and $\eta$ scale as the same power of $l$, in general, and so it is not obvious that $\Phi$ and $v_i$ must vary on the length scale $\xi$. However, this is still a plausible assumption – rapid oscillations of $\Phi$ and $v_i$ on length scales short compared to $\xi$ seem unphysical in a static solution, since static solutions to dissipative hydrodynamics tend to be “as close as possible" to equilibrium, given the boundary conditions; the variational methods we will develop in this paper also suggest that it is unlikely to have fast variations of $\Phi$ and $v_i$. This general framework readily generalizes to account for higher derivative corrections to hydrodynamics, if one wishes to directly include them, but we will not do so in this paper. (\[maineq1\]) and (\[maineq2\]) are not well-posed until we include first order corrections to hydrodynamics, so it is necessary to work at least to this order in the gradient expansion.
In the absence of other dynamical sectors of the theory, it is necessary that either $\mathcal{S}$ or $\mathcal{Q}$ be position dependent in order to obtain finite thermoelectric conductivities. Indeed, if both $\mathcal{S}$ and $\mathcal{Q}$ are constants, there is a zero mode in (\[maineq1\]) and (\[maineq2\]) corresponding to uniform shifts in $v_i$ and $\mathcal{J}^\alpha_i$. This zero mode is responsible for infinite dc transport coefficients in a fully translation invariant theory. Mathematically, we could break translation invariance only in $\Sigma$ or $\eta$ and still have this zero mode. However, in a microscopic theory $\Sigma$, $\eta$, $\mathcal{S}$ and $\mathcal{Q}$ are not arbitrary but are fixed by equations of state that relate these parameters to $\bar\mu$, so in general both will be inhomogeneous. Alternatively, as we will discuss in Section \[sec:scalar\], it is possible to add other dynamical disordered sectors of the theory which lead to finite conductivities even when $\mathcal{S}$ and $\mathcal{Q}$ are constants.
Let us also briefly mention the issue of momentum relaxation times. In many holographic mean field models of disorder, the momentum relaxation time can be parametrically short [@gouteraux2]. In these hydrodynamic models, the “momentum relaxation time" is parametrically longer than the mean free time ($1/T$ in most quantum critical models). We do not explicitly compute this momentum relaxation time, and a single momentum relaxation time will not be easily definable when disorder is non-perturbative. We will see that it is possible to spoil coherent transport despite parametrically long lived local momentum currents.
It is also worth stressing that henceforth, when we refer to “hydrodynamic" transport, we mean that the transport equations take the form of (\[maineq1\]) and (\[maineq2\]). Remarkably, essentially all of our results rely only on the structure of these equations being obeyed, and not on $\mathcal{S}$ and $\mathcal{Q}$ obeying thermodynamic Maxwell relations – holographic horizon fluids do not obey any obvious Maxwell relations. If one has a microscopic system of interest, with known equations of state, one may simply take the above equations and solve them numerically. The point of this paper is thus less to describe complicated (numerical) solutions to these equations of motion, and more to elucidate simple and universal physical consequences of hydrodynamic transport: first through exact results for weakly disordered theories, and then through a combination of rigorous bounds and heuristic arguments for strongly disordered theories. Numerical solutions to these equations, and a discussion of their relevance to realistic quantum critical systems, will be presented elsewhere.
Weak Disorder Limit {#sec3}
===================
In this section, we specialize to the weak disorder limit in which slow momentum relaxation dominates the conductivities. In this limit we can make direct contact with the memory matrix formalism [@zwanzig; @mori; @forster], and provide physically transparent derivations of many previous results derived within this formalism, in the overlapping regime of validity of hydrodynamics and the memory function formalism. We simply quote the results of this approach (see e.g. [@lucasMM]): if the Hamiltonian of our weakly disordered system is $$H = H_0 - \int \mathrm{d}^d\mathbf{x}\; h(\mathbf{x})\mathcal{O}(\mathbf{x}),$$ with $H_0$ translation invariant, and $\mathcal{O}$ an operator in the theory coupled to the field $h$, then the memory matrix formalism predicts, at leading order in perturbation theory: $$\sigma^{\alpha\beta}_{ij} \approx \mathbb{E}\left[ \rho^\alpha \right]\mathbb{E}\left[\rho^\beta\right] \left[\sum_{\mathbf{k}} \; k_i k_j |h(\mathbf{k})|^2\left[ \lim_{\omega\rightarrow 0} \frac{\mathrm{Im}\left(G_{\mathcal{OO}}(\mathbf{k},\omega)\right)}{\omega}\right] \right]^{-1} \equiv \mathbb{E}\left[ \rho^\alpha \right]\mathbb{E}\left[\rho^\beta\right] \Gamma^{-1}_{ij}. \label{eqmm}$$ In some models of strange metals appropriate for modeling real materials, some care is required in defining $\mathcal{Q}$ [@patel]. Formally, the memory matrix formalism is exact, but it does not appear to be tractable in practice beyond leading order in higher dimensional models.
Let us briefly note our conventions in Fourier space. Fourier transforms are defined as $$\mathcal{O}(\mathbf{k}) = \frac{1}{L^d} \int \mathrm{d}^d\mathbf{x} \; \mathrm{e}^{-\mathrm{i}\mathbf{k}\cdot\mathbf{x}}\mathcal{O}(\mathbf{x}).$$ We will often assume that disordered sectors of the fluid have (zero-mean) Gaussian fluctuations: e.g., quenched disorder on $\mathcal{O}$ would scale as
$$\begin{aligned}
\mathbb{E}_{\mathrm{d}}[\mathcal{O}(\mathbf{k})] &= 0, \\
\mathbb{E}_{\mathrm{d}}[\mathcal{O}(\mathbf{k})\mathcal{O}(\mathbf{q})] &= \frac{V^2_{\mathcal{O}}}{N}\delta_{\mathbf{k},-\mathbf{q}}, \;\;\;\; (|\mathbf{k}|\xi \lesssim 1)\end{aligned}$$
where $N\gg 1$ represents the number of Fourier modes which “meaningfully contribute" to $\mathcal{O}(\mathbf{x})$, and $\mathbb{E}_{\mathrm{d}}$ denotes an average over quenched disorder: $$N \sim \left(\frac{L}{\xi}\right)^d.$$These definitions are chosen so that $\mathbb{E}[\mathcal{O}(\mathbf{x})^2] = V_{\mathcal{O}}^2$. We will use these conventions throughout the paper. Such disorder is consistent with (\[eq5\]). At a typical point in the fluid, $$\left|\frac{\partial_x \mathcal{O}}{\mathcal{O}}\right|^2 \sim \frac{\mathbb{E}[(\partial_x \mathcal{O})^2]}{\mathbb{E}[\mathcal{O}^2]} \sim \frac{\xi^{-2} V_{\mathcal{O}}^2}{V_{\mathcal{O}}^2} \sim \frac{1}{\xi^2}.$$ To obtain the numerator in the third step above, it is helpful to go to Fourier space, and note that $|\mathbf{k}| \lesssim \xi^{-1}$ for all non-negligible modes.
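These conventions admit a quick numerical sanity check (a sketch under the stated Gaussian-disorder assumptions; the mode count and amplitude below are arbitrary choices): drawing $N = 2M$ one-dimensional Fourier modes with $\mathbb{E}_{\mathrm{d}}[\mathcal{O}(\mathbf{k})\mathcal{O}(\mathbf{q})] = (V_{\mathcal{O}}^2/N)\delta_{\mathbf{k},-\mathbf{q}}$ and the reality constraint $\mathcal{O}(-\mathbf{k}) = \mathcal{O}(\mathbf{k})^*$ should reproduce $\mathbb{E}[\mathcal{O}(\mathbf{x})^2] = V_{\mathcal{O}}^2$ at a typical point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample O(x) = sum_k O(k) exp(ikx) at one point x, over many realizations,
# with E_d[O(k)O(q)] = (V^2/N) delta_{k,-q}, N = 2M modes, O(-k) = O(k)^*.
M, V, x, n_samp = 16, 1.5, 0.3, 20000
N = 2 * M
ks = np.arange(1, M + 1)
# complex amplitudes for k > 0 with E|a_k|^2 = V^2/N; k < 0 fixed by reality
a = (rng.normal(size=(n_samp, M)) + 1j * rng.normal(size=(n_samp, M))) * V / np.sqrt(2 * N)
samples = 2 * np.real(a @ np.exp(1j * ks * x))   # one O(x) value per realization
assert abs(samples.mean()) < 0.05            # zero-mean disorder
assert abs(samples.var() - V**2) < 0.1       # E[O(x)^2] = V^2
```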
In [@hkms] and many subsequent works, within the hydrodynamic approach to transport, the momentum transport equation is modified to $$\partial_j T^{ji} = -\frac{T^{ti}}{\tau} + \cdots,$$ where $T^{ti}$ is the momentum density, and $\tau$ is a phenomenological relaxation time that is subsequently computed using memory functions.[^7] However, as we will see in this section, at least for dc transport, it is actually not necessary to add in $\tau$ by hand. With weak disorder, the dc transport can be accounted for exactly from hydrodynamics, and (\[eqmm\]) recovered provided that the equations of motion properly account for disorder.
Interestingly, our hydrodynamic approach requires that the disorder always be long wavelength compared to $l$, whereas the memory matrix formalism has no such restriction. Nonetheless, at leading order in perturbation theory, we will recover the exact memory matrix formula for transport coefficients from hydrodynamic considerations. It is also worth noting that the memory function approach is equivalent to holographic computations of transport in their overlapping regime of validity [@lucas]. Thus, all three approaches give the same picture of transport, which is best physically understood in terms of this simple hydrodynamic framework (within its regime of validity).
Disorder Sourced by Scalar Operators {#sec:scalar}
------------------------------------
Let us begin with the case where the operator $\mathcal{O}$ is a scalar field. We assume that all hydrodynamic coefficients are $\mathbf{x}$-independent – $h$ is the only disordered parameter. In this case, (\[maineq2\]) must be modified: $$\rho^\alpha \partial_j \Phi^\alpha + \partial_i \mathcal{T}_{ij} = \delta\mathcal{O} \partial_j h. \label{maineq2scalar}$$ We place a $\delta$ on $\mathcal{O}$ to distinguish the response due to the electric field from the background. The scalar’s static equation of motion is [@kadanoff] $$\int \mathrm{d}^d\mathbf{y}\; G_{\mathcal{OO}}^{-1}(\mathbf{x}-\mathbf{y},\omega=0) \mathcal{O}(\mathbf{y}) = h(\mathbf{x}).$$ The Green’s function $G_{\mathcal{OO}}$ is the retarded Green’s function of the translationally invariant Hamiltonian $H_0$: in position space, $$G_{\mathcal{OO}}(\mathbf{x},t) \equiv \mathrm{i}\mathrm{\Theta}(t) \langle [\mathcal{O}(\mathbf{x},t),\mathcal{O}(\mathbf{0},0)]\rangle$$ with the average $\langle \cdots \rangle$ taken over quantum and thermal fluctuations, and $\mathrm{\Theta}$ the Heaviside step function. We emphasize that while $G_{\mathcal{OO}}$ is the true quantum Green’s function of $\mathcal{O}$, and an intricate quantum mechanical computation may be necessary to compute it, $G_{\mathcal{OO}}$ does play the role of the coefficient of proportionality in the linear response of macroscopic, thermal expectation values.
Let us make an ansatz for the solution to the hydrodynamic equations, and show that it is consistent with all conservation laws. Our ansatz is that the only divergent terms in linear response, as $h\rightarrow 0$, are $\mathbf{v} \sim h^{-2}$ and $\delta\mathcal{O} \sim h^{-1}$. Furthermore, at leading order $\mathbf{v}=\mathbf{v}_0$ is a constant. All other spatially dependent response is $\mathrm{O}(h^0)$ and we will see that it can be neglected in the computation of $\sigma_{ij}$ at leading order.
The leading order response of $\mathcal{O}$ in the $h\rightarrow 0$ limit is best computed in the rest frame of the fluid, which has shifted. It is simplest to Fourier transform to momentum space as well: $$\mathcal{O}(\mathbf{k},-\mathbf{k}\cdot\mathbf{v}_0)_{\mathrm{co-moving}} = G_{\mathcal{OO}}(\mathbf{k}, -\mathbf{k}\cdot\mathbf{v}_0) h(\mathbf{k},-\mathbf{k}\cdot\mathbf{v}_0)_{\mathrm{co-moving}}.$$ Here, everything is measured in the co-moving frame of the fluid, and so the only non-vanishing $h$ (and therefore $\mathcal{O}$) will have this special relation between $\mathbf{k}$ and $\omega$. Note, of course, that $\mathcal{O}(\mathbf{k},-\mathbf{k}\cdot\mathbf{v}_0)_{\mathrm{co-moving}} = \mathcal{O}(\mathbf{k})$, as measured in the original rest frame, and similarly for $h(\mathbf{k})$. We wish to keep only the linear response coefficient, proportional to $\mathbf{v}_0$:[^8] $$\delta\mathcal{O}(\mathbf{k}) = -h(\mathbf{k})\frac{\partial G_{\mathcal{OO}}(\mathbf{k},0)}{\partial \omega} \mathbf{k}\cdot\mathbf{v}_0 = -\mathrm{i}h(\mathbf{k}) \left[ \lim_{\omega\rightarrow 0} \frac{\mathrm{Im}\left(G_{\mathcal{OO}}(\mathbf{k},\omega)\right)}{\omega}\right] \mathbf{k}\cdot \mathbf{v}_0.$$ In the latter equality we have used reality properties of Green’s functions, and assumed analyticity near the real axis for $\mathbf{k}\ne \mathbf{0}$.
Now, let us study the momentum conservation equation, averaged over space, so the derivatives of the stress tensor do not contribute: $$\begin{aligned}
0 &= \sum_{\mathbf{k}} \delta\mathcal{O}(\mathbf{k}) (-\mathrm{i}k_j h(-\mathbf{k}))+ \rho^\alpha F^\alpha_j \notag \\
&= -\sum_{\mathbf{k}} \; k_i k_j |h(\mathbf{k})|^2\left[ \lim_{\omega\rightarrow 0} \frac{\mathrm{Im}\left(G_{\mathcal{OO}}(\mathbf{k},\omega)\right)}{\omega}\right] v_{0i} + \rho^\alpha F^\alpha_j = -\Gamma_{ij}v_{0j} + \rho^\alpha F^\alpha_i.\end{aligned}$$ At leading order, the electric current is uniform: $$\mathcal{J}^\alpha_i \approx \rho^\alpha v_{0i} = \rho^\alpha \rho^\beta \Gamma^{-1}_{ij}F^\beta_j, \label{jpv}$$ which gives us (\[eqmm\]). It is straightforward to generalize to the case where there are multiple types of scalar fields.
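For concreteness, here is a sketch (ours, not from the text) of how such a $\Gamma_{ij}$ and the resulting conductivity could be assembled numerically for a given disorder realization. The Lorentzian spectral weight standing in for $\lim_{\omega\to 0}\mathrm{Im}\,G_{\mathcal{OO}}/\omega$, and all parameter values, are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assemble Gamma_ij = sum_k k_i k_j |h(k)|^2 S(k), with S(k) a toy model of
# the zero-frequency spectral weight, then sigma = rho^2 Gamma^{-1}.
d = 2
ks = rng.normal(size=(500, d))            # disorder wavevectors
h2 = rng.exponential(size=500) * 1e-3     # |h(k)|^2, weak disorder
S = 1.0 / (1.0 + np.sum(ks**2, axis=1))   # toy Lorentzian spectral weight
Gamma = np.einsum('n,ni,nj->ij', h2 * S, ks, ks)
rho = 0.8                                 # charge density
sigma = rho**2 * np.linalg.inv(Gamma)
assert np.allclose(sigma, sigma.T)        # Onsager: sigma_ij = sigma_ji
assert np.all(np.linalg.eigvalsh(sigma) > 0)
```

Note that the symmetry and positivity of $\sigma_{ij}$ here follow automatically from $\Gamma_{ij}$ being a positively weighted sum of outer products $k_i k_j$.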
Let us now argue that the ansatz (and thus results) we have found are self-consistent. If we do not average the momentum conservation equation over space, then the $\delta\mathcal{O}\partial_j h$ term is not translationally invariant, and this will induce corrections to $T$, $\mu$ and $\mathbf{v}$ which are spatially varying. However, $\delta\mathcal{O} \sim h^{-1}$ and so these spatially varying corrections will be $\sim h^0$. Indeed, it is easy to see that (\[maineq1\]) and (\[maineq2scalar\]) are consistent with the leading order inhomogeneous response (except in $\delta\mathcal{O}$) arising at this subleading order. Thus, our ansatz is indeed correct in the asymptotic limit $h\rightarrow 0$, and we have derived from hydrodynamic principles the momentum relaxation times derived via the memory function formalism. The computation above is completely analogous to the holographic computation of [@lucas].
Disorder Sourced by Chemical Potential {#sec32}
--------------------------------------
In this section, we consider the case where $$\bar\mu = \mu_0 + \epsilon \hat\mu,$$ with $\mathbb{E}[\hat\mu]=0$, and $\epsilon \ll 1$ a small perturbative parameter. Alternatively, we can write $$\bar\Phi^\alpha = \Phi_0^\alpha + \epsilon \hat\Phi^\alpha$$ with $\bar\Phi^{\mathrm{h}}=T\mathcal{S}$ and $\hat\Phi^{\mathrm{h}}=0$ – this will be more compact notation for subsequent manipulations. We will denote by $\hat\rho^\alpha$ the fluctuations in the charge and entropy densities associated with $\hat\mu$ – in general both will be non-zero. For simplicity, we assume that the background fluid is isotropic, though the technique certainly generalizes (albeit with more cumbersome calculations). [@dsz] studied similar problems employing hydrodynamic Green’s functions in the memory matrix formalism.
Again, we make the ansatz that the only response at $\mathrm{O}(\epsilon^{-2})$ is a constant velocity field $\mathbf{v}_0$, so that $\mathcal{J}^\alpha_i$ is again approximated by (\[jpv\]). At $\mathrm{O}(\epsilon^{-1})$, there are $\mathbf{x}$-dependent corrections to $T$, $\mu$, and $\mathbf{v}$. A similar calculation to before gives $$\Gamma_{ij} = \sum_{\mathbf{k}} \; k_ik_j \hat\rho^{\alpha}(-\mathbf{k}) \left[\frac{1}{\eta^\prime}\rho^\alpha_0 \rho^\beta_0 + k^2 \Sigma^{\alpha\beta}\right]^{-1} \hat \rho^\beta(\mathbf{k}) \equiv\sum_{\mathbf{k}} \; k_ik_j \hat\rho^{\alpha}(-\mathbf{k}) (\mathfrak{m}^{-1})^{\alpha\beta} \hat \rho^\beta(\mathbf{k}), \label{gammaij32}$$with $$\eta^\prime \equiv \eta \left(2-\frac{2}{d}\right) +\zeta,$$ with $\eta$ the shear viscosity and $\zeta$ the bulk viscosity. We provide more details in Appendix \[apppert\].
Let us briefly discuss some simplistic limiting cases of (\[gammaij32\]) and give some analytic insight into the solutions – in general, the solutions will be more complicated than what we write here.
First, let us begin with the case with $\eta^\prime \rightarrow \infty$ and $\hat\rho^{\mathrm{h}} \approx 0$. Suppose further that $\hat\rho^{\mathrm{q}}$ are Gaussian disordered random variables:
$$\begin{aligned}
\mathbb{E}_{\mathrm{d}}[\hat\rho^\alpha(\mathbf{k})] &= 0, \\
\mathbb{E}_{\mathrm{d}}[\hat\rho^{\mathrm{q}}(\mathbf{k})\hat\rho^{\mathrm{q}}(\mathbf{q})] &= \frac{u^2}{N}\delta_{\mathbf{k},-\mathbf{q}},
\end{aligned}$$
with $\mathbb{E}_{\mathrm{d}}[\cdots]$ denoting averages over the distribution of quenched disorder modes. Then we find $$\mathbb{E}_{\mathrm{d}}[\Gamma_{ij}] \approx \frac{1}{Nd}\delta_{ij} \sum_{\mathbf{k}} \frac{u^2}{\Sigma^{\mathrm{qq}}} = \delta_{ij} \frac{u^2}{d\Sigma^{\mathrm{qq}}} = \delta_{ij} \mathbb{E}\left[\frac{\hat{\mathcal{Q}}^2}{d\Sigma^{\mathrm{qq}}} \right]. \label{gamma40}$$ Fluctuations of this quantity are suppressed in the limit $V_d\rightarrow\infty$ [@lucas1411], so $\mathbb{E}_{\mathrm{d}}[\Gamma_{ij}] \approx \Gamma_{ij}$.
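The disorder average and the self-averaging claim are easy to check by Monte Carlo. In the sketch below (illustrative throughout: the isotropic Gaussian stand-in wavevectors and the normalization of the complex amplitudes are our choices, not part of the model), each realization sums $k_x^2|\hat\rho^{\mathrm{q}}(\mathbf{k})|^2/(k^2\Sigma^{\mathrm{qq}})$ over $N$ modes; the sample mean should approach $u^2/d\Sigma^{\mathrm{qq}}$ with small realization-to-realization scatter at large $N$.

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma_qq, u, d = 1.0, 0.7, 2

def gamma_xx(N):
    """Gamma_xx for one disorder realization of N Fourier modes (eta' -> infinity)."""
    k = rng.normal(size=(N, d))                  # isotropic stand-in wavevectors
    # complex Gaussian amplitudes normalized so that E|rho(k)|^2 = u^2 / N
    rho = (rng.normal(size=N) + 1j * rng.normal(size=N)) * u / np.sqrt(2 * N)
    return np.sum(k[:, 0]**2 * np.abs(rho)**2 / (np.sum(k**2, axis=1) * Sigma_qq))

mean_prediction = u**2 / (d * Sigma_qq)          # E_d[Gamma_xx] from (gamma40)
samples = np.array([gamma_xx(4000) for _ in range(200)])
```

Isotropy of the wavevectors supplies the factor $\delta_{ij}/d$, and the relative fluctuations of each realization scale like $1/\sqrt{N}$, consistent with the suppression argued in [@lucas1411].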
An alternative simple case is thermal transport with $\mathcal{Q}=0$, and $\mathcal{S}\approx \mathcal{S}_0$ with small variations. In this case, we may approximately neglect the $\Sigma$ contributions to $\mathfrak{m}$ as $\xi \rightarrow \infty$, and we find by a similar calculation to above: $$\mathbb{E}_{\mathrm{d}}[\Gamma_{ij}] \approx \delta_{ij} \mathbb{E}\left[\frac{\eta^\prime}{d} \frac{(\partial_i \mathcal{S})^2}{\mathcal{S}_0^2}\right],$$ which is similar to results discussed in [@dsz].
To make contact between (\[gammaij32\]) and the memory function framework, of course, we need to compute the retarded Green’s functions for charge and heat. Unfortunately, the Green’s functions coupling charge and heat flow in a relativistic hydrodynamic system are quite messy [@kovtunlec], and so we will prove the equivalence with (\[eqmm\]) in a more abstract manner. We proceed analogously to the previous case – as above, we have shown that the leading order response of the fluid that contributes to $\mathcal{J}^\alpha_i$ at $\mathrm{O}(\epsilon^{-2})$ is from a constant shift to the velocity.
The retarded Green’s function is *defined* as: $$\hat\rho^\alpha(\mathbf{k}) + \delta\rho^\alpha(\mathbf{k}) = G^{\alpha\beta}(\mathbf{k},-\mathbf{k}\cdot\mathbf{v}_0) \hat\Phi^\beta(\mathbf{k}) \approx \chi^{\alpha\beta}\hat\Phi^\beta(\mathbf{k}) - \mathbf{k}\cdot\mathbf{v}_0 \frac{\partial G^{\alpha\beta}(\mathbf{k},0)}{\partial \omega} \hat\Phi^\beta(\mathbf{k}).$$ Of course, just as in Section \[sec:scalar\], we must use the boosted Green’s function to obtain the linear response $\delta\Phi^\alpha$: the $\mathrm{O}(v^0)$ term is the response of the background fluid, and the $\mathrm{O}(v)$ term is the linear response contribution of interest for the computation of transport – we focus on the latter ($\delta\rho^\alpha$) henceforth. We can also relate $\delta\rho^\alpha$ to $\delta\Phi^\alpha$ by the static susceptibilities, since we are perturbing about a translationally invariant state: $$\delta\rho^\alpha = \chi^{\alpha\beta} \delta\Phi^\beta.$$ Again, we can relate $\mathbf{v}_0$ to $\mathbf{F}^\alpha$ by spatially averaging (\[maineq2\]): $$\begin{aligned}
\rho_0^\alpha F^\alpha_i &= \sum_{\mathbf{k}} \; \hat \rho^\alpha(-\mathbf{k}) \mathrm{i}k_i \delta\Phi^\alpha(\mathbf{k}) = \sum_{\mathbf{k}} \; \hat \rho^\alpha(-\mathbf{k}) \mathrm{i}k_i \left(\chi^{-1}\right)^{\alpha\beta}\delta\rho^\beta \notag \\
&= \sum_{\mathbf{k}} \; k_ik_j \hat\Phi^\alpha(\mathbf{-k}) \left[\lim_{\omega\rightarrow 0} \frac{\mathrm{Im}(G^{\alpha\beta}(\mathbf{k},\omega))}{\omega}\right]\hat\Phi^\beta(\mathbf{k}) v_{0j}.\end{aligned}$$ It is straightforward to read off $\Gamma_{ij}$ from this equation, and we see that it agrees with the generalization of (\[eqmm\]) to multiple disordered quantities – though of course at this point we use the fact that only $\hat\Phi^{\mathrm{q}} \ne 0$. Since $\mathcal{J}^\alpha_i\approx \rho^\alpha v_{0i}$, we reproduce the results of the memory matrix formalism.
We have worked through two specific examples of deriving (\[eqmm\]) from hydrodynamics. Of course one may need to generalize further, but it should be quite evident from the derivations above that the agreement between the memory matrix formalism and our hydrodynamic framework will persist.
It is possible to compute the transport coefficients at higher orders in perturbation theory, where the memory matrix formalism becomes unwieldy enough that such a calculation has not yet been attempted. Even at next order in perturbation theory, the corrections to the conductivity become quite messy. We discuss the general structure of higher order computations in Appendix \[apppert\]. The key point is that organizing the perturbative expansion in a hydrodynamic framework is straightforward in principle, albeit messy to carry out in practice. This framework provides quantitative predictions for memory matrix calculations at higher orders in $\epsilon$, for classes of models perturbed by $\bar\mu$.
### Breakdown of the Resistor Lattice Approximation
Finally, let us compare the results of this subsection with the “resistor lattice" approximation:$$J_i \approx - \sigma_{ij}(\mathbf{x}) \partial_j \mu(\mathbf{x}), \label{emteq}$$ with $\sigma_{ij} \ne \Sigma^{\mathrm{qq}}_{ij}$ taken to be a local function, determined in terms of local properties of the fluid, and $-\partial_j \mu$ the local electric field in the sample. Of course such a function $\sigma$ may be found by solving a linear algebra problem at each point in space, and so the question is whether this is a useful statement – namely, whether $\sigma_{ij}(\mathbf{x})$ can be computed by appealing to local properties of the disordered QFT (on length scales large compared to $l$). Essentially, can we integrate out $v_i$ and $T$, and be left with a local, dissipative description of electrical transport in terms of $\mu$ alone?
This is impossible in the weak disorder limit, though our comments appear to have broader validity whenever viscosity cannot be neglected. $\partial_j \mu$ is actually inhomogeneous at leading order $\epsilon^{-1}$, and must be expressed in terms of a non-local integral over $\mathcal{Q}(\mathbf{x}) = \rho^{\mathrm{q}}(\mathbf{x}) \approx \chi^{\mathrm{qq}}\epsilon \hat\mu(\mathbf{x})$, with $\chi^{\mathrm{qq}}$ the charge-charge susceptibility, assumed to be constant at this order in perturbation theory. This is derived in (\[eqphim1\]), with the $X$ and $Y$ corrections in (\[eqphim1\]) vanishing at leading order in $\epsilon$; in this equation, the leading order behavior of $\mu$ is “local in Fourier space", and becomes non-local in position space, in terms of the original disorder $\hat\mu$. It is therefore generally impossible to find a function $\sigma_{ij} \sim \epsilon^{-1}$, expressible in terms of $\hat\mu$ or its (low order) derivatives, such that $J_i = \sigma_{ij}(-\partial_j \mu) = \text{constant} \sim \epsilon^{-2}$ (at leading order).
Comparing to Holography {#sec4}
=======================
\[sec:holostripe\] Let us now compare with holographic results. Many holographic results, valid in the weak disorder limit, are equivalent to the memory matrix results [@lucas] – and therefore our hydrodynamic framework. So our focus here will be on non-perturbative holographic results.
Our discussion of holography is brief – for further details, consult the excellent reviews [@review1; @review2; @review3]. Holography refers to a conjectured duality between a classical gravity theory in $d+2$ spacetime dimensions, in an emergent anti-de Sitter (AdS) space, and a strongly coupled QFT in $d$ spatial dimensions. The strongly coupled QFTs (in every case where we know them explicitly) are large-$N$ matrix models, and can be thought of as “living" at the boundary of AdS. Making the gravity theory classical is equivalent to sending the bulk Newton’s gravitational constant to 0, and this makes bulk quantum gravity fluctuations negligible. This corresponds to the limit $N\rightarrow \infty$ in the dual theory. However, unlike vector models, these matrix models do not behave at all like free theories, and encode rich quantum critical dynamics. The nonlinear dynamics of gravity is dual to the stress-energy sector of the boundary theory. Furthermore, studying finite temperature dynamics becomes simply related to studying the dynamics in a black hole background. These black holes will be assumed to have the same planar (or toroidal) topology as the boundary theory. Adding a finite charge density in the boundary theory is dual to adding a bulk U(1) gauge field, and charging the black hole under the associated U(1) charge. Of interest for us in this paper is that holographic models can further be used to add strong disorder in addition to finite temperature and density. We are interested in modeling disorder explicitly, and so the bulk geometry becomes inhomogeneous and rugged. At small temperatures, the black hole gets pushed “farther back" into the emergent bulk direction, and the bulk fields become significantly renormalized, with higher momentum modes usually decaying away, as depicted in Figure \[fig2\].
![A qualitative sketch of holography. A finite temperature $T$ and density boundary theory is dual to an emergent gravitational theory in one extra spatial dimension (depicted above). Strong disorder in the boundary theory (depicted in green) backreacts and leads to the formation of a lumpy charged black hole of Hawking temperature $T$. The emergent black hole horizon is curved and is denoted in black. The membrane paradigm suggests that dc transport can be computed in an emergent fluid living on the horizon, which can undergo renormalization relative to the “bare fluid" in the boundary theory.[]{data-label="fig2"}](fig2ads.pdf){width="2.75in"}
The precise duality allows us to compute correlation functions in our unknown QFT by solving gravitational equations instead. The basic idea is as follows: correlation functions of the stress-energy tensor $T^{\mu\nu}$ and the U(1) current $J^\mu$ are respectively related to bulk Green’s functions of the metric $g_{MN}$ and a gauge field $A_M$, all computed at (classical) tree level. We are using $MN$ etc. to denote all coordinates, including the bulk radial coordinate. For example, to compute the electrical conductivity, we add an explicit infinitesimal source for the bulk field $A_i$ at the AdS boundary, and then compute the expectation value of the current[^9] in the field theory in the background perturbed by $A_i$.
These computations are often intractable analytically. However, in the simple case of dc transport, many analytic computations in holography are performed using the membrane paradigm [@iqbal]. In this case one can show that there is an analogous “electric current" flowing on the black hole horizon, which is also conserved and whose average value equals that of $J^\mu$ in the boundary (an analogous, more complicated story holds for the heat current). The resulting computation of the conductivities then depends only on black hole horizon data. It is natural to conjecture that we should solve the hydrodynamic transport problem using an emergent “fluid" whose equation of state is related to local properties of the black hole horizon. Indeed – up to some subtleties we will see shortly – this is the case.
It is remarkable that *first order* hydrodynamics captures the transport problem on the emergent black hole horizons in holography, in all nonperturbative computations to date. As we will see, however, this emergent horizon fluid is not locally equivalent to a fluid in the boundary QFT – instead it has undergone non-local renormalization, through the radial evolution of the bulk fields and geometry. In addition, while we previously had to make assumptions that the disorder correlation length was large to justify our hydrodynamic formalism, no such justification is necessary (at least, a priori) for the holographic results to be valid. When $\xi\rightarrow \infty$ in a holographic model, the differences between the horizon and boundary fluids become negligible, as was found in [@herzog].
Let us begin with the striped models studied in [@donos1409],[^10] which consider gravitational solutions to the AdS-Einstein-Maxwell system: $$S = \int \mathrm{d}^4x\; \sqrt{-g} \left(R+6 - \frac{F^2}{4}\right).$$ (We have set Newton’s constant, the bulk charge, and the size of AdS to 1, following their notation.) Above, $R$ is the Ricci scalar, and $F$ is the Maxwell tensor associated with the bulk gauge field. In these models, translation invariance is only broken in the $x$ direction, but the boundary theory lives in $d=2$. Let us summarize their results briefly. They found (after inverting their thermoelectric conductivity matrix) that:
\[donosdata1\]$$\begin{aligned}
T\bar\kappa_{xx} = \sigma^{\mathrm{hh}}_{xx} &= \frac{16\pi^2T^2}{X} \mathbb{E}\left[\mathrm{e}^{B}\right], \\
T\alpha_{xx} = \sigma^{\mathrm{qh}}_{xx} &=\frac{4\pi T}{X} \mathbb{E}\left[ \mathrm{e}^{B}\frac{a_t}{H_{tt}}\right], \\
\sigma_{xx} = \sigma^{\mathrm{qq}}_{xx} &= \frac{1}{X} \mathbb{E}\left[ \mathrm{e}^{B}\left(\frac{a_t}{H_{tt}}\right)^2 + \frac{1}{S}\left(\partial_x B - \frac{\partial_xS}{S}\right)^2\right]\end{aligned}$$
where $$X = \mathbb{E}\left[ \mathrm{e}^{B}\left(\frac{a_t}{H_{tt}}\right)^2 + \frac{1}{S}\left(\partial_x B - \frac{\partial_xS}{S}\right)^2\right] \mathbb{E}\left[\mathrm{e}^{B}\right] - \mathbb{E}\left[ \mathrm{e}^{B}\frac{a_t}{H_{tt}}\right]^2$$ and $B$, $a_t$, $H_{tt}$ and $S$ are data associated with the solution of classical Einstein-Maxwell gravity, near the horizon of a black hole, as detailed below. The near horizon geometry in their coordinate system was $$\label{ds21}
\mathrm{d}s^2 \approx \frac{H_{tt}(x)}{4\pi Tr} \mathrm{d}r^2 -4\pi TrH_{tt}(x)\mathrm{d}t^2 + S(x) \left[\mathrm{e}^{B(x)} \mathrm{d}x^2 + \mathrm{e}^{-B(x)}\mathrm{d}y^2\right],$$ with $r$ the radial coordinate, and $r=0$ denoting the location of the black hole horizon. The bulk gauge field only has a time-like component, whose value is $a_t$. $T$ denotes the temperature of the boundary theory.
Let us postulate the following equations of state for the emergent fluid on the horizon:
\[donosdata2\]$$\begin{aligned}
\eta &= \frac{\mathcal{S}}{4\pi} = S, \\
\mathcal{Q} &= \frac{Sa_t}{H_{tt}}, \\
\Sigma^{\mathrm{qq}} &= 1, \\
\Sigma^{\mathrm{qh}} &= \Sigma^{\mathrm{hh}} = 0.\end{aligned}$$
The first of these equations is the canonical result for $\eta/\mathcal{S}$ in a strongly coupled theory [@kss], which also holds for the charged black holes here [@mas; @kss2], in the translationally invariant limit, though this universal ratio can be different in mean-field disordered black holes [@gouteraux2]. The last of these equations was argued to occur in holographic models in [@lucasMM], by matching $\omega=0$ results of massive gravity. More recently, it has been pointed out that this is not the correct interpretation of $\Sigma^{\alpha\beta}$ in the boundary theory, a distinction which becomes discernible at finite $\omega$ [@davison15]. However, we will see that this prescription can correctly describe an emergent fluid, associated with data on the horizon, whose hydrodynamic response is equivalent to (\[donosdata1\]). The inequivalence of the boundary fluid and this emergent “horizon fluid" is an important subtlety, and one we will not resolve in this paper.
We also need to make two more assumptions. The first is rather simple – let us suppose (\[donosdata2\]) is valid for the disordered model, with $x$-dependence trivially put in: e.g., $\eta(x) = S(x)$. This is in accordance with our logic in Section \[sec2\]. The second assumption is that the boundary fluid lives on a curved space with metric $$\mathrm{d}s^2 \equiv \gamma_{ij} \mathrm{d}x^i\mathrm{d}x^j = \mathrm{e}^{B(x)} \mathrm{d}x^2 + \mathrm{e}^{-B(x)}\mathrm{d}y^2,$$which differs from (\[ds21\]) by a conformal rescaling. Intuitively, this can be argued on the grounds that $S$ determines $\mathcal{S}$ and $\eta$, and therefore should not also determine the boundary metric $\gamma_{ij}$: we expect $\gamma_{ij}=\delta_{ij}$ in a translationally invariant isotropic model, even though $S\ne 1$ in general. We compute the thermoelectric conductivities for such a fluid using special techniques for striped systems, discussed in Appendix \[appstripe\], and we find
\[donosresult\]$$\begin{aligned}
\left(\sigma^{-1}\right)^{\mathrm{qq}}_{xx} &= \mathbb{E}\left[\mathrm{e}^{B}\right], \\
\left(\sigma^{-1}\right)^{\mathrm{qh}}_{xx} &= -\frac{1}{4\pi T}\mathbb{E}\left[ \mathrm{e}^{B}\frac{a_t}{H_{tt}}\right] , \\
\left(\sigma^{-1}\right)^{\mathrm{hh}}_{xx} &= \frac{1}{(4\pi T)^2} \mathbb{E}\left[ \mathrm{e}^{B}\left(\frac{a_t}{H_{tt}}\right)^2 + \frac{1}{S}\left(\partial_x B - \frac{\partial_xS}{S}\right)^2\right].\end{aligned}$$
Inverting this matrix returns (\[donosdata1\]), the exact result found holographically in [@donos1409].
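The matrix inversion in this last step is easy to verify numerically. The sketch below builds $\sigma^{-1}$ from (\[donosresult\]), inverts it, and recovers the entries of (\[donosdata1\]); the horizon profiles $B$, $a_t$, $H_{tt}$, $S$ used here are arbitrary smooth test functions, not actual solutions of the Einstein-Maxwell equations, since the inversion identity holds for any profiles.

```python
import numpy as np

T = 0.3
# Hypothetical smooth horizon data on a 1d grid in x (any positive profiles will do).
x   = np.linspace(0, 2 * np.pi, 2048, endpoint=False)
B   = 0.2 * np.cos(x)
at  = 0.5 + 0.1 * np.sin(2 * x)
Htt = 1.0 + 0.3 * np.cos(x + 1.0)
S   = 1.5 + 0.2 * np.sin(x)
dxB, dxS = np.gradient(B, x), np.gradient(S, x)

E = np.mean                                   # spatial average E[...]
eB = np.exp(B)
A   = E(eB * (at / Htt)**2 + (dxB - dxS / S)**2 / S)   # the bracketed average
Bav = E(eB * at / Htt)
Cav = E(eB)
X   = A * Cav - Bav**2

# sigma^{-1} in the (q, h) basis, entries read off from (donosresult):
sig_inv = np.array([[Cav,               -Bav / (4 * np.pi * T)],
                    [-Bav / (4 * np.pi * T), A / (4 * np.pi * T)**2]])
sig = np.linalg.inv(sig_inv)
```

The inverse reproduces $\sigma^{\mathrm{qq}}_{xx} = A/X$, $\sigma^{\mathrm{qh}}_{xx} = 4\pi T\, \mathbb{E}[\mathrm{e}^B a_t/H_{tt}]/X$ and $\sigma^{\mathrm{hh}}_{xx} = 16\pi^2T^2\,\mathbb{E}[\mathrm{e}^B]/X$ to machine precision.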
More recently, [@rangamani] generalized the results of [@donos1409] to include the effects of a dynamical scalar field in the dual theory. In general this scalar must be consistently included within hydrodynamics, and so we must consider a more general theory to connect with these results in the generic case.
In the case where translational symmetry is broken in multiple directions, there is an important subtlety. It turns out that the local “current" in the emergent horizon fluid is *not* pointwise equivalent to $\langle J\rangle$ in the boundary theory; the two agree only after taking a spatial average [@donos1506]. To relate the “current" in the horizon fluid to the current in the boundary fluid, one must add a non-local integral over the bulk direction.
It was recently shown [@donos1506] in a more general context that dc transport in holography reduces to solving “hydrodynamic" equations on the black hole horizon. [@donos1506] interpreted the resulting fluid equations as an incompressible Navier-Stokes equation. [@grozdanov] points out that these equations can also be interpreted in the framework of the present paper.
These examples suggest that – while the hydrodynamic framework of this paper is extremely helpful in providing physical intuition for these non-perturbative holographic results – this story is not complete. Importantly, however, much of the variational technology that we develop can be directly applied to holographic models.
Strong Disorder {#sec5}
===============
We cannot be as rigorous in the strong disorder limit and give closed form expressions for the conductivity matrix. Nonetheless, we will develop simple but powerful variational methods that allow us to get a flavor of transport at strong disorder, by providing lower and upper bounds on the conductivity matrix. We focus on the discussion of $\sigma^{\mathrm{qq}}_{ij}$ in an isotropic theory in this section. However, the techniques developed below may be used to compute all thermoelectric transport coefficients. Our discussion is therefore not an exhaustive treatment of all the physics contained in the hydrodynamic formalism, but simply a demonstration of what we believe is a general feature of hydrodynamic transport: a crossover from coherent (Drude) physics to incoherent behavior as disorder strength increases.
We present the mathematical formalism in the subsections below – explicit examples of calculations may be found in Appendix \[appa\].
Power Dissipated
----------------
Define “voltage drops" $V^\alpha_i$ of each conserved quantity in each direction as $$V^\alpha_i \equiv \Phi^\alpha (x_i=0, \mathbf{x}_{\perp i}) - \Phi^\alpha(x_i=L, \mathbf{x}_{\perp i}).$$ Recall our boundary conditions on $\Phi^\alpha$: it must be periodic up to linear terms. We also define the net currents flowing in the $i$ direction via $$I^\alpha_i \equiv \int\limits_{\text{fixed }x_i} \mathrm{d}^{d-1}\mathbf{x}\; \mathcal{J}^\alpha_i.$$Note that by current conservation, $I^\alpha_i$ can be evaluated at any $x_i$. Since $V$ and $I$ are determined by the solution to a linear response problem, we can relate them via a conductance matrix (do not confuse this $G^{\alpha\beta}$ with the retarded Green’s function defined previously)$$I^\alpha_i \equiv G^{\alpha\beta}_{ij} V^\beta_j,$$or via its inverse, the resistance matrix $$V^\alpha_i = R^{\alpha\beta}_{ij} I^\beta_j.$$ $G^{\alpha\beta}$ is by definition related to the dc transport coefficients: $$G^{\alpha\beta}_{ij} = L^{d-2} \sigma^{\alpha\beta}_{ij}.$$ We claim that the power dissipated in the system is simply given by $$\mathcal{P} = I^\alpha_i R^{\alpha\beta}_{ij} I^\beta_j = V^\alpha_i G^{\alpha\beta}_{ij} V^\beta_j.$$
Let us verify this. Energy is dissipated[^11] locally via the dissipative ($\Sigma$ and $\eta$) terms in hydrodynamics: $$\mathcal{P} = \int \mathrm{d}^d\mathbf{x} \left(\Sigma_{ij}^{\alpha\beta} \partial_i\Phi^\alpha \partial_j \Phi^\beta + \eta_{ijkl} \partial_j v_i \partial_l v_k\right). \label{eqppd}$$ We integrate by parts on the second term and use (\[maineq2\]) (recall $v_i$ obeys periodic boundary conditions): $$\mathcal{P} = \int \mathrm{d}^d\mathbf{x} \left(\Sigma_{ij}^{\alpha\beta} \partial_i\Phi^\alpha \partial_j \Phi^\beta - v_i \rho^\alpha \partial_i\Phi^\alpha \right) = - \int \mathrm{d}^d\mathbf{x} \; \mathcal{J}^\alpha_i \partial_i\Phi^\alpha.$$But $\mathcal{J}^\alpha_i$ is a conserved current, and so $$\mathcal{P} = -\oint \mathrm{d}^{d-1}\mathbf{x} \; \Phi^\alpha n_i \mathcal{J}^\alpha_i = \sum_i I^\alpha_i(\Phi^\alpha(x_i=0) - \Phi^\alpha(x_i=L)) = I^\alpha_i V^\alpha_i.$$with $n_i$ the outward pointing normal.
Lower Bounds
------------
Let us begin by discussing the lower bounds on conductivities. These are by far the more important bounds to obtain, because – as we will see – they allow us to rule out insulating behavior in a wide variety of strongly disordered hydrodynamic systems.
We obtain lower bounds on conductivities analogously to how one obtains upper bounds on the resistance of a disordered resistor network, via Thomson’s principle [@levin]. Similar approaches are also used in kinetic theory [@ziman]. Thomson’s principle states that if we run any set of “trial" currents through a resistor network, subject to appropriate boundary conditions, then we can upper bound the inverse conductivity by simply computing the power dissipated by our trial currents. The power dissipated in the resistor network is minimal on the true distribution of currents, which is the one compatible with Ohm’s Law and a single-valued voltage function. We will see that, remarkably, this simple approach immediately generalizes.
Let us propose a trial set of charge and heat currents, $\tilde{\mathcal{J}}^\alpha_i$, which are periodic functions, and exactly conserved: $$\partial_i \tilde{\mathcal{J}}^\alpha_i =0.$$ In general, this trial function will not be compatible with a single-valued (well-defined) $\Phi^\alpha$. We write $$\tilde{\mathcal{J}}^\alpha_i = \bar{\mathcal{J}}^\alpha_i + \hat{\mathcal{J}}^\alpha_i$$with overbars denoting the true solution of the hydrodynamic equations subject to our boundary conditions, tildes denoting our trial “guesses" at the true solution, and hats denoting the deviations; this convention applies to all variables henceforth. We also impose $$\int \mathrm{d}^{d-1}\mathbf{x}\; \hat{\mathcal{J}}^\alpha_i(x_i=0,L) = 0, \label{hatjaeq}$$ so that there is a true solution $\bar{\mathcal{J}}^\alpha_i$ with the same net currents $I^\alpha_i$ as our trial $\tilde{\mathcal{J}}^\alpha_i$. We also propose a completely arbitrary periodic velocity field $\tilde{v}_i$. Define $$\tilde{\mathcal{P}} = \int \mathrm{d}^d\mathbf{x}\left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\left(\tilde{\mathcal{J}}^\alpha_i - \rho^\alpha \tilde{v}_i\right)\left(\tilde{\mathcal{J}}^\beta_j - \rho^\beta \tilde{v}_j\right) + \eta_{ijkl} \partial_j \tilde{v}_i \partial_l \tilde{v}_k \right], \label{tildeplower}$$which, on the true solution, is analogous to (\[eqppd\]). We define $\bar{\mathcal{P}}$ (the true power dissipated) and $\hat{\mathcal{P}}$ analogously.
Recall $\bar{\mathcal{P}},\;\tilde{\mathcal{P}},\;\hat{\mathcal{P}}\ge 0$, and expand out $\tilde{\mathcal{P}}$: $$\tilde{\mathcal{P}} = \hat{\mathcal{P}} + \bar{\mathcal{P}} +2 \int \mathrm{d}^{d}\mathbf{x}\; \left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\left(\hat{\mathcal{J}}^\alpha_i - \rho^\alpha \hat{v}_i\right)\left(\bar{\mathcal{J}}^\beta_j - \rho^\beta \bar{v}_j\right) + \eta_{ijkl} \partial_j \hat{v}_i \partial_l \bar{v}_k \right] \equiv \hat{\mathcal{P}} + \bar{\mathcal{P}} +2\mathcal{K}$$Now $$\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\left(\hat{\mathcal{J}}^\alpha_i - \rho^\alpha \hat{v}_i\right)\left(\bar{\mathcal{J}}^\beta_j - \rho^\beta \bar{v}_j\right) = -\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\left(\hat{\mathcal{J}}^\alpha_i - \rho^\alpha \hat{v}_i\right) \Sigma^{\beta\gamma}_{jk} \partial_k \bar{\Phi}^\gamma = -\partial_i \bar\Phi^\alpha\left(\hat{\mathcal{J}}^\alpha_i - \rho^\alpha \hat{v}_i\right)$$and so we obtain, integrating by parts: $$\begin{aligned}
\mathcal{K} &= \int \mathrm{d}^d\mathbf{x} \left[ - \partial_i \bar\Phi^\alpha \hat{\mathcal{J}}^\alpha_i + \rho^\alpha \hat{v}_i \partial_i \bar{\Phi}^\alpha + \eta_{ijkl} \partial_j \hat{v}_i \partial_l \bar{v}_k\right] \notag \\ &= -\oint \mathrm{d}^{d-1}\mathbf{x} \bar{\Phi}^\alpha n_i \hat{\mathcal{J}}_i^\alpha + \int \mathrm{d}^d\mathbf{x} \left[ \rho^\alpha \partial_i \bar{\Phi}^\alpha - \partial_j ( \eta_{ijkl} \partial_l \bar{v}_k)\right]\hat{v}_i = 0.\end{aligned}$$ The first term vanishes since $\tilde{\mathcal{J}}^\alpha_i$ is periodic, and the constant gradient terms vanish due to (\[hatjaeq\]); the second by (\[maineq2\]). We conclude that $\tilde{\mathcal{P}} \ge \bar{\mathcal{P}}$. If we define $\tilde{\mathcal{P}} = \tilde{R}^{\alpha\beta}_{ij} I^\alpha_i I^\beta_j$, then we obtain $$I^\alpha_i \tilde{R}^{\alpha\beta}_{ij} I^\beta_j \ge I^\alpha_i R^{\alpha\beta}_{ij} I^\beta_j.$$
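The logic is identical to the textbook resistor-network version of Thomson’s principle, which makes for a compact illustration. In the sketch below, a two-resistor network stands in for the full hydrodynamic problem (an illustrative toy, not the continuum functional above): any conserved trial split of the total current dissipates at least as much power as the true, Ohm’s-law split.

```python
# Thomson's principle on the simplest network: a fixed total current I is
# split between two parallel resistors R1 and R2.
R1, R2, I = 2.0, 3.0, 1.0

def power(i1):
    """Power dissipated when a current i1 runs through R1 (and I - i1 through R2)."""
    return R1 * i1**2 + R2 * (I - i1)**2

# True split: the one compatible with a single-valued voltage across the pair.
i1_true = I * R2 / (R1 + R2)
P_true = power(i1_true)      # equals I^2 * R1*R2/(R1 + R2), the exact resistance
```

Minimizing `power` over the trial split recovers the exact parallel resistance; every other conserved trial current overestimates it, which is the finite-dimensional analogue of $\tilde{\mathcal{P}} \ge \bar{\mathcal{P}}$.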
In particular, we can immediately obtain bounds for all diagonal entries of $R^{\alpha\beta}_{ij}$. Suppose that we have a large, isotropic disordered metal, in which case we find $R^{\alpha\beta}_{ij} = R^{\alpha\beta}\delta_{ij}$ and $\tilde R^{\alpha\beta}_{ij} = \tilde R^{\alpha\beta}\delta_{ij}$. Then we have the generic bounds
\[eqlowerbound\]$$\begin{aligned}
\frac{1}{\tilde R^{\mathrm{qq}}} &\le \frac{1}{R^{\mathrm{qq}}} = \frac{G^{\mathrm{hh}}G^{\mathrm{qq}}- (G^{\mathrm{hq}})^2}{G^{\mathrm{hh}}} \le G^{\mathrm{qq}}, \\
\frac{1}{\tilde R^{\mathrm{hh}}} &\le G^{\mathrm{hh}}.\end{aligned}$$
It is also straightforward to convert from $R^{\alpha\beta}_{ij}$ into $(\sigma^{-1})^{\alpha\beta}_{ij}$: $$\mathcal{P} = I^\alpha_i R^{\alpha\beta}_{ij}I^\beta_j = L^{2d-2} \mathbb{E}\left[\mathcal{J}^\alpha_i \right] R^{\alpha\beta}_{ij} \mathbb{E}\left[\mathcal{J}^\beta_j \right] = L^d \mathbb{E}\left[\mathcal{J}^\alpha_i \right] (\sigma^{-1})^{\alpha\beta}_{ij} \mathbb{E}\left[\mathcal{J}^\beta_j \right].$$ Obtaining bounds on off-diagonal elements is slightly more subtle. Information about off-diagonal elements can be found by studying linear combinations of various components of $I^\alpha_i$, but it is not as clear-cut as (\[eqlowerbound\]).
Upper Bounds
------------
Obtaining upper bounds on conductivities is in principle simpler, but quite a bit more subtle in practice. Let us write $$\mathcal{P} = \int \mathrm{d}^d\mathbf{x}\left(\Sigma_{ij}^{\alpha\beta}\partial_i \Phi^\alpha \partial_j \Phi^\beta + \eta^{-1}_{ijkl} \mathcal{T}_{ij}\mathcal{T}_{kl}\right) \equiv V^\alpha_i G^{\alpha\beta}_{ij} V^\beta_j, \label{pupperbound}$$ where $\mathcal{T}_{ij} = -\eta_{ijkl} \partial_l v_k$ is the viscous stress tensor, which has $d(d+1)/2$ independent components; $\eta^{-1}$ is a matrix inverse with the first two indices grouped together and the last two indices grouped together (but only in symmetric combinations). It is possible that $\eta$ may not be invertible, but in this case it is straightforward to regulate the zero eigenvalue with an infinitesimal positive eigenvalue and then take the inverse.[^12] To compute the conductance, we need to demand (as stated previously) that $\Phi^\alpha$ obeys $\Phi^\alpha(x_i=0) - \Phi^\alpha(x_i=L) = V^\alpha_i$.
We are going to guess a single-valued trial function $\tilde{\Phi}^\alpha = \bar\Phi^\alpha + \hat\Phi^\alpha$, with $\bar\Phi^\alpha$ the exact solution as before, and $\hat\Phi^\alpha$ a periodic function (recall that $\Phi^\alpha$ should be periodic up to the linear gradient terms). We will also guess a trial $\tilde{\mathcal{T}}_{ij} = \bar{\mathcal{T}}_{ij} + \hat{\mathcal{T}}_{ij}$, which must be a symmetric tensor, and a periodic function. We do *not* require that $\tilde{\mathcal{T}}_{ij}$ be expressible in terms of a velocity function, or that $\partial_i\tilde{\mathcal{J}}^\alpha_i = 0$. Let us verify the circumstances under which we can nevertheless find $\tilde{\mathcal{P}} \ge \bar{\mathcal{P}}$, as before. We find that $\tilde{\mathcal{P}} = \bar{\mathcal{P}} + \hat{\mathcal{P}} + 2\mathcal{K}$, with $$\begin{aligned}
\mathcal{K} &= \int \mathrm{d}^d\mathbf{x} \left( \Sigma^{\alpha\beta}_{ij} \partial_i \hat\Phi^\alpha \partial_j \bar\Phi^\beta + \eta^{-1}_{ijkl} \hat{\mathcal{T}}_{ij} \bar{\mathcal{T}}_{kl}\right) = \int \mathrm{d}^d\mathbf{x} \left( \left(\rho^\alpha \bar{v}_i - \bar{\mathcal{J}}^\alpha_i\right) \partial_i \hat\Phi^\alpha - \eta^{-1}_{ijkl} \hat{\mathcal{T}}_{ij} \eta_{klmn} \partial_n \bar{v}_m\right) \notag \\
&= \int \mathrm{d}^d\mathbf{x} \left(\rho^\alpha \partial_i \hat{\Phi}^\alpha + \partial_j \hat{\mathcal{T}}_{ij} \right)\bar{v}_i - \oint \mathrm{d}^{d-1}\mathbf{x} \; n_i \bar{\mathcal{J}}^\alpha_i \hat{\Phi}^\alpha\end{aligned}$$ We have used the periodicity of $\tilde{\mathcal{T}}_{ij}$ to integrate that term in $\mathcal{K}$ by parts. The first term vanishes if we require that $$\rho^\alpha \partial_i \tilde{\Phi}^\alpha + \partial_j \tilde{\mathcal{T}}_{ij} =0 \label{upcons}$$ for our trial functions. The second term vanishes because $\hat\Phi^\alpha$ and $\bar{\mathcal{J}}^\alpha_i$ are periodic, with $n_i \bar{\mathcal{J}}^\alpha_i$ taking opposite signs on each face.
Since the integrand in (\[pupperbound\]) is positive semi-definite, $\hat{\mathcal{P}}\ge 0$. We conclude that this forms the basis of a variational principle for the computation of $G^{\alpha\beta}_{ij}$. If we define $$\tilde{\mathcal{P}} = V^\alpha_i \tilde{G}^{\alpha\beta}_{ij} V^\beta_j,$$ then $\tilde{G}^{\alpha\beta}_{ij}$ provides upper bounds on $G^{\alpha\beta}_{ij}$. As before, diagonal elements of $G^{\alpha\beta}_{ij}$ can be straightforwardly upper bounded, and off-diagonal elements require more care.
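To see the content of these dual variational principles in the simplest possible setting, consider a one-dimensional conductor with no convective transport, where the trial-function logic can be checked in a few lines. This is an illustrative sketch, not part of the formal derivation; the grid and the distribution of local conductivities below are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
sigma = rng.uniform(0.5, 2.0, N)  # hypothetical local conductivity Sigma(x)

# Exact conductance at unit voltage drop: the current is uniform in 1d, so the
# sample is a series circuit and G is the harmonic mean of the local
# conductivities.
G_exact = 1.0 / np.mean(1.0 / sigma)

# Simplest single-valued trial: a uniform potential gradient. The dissipated
# power E[sigma (dPhi/dx)^2] with dPhi/dx = 1 is E[sigma], and the variational
# principle guarantees that this overestimates the true conductance.
G_trial = np.mean(sigma)

assert 0 < G_exact <= G_trial
```

In higher dimensions the same pair of trial choices produces the classical Wiener bounds, with the effective conductivity pinched between the harmonic and arithmetic means of the local conductivity.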
Discussion of Variational Results
---------------------------------
Here we present a summary of the calculations performed in Appendix \[appa\]. For simplicity, let $\mathbb{E}[\mathcal{Q}]=\mathcal{Q}_0$ and $\mathrm{Var}[\mathcal{Q}] = \mathbb{E}[\mathcal{Q}^2] - \mathbb{E}[\mathcal{Q}]^2 = u^2$. The electrical conductivity will, in a disordered isotropic fluid without parametrically large fluctuations in $\Sigma$ or $\eta$, be bounded from above and below by the following schematic bounds: $$\sigma_{\textsc{q}1}(u) + \sigma_{\textsc{q}2}(u) \frac{\mathcal{Q}_0^2}{u^2} \le \sigma \lesssim \sigma_{\textsc{q}3}(u) + \sigma_{\textsc{q}4}(u) \frac{\mathcal{Q}_0^2}{u^2} + \frac{\xi^2\mathcal{Q}_0^4}{\eta_1(u) u^2} + \frac{\xi^2 u^2}{\eta_2(u)}+ \frac{\xi^2 \mathcal{Q}_0^2}{\eta_3(u)} \label{boundseq}$$ with each $\sigma_{\textsc{q}}$ and $\eta$ factor above related to “typical” behavior of $\Sigma^{\mathrm{qq}}$ and $\eta_{ijkl}$ respectively. In particular, the upper bounds are quite subtle (see (\[eq125\])), and so each $\sigma_{\textsc{q}1,2,3,4}$ may have complicated $u$ dependence for $u\gtrsim\mathcal{Q}_0$, and we have written (\[boundseq\]) as we did to emphasize qualitative behavior, as discussed after (\[eq2\]).
(\[boundseq\]) proves that there is a crossover at $u\sim \mathcal{Q}_0$ between a coherent regime when $u \ll \mathcal{Q}_0$ (translational symmetry is weakly broken) and an incoherent regime when $u\gg\mathcal{Q}_0$ (translational symmetry is strongly broken), as depicted in Figure \[fig1\]. As discussed in the introduction, this is the physics found by mean field holographic models, and demonstration of this without a mean field treatment of disorder is a primary quantitative result of this paper.
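A caricature of (\[boundseq\]) makes the crossover scale visible. Treating each $\sigma_{\textsc{q}}$ and $\eta$ factor as a $u$-independent constant (a hypothetical simplification, valid only at the crude qualitative level discussed above), the lower bound is parametrically large for $u \ll \mathcal{Q}_0$ and saturates for $u \gg \mathcal{Q}_0$:

```python
import numpy as np

# Hypothetical constants; in reality each factor carries its own u dependence.
sigma_q, eta, xi, Q0 = 1.0, 1.0, 1.0, 1.0
u = np.logspace(-2, 2, 5)  # disorder strength, spanning both regimes

lower = sigma_q * (1.0 + Q0**2 / u**2)
upper = lower + xi**2 * (Q0**4 / (eta * u**2) + u**2 / eta + Q0**2 / eta)

# The upper bound only adds positive (viscous) terms to the lower bound.
assert np.all(upper >= lower)
# Coherent regime u << Q0: conductivity is parametrically large, ~ Q0^2/u^2.
assert lower[0] > 1e3 * sigma_q
# Incoherent regime u >> Q0: the momentum-drag term Q0^2/u^2 is negligible
# and the lower bound saturates near the quantum critical value sigma_q.
assert abs(lower[-1] - sigma_q) < 1e-3 * sigma_q
```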
Many of the statements which lead to (\[boundseq\]) can be made quite rigorously. The lower bounds on conductivity are derived carefully and will be valid in a wide variety of theories. The upper bounds which we derive are much more challenging to evaluate analytically when viscosity is not neglected, and so we have made heuristic, non-rigorous arguments to understand the qualitative physics, which may break down in some cases. Theories with very large fluctuations in $\Sigma$, $\eta$, or $\rho^\alpha$ could leave the upper and lower bounds far enough apart (perhaps parametrically so) for (\[boundseq\]) to not be useful. Still, we propose that the coherent-to-incoherent crossover described by (\[boundseq\]) is generic, and we will provide some intuition for why this occurs.
In (\[eq2\]), we ignored viscous effects, and in Figure \[fig1\], we depicted $\sigma$ saturating at $\sigma^*$ when $u\gg \mathcal{Q}_0$. (\[boundseq\]) generically confirms this picture, with $\sigma_{\textsc{q}1} \lesssim \sigma^* \lesssim \sigma_{\textsc{q}3}$, so long as $$\sigma_{\textsc{q}}\eta \gg \xi^2u^2. \label{5bound1}$$ This inequality may or may not be satisfied, and determines whether transport may become sensitive to viscous effects. For example, in a strongly interacting quantum critical system of dynamical exponent $z$, we expect $\sigma_{\textsc{q}} \sim T^{(d-2)/z}$, $\eta \sim \mathcal{S}\sim T^{d/z}$ [@kss], and $\xi\gg T^{-1/z}$. The requirement that (\[5bound1\]) is violated is $$u \gtrsim \sqrt{\frac{\sigma_{\textsc{q}}}{T^{(d-2)/z}}} \frac{T^{d/z}}{T^{1/z}\xi} \sim \sqrt{\frac{\sigma_{\textsc{q}}}{T^{(d-2)/z}}} \frac{\mathcal{S}}{T^{1/z}\xi}. \label{5bound2}$$ When $\mu\lesssim T$ (the regime of validity[^13] of the hydrodynamic approach in a typical quantum critical model), then it is reasonable to expect that $u \lesssim \mathcal{S}$, as most of the entropy will be associated with a background charge neutral plasma, and not with the deformation by a chemical potential. Rearranging (\[5bound2\]) we find $$1 \gtrsim \frac{u}{\mathcal{S}} \gtrsim \sqrt{\frac{\sigma_{\textsc{q}}}{T^{(d-2)/z}}} \frac{1}{T^{1/z}\xi}.$$ Recalling that $T^{1/z}\xi \gg 1$, this sequence of inequalities is satisfied for disorder on the longest wavelengths. However, if $u\ll \mathcal{S}$ then it may be possible to have disorder on short enough wavelengths that this sequence of inequalities is not satisfied. It is this regime where (\[eq2\]) and Figure \[fig1\] are valid. A viscous-dominated transport regime is not well understood, and a further understanding of this regime is an important goal for future work.
A second assumption that went into (\[eq2\]) is that $(\Sigma^{-1})^{\mathrm{qq}}$ is finite. This is true in the effective horizon fluid of holographic models, but need not be true in other quantum critical models. Transport in models where $(\Sigma^{-1})^{\mathrm{qq}}$ is infinite will be discussed elsewhere.
Similar bounds can be found for other transport coefficients. In particular, for bounds on $\bar\kappa$, one must simply replace $\sigma_{\textsc{q}}$ with $T\bar\kappa_{\textsc{q}}$, $\mathcal{Q}_0$ with $T\mathcal{S}_0=\mathbb{E}[T\mathcal{S}]$, and $u^2$ with $T^2 \mathrm{Var}[\mathcal{S}]$. It is more likely that thermal transport is sensitive to viscosity in a quantum critical system, as $\bar\kappa_{\textsc{q}} \rightarrow 0$ when $\bar \mu \ll T$ [@hkms].
One of the most important results we find is an exact inequality for an isotropic fluid: $$\sigma^{\alpha\alpha} \ge \frac{1}{\mathbb{E}\left[\left(\Sigma^{-1}\right)^{\alpha\alpha}_{ii}\right]} , \;\;\;\text{(no summation on }\alpha\text{ or } i). \label{saa}$$ This can be interpreted simply as the statement that a uniform charge or heat current could flow through the fluid, with no convective transport, encountering this effective conductivity. So long as a current can flow everywhere locally with a finite conductivity, a current can also flow globally. (\[saa\]) is remarkably powerful: in particular, if $\Sigma^{\mathrm{qq}}$ is strictly positive at all points in space, we have *proven* that the QFT described by this framework is a conductor. As mentioned above, the lower bounds in (\[boundseq\]) are derived quite carefully, and essentially follow from generalizations of (\[saa\]). Our proof that these fluids are conductors when $\Sigma^{\alpha\beta}_{ij}$ is finite generalizes to anisotropic theories, though the bounds are more easily expressed as upper bounds on the matrix $\sigma^{-1}$.
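The discrete cousin of (\[saa\]) is easy to verify directly: on a random resistor lattice, the effective conductivity is pinched between the harmonic and arithmetic means of the local bond conductances (the classical Wiener bounds, per sample). The sketch below solves Kirchhoff's laws on a small 2d lattice with hypothetical bond conductances; it is a consistency check on the variational logic, not a computation from the hydrodynamic equations.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny = 20, 20                              # nodes: x (transport) by y
sig = rng.uniform(0.5, 2.0, (nx - 1, ny))    # horizontal bond conductances
sig_v = rng.uniform(0.5, 2.0, (nx, ny - 1))  # vertical bond conductances

def idx(i, j):
    return i * ny + j

# Build the weighted graph Laplacian for Kirchhoff's laws.
n = nx * ny
L = np.zeros((n, n))
for i in range(nx - 1):
    for j in range(ny):
        a, b, g = idx(i, j), idx(i + 1, j), sig[i, j]
        L[a, a] += g; L[b, b] += g; L[a, b] -= g; L[b, a] -= g
for i in range(nx):
    for j in range(ny - 1):
        a, b, g = idx(i, j), idx(i, j + 1), sig_v[i, j]
        L[a, a] += g; L[b, b] += g; L[a, b] -= g; L[b, a] -= g

# Fix Phi = 1 on the left column and Phi = 0 on the right; solve the interior.
fixed = {idx(0, j): 1.0 for j in range(ny)}
fixed.update({idx(nx - 1, j): 0.0 for j in range(ny)})
free = [k for k in range(n) if k not in fixed]
phi = np.zeros(n)
for k, v in fixed.items():
    phi[k] = v
rhs = -L[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
phi[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)

# Total current through the first column of bonds; geometry factor for 2d.
I = sum(sig[0, j] * (phi[idx(0, j)] - phi[idx(1, j)]) for j in range(ny))
sigma_eff = I * (nx - 1) / ny

hm = 1.0 / np.mean(1.0 / sig)   # harmonic mean, transport-direction bonds
am = np.mean(sig)               # arithmetic mean
assert hm - 1e-9 <= sigma_eff <= am + 1e-9
```

The lower bound comes from a uniform trial current (the network analogue of the statement below (\[saa\])), the upper bound from a linear trial potential.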
The new approaches advocated in this paper, along with the existing mean-field literature, suggest that the fate of most holographic models at fixed temperature $T$ and arbitrarily strong disorder is to become an incoherent conductor, and not an interacting quantum glass. This is a remarkable and highly non-trivial prediction. In contrast, in metals described by (fermionic) free quantum field theories, there is a transition to an insulating phase at some critical disorder strength [@anderson], which is zero in $d\le 2$ [@abrahams]. Generically, interactions do lead to delocalized, conducting phases at weak disorder, with localization and insulating physics arising at stronger disorder strengths, in any $d$ [@giamarchi; @basko]. It is possible that this localization transition is not observable in classical holography, which only captures the leading order in $N\rightarrow \infty$, and so has taken the “coupling strength $\rightarrow\infty$” limit before the “disorder $\rightarrow\infty$” limit.
We have always referred to these hydrodynamic models as incoherent metals. Holographic “insulators” discussed in the literature typically rely on $\Sigma^{\mathrm{qq}}$ scaling as a positive power of $T$, in a homogeneous model. In [@rangamani], the insulating behavior is likely due to stripes of such decreasing $\Sigma^{\mathrm{qq}}$ arising at low $T$. More generally, such insulators arise from the percolation of locally insulating $\Sigma^{\mathrm{qq}}$ regions through the effective horizon fluid.[^14] This is not unlike the “metal-insulator” transition of a classical disordered resistor lattice, associated with percolation of $R=\infty$ resistors across the lattice [@kirkpatrick; @derrida]. This is a different mechanism from Anderson localization in typical condensed matter systems, which is related to destructive interference of quasiparticles scattering off of disorder. Of course, in holographic models, the percolation phenomenon on the horizon could emerge from “benign” disorder on the boundary, but from the point of view of the emergent horizon fluid the metal-insulator transition is simply a percolation transition. We emphasize that our hydrodynamic formalism is still mathematically valid for dc transport in holographic insulators, due to the remarkable mathematical results of [@donos1506; @donos1507]. The physical interpretation of such a fluid is an important question for future work, as emphasized in the previous section. There is to date no construction of a holographic metal-insulator transition that is unambiguously driven by (non-striped) disorder, and interpreting any such model in terms of hydrodynamic transport may lead to interesting insights. In simple holographic models, it has recently been shown that such a transition is impossible [@grozdanov], and so more complicated models with bulk scalar fields will be necessary.
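The role of locally insulating regions can be illustrated crudely even in a series (1d) caricature, where the harmonic-mean bound is exact: once a finite fraction $p$ of the sample is locally (nearly) insulating, the lower bound $1/\mathbb{E}[\Sigma^{-1}]$ collapses, and the proof of metallic conduction above no longer applies. The distribution and the small regulator below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000
sigma_min = 1e-6  # regulator for "insulating" regions, in place of exactly 0

for p in (0.0, 0.1, 0.5):
    sigma = rng.uniform(0.5, 2.0, N)
    mask = rng.random(N) < p            # fraction p of locally insulating sites
    sigma[mask] = sigma_min
    bound = 1.0 / np.mean(1.0 / sigma)  # the lower bound of (saa), exact in 1d
    if p == 0.0:
        assert bound > 0.5              # finite Sigma everywhere: a conductor
    else:
        assert bound < 1e-4             # insulating regions kill the bound
```

In higher dimensions the collapse occurs only when the insulating regions percolate across the sample, rather than at any finite $p$; the 1d series geometry is the extreme case in which every region lies on the unique current path.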
In a non-holographic context, it is less clear whether or not our hydrodynamic formalism will be valid in a quantum system undergoing a metal-insulator transition, as the validity of hydrodynamics rests on the disorder being long wavelength. The classical “metal-insulator" transition realized by resistor networks [@kirkpatrick; @derrida] is a crude example of this phenomenon, but relies on the only hydrodynamic degree of freedom being charge.
Finally, as we are studying a strongly disordered system, it is also worthwhile to think about fluctuations in the transport coefficients between different realizations of the quenched disorder. As in [@lucas1411], we expect that these fluctuations are suppressed as $L^{-d/2}$ as the sample size $L$ increases, with possible deviations when the distributions of the random coefficients $\rho$, $\Sigma$ and $\eta$ are heavy tailed. Such fluctuations are classical, but this is not surprising since the dc response of our QFTs is governed by classical hydrodynamics. This is analogous to weakly interacting theories at finite temperature [@leestone2]. In contrast, a free quantum field theory has universal conductance fluctuations at $T=0$ [@leestone; @altshuler; @imry], so it would be interesting to ask if the $T\rightarrow 0$ limit of holographic models (where hydrodynamics can still be a sensible approach [@davisoncold]) has anomalous fluctuations in transport coefficients, in disordered models.
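In 1d the harmonic-mean formula makes the $L^{-d/2}$ suppression easy to check by Monte Carlo (with a hypothetical disorder distribution): quadrupling the sample size should halve the sample-to-sample fluctuations of the conductance.

```python
import numpy as np

rng = np.random.default_rng(2)

def conductance_samples(L, n_samples):
    # Each 1d sample's conductance G is the harmonic mean of L i.i.d. local
    # conductivities (series circuit).
    sigma = rng.uniform(0.5, 2.0, (n_samples, L))
    return 1.0 / np.mean(1.0 / sigma, axis=1)

std_small = conductance_samples(100, 20_000).std()
std_large = conductance_samples(400, 20_000).std()

# L^{-d/2} with d = 1: fluctuations should halve when L quadruples.
# Allow generous Monte Carlo tolerance around the expected ratio of 2.
ratio = std_small / std_large
assert 1.7 < ratio < 2.3
```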
Localization
============
As we previously mentioned, many free or weakly interacting quantum systems at strong disorder are described by a “localized” phase in which transport is exponentially suppressed at low temperatures [@anderson]. Naively, one might think that a strong coupling analogue of localization (with the associated reduction in transport) would exist at strong disorder. Indeed, [@saremi] provided evidence for a possible connection in a holographic model. In seeming contrast to this, we have rigorously ruled out any insulating, localized phase in our framework (which includes many such holographic models), so long as the quantum critical conductivity is finite everywhere; the simple holographic models studied in the literature to date are described by our framework, and in most of them $\Sigma^{\mathrm{qq}}$ is finite everywhere in space.
This is consistent with known results in elastic networks and other random resistor networks. Despite the localization of classical eigenfunctions [@anderson2; @john; @ludlam], diffusion and transport remain possible even when the eigenmodes of the linearized hydrodynamic equations are localized; this has been shown in similar models without convective transport [@halperin2; @ziman2; @amir1; @amir2]. Localization is more subtle in these systems because exact conservation laws guarantee zero modes of the hydrodynamic operators. These zero modes, together with modes of arbitrarily long correlation length and finite eigenvalues, allow transport despite classical localization, and so the signatures observed in [@saremi] need not be important for dc transport.
The finite momentum or finite frequency response of the system may be more sensitive to localization. In a simple model of disordered RC circuits, interesting new universal phenomena arise [@amir3]. It is worthwhile to understand finite $\omega$ transport in the class of models in this paper as well. In particular, it is interesting to ask whether at strong disorder, the Drude peak found via memory matrices [@lucasMM] broadens out enough to look “incoherent” [@hartnoll1], at least at small frequencies, or whether more exotic phenomena emerge.
There is one other point worth making about localized eigenmodes. In a translationally invariant fluid, long-time tails in hydrodynamic correlation functions in $d\le 2$ spoil hydrodynamic descriptions of dc transport [@kovtunlec]. In particular, in $d=2$, the conductivity $\sigma(\omega)$ in an uncharged, translationally invariant system picks up a correction $\sim \log (T/|\omega|)$, which diverges as $\omega\rightarrow 0$ [@willwk]. In holography, it is known that such long time tails are quantum bulk effects [@caronhuot], and are thus completely suppressed in the models described in Section \[sec4\]. Since it has been argued that the memory matrix approach gives sensible predictions for realistic strange metal physics [@raghu2; @patel; @debanjan], and the memory matrix framework employed there can be interpreted hydrodynamically, one might be, a priori, concerned about whether long time tails can spoil dc transport in these models. If in the thermodynamic limit, all modes (except for the two zero modes) of the classical hydrodynamic operators are localized, as is believed in $d\le 2$ (where long time tails are problematic), then the standard argument for long time tails [@kovtunlec] seems to fail. It would be interesting to explore this point further in future work.
Conclusion
==========
In this paper, we have explored the consequences of hydrodynamics for the transport coefficients of a strongly coupled QFT, disordered on large length scales. We demonstrated that hydrodynamics can be used to understand the memory function computations of momentum relaxation times, which have previously been derived using an abstract and opaque formalism. It is also straightforward (at least in principle) to compute transport coefficients at higher orders in perturbation theory, whereas memory function formulas only give leading order transport coefficients. Remarkably, we also demonstrated that many non-perturbative holographic dc transport computations can be interpreted entirely by solving a hydrodynamic response problem of a new emergent horizon fluid. Thus, the technology of Appendix \[appa\] may be applied to these models, when exact solutions are not available. We still need specific microscopic theories to compare with $T$-scaling laws in experiments, but this work provides important physical transparency to a large body of recent literature on transport in strongly coupled QFTs. We emphasize again that the memory function formalism and holography are valid in regimes where hydrodynamics should formally break down, and so it is strange (but useful) that hydrodynamic technology (which is readily understandable) can be used to help interpret these results nonetheless.
The fact that this hydrodynamic framework can be used to interpret such a wide variety of results from memory function or holographic computations is suggestive of the fate of such theories at strong disorder. Shortly after this paper was released, it was proved in [@grozdanov] that all $\mathrm{AdS}_4$-Einstein-Maxwell holographic models are electrical conductors, using these hydrodynamic techniques (which are valid so long as the bulk black hole horizon is connected). Thus, we do *not* expect a holographic analogue of a many-body localized phase [@basko] to exist in many strongly disordered holographic systems. Strongly disordered black holes have been numerically constructed recently [@santosdisorder1; @santos; @santosdisorder2]; hence, dc transport coefficients in these backgrounds, along with finite momentum or frequency response, may be numerically computable in the near future.
More generally, we have also demonstrated – without recourse to mean field treatments of disorder, or to holography – a framework which generically gives rise to both a coherent metal at weak disorder, and an incoherent metal at strong disorder. Incoherent metallic physics has been proposed recently to be responsible for some of the exotic thermoelectric properties of the cuprate strange metals [@cupratescale1; @cupratescale2]. One can easily imagine taking realistic scaling laws from microscopic models appropriate for cuprates [@raghu2; @patel; @debanjan; @patel2] and making quantitative scaling predictions about the strong disorder regime using insights from our framework.
Hydrodynamics provides a valuable framework for interpreting more specific microscopic calculations. There are many natural extensions of this work: two examples are the study of hydrodynamic transport in disordered superfluids and superconductors, or the study of systems perturbed by deformations beyond $\bar\mu$. Still, this framework has limitations. High frequency transport (in particular, $\omega \gtrsim T$) cannot be captured by hydrodynamics, and provides a unique opportunity for holography in particular to make experimentally relevant predictions about quantum critical dynamics [@willwk; @prokofev].
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Ariel Amir, Richard Davison, Blaise Goutéraux, Sarang Gopalakrishnan, Bertrand Halperin, Sean Hartnoll and Michael Knap for helpful discussions, and especially Subir Sachdev for critical discussions on presenting these ideas in a more transparent way.
Onsager Reciprocity {#apponsager}
===================
In this appendix we prove that the thermoelectric conductivity matrix $\sigma^{\alpha\beta}_{ij}$ is symmetric. This follows entirely from the symmetries of the diffusive transport coefficients (\[diffsym1\]) and (\[diffsym2\]), as well as the equations of motion (\[maineq1\]) and (\[maineq2\]).
To do this, we look for a periodic solution $\Phi^\alpha$ and $v_i$, up to the constant linear terms in $\Phi^\alpha$. In particular, we write
$$\begin{aligned}
\Phi^\alpha &= -F^\alpha_i x_i + \Phi^{\alpha\beta}_j F^\beta_j, \\
v_i &= v^\beta_{ij}F^\beta_j,
\end{aligned}$$
which we may always do, as the equations of motion are linear. (\[maineq1\]) and (\[maineq2\]) become:
\[onsagereq\]$$\begin{aligned}
\partial_i \left(\rho^\alpha v^\beta_{ij} - \Sigma^{\alpha\gamma}_{ik}\partial_k \Phi^{\gamma\beta}_j\right) &= -\partial_i \Sigma^{\alpha\beta}_{ij}, \\
\rho^\alpha \partial_i \Phi^{\alpha\beta}_j - \partial_m\left(\eta_{imkl}\partial_l v_{kj}^\beta\right) &= \rho^\beta \delta_{ij}.
\end{aligned}$$
We further have $$\sigma^{\alpha\beta}_{ij} = \mathbb{E}\left[\rho^\alpha v^\beta_{ij} - \Sigma^{\alpha\gamma}_{ik}\partial_k \Phi^{\gamma\beta}_j + \Sigma^{\alpha\beta}_{ij}\right] = \mathbb{E}\left[\rho^\alpha v^\beta_{ij} + \Phi^{\gamma\beta}_j \partial_k\Sigma^{\alpha\gamma}_{ik} + \Sigma^{\alpha\beta}_{ij}\right],$$ as we can now always integrate by parts inside of spatial averages. Now, let us employ (\[onsagereq\]) and (\[diffsym1\]) and write $$\sigma^{\alpha\beta}_{ij} = \mathbb{E}\left[ v^\beta_{kj} \rho^\gamma \partial_k \Phi^{\gamma \alpha}_{i} + \eta_{klmn}\partial_l v_{kj}^\beta \partial_n v_{mi}^\alpha +\partial_k\Phi^{\gamma\beta}_j \left(\rho^\gamma v^\alpha_{ki} - \Sigma^{\gamma\delta}_{kl}\partial_l \Phi^{\delta\alpha}_i\right)+ \Sigma^{\alpha\beta}_{ij} \right].$$ Using (\[diffsym1\]) and (\[diffsym2\]) it is straightforward to see from the previous equation that $\sigma^{\alpha\beta}_{ij}$ is symmetric.
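In one dimension with no convective transport the Onsager symmetry is immediate: constancy of the currents gives $\sigma = (\mathbb{E}[\Sigma^{-1}])^{-1}$, which is symmetric whenever each local $\Sigma$ is. A minimal numerical sketch (a toy discretization with hypothetical random $\Sigma(x)$, omitting the velocity sector entirely):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500

# Random local 2x2 diffusive matrices Sigma(x): symmetric positive definite,
# built as M M^T plus a small multiple of the identity.
M = rng.normal(size=(N, 2, 2))
Sigma = np.einsum('nij,nkj->nik', M, M) + 0.1 * np.eye(2)

# In 1d with v = 0 the currents J^alpha are constant in x, so averaging
# F^alpha - dPhi^alpha = (Sigma^{-1})^{alpha beta} J^beta over the (periodic)
# sample gives F = E[Sigma^{-1}] J, i.e. sigma = (E[Sigma^{-1}])^{-1}.
sigma = np.linalg.inv(np.mean(np.linalg.inv(Sigma), axis=0))

# Onsager reciprocity: sigma inherits the symmetry of the local Sigma's.
assert np.allclose(sigma, sigma.T)
# Positivity of entropy production: sigma is positive definite.
assert np.all(np.linalg.eigvalsh(sigma) > 0)
```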
Perturbative Expansions {#apppert}
=======================
Let us describe how to extend the weak disorder calculations of Section \[sec3\] to arbitrarily high orders in perturbation theory, in the special case where the disorder is introduced entirely through $\bar \mu$. This also gives a flavor for how to “extend the memory matrix formalism” beyond leading order in perturbation theory.
Let us write $\mu = \mu_0 +\epsilon \hat\mu(\mathbf{x})$, with $\epsilon \ll 1$ a perturbatively small number. Within linear response, the fields $\Phi^\alpha$ and $v_i$ may be written as follows:
$$\begin{aligned}
\Phi^\alpha &= -F^\alpha_ix_i + \sum_{n=-1}^\infty \epsilon^n \Phi^\alpha_{(n)}, \\
v_i &= \sum_{n=-2}^\infty \epsilon^n \bar{v}_{i(n)} + \sum_{n=-1}^\infty \epsilon^n \tilde{v}_{i(n)}\end{aligned}$$
where $\mathbb{E}[\tilde{\mathbf{v}}]=\mathbf{0}$, $\bar{\mathbf{v}}$ a constant, and $\Phi^\alpha_{(n)}$ single-valued. We will justify the powers of $\epsilon$ above in our computation below, but for now let us emphasize that $\bar v_{i(n-1)}$, $\tilde{v}_{i(n)}$ and $\Phi^\alpha_{(n)}$ enter the computation at the same order. In addition, the hydrodynamic background becomes disordered:
$$\begin{aligned}
\rho^\alpha &= \rho_0^\alpha + \sum_{n=1}^\infty \epsilon^n \rho_{(n)}^\alpha, \\
\Sigma^{\alpha\beta}_{ij} &= \Sigma^{\alpha\beta}_0 \delta_{ij} + \sum_{n=1}^\infty \epsilon^n \Sigma_{ij(n)}^{\alpha\beta}, \\
\eta_{ijkl} &= \eta_{0ijkl} + \sum_{n=1}^\infty \epsilon^n \eta_{ijkl(n)}.\end{aligned}$$
The background at $\mathrm{O}(\epsilon^0)$ is translation invariant, but not at higher orders. As in the main text, we will assume isotropy of the leading order transport coefficients.
(\[maineq1\]) and (\[maineq2\]) may be perturbatively expanded in powers of $\epsilon$. We find the following equations in Fourier space:
$$\begin{aligned}
\mathrm{i}k_i \rho_{(1)}^\alpha(\mathbf{k}) \bar{v}_{i(n-1)} + \mathrm{i}k_i \rho^\alpha_0 \tilde{v}_{i(n)}(\mathbf{k}) + k^2\Sigma_0^{\alpha\beta}\Phi^\beta_{(n)}(\mathbf{k}) &= -X^{\alpha}_{(n)}(\mathbf{k}), \\
\mathrm{i}k_i \rho_0^\alpha \Phi^\alpha_{(n)}(\mathbf{k}) + \eta_0 k_j \left(k_j \tilde{v}_{i(n)}(\mathbf{k}) + k_i \tilde{v}_{j(n)}(\mathbf{k}) - \frac{2}{d}\delta_{ij}k_l\tilde{v}_{l(n)}(\mathbf{k})\right) + \zeta_0 k_i k_l\tilde{v}_{l(n)}(\mathbf{k}) &= -Y_{i(n)}(\mathbf{k}), \\
\mathrm{i}\sum_{\mathbf{k}} k_i \Phi^\alpha_{(n)}(\mathbf{k}) \rho^\alpha_{(1)}(-\mathbf{k}) &= -Z_{i(n)}\end{aligned}$$
with the third equation the zero mode of the second, and
$$\begin{aligned}
X^\alpha_{(-1)} &= 0 \\
Y_{i(-1)} &= 0 \\
Z_{i(-1)} &= -\rho_0^\alpha F^\alpha_i\end{aligned}$$
and, for $n\ge 0$:
$$\begin{aligned}
X^\alpha_{(n)} &= \mathrm{i}k_i \sum_{m=-2}^{n-2} \rho^\alpha_{(n-m)}(\mathbf{k}) \bar{v}_{i(m)}
+ \mathrm{i}k_i \sum_{m=-1}^{n-1} \sum_{\mathbf{q}} \rho^\alpha_{(n-m)}(\mathbf{k}-\mathbf{q}) \tilde{v}_{i(m)}(\mathbf{q}) \notag \\
&+ k_i \sum_{m=-1}^{n-1} \sum_{\mathbf{q}} \Sigma_{ij(n-m)}^{\alpha\beta}(\mathbf{k}-\mathbf{q}) q_j \Phi^\beta_{(m)}(\mathbf{q}) \\
Y_{i(n)} &= \sum_{m=-1}^{n-1} \sum_{\mathbf{q}} \left[ \mathrm{i}q_i \Phi^\alpha_{(m)}(\mathbf{q}) \rho^\alpha_{(n-m)}(\mathbf{k}-\mathbf{q}) + \eta_{(n-m)}(\mathbf{k}-\mathbf{q}) k_j q_j \tilde{v}_{i(m)}(\mathbf{q}) \right. \notag \\
&\left. + \left(\eta^\prime_{(n-m)}(\mathbf{k}-\mathbf{q}) - \eta_{(n-m)}(\mathbf{k}-\mathbf{q})\right) k_i q_j \tilde{v}_{j(m)}(\mathbf{q}) \right] \\
Z_{i(n)} &= -\mathrm{i}\sum_{\mathbf{k}} \sum_{m=-1}^{n-1} k_i \Phi^\alpha_{(m)}(\mathbf{k}) \rho^\alpha_{(n+1-m)}(-\mathbf{k}).\end{aligned}$$
Order by order in perturbation theory, these equations may be solved exactly:
$$\begin{aligned}
\bar{v}_{i(n-1)} &= -\Gamma^{-1}_{ij} \left(Z_{j(n)} -\mathrm{i} \sum_{\mathbf{k}} k_j \rho^\beta_{(1)}(-\mathbf{k})(\mathfrak{m}(k)^{-1})^{\alpha\beta} \left(X^\alpha_n(\mathbf{k}) - \mathrm{i}\rho_0^\alpha \frac{k_l Y_{l(n)}(\mathbf{k})}{\eta^\prime_0 k^2}\right)\right), \\
\Phi^\alpha_{(n)}(\mathbf{k}) &= -\mathrm{i}(\mathfrak{m}(k)^{-1})^{\alpha\beta}\left(k_i \bar{v}_{i(n-1)} \rho_{(1)}^\beta(\mathbf{k}) + X_{(n)}^\alpha(\mathbf{k}) - \mathrm{i}\rho_0^\beta \frac{k_i Y_{i(n)}(\mathbf{k})}{\eta^\prime_0 k^2}\right), \label{eqphim1} \\
\tilde{v}_{i(n)} &= -\frac{\mathrm{i}}{\eta^\prime_0 k^2} k_i \rho_0^\alpha \Phi^{\alpha}_{(n)}(\mathbf{k}) -\frac{1}{\eta_0 k^2} \left(\delta_{ij} - \frac{\eta^\prime_0 - \eta_0}{\eta^\prime_0}\frac{k_ik_j}{k^2}\right) Y_{j(n)}(\mathbf{k}) . \end{aligned}$$
with $\Gamma_{ij}$ and $\mathfrak{m}^{\alpha\beta}$ given by (\[gammaij32\]), with $\hat\rho^\alpha\rightarrow \rho^\alpha_{(1)}$. These equations have a clearly nested structure and can be iteratively solved. At leading order, it is readily seen that the response of the fluid is simply $$\mathcal{J}^\alpha_{i(-2)} = \rho_0^\alpha \rho_0^\beta \Gamma^{-1}_{ij} F^\beta_j,$$ as claimed in the main text. We also stress that even at leading order, $\Phi^\alpha$ and $\tilde{v}_i$ are non-local functions.
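The nested structure, in which each order is sourced entirely by the orders below it, is shared by any perturbed linear problem. As a toy illustration (not the hydrodynamic equations themselves), one can solve $(L_0 + \epsilon L_1)\phi = s$ order by order and compare against the direct solution:

```python
import numpy as np

rng = np.random.default_rng(4)
n, eps, orders = 6, 0.05, 20

L0 = np.diag(rng.uniform(1.0, 2.0, n))  # invertible leading-order operator
L1 = rng.normal(size=(n, n))            # disorder-induced perturbation
s = rng.normal(size=n)

# Nested solve: phi_0 = L0^{-1} s, and each higher order is sourced purely by
# the order below it, phi_m = -L0^{-1} L1 phi_{m-1} -- the analogue of the
# X, Y and Z sources above.
L0_inv = np.linalg.inv(L0)
phi_m = L0_inv @ s
phi = phi_m.copy()
for m in range(1, orders):
    phi_m = -eps * (L0_inv @ (L1 @ phi_m))
    phi += phi_m

# The perturbative series converges to the direct solution for small eps.
phi_exact = np.linalg.solve(L0 + eps * L1, s)
assert np.allclose(phi, phi_exact, atol=1e-6)
```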
At higher orders, the $X$, $Y$ and $Z$ corrections must be systematically accounted for, and this is overwhelming to process by hand, especially without specific equations of state. However, this tedious procedure does seem easier than attempting to generalize the memory matrix formalism to higher orders in perturbation theory, and indeed makes predictions for such an effort. So let us at least comment qualitatively on what happens at higher orders in perturbation theory. Many terms that contribute to $\mathcal{J}^\alpha_i$ at higher orders in $\epsilon$ are related to $\rho^\alpha_{(n)}$, $\Sigma^{\alpha\beta}_{ij(n)}$, $\eta_{(n)}$ and $\zeta_{(n)}$ at increasing orders $n$. As we mentioned in Section \[sec32\], we can interpret $$\rho^\alpha_{(1)}(\mathbf{k}) = \mathrm{Re}\left[G^{\alpha\mathrm{q}}(\mathbf{k},\omega=0)\right] \hat\mu(\mathbf{k}) = \chi^{\alpha\mathrm{q}} \hat\mu(\mathbf{k}).$$ Namely, the response coefficients above are related to certain Green’s functions that can be computed in a microscopic model. Recall that the disorder is on such long wavelengths that we may neglect $\mathbf{k}$-dependence in the hydrodynamic Green’s functions. So it is tempting to interpret $$\begin{aligned}
\rho^\alpha_{(n)}(\mathbf{k}) &= \frac{1}{n!}\sum_{\mathbf{k}_1,\ldots,\mathbf{k}_{n-1}} \mathrm{Re}\left[ G^{\alpha\mathrm{q}\cdots\mathrm{q}}(\mathbf{k}_1,\ldots,\mathbf{k}_n, \omega=0)\right] \hat\mu(\mathbf{k}_1)\cdots \hat\mu(\mathbf{k}_n) \delta_{\mathbf{k},\mathbf{k}_1+\cdots+\mathbf{k}_n} \notag \\
&\approx \frac{\chi^{\alpha\mathrm{q}\cdots\mathrm{q}}}{n!} \sum_{\mathbf{k}_1,\ldots,\mathbf{k}_{n-1}} \hat\mu(\mathbf{k}_1)\cdots \hat\mu(\mathbf{k}_n) \delta_{\mathbf{k},\mathbf{k}_1+\cdots+\mathbf{k}_n},\end{aligned}$$ with $G^{\alpha\mathrm{q}\cdots\mathrm{q}}$ an appropriate higher-point Green’s function in the microscopic theory. Similar statements may be made for $\Sigma$ and $\eta$ by relating them properly to Green’s functions of $J_\mu$ and $T_{\mu\nu}$, as in [@kovtunlec]. In the last step, we have used the fact that disorder is long wavelength, and so we expect $\rho^\alpha_{(n)}$, $\Sigma^{\alpha\beta}_{ij(n)}$ and $\eta_{ijkl(n)}$ to be local functions of $\hat\mu$ in position space.
This provides a prediction of our hydrodynamic framework which may be compared with a memory matrix calculation at higher orders in perturbation theory (or another method). Of course, we should stress that in principle, memory matrix calculations can account for corrections beyond the regime of validity of hydrodynamics, though in the limits we identified in Section \[sec2\], one should find that only the contributions described above survive in the conductivities.
In the above framework, it does not seem as though there are any natural cancellations between various terms at higher orders in perturbation theory. So this approach becomes rapidly unwieldy for computing transport coefficients past leading order in $\epsilon$. Holographic mean field phenomenology suggests that these corrections are all related to a single phenomenological coefficient – the Drude relaxation time $\tau \sim \epsilon^{-2}$ in (\[drude\]). It would be interesting to understand further under what circumstances the Green’s functions above undergo similar universal cancellations, and whether this is a sensible prediction of holography.
Examples of Variational Calculations {#appa}
====================================
Upper Bounds on the Resistance Matrix
-------------------------------------
A simple set of trial functions is
$$\begin{aligned}
\tilde{\mathcal{J}}^\alpha_i &= \text{constant}, \\
\tilde{v}_i &= 0.\end{aligned}$$
This is a guess corresponding to strong momentum relaxation, as the response of the metal is entirely in the diffusive sector. Employing (\[eqlowerbound\]) we obtain (\[saa\]).
In cases with weak disorder this bound is not strong enough to be useful, and we can do better by allowing $\tilde{v}_i$ to be a constant ($\mathbf{x}$-independent) variational parameter. In this case, we obtain $$\tilde{\mathcal{P}}(\tilde{v}_i) = L^d\left[ A^{\alpha\beta}_{ij} \mathcal{J}^\alpha_i \mathcal{J}^\beta_j + 2 B^\beta_{ij} \mathcal{J}^\beta_j \tilde{v}_i + C_{ij}\tilde{v}_i \tilde{v}_j\right]$$ where
$$\begin{aligned}
A^{\alpha\beta}_{ij} &= \mathbb{E}\left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\right], \\
B_{ij}^\beta &= -\mathbb{E}\left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij} \rho^\alpha\right], \\
C_{ij} &= \mathbb{E}\left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij} \rho^\alpha \rho^\beta\right].\end{aligned}$$
Minimizing $\tilde{\mathcal{P}}(\tilde{v}_i)$, we find $$\mathcal{J}^\alpha_i \left(\sigma^{-1}\right)^{\alpha\beta}_{ij} \mathcal{J}^\beta_j \le \mathcal{J}^\alpha_i\left[ A^{\alpha\beta}_{ij} - B^\alpha_{ik} B^\beta_{jl} C^{-1}_{kl} \right]\mathcal{J}^\beta_j \equiv \mathcal{J}^\alpha_i (\tilde{\sigma}^{-1})^{\alpha\beta}_{ij} \mathcal{J}^\beta_j . \label{difflower}$$
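The minimization over the constant trial velocity is just a Schur complement. With hypothetical random coefficients, flattening the index pairs $(\alpha,i)$ into single indices, one can check the algebra directly:

```python
import numpy as np

rng = np.random.default_rng(5)
nJ, nv = 3, 2  # flattened sizes of (alpha, i) for J, and of i for v

# Hypothetical quadratic form P(v) = J.A.J + 2 (B J).v + v.C.v with C > 0.
A = np.eye(nJ) + 0.1 * rng.normal(size=(nJ, nJ)); A = (A + A.T) / 2
B = 0.2 * rng.normal(size=(nv, nJ))
Cm = np.eye(nv) + 0.1 * rng.normal(size=(nv, nv)); Cm = (Cm + Cm.T) / 2
J = rng.normal(size=nJ)

def P(v):
    return J @ A @ J + 2 * v @ B @ J + v @ Cm @ v

# Minimizer v* = -C^{-1} B J, with minimum value J.(A - B^T C^{-1} B).J:
# the Schur complement appearing in (difflower).
v_star = -np.linalg.solve(Cm, B @ J)
schur = A - B.T @ np.linalg.solve(Cm, B)
assert np.isclose(P(v_star), J @ schur @ J)

# Any other constant trial velocity only increases the dissipated power.
for _ in range(100):
    assert P(v_star) <= P(v_star + rng.normal(size=nv)) + 1e-12
```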
It is straightforward to see that the smallest eigenvalue of $\tilde \sigma^{-1}$ must be larger than the smallest eigenvalue of $\sigma^{-1}$. A generic consequence of this result is that even if the components of $\tilde \sigma^{-1}$ are not parametrically small in the weak disorder limit, the components of $\tilde \sigma$ may be parametrically large in the weak disorder limit. A simple example analogous to Section \[sec32\] is the case where $\Sigma$ is a constant, isotropic matrix, and $\rho^\alpha = \rho_0^\alpha + \hat\rho^\alpha$, with $\rho_0^\alpha$ a constant and $\hat\rho^\alpha$ a small perturbation with $\mathbb{E}[\hat\rho^\alpha]=0$. In this case we find $$(\tilde{\sigma}^{-1})^{\alpha\beta}_{ij} = (\Sigma^{-1})^{\alpha\beta} \delta_{ij} - \frac{(\Sigma^{-1})^{\alpha\gamma}(\Sigma^{-1})^{\beta\delta} \rho_0^\gamma \rho_0^\delta }{(\Sigma^{-1})^{\eta\zeta}\left(\rho_0^\eta \rho_0^\zeta + \mathbb{E}\left[\hat\rho^\eta\hat\rho^\zeta\right]\right)}\delta_{ij} .$$ As $\hat\rho^\alpha \rightarrow 0$, one can check that $\rho_0^\beta$ becomes an eigenvector of $\tilde{\sigma}^{-1}$ with a parametrically small eigenvalue. Exactly at $\hat\rho^\alpha=0$, $\sigma^{\alpha\beta}$ will have an eigenvalue of $\infty$; as discussed previously, this follows on quite general principles from the fact that $\mathcal{S}$ and $\mathcal{Q}$ become constant. If we invert this matrix, we find $$\tilde{\sigma}^{\alpha\beta} \approx \frac{\rho_0^\alpha \rho_0^\beta}{\mathbb{E}[\hat\rho^\eta \hat\rho^\zeta] (\Sigma^{-1})^{\eta\zeta}} + \cdots \equiv \frac{\rho_0^\alpha \rho_0^\beta}{\tilde{\mathcal{C}}} + \cdots \label{sq2}$$ To compute this eigenvalue,[^15] it is easiest to compute $\rho_0^\alpha (\tilde{\sigma}^{-1})^{\alpha\beta}\rho_0^\beta$, and take the leading order coefficient in $\hat\rho^\alpha$. The subleading corrections correspond to diffusive transport and stay finite in the $\hat\rho^\alpha\rightarrow 0$ limit.
Let us compare to the exact results in the perturbative limit in Section \[sec32\]. Technically speaking, we are not guaranteed that $\tilde{\sigma}^{\alpha\beta} \le \sigma^{\alpha\beta}$, though this inequality is satisfied in this limit (assuming that $\rho_0^\alpha>0$). Indeed, we can write $\sigma^{\alpha\beta} \approx \rho_0^\alpha \rho_0^\beta/\mathcal{C}$, with $\mathcal{C}$ given by (\[gammaij32\]), and $$\mathcal{C} = \frac{1}{d}\sum_{\mathbf{k}} \hat\rho^\alpha(-\mathbf{k}) \left(\frac{\rho_0^\alpha \rho_0^\beta}{\eta^\prime k^2} + \Sigma^{\alpha\beta}\right)^{-1} \hat\rho^\beta(\mathbf{k}) \le \frac{1}{d}\sum_{\mathbf{k}} \hat\rho^\alpha(-\mathbf{k}) (\Sigma^{-1})^{\alpha\beta} \hat\rho^\beta(\mathbf{k}) = \frac{\tilde{\mathcal{C}}}{d}.$$ The inequality here follows from the fact that for any vector $u_i$, and two positive definite matrices $A_{ij}$ and $B_{ij}$, the following inequality holds: $$u_i (A+B)^{-1}_{ij} u_j \le u_i A^{-1}_{ij}u_j. \label{matproof1}$$To prove this, let $\lambda>0$ be a positive coefficient, and $$\frac{\mathrm{d}}{\mathrm{d}\lambda} u_i (A+\lambda B)^{-1}_{ij} u_j = -u_i (A+\lambda B)^{-1}_{ik} B_{kl} (A+\lambda B)^{-1}_{lj} u_j < 0, \label{matproof2}$$with the latter inequality following from positive-definiteness of sums and products of positive definite matrices. Integrating (\[matproof2\]) from $\lambda=0$ to 1 proves (\[matproof1\]).
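The matrix inequality (\[matproof1\]) is easy to sanity-check numerically. The following sketch (an illustration added here, with randomly generated matrices, not part of the original argument) verifies both the inequality and the negativity of the integrand in (\[matproof2\]):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # M M^T + I is symmetric positive definite for any square M
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)

n = 6
A, B = random_spd(n), random_spd(n)
u = rng.standard_normal(n)

lhs = u @ np.linalg.solve(A + B, u)   # u^T (A+B)^{-1} u
rhs = u @ np.linalg.solve(A, u)       # u^T A^{-1} u
assert lhs <= rhs                     # inequality (matproof1)

# the derivative in (matproof2) is negative for every lambda > 0
for lam in (0.1, 0.5, 1.0):
    w = np.linalg.solve(A + lam * B, u)
    assert -(w @ B @ w) < 0
```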
It is also possible to find viscosity-limited bounds on the resistance matrix which can be smaller than the diffusion-limited bound (\[difflower\]), where viscosity plays no role. For simplicity, let us focus on the specific case of computing thermal transport in an isotropic theory with $\mathcal{Q}=0$. (\[difflower\]) gives us that $$(\sigma^{-1})^{\mathrm{hh}} \le \mathbb{E}\left[(\Sigma^{-1})^{\mathrm{hh}}\right] -\frac{\mathbb{E}[(\Sigma^{-1})^{\mathrm{hh}}\mathcal{S}]^2}{\mathbb{E}[(\Sigma^{-1})^{\mathrm{hh}}\mathcal{S}^2]}. \label{thermb1}$$ A natural guess for a viscous-dominated bound is to assume that
$$\begin{aligned}
\tilde{\mathcal{J}}^{\mathrm{h}}_i &= \text{constant}, \\
\tilde{v}_i &= \frac{\tilde{\mathcal{J}}^{\mathrm{h}}_i}{T\mathcal{S}}.\end{aligned}$$
This directly leads to $$(\sigma^{-1})^{\mathrm{hh}} \le \mathbb{E}\left[ \frac{\eta^\prime}{d} \left(\frac{\partial_i\mathcal{S}}{T\mathcal{S}^2}\right)^2 \right]. \label{thermb2}$$ We may employ whichever of (\[thermb1\]) or (\[thermb2\]) is smaller.
Upper Bounds on the Conductivity Matrix
---------------------------------------
For simplicity, we focus on the bounding of $G^{\mathrm{qq}}_{ij}$; $G^{\mathrm{hh}}_{ij}$ may be bounded with an exactly analogous ansatz. Let us write the background charge density as $$\mathcal{Q} = \mathcal{Q}_0 + \hat{\mathcal{Q}},$$ with $\mathbb{E}[\mathcal{Q}]=\mathcal{Q}_0 \ne 0$ and $\mathbb{E}[\hat{\mathcal{Q}}]=0$. Let us also split $\Phi^{\mathrm{q}}$ into a linear term sourcing a background electric field, and a periodic response $\varphi$: $$\Phi^{\mathrm{q}} = \varphi + \sum_i V^{\mathrm{q}}_i \left(1-\frac{x_i}{L}\right).$$ It is easier to deal with (\[upcons\]) in Fourier space, so let us write (with $E_i = V^{\mathrm{q}}_i/L$): $$\mathrm{i}E_i \mathcal{Q}(\mathbf{k}) + \sum_{\mathbf{q}} q_i \varphi(\mathbf{q})\mathcal{Q}(\mathbf{k}-\mathbf{q}) = k_j \mathcal{T}_{ij}(\mathbf{k}).$$
Let us begin by assuming that $\eta\rightarrow\infty$, so that we may ignore the response of $\mathcal{T}_{ij}$ to $\Phi$ when computing (\[pupperbound\]). The only constraint we must impose in this limit is $$\mathbb{E}[\mathcal{Q} \partial_i \tilde\Phi^{\mathrm{q}}] = 0.$$ A natural guess for $\varphi$, inspired by exact results in the weak disorder limit in Section \[sec32\], is $$\varphi(\mathbf{k}) = -\mathrm{i} \frac{k_i E_i }{Ak^2} \hat{\mathcal{Q}}(\mathbf{k}), \label{eqcalf}$$ with $A$ a positive constant constrained by (\[upcons\]): $$E_i \mathcal{Q}_0 = E_j \sum_{\mathbf{q}} \frac{q_iq_j}{Aq^2} |\hat{\mathcal{Q}}(\mathbf{q})|^2. \label{fconstraint}$$ So far, up to neglecting viscous contributions to $\mathcal{P}$, this is completely rigorous.
For simplicity, suppose that $\hat{\mathcal{Q}}(\mathbf{k})$ are disordered random variables that are not drawn from a heavy-tailed distribution:
$$\begin{aligned}
\mathbb{E}_{\mathrm{d}}[\hat{\mathcal{Q}}(\mathbf{k})] &= 0, \\
\mathbb{E}_{\mathrm{d}}[\hat{\mathcal{Q}}(\mathbf{k})\hat{\mathcal{Q}}(\mathbf{q})] &= \frac{u^2}{N} \delta_{\mathbf{k},-\mathbf{q}}.
\end{aligned}$$
This will allow us to extract meaningful qualitative information out of bounds which will be quite opaque in Fourier space. $A$ is fixed by (\[fconstraint\]) as $N\rightarrow\infty$: $$A = \frac{u^2}{d\mathcal{Q}_0}.$$ Fluctuations in $A$ are suppressed as $N^{-1/2}$ [@lucas1411].
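The suppression of fluctuations in $A$ as $N^{-1/2}$ can be illustrated with a quick Monte Carlo experiment on the mode sum $\sum_{\mathbf{q}}|\hat{\mathcal{Q}}(\mathbf{q})|^2$, which fixes $A$ through (\[fconstraint\]). This sketch is added only for illustration: real Gaussian modes stand in for the complex $\hat{\mathcal{Q}}(\mathbf{k})$, and the angular factor $q_iq_j/q^2$ is dropped.

```python
import numpy as np

rng = np.random.default_rng(1)
u = 0.7          # disorder strength entering E[|Qhat|^2] = u^2/N
trials = 2000

def mode_sum(N):
    # N Fourier modes with E[|Qhat(k)|^2] = u^2/N
    Q = rng.normal(0.0, u / np.sqrt(N), size=(trials, N))
    S = (Q**2).sum(axis=1)      # the sum that fixes A through (fconstraint)
    return S.mean(), S.std()

m1, s1 = mode_sum(100)
m2, s2 = mode_sum(10000)
assert abs(m1 - u**2) < 0.05 and abs(m2 - u**2) < 0.05
# fluctuations shrink ~ N^{-1/2}: a factor ~10 between N=100 and N=10^4
assert 5 < s1 / s2 < 20
```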
Plugging this $\Phi$ into (\[pupperbound\]) we obtain $$\begin{aligned}
G^{\mathrm{qq}}_{ij} E_i E_j &\le L^{d-2} \mathbb{E}\left[ \Sigma^{\mathrm{qq}}_{ij}(\mathbf{x}) (E_i - \partial_i \hat \Phi)(E_j - \partial_j \hat \Phi) \right] \notag \\
&= L^{d-2}\left[ \mathbb{E}\left[\Sigma^{\mathrm{qq}}_{ij}\right] E_i E_j - 2\mathbb{E}\left[\Sigma^{\mathrm{qq}}_{ij}\partial_i \varphi \right] E_j + \mathbb{E}\left[\Sigma^{\mathrm{qq}}_{ij}\partial_i \varphi \partial_j \varphi \right]\right] .\end{aligned}$$ Now, recall that we expect $\Sigma^{\mathrm{qq}}$ to be a function of $\bar \mu$, and $\hat{\mathcal{Q}}$ is a function of $\bar \mu$ as well, so let us just consider $\Sigma^{\mathrm{qq}}$ to be a function of $\hat{\mathcal{Q}}$. Then we obtain (exploiting isotropy): $$\begin{aligned}
\mathbb{E}_{\mathrm{d}}[\sigma^{\mathrm{qq}}_{ij}] &\le \frac{\delta_{ij}}{d} \mathbb{E}_{\mathrm{d}} \left[d\Sigma^{\mathrm{qq}}(\hat{\mathcal{Q}}) - 2\sum_{\mathbf{k}}\Sigma^{\mathrm{qq}}(-\mathbf{k})\frac{\hat{\mathcal{Q}}(\mathbf{k})}{A} + \sum_{\mathbf{k}_1,\mathbf{k}_2}\Sigma^{\mathrm{qq}}(-\mathbf{k}_1-\mathbf{k}_2)\frac{\hat{\mathcal{Q}}(\mathbf{k}_1)\hat{\mathcal{Q}}(\mathbf{k}_2)}{A^2}\frac{(\mathbf{k}_1\cdot\mathbf{k}_2)^2}{k_1^2k_2^2}\right] \notag \\
&\le \frac{\delta_{ij}}{d}\mathbb{E}_{\mathrm{d}} \left[d\Sigma^{\mathrm{qq}}(\hat{\mathcal{Q}}) - 2\sum_{\mathbf{k}}\Sigma^{\mathrm{qq}}(-\mathbf{k})\frac{\hat{\mathcal{Q}}(\mathbf{k})}{A} + \sum_{\mathbf{k}_1,\mathbf{k}_2}\Sigma^{\mathrm{qq}}(-\mathbf{k}_1-\mathbf{k}_2)\frac{\hat{\mathcal{Q}}(\mathbf{k}_1)\hat{\mathcal{Q}}(\mathbf{k}_2)}{A^2}\right] \notag \\
&= \mathbb{E}_{\mathrm{d}}\left[\Sigma^{\mathrm{qq}} - \frac{2\mathcal{Q}_0\hat{\mathcal{Q}}\Sigma^{\mathrm{qq}}}{u^2} + \frac{d\mathcal{Q}_0^2\hat{\mathcal{Q}}^2 \Sigma^{\mathrm{qq}}}{u^4}\right]\delta_{ij} \label{eq125}\end{aligned}$$ This equation holds for arbitrary functions $\Sigma^{\mathrm{qq}}(\hat{\mathcal{Q}})>0$, so long as viscosity is negligible. Our disorder-averaged bound on $\sigma_{ij}$ is also manifestly positive, as it must be.
In a theory with $\mathcal{Q}_0 \rightarrow 0$, and viscosity still negligible, our upper bound collapses to $\mathbb{E}[\Sigma^{\mathrm{qq}}]$. This bound may also be found at $\mathcal{Q}_0=0$ by directly plugging the simple ansatz $\Phi^{\mathrm{q}} = -E_ix_i$ into (\[pupperbound\]), and so our bound is actually valid for all $\mathcal{Q}_0$.
Now, let us consider the effects of finite viscosity. Henceforth, the discussion will be more qualitative, and we will not be particularly concerned with O(1) prefactors, as it turns out to be quite difficult to write down a good non-perturbative analytic solution to (\[upcons\]).
Let us see what happens if we simply use (\[eqcalf\]), along with a sensible ansatz for $\mathcal{T}_{ij}$. Denoting $$\mathrm{i}E_i \mathcal{Q}(\mathbf{k}) + \sum_{\mathbf{q}}q_i \varphi(\mathbf{q}) \mathcal{Q}(\mathbf{k}-\mathbf{q}) = \mathcal{A}_i(\mathbf{k}),$$ we pick in $d=1$: $$\mathcal{T}_{xx} = \frac{\mathcal{A}_x}{k_x},$$in $d=2$:
$$\begin{aligned}
\mathcal{T}_{xx} &= - \mathcal{T}_{yy} = \frac{k_x \mathcal{A}_x - k_y\mathcal{A}_y}{k_x^2+k_y^2}, \\
\mathcal{T}_{xy} &= \frac{k_y\mathcal{A}_x + k_x\mathcal{A}_y}{k_x^2+k_y^2},\end{aligned}$$
and in $d=3$:
$$\begin{aligned}
\mathcal{T}_{xy} &= \frac{k_x\mathcal{A}_{x}+k_y\mathcal{A}_{y}-k_z\mathcal{A}_{z}}{2k_xk_y}, \\
\mathcal{T}_{xz} &= \frac{k_x\mathcal{A}_{x}-k_y\mathcal{A}_{y}+k_z\mathcal{A}_{z}}{2k_xk_z}, \\
\mathcal{T}_{yz} &= \frac{-k_x\mathcal{A}_{x}+k_y\mathcal{A}_{y}+k_z\mathcal{A}_{z}}{2k_yk_z}.\end{aligned}$$
In all of the above cases, the equations are only valid away from $\mathbf{k}=\mathbf{0}$, and we take the zero modes to vanish. In $d=3$, we make the stronger assumption that, e.g., $\mathcal{T}_{xy}=0$ whenever $k_x=0$ or $k_y=0$. It is now straightforward to (qualitatively) see what happens. The first contribution to the conductivity is unchanged from (\[eq125\]), and the average viscous power dissipated scales as $$\begin{aligned}
\mathbb{E}\left[\eta^{-1}_{ijkl}\mathcal{T}_{ij}\mathcal{T}_{kl}\right] &\sim \sum_{\mathbf{k}\ne \mathbf{0}} \eta^{-1}(\mathbf{0}) \frac{\mathcal{A}(\mathbf{k})\mathcal{A}(-\mathbf{k})}{k^2} + \sum_{\mathbf{k}_1,\mathbf{k}_2\ne \mathbf{0}} \eta^{-1}(-\mathbf{k}_1-\mathbf{k}_2)\frac{\mathcal{A}(\mathbf{k}_1)\mathcal{A}(\mathbf{k}_2)}{k_1k_2} \notag \\
&\sim \left[\frac{\xi^2 u^2}{\eta} + \frac{\xi^2\mathcal{Q}_0^2}{\eta} + \frac{\xi^2 \mathcal{Q}_0^4}{\eta u^2} \right]E_i E_i \label{eq130}\end{aligned}$$ where we have been schematic and neglected tensor indices on $\eta$. To obtain the final scaling law above, we have used that $\mathcal{A}(\mathbf{k}\ne \mathbf{0}) \sim \eta^{-1}(\mathbf{k}\ne \mathbf{0}) \sim 1/\sqrt{N}$. We have neglected in the above scaling the possibility that fluctuations in $\eta$ may be large.
Of course, the general framework can certainly account for this possibility, and one can directly plug our ansatzes into (\[pupperbound\]) – however, we do not see general simplifications that can be made, other than the crude scaling arguments here. Given our ansatzes, the expression in (\[eq130\]) is generally nonlocal and thus will not be as elegant as (\[eq125\]). Putting all of this together, we find (\[boundseq\]).
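One can check directly that the stress ansatzes above do solve the constraint $k_j \mathcal{T}_{ij}(\mathbf{k}) = \mathcal{A}_i(\mathbf{k})$ at generic nonzero momentum. The following numerical verification is added here for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# d=2: traceless ansatz T_xx = -T_yy solving k_j T_ij = A_i
kx, ky = rng.standard_normal(2)
Ax, Ay = rng.standard_normal(2)
k2 = kx**2 + ky**2
Txx = (kx * Ax - ky * Ay) / k2
Tyy = -Txx
Txy = (ky * Ax + kx * Ay) / k2
assert np.isclose(kx * Txx + ky * Txy, Ax)   # i = x component
assert np.isclose(kx * Txy + ky * Tyy, Ay)   # i = y component

# d=3: purely off-diagonal ansatz
kx3, ky3, kz3 = rng.standard_normal(3)
Ax3, Ay3, Az3 = rng.standard_normal(3)
Txy3 = (kx3 * Ax3 + ky3 * Ay3 - kz3 * Az3) / (2 * kx3 * ky3)
Txz3 = (kx3 * Ax3 - ky3 * Ay3 + kz3 * Az3) / (2 * kx3 * kz3)
Tyz3 = (-kx3 * Ax3 + ky3 * Ay3 + kz3 * Az3) / (2 * ky3 * kz3)
assert np.isclose(ky3 * Txy3 + kz3 * Txz3, Ax3)
assert np.isclose(kx3 * Txy3 + kz3 * Tyz3, Ay3)
assert np.isclose(kx3 * Txz3 + ky3 * Tyz3, Az3)
```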
Essential to the scaling laws in (\[eq130\]) was that $\sum k^{-2} \sim N\xi^2$ above. However, this is not true in $d\le 2$, where the sum will diverge at small $k$. We now give a heuristic argument that the typical scaling behavior we found above need not be parametrically different in these dimensions, consistent with what we found previously (in Section \[sec32\], the form of the conductivities is the same in all dimensions at weak disorder). To do so, we need to argue that there is a small modification that we can make to $\varphi$ (so the contribution to the bound on conductivity from $\Sigma^{\mathrm{qq}}$ is qualitatively unchanged), yet which can remove the IR divergence from the viscous contribution. The natural guess is to modify $\varphi(\mathbf{k})$ so that $\mathcal{T}_{ij}(\mathbf{k}) =0 $ for $|\mathbf{k}| \xi \lesssim \delta$, with $\delta \ll 1$ a small constant. In this case we predict that $$\sum \frac{1}{k^2} \sim \left\lbrace\begin{array}{ll} N/\delta &\ d=1 \\ N\log(1/\delta) &\ d=2\end{array}\right.,$$ and so if we choose a “reasonable” $\delta$ (especially in $d=2$), we can get acceptably finite viscous contributions to the conductivity. This can be accomplished by writing down $$\varphi = -\mathrm{i}\frac{k_iE_i}{Ak^2}\left(\hat{\mathcal{Q}}(\mathbf{k}) + \delta \cdot Q(\mathbf{k})\right),$$with the first term reproducing (\[eqcalf\]), and $Q\sim \hat{\mathcal{Q}}$ carrying no anomalous powers of $N$ or $\delta$. As typical elements of $\varphi$ change by an amount $\sim \delta$, we expect that the conductivity bound in (\[eq125\]) changes only by a small amount.
Let us verify this is possible. We wish to find a solution to the highly overdetermined equation $$\delta_{ij} \hat{\mathcal{Q}}(\mathbf{k}) (1-\delta_{\mathbf{k},\mathbf{0}}) - \mathcal{Q}_0 \frac{q_iq_j}{Aq^2}\hat{\mathcal{Q}}(\mathbf{k}) - \sum_{\mathbf{q}} \hat{\mathcal{Q}}(\mathbf{k}-\mathbf{q}) \frac{q_iq_j}{Aq^2} \hat{\mathcal{Q}}(\mathbf{q}) =\sum_{\mathbf{q}} \mathcal{Q}(\mathbf{k}-\mathbf{q}) \frac{q_iq_j}{Aq^2} \delta Q(\mathbf{q}),\;\;\;\; 0\le |\mathbf{k}| \le \frac{\delta}{\xi}. \label{eq128}$$ The left hand side of the above equation scales as $1/\sqrt{N}$ for typical disorder in $\hat{\mathcal{Q}}$. In the third term, this scaling follows from the lack of correlations between $\hat{\mathcal{Q}}$ at different momenta. So long as we choose $Q(\mathbf{k})=0$ for $|\mathbf{k}|\xi \lesssim 2\delta$, the “matrix elements” $\hat{\mathcal{Q}}$ on the right hand side are also small.
We now look for $Q(\mathbf{q})$ by solving a constrained optimization problem of the schematic form: $$\mathbf{a} = \mathsf{B}\mathbf{x},$$ where $\mathbf{a} \in \mathbb{R}^{n_1}$ and $\mathbf{x} \in \mathbb{R}^{n_2}$, $n_1 \sim \delta N \ll n_2\sim (1-2^d\delta)N\sim N$, and $\mathsf{B}$ is a rectangular matrix, such that $|\mathbf{x}|$ is smallest. $\mathbf{a}$ is analogous to the (known) left hand side of (\[eq128\]), $\mathsf{B}$ is a known, truncated convolution-like matrix in a Fourier basis, and $\mathbf{x}$ plays the role of the undetermined, high momentum modes of $Q(\mathbf{q})\cdot \delta $. This is a classic problem in constrained optimization with the solution [@boyd] $$\delta Q(\mathbf{q}) = \mathbf{a} \cdot (\mathsf{BB}^{\mathrm{T}})^{-1} \mathsf{B}.$$ Given that elements of $\mathbf{a}$ and $\mathsf{B}$ each scale as $\sqrt{1/N}$, we roughly estimate that $\delta Q(\mathbf{q}) \sim \delta$, and so indeed it is possible to pick a small correction to $\varphi$ which eliminates the divergence in the viscous contribution to the conductivity. Note that the matrix $\mathsf{BB}^{\mathrm{T}}$ is nearly diagonal (the off diagonal elements involve uncorrelated sums of random variables and so scale as $1/\sqrt{N}$ instead of 1), and so there are no concerns about parametrically small eigenvalues of $\mathsf{BB}^{\mathrm{T}}$.
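The minimum-norm solution quoted above is standard. The following sketch (with hypothetical dimensions $n_1 \ll n_2$ and random data, added for illustration) confirms that $\mathbf{x}=\mathsf{B}^{\mathrm{T}}(\mathsf{BB}^{\mathrm{T}})^{-1}\mathbf{a}$ satisfies the constraints, agrees with the Moore–Penrose pseudoinverse, and that $\mathsf{BB}^{\mathrm{T}}$ is nearly diagonal when entries are of size $1/\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 5, 40                                     # n1 ~ delta*N constraints, n2 ~ N unknowns
B = rng.standard_normal((n1, n2)) / np.sqrt(n2)    # entries ~ 1/sqrt(N)
a = rng.standard_normal(n1) / np.sqrt(n2)

# minimum-norm solution of the underdetermined system a = B x:
#   x = B^T (B B^T)^{-1} a
x = B.T @ np.linalg.solve(B @ B.T, a)

assert np.allclose(B @ x, a)                       # solves the constraints
assert np.allclose(x, np.linalg.pinv(B) @ a)       # agrees with pseudoinverse

# B B^T is nearly diagonal: off-diagonal elements ~ 1/sqrt(N)
G = B @ B.T
assert np.all(np.abs(G - np.diag(np.diag(G))) < 5 / np.sqrt(n2))
```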
Striped Models {#appstripe}
==============
Thinking about resistivities turns out to be most convenient for models in $d=1$, or with translational symmetry only broken in a single direction $x$, as noted in [@andreev]. This follows from the general arguments that we make in Appendix \[appa\]. Since we know that the current flow $\mathcal{J}^\alpha_x$ is a constant, to solve the variational problem exactly we need only vary (\[tildeplower\]) with respect to arbitrary $\tilde{v}$ – the global minimum will correspond to the true velocity field $\bar v$. We find that $\bar v$ obeys the differential equation $$\partial_x \left(\eta_{xxxx}\partial_x \bar v\right) = \left(\Sigma^{-1}\right)_{xx}^{\alpha\beta}\left(\mathcal{J}^\beta_x - \rho^\beta \bar v\right)\rho^\alpha.$$ This second order linear differential equation cannot be solved exactly in general. We do not believe that closed form solutions exist in general for $\sigma^{\alpha\beta}_{xx}$, though they can be found in special cases – for example, if $\Sigma^{\mathrm{hh}}$ is the only non-vanishing diffusive transport coefficient (as in [@andreev], for a non-critical fluid), or if $\Sigma^{\mathrm{qq}}$ is the only non-vanishing coefficient, as in Section \[sec:holostripe\]. In both of these cases, $\Sigma^{\alpha\beta}$ is not an invertible matrix – the zero eigenvector then provides a constraint which fixes $v_x$ in terms of $\mathcal{J}^\alpha_x$.
In particular, let us carry out this computation explicitly for the holographic striped models with equations of state given in Section \[sec:holostripe\]. We must generalize the discussion to curved spaces, but this is not so difficult. The heat conservation equation implies (on a curved space) that $$\mathcal{J}^{\mathrm{h}} = \sqrt{\gamma}\gamma^{xx} T\mathcal{S} v_x = \mathrm{e}^{-B}T\mathcal{S}v_x = \text{constant}.$$ Note that $\sqrt{\gamma}=1$, which simplifies calculations. After an appropriate generalization to curved space, we use (\[tildeplower\]) (on the true solution) to compute the inverse thermoelectric conductivity matrix: $$\begin{aligned}
\left(\sigma^{-1}\right)^{\alpha\beta}\mathcal{J}^\alpha\mathcal{J}^\beta &= \mathbb{E}\left[\sqrt{\gamma}\left(\gamma_{xx}\left( \mathcal{J}^{\mathrm{q}} - \mathcal{Q} v_x\gamma^{xx}\sqrt{\gamma}\right)^2 + \frac{\eta}{2}\gamma^{ij}\gamma^{kl}s_{ik}s_{jl}\right)\right] \end{aligned}$$ where $$s^{ij} \equiv \gamma^{ik}\gamma^{jl} \left(\nabla_k v_l + \nabla_l v_k\right) - \gamma^{ij}\gamma^{kl}\nabla_k v_l.$$ For our set-up, parity symmetry in the $y$ direction ensures that $v_y=0$, and that the only non-vanishing components of $s^{ij}$ are $$\gamma_{xx} s^{xx} = -\gamma_{yy}s^{yy} = \mathrm{e}^{-B}\partial_x v_x.$$ Putting this together and using (\[donosdata2\]) to determine $\eta$, $\mathcal{S}$ and $\mathcal{Q}$, we obtain $$\left(\sigma^{-1}\right)^{\alpha\beta}_{xx}\mathcal{J}^\alpha\mathcal{J}^\beta = \mathbb{E}\left[\mathrm{e}^B \left(\mathcal{J}^{\mathrm{q}} - \frac{Sa_t}{4\pi TSH_{tt}}\mathcal{J}^{\mathrm{h}}\right)^2 + S \left(\mathrm{e}^{-B}\partial_x \left(\frac{\mathrm{e}^B}{4\pi TS}\right)\right)^2 \left(\mathcal{J}^{\mathrm{h}}\right)^2\right],$$which gives (\[donosresult\]).
[^1]: This is technically not quite right – there is one (set of) bulk scalar fields in these models which is of the form $\phi_i = kx_i$, but this choice maintains homogeneity in the sectors of the theory of interest.
[^2]: See [@davison15; @blake2] for recent updates on this particular holographic model.
[^3]: In graphene, for example, $\sigma_{\textsc{q}} \sim e^2/h$, with $e$ the charge of the electron.
[^4]: We can maintain an electric current without adding any energy by simply shifting to a moving reference frame.
[^5]: $|\bar\mu(\mathbf{x}_1)-\bar\mu(\mathbf{x}_2)|$ can be comparable to, or larger than, $|\bar\mu(\mathbf{x}_1)|$, so long as $|\mathbf{x}_1-\mathbf{x}_2|\gg l$.
[^6]: Note that $\delta\mathcal{Q} = (\partial \mathcal{Q}/\partial \mu)\,\delta\mu + (\partial \mathcal{Q}/\partial T)\,\delta T$.
[^7]: In contrast, [@lucasMM] has recently used the memory matrix formalism to recover hydrodynamic transport, with expressions for the phenomenological $\tau$ and other thermodynamic coefficients expressed in terms of microscopic Green’s functions. This is a more complete approach to the problem, but does not generalize easily to higher orders in perturbation theory.
[^8]: Note that this linear term in perturbations is parametrically large in $h$ – but it is linear in $F^\alpha_i$. Our perturbative parameter is first and foremost $F^\alpha_i$, since we are computing a linear response transport coefficient. And while the background may also be treated perturbatively in disorder strength, we must take $F^\alpha_i\rightarrow 0$ *before* $h\rightarrow 0$.
[^9]: This expectation value is also encoded near the boundary of AdS.
[^10]: Note that their results simplify, in some special cases, to analytic results derived in [@chesler; @peet]. The case where $\mathcal{Q}=0$ was also studied in [@ugajin].
[^11]: Of course, this would lead to temperature growth at second order in perturbation theory, so that the energy conservation equation (up to external sources) exactly holds at all orders.
[^12]: For example, this may correspond in a conformal fluid to deforming the equation of state with a non-zero bulk viscosity.
[^13]: When $\mu \gg T$, generally Fermi liquid theory is valid.
[^14]: As the percolation problem is trivial in $d=1$, the study of holographic insulators may be quite a bit richer in models with translational symmetry broken in multiple spatial dimensions.
[^15]: To compute an eigenvalue, one should first properly “re-dimensionalize" $\hat\rho^\alpha$ so that all matrix elements of $\tilde{\sigma}^{\alpha\beta}$ have the same dimension.
---
abstract: 'An interior point method for the structural topology optimization is proposed. The linear systems arising in the method are solved by the conjugate gradient method preconditioned by geometric multigrid. The resulting method is then compared with the so-called optimality condition method, an established technique in topology optimization. This method is also equipped with the multigrid preconditioned conjugate gradient algorithm. We conclude that, for large scale problems, the interior point method with an inexact iterative linear solver is superior to any other variant studied in the paper.'
author:
- 'Michal Kočvara[^1]'
- 'Sudaba Mohammed[^2]'
bibliography:
- 'topo\_mgm\_paper.bib'
title: |
Primal-dual interior-point multigrid method for\
topology optimization[^3]
---
#### Keywords:
topology optimization, multigrid methods, interior point methods, preconditioners for iterative methods
#### MSC2010:
65N55, 35Q93, 90C51, 65F08
Introduction
============
The discipline of topology optimization offers challenging problems to researchers working in large scale numerical optimization. The results are essentially colors of pixels in a 2d or 3d “picture”. Hence, in order to obtain high-quality results, i.e., fine pictures capturing all details, a very large number of variables is essential. In this article we only consider the discretized, finite dimensional topology optimization problem. For its derivation and for general introduction to topology optimization, see, e.g., [@bendsoe-sigmund].
We will consider the basic problem of topology optimization: minimization of compliance under equilibrium equation constraints and the most basic linear constraints on the design variables: $$\begin{aligned}
&\min_{x\in\R^m\!,\,u\in\R^n} \frac{1}{2}f^Tu\label{eq:to}\\
&\mbox{subject to}\nonumber\\
&\qquad K(x) u = f\nonumber\\
&\qquad \sum_{i=1}^m x_i = V\nonumber\\
&\qquad x_i\geq 0,\quad i=1,\ldots,m \nonumber\\
&\qquad x_i\leq \overline{x},\quad i=1,\ldots,m\nonumber\end{aligned}$$ where $K(x) = \sum_{i=1}^m x_i K_i$, $K_i\in\RR^{n\times n}$ and $f\in\RR^n$. We assume that $K_i$ are symmetric and positive semidefinite and that $\sum_{i=1}^m K_i$ is sparse and positive definite. We also assume that the data $V\in\RR$ and $\overline{x}\in\RR^m$ are chosen such that the problem is strictly feasible. For further reference, we will call the design variable $x$ the *density*.
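To make the structure of the data concrete, a minimal two-element example (a hypothetical 1D bar with the left node clamped, not one of the benchmark problems considered later) illustrates the matrices $K_i$, the equilibrium equation, and the hidden convexity of the compliance in $x$:

```python
import numpy as np

# Element stiffness matrices K_i (after eliminating the fixed dof) are
# positive semidefinite and their sum is positive definite.
K1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # element between nodes 0-1
K2 = np.array([[1.0, -1.0], [-1.0, 1.0]]) # element between nodes 1-2
f = np.array([0.0, 1.0])                  # load at the free end

def compliance(x):
    K = x[0] * K1 + x[1] * K2             # K(x) = sum_i x_i K_i
    u = np.linalg.solve(K, f)             # equilibrium K(x) u = f
    return 0.5 * f @ u                    # objective (1/2) f^T u

xa, xb = np.array([1.0, 1.0]), np.array([2.0, 0.5])
mid = compliance(0.5 * (xa + xb))
# compliance is convex in the densities x (midpoint inequality)
assert mid <= 0.5 * (compliance(xa) + compliance(xb)) + 1e-12
```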
The most established and commonly used optimization methods to solve this problem are the Optimality Conditions (OC) method ([@bendsoe-sigmund p.308]) and the Method of Moving Asymptotes (MMA) by Svanberg [@svanberg]. In both methods, the computational bottleneck consists of the solution of a large scale linear system with a sparse symmetric positive definite matrix (the equilibrium equation). This system is traditionally solved by a direct solver, such as the Cholesky decomposition. Recently, several authors proposed the use of iterative solvers, mostly preconditioned Krylov subspace solvers, such as Conjugate Gradients (CG), MINRES or GMRES. These have one big advantage which is specific to their use within optimization algorithms: in the early (or even not-so-early) stages of the optimization method, only a very low accuracy of the linear solver is needed. They also have one big disadvantage: in the late stages of the optimization method, the linear systems become very ill-conditioned and thus a vanilla iterative method can run into extreme difficulties.
It is therefore essential to use a good preconditioner for the Krylov subspace method. The difficulty lies in the fact that, as we approach the optimal solution of the topology optimization problem, the condition number of the stiffness matrices increases significantly. In fact, it is only controlled by an artificial lower bound on the variable: if this bound were zero, the stiffness matrix would be singular. Wang et al. [@wang] studied the dependence of the condition number on the variables and concluded that it is a combination of the ratio of maximum and minimum density and the conditioning of a corresponding problem with constant density. Consequently, they proposed a rescaling of the stiffness matrix combined with an incomplete Cholesky preconditioner. The rescaling results in a constant order of the condition number during the optimization iterations. For large scale examples, hundreds of MINRES iterations are still needed, and hence the authors use recycling of certain Krylov subspaces from previous iterations of the optimization method. Recently, Amir et al. [@amir] proposed a multigrid preconditioner for the systems resulting from OC or MMA methods and demonstrated that the resulting linear system solver keeps its efficiency also for rapidly varying coefficients of the underlying PDE, i.e., rapidly varying $x$ in (\[eq:to\]).
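All of the Krylov methods mentioned above share the same preconditioned template. The following generic preconditioned CG loop is an illustrative sketch only (not the implementation used in this paper); the `precond` callable stands in for one multigrid V-cycle, and a simple Jacobi preconditioner is used in the example at the bottom:

```python
import numpy as np

def pcg(K, f, precond, tol=1e-8, maxit=500):
    """Preconditioned CG for SPD K; precond(r) approximates K^{-1} r
    (e.g. one multigrid V-cycle)."""
    u = np.zeros_like(f)
    r = f - K @ u
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) < tol * np.linalg.norm(f):
            return u, it + 1
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u, maxit

# SPD test matrix: 1D Laplacian; Jacobi preconditioner as a cheap stand-in
n = 50
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
u, its = pcg(K, f, precond=lambda r: r / np.diag(K))
assert np.linalg.norm(K @ u - f) < 1e-6 * np.linalg.norm(f)
```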
While OC and MMA methods are the most popular methods in topology optimization, they may not be the most efficient. The basic problem (\[eq:to\]) is convex (more precisely, it is equivalent to a convex problem) and we may thus expect interior point methods to be highly efficient (see, e.g., [@nocedal-wright]). Indeed, Jarre et al. [@jarre-kocvara-zowe] proposed an interior point method for the truss topology optimization problem that is equivalent to the discretized problem (\[eq:to\]), with the exception that the stiffness matrix may be dense. They reported high efficiency of the method and ability to solve large scale problems; they also proved convergence of the proposed method. Maar and Schulz [@maar-schulz] studied interior point methods for problem (\[eq:to\]) with sparse stiffness matrices and proposed to use a multigrid preconditioner for the GMRES method to solve the arising indefinite linear systems.
A new comprehensive numerical study of optimization methods for topology optimization can be found in [@rojas-stolpe]. The authors compare the efficiency of different methods, including general purpose optimization solvers such as SNOPT [@snopt].
In this article we follow the path outlined by Jarre et al. [@jarre-kocvara-zowe] and by Maar and Schulz [@maar-schulz]. We use the same interior point method as in [@jarre-kocvara-zowe] and, unlike in [@maar-schulz], reduce the linear systems to obtain positive definite matrices. This allows us to use the standard conjugate gradient method preconditioned by standard V-cycle multigrid. We further use the same linear solver in the OC method (in the same way as suggested in [@amir]) to get a comparison with our interior point method. We will see that in both cases the inexact multigrid preconditioned CG method leads to a very efficient optimization solver. Most notably, in the case of the interior point method we obtain an approximately constant number of CG iterations needed to solve the full problem, independent of the size of the problem. In the case of the OC method, the total number of OC iterations increases with the problem size; however, for a given problem size, the number of CG steps per linear system remains almost constant, and very low, in all OC iterations, notwithstanding the condition number of the stiffness matrix.
In this paper, we primarily consider the so-called variable thickness sheet problem (\[eq:to\]) and not its more popular cousin, the SIMP problem [@bendsoe-sigmund]. The reason is the (hidden) convexity and the existence of a solution of (\[eq:to\]) (see [@ben1996hidden] and [@bendsoe-sigmund p.272–274]). The goal of the paper is to study and compare numerical methods for optimization problems. This can be done in a fair way if the problem is convex; by introducing non-convexity, as in the SIMP formulation, any such comparison is further influenced by many additional factors. To demonstrate these difficulties and the fact that the iterative solver is still a viable option in this context, we have added a brief section on the SIMP model.
Finally, the methodology proposed in this paper is fully based on (typically vectorizable and/or parallelizable) iterative schemes. It could thus be of benefit to the methods of distributed optimization [@boyd2011distributed] and optimization on vector processors, namely GPU [@schmidt20112589; @zegard2013toward], not only in the context of topology optimization.
Newton systems for KKT conditions
=================================
Let $\mu\in\R^n$, $\lambda\in\R$, $\varphi\in\R^m$ and $\psi\in\R^m$ denote the respective Lagrangian multipliers for the constraints in (\[eq:to\]). The Karush-Kuhn-Tucker (KKT) first order optimality conditions for (\[eq:to\]) can be written as $$\begin{aligned}
-{\rm Res}^{(1)}:= &\ K(x) u - f = 0\label{eq:KKT1}\\
-{\rm Res}^{(2)}:= &\ \sum_{i=1}^m x_i - V = 0\label{eq:KKT2}\\
-{\rm Res}^{(3)}:= &\ -\frac{1}{2}u^TK_iu - \lambda - \varphi_i + \psi_i = 0,\quad i=1,\ldots,m\label{eq:KKT3}\\
&\ \varphi_i x_i = 0,\quad i=1,\ldots,m\label{eq:KKT4}\\
&\ \psi_i(\overline{x}-x_i)= 0,\quad i=1,\ldots,m\label{eq:KKT5}\\
&\ x_i\geq 0,\quad \overline{x}-x_i\geq 0,\quad \varphi_i\geq 0,\quad \psi_i\geq 0\label{eq:KKT6}\end{aligned}$$ We will perturb the complementarity constraints (\[eq:KKT4\]) and (\[eq:KKT5\]) by barrier parameters $s,r>0$: $$\begin{aligned}
-{\rm Res}^{(4)}&:= \varphi_i x_i - s = 0,\quad i=1,\ldots,m\label{eq:KKT4p}\\
-{\rm Res}^{(5)}&:= \psi_i(\overline{x}-x_i) - r= 0,\quad i=1,\ldots,m\label{eq:KKT5p}\end{aligned}$$ and apply Newton’s method to the system of nonlinear equations (\[eq:KKT1\]), (\[eq:KKT2\]), (\[eq:KKT3\]), (\[eq:KKT4p\]), (\[eq:KKT5p\]). In every step of the Newton method, we have to solve the linear system $$\label{eq:nwt}
\begin{bmatrix} K(x) & 0 & B(u) & 0 & 0\\
0 & 0 & e^T & 0 & 0\\
B(u)^T & e & 0 & I & -I\\
0 & 0 & \Phi & X & 0 \\
0 & 0 & -\Psi & 0 & \wX \end{bmatrix}
\begin{bmatrix} d_u\\d_\lambda\\d_x\\d_\varphi\\d_\psi\end{bmatrix}
=
\begin{bmatrix}{\rm Res}^{(1)}\\ {\rm Res}^{(2)}\\ {\rm Res}^{(3)}\\
{\rm Res}^{(4)}\\ {\rm Res}^{(5)} \end{bmatrix}\,.$$ Here $B(u) = (K_1u, K_2u,\ldots,K_mu)$, $e$ is a vector of all ones and $$X={\rm diag}(x),\quad \wX={\rm diag}(\overline{x}-x),\quad
\Phi = {\rm diag}(\varphi),\quad \Psi = {\rm diag}(\psi)$$ are diagonal matrices with the corresponding vectors on the diagonal.
Because the last two equations only involve diagonal matrices, we can eliminate $d_\varphi$ and $d_\psi$: $$\begin{aligned}
d_\varphi &=X^{-1}({\rm Res}^{(4)} - \Phi d_x)\label{eq:phipsi1}\\
d_\psi &=\wX^{-1}({\rm Res}^{(5)} - \Psi d_x)\,.\label{eq:phipsi2}\end{aligned}$$ This will reduce the system (\[eq:nwt\]) to $$\label{eq:nwtr}
\begin{bmatrix} K(x) & 0 & B(u) \\
0 & 0 & e^T \\
B(u)^T & e & -(X^{-1}\Phi+\wX^{-1}\Psi) \end{bmatrix}
\begin{bmatrix} d_u\\d_\lambda\\d_x\end{bmatrix}
=
\begin{bmatrix}{\rm Res}^{(1)}\\ {\rm Res}^{(2)}\\ \widetilde{\rm Res}^{(3)} \end{bmatrix}$$ with $$\widetilde{\rm Res}^{(3)} ={\rm Res}^{(3)}-X^{-1}{\rm Res}^{(4)}+\wX^{-1}{\rm Res}^{(5)}\,.$$
We can now follow two strategies. Firstly, we can solve the system (\[eq:nwtr\]) as it is, i.e., an indefinite system of dimension $m+n+1$. To simplify it further, we can eliminate the multipliers $\varphi$ and $\psi$ as $$\varphi_i = s/x_i,\quad \psi_i = r/(\overline{x}-x_i),\quad i=1,\ldots,m$$ to get $$\label{eq:nwtr1}
\begin{bmatrix} K(x) & 0 & B(u) \\
0 & 0 & e^T \\
B(u)^T & e & -(sX^{-2}+r\wX^{-2}) \end{bmatrix}
\begin{bmatrix} d_u\\d_\lambda\\d_x\end{bmatrix}
=
\begin{bmatrix}{\rm Res}^{(1)}\\ {\rm Res}^{(2)}\\ \widetilde{\rm Res}^{(3)} \end{bmatrix}\,.$$
[**Remark.**]{} System (\[eq:nwtr1\]) could be obtained directly as a Newton system for optimality conditions of the following “penalized” problem: $$\begin{aligned}
&\min_{u,x} \frac{1}{2}f^Tu - s\sum_{i=1}^m\log x_i - r\sum_{i=1}^m\log (\overline{x}-x_i)\\
&\qquad{\rm s.t.}\quad K(x)u=f,\quad\sum_{i=1}^m x_i = V\,;\end{aligned}$$ see, e.g., [@nocedal-wright Ch.19.1].
Secondly, we can further reduce the Newton system (\[eq:nwtr\]). As the (3,3)-block matrix in (\[eq:nwtr\]) is diagonal, we can eliminate $d_x$ and compute the Schur complement with respect to this block, to get $$\label{eq:nwtr2}
Z \begin{bmatrix} d_u\\d_\lambda\end{bmatrix} =
{\rm Res}^{(Z)} \,,$$ with $$\label{eq:nwtr2Z}
Z = \begin{bmatrix} K(x) & 0 \\
0 & 0 \end{bmatrix} + \begin{bmatrix}B(u) \\ e^T \end{bmatrix}(X^{-1}\Phi+\wX^{-1}\Psi)^{-1}
\begin{bmatrix}B(u)^T & e \end{bmatrix}$$ and $$\label{eq:nwtr2rhs}
{\rm Res}^{(Z)} =
\begin{bmatrix}{\rm Res}^{(1)}\\ {\rm Res}^{(2)} \end{bmatrix}
+ \begin{bmatrix}B(u) \\ e^T \end{bmatrix}(X^{-1}\Phi+\wX^{-1}\Psi)^{-1}\widetilde{\rm Res}^{(3)}
\,.$$ The remaining part of the solution, $d_x$, is then computed by $$\label{eq:nwtr2Za}
d_x = (X^{-1}\Phi+\wX^{-1}\Psi)^{-1}\left(B(u)^T d_u + e\,d_\lambda -
\widetilde{\rm Res}^{(3)}\right)\,.$$
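The reduction above is easy to verify numerically. The following Python sketch (illustrative only; it is not the code used in our experiments, which were implemented in MATLAB) builds a small random instance of (\[eq:nwtr\]), solves the condensed system (\[eq:nwtr2\]) and recovers $d_x$ from (\[eq:nwtr2Za\]); all block names ($K$, $B$, the diagonal $X^{-1}\Phi+\wX^{-1}\Psi$) follow the notation above, while the sizes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
R = rng.standard_normal((n, n))
K = R.T @ R + n*np.eye(n)            # stand-in for the SPD stiffness block K(x)
B = rng.standard_normal((n, m))      # stand-in for B(u)
d = rng.random(m) + 0.5              # diagonal of X^{-1}Phi + Xbar^{-1}Psi
e = np.ones(m)
res1 = rng.standard_normal(n)
res2 = rng.standard_normal(1)
res3 = rng.standard_normal(m)        # stands for the modified residual Res~(3)

# Schur complement Z and condensed right-hand side
W = np.vstack([B, e])                # the block [B; e^T]
Z = np.zeros((n + 1, n + 1))
Z[:n, :n] = K
Z += W @ np.diag(1/d) @ W.T
rhs = np.concatenate([res1, res2]) + W @ (res3/d)

w = np.linalg.solve(Z, rhs)
du, dlam = w[:n], w[n]
dx = (B.T @ du + e*dlam - res3)/d    # recover the eliminated variable
```

Substituting $(d_u, d_\lambda, d_x)$ back into all three block rows of (\[eq:nwtr\]) reproduces the residuals, which confirms the elimination.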
Interior point method
=====================
Once we have derived the Newton systems, the interior point algorithm is straightforward (see, e.g., [@nocedal-wright Ch.19]). The details of the individual steps of the algorithm will be given in subsequent paragraphs.
The algorithm
-------------
Denote $z=(u,\lambda,x,\varphi,\psi)^T$. Set $x_i=V/m,\ i=1,\ldots,m$, $u=K(x)^{-1}f$, $\lambda=1,\ \varphi=e,\
\psi=e$. Set $s=1,\ r=1,\ \sigma_s, \sigma_r\in(0,1)$. Do until convergence:
1. Solve either system (\[eq:nwtr\]) or (\[eq:nwtr2\]) and compute the remaining components of vector $d$ from (\[eq:nwtr2Za\]), (\[eq:phipsi1\]) and (\[eq:phipsi2\]).
2. Find the step length $\alpha$.
3. Update the solution $$z = z + \alpha d\,.$$
4. If the stopping criterion for the Newton method is satisfied, update the barrier parameters $$s = \sigma_s\cdot s,\quad r=\sigma_r\cdot r\,.$$ Otherwise, keep the current values of $s$ and $r$.\
Return to Step 1.
Barrier parameter update
------------------------
We use a fixed update of both parameters $s$ and $r$ with $$\sigma_s = \sigma_r = 0.2 \,.$$ This update leads to long steps and, consequently, a small number of interior point iterations. The value of the update parameter is a result of testing and leads, on average, to the smallest overall number of Newton steps. A more sophisticated version of the algorithm, with an adaptive choice of the barrier parameters $s$ and $r$, can be found in [@jarre-kocvara-zowe].
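With a fixed factor of $0.2$ and a final tolerance of, say, $10^{-8}$ (the value used in our experiments below), the number of outer barrier updates is known in advance; a two-line check:

```python
s, iters = 1.0, 0      # initial barrier parameter
while s > 1e-8:        # stop once s is below the final tolerance tau_IP
    s *= 0.2           # sigma_s = sigma_r = 0.2
    iters += 1
```

Twelve updates suffice, and each of them typically costs only a few Newton steps.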
Step length
-----------
We cannot take the full Newton step $$z_{\rm new} = z + d$$ because some variables could become infeasible with respect to the inequality constraints (\[eq:KKT6\]). We thus need to shorten the step in order to stay strictly feasible with some “buffer” to the boundary of the feasible domain. A simple step-length procedure is described below (see also [@nocedal-wright Ch.19.2]).
Find $\alpha_l$ such that $x_i+\alpha_l(d_x)_i> 0$ for $i\in\{j:(d_x)_j<0\}$ and $\alpha_u$ such that $x_i+\alpha_u(d_x)_i< \overline{x}$ for $i\in\{j:(d_x)_j>0\}$ using the following formulas: $$\alpha_l = 0.9\cdot\min_{i:(d_x)_i<0}\left\{-\frac{x_i}{(d_x)_i}\right\},\quad
\alpha_u = 0.9\cdot\min_{i:(d_x)_i>0}\left\{\frac{\overline{x}-x_i}{(d_x)_i}\right\}\,.$$ The constant 0.9 guarantees that the step remains strictly in the interior of the feasible domain. Now take the smaller of these two numbers, capped at 1: $$\alpha = \min\{\alpha_l,\alpha_u,1\}\,.$$ A more sophisticated (and more complicated) line-search procedure is described in [@jarre-kocvara-zowe].
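This fraction-to-boundary rule amounts to a few lines of code. A minimal Python sketch (the function name and the test data are ours, chosen for illustration):

```python
import numpy as np

def step_length(x, dx, x_bar, frac=0.9):
    """Largest alpha <= 1 keeping 0 < x + alpha*dx < x_bar, with a 10% buffer."""
    alpha = 1.0
    neg = dx < 0
    if neg.any():                       # lower bound x_i >= 0
        alpha = min(alpha, frac*np.min(-x[neg]/dx[neg]))
    pos = dx > 0
    if pos.any():                       # upper bound x_i <= x_bar
        alpha = min(alpha, frac*np.min((x_bar - x[pos])/dx[pos]))
    return alpha

x = np.array([0.5, 1.0, 1.5])
dx = np.array([-1.0, 0.5, 2.0])
alpha = step_length(x, dx, x_bar=2.0)
x_new = x + alpha*dx                    # stays strictly inside (0, 2)
```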
It is worth noting that, for a properly chosen initial barrier parameter and its update, the step-length reduction is almost never needed; this was, at least, the case in our numerical examples with our choice of the parameters.
Stopping rules
--------------
Following [@jarre-kocvara-zowe], we terminate the Newton method whenever $$\frac{\|{\rm Res}^{(1)}\|}{\|f\|} + \frac{\|\widetilde{\rm Res}^{(3)}\|}{\|\varphi\|+\|\psi\|} \leq \tau_{\scriptscriptstyle\rm NWT}.$$
The full interior point method is stopped as soon as both parameters $s$ and $r$ are smaller than a prescribed tolerance: $$\label{crit:ip}
\max\{s,r\}\leq \tau_{\scriptscriptstyle\rm IP}\,.$$
In our numerical experiments, we have used the values $\tau_{\scriptscriptstyle\rm NWT}=10^{-1}$ and $\tau_{\scriptscriptstyle\rm
IP}=10^{-8}$.
[A more established criterion for terminating the interior point algorithm would be to stop whenever all (scaled) residuals are below some tolerance, i.e., $$\frac{\|{\rm Res}^{(1)}\|}{\|f\|} + \frac{\|\widetilde{\rm Res}^{(3)}\|}{\|\varphi\|+\|\psi\|}
+\frac{\varphi^Tx}{\|\varphi\|\|x\|} +\frac{\psi^T(\overline{x}-x)}{\|\psi\|\|\overline{x}-x\|}
\leq \tau_{\scriptscriptstyle\rm IP}.$$ This criterion, however, leads to almost the same results as (\[crit:ip\]), hence we opted for the simpler and more predictable one.]{}
[The parameter $\tau_{\scriptscriptstyle\rm NWT}$ is kept constant in our implementation, unlike in classic path-following methods. We will return to this point later in Section \[sec:exactIP\]. ]{}
Optimality Conditions method
============================
One of the goals of this paper is to compare the interior point method with the established and commonly used Optimality Condition (OC) method. We will therefore briefly introduce the basic algorithm and its new variant. For more details, see ([@bendsoe-sigmund p.308]) and the references therein.
OC algorithm
------------
Assume for the moment that the bound constraints in (\[eq:to\]) are not present. Then the KKT condition (\[eq:KKT3\]) would read as $$-u^T K_i u + \lambda = 0\,, \quad i=1,\ldots,m\,.$$ (For convenience, $\lambda$ here stands for $-2\lambda$ from (\[eq:KKT3\]).) Multiplying both sides by $x_i$, we get $$x_i\lambda = x_i u^T K_i u\,, \quad i=1,\ldots,m$$ which leads to the following iterative scheme: $$x_i^{\rm NEW} = \displaystyle\frac{1}{\lambda}\ x_i u^T K_i u\, , \quad i=1,\ldots,m\,.$$ The new value of $x$ is then projected on the feasible set given by the bound constraints. The value of $\lambda$ should be chosen such that $\sum_{i=1}^m
x_i^{\rm NEW} = V$ and is obtained by a simple bisection algorithm. Hence we obtain the following algorithm called the OC method:
#### Algorithm OC
Let $x\in\RR^m$ be given such that $\sum_{i=1}^m x_i = V$, $x\geq 0$. Repeat until convergence:
1. $u=(K(x))^{-1} f$
2. $\overline{\lambda}=10000$, $\underline{\lambda}=0$
3. While $\overline{\lambda}-\underline{\lambda}>\tau_{\scriptscriptstyle \lambda}$
1. $\lambda = (\overline{\lambda}+\underline{\lambda})/2$
2. $x_i^{\rm NEW} = \min\left\{x_i \displaystyle\frac{u^T K_i u}{\lambda}\
, \overline{x}\right\}\,, \quad i=1,\ldots,m$
3. $x = x^{\rm NEW}$
4. if $\sum_{i=1}^m x_i>V$ then set $\underline{\lambda}=\lambda$; else if $\sum_{i=1}^m x_i\leq V$ then set $\overline{\lambda}=\lambda$
The value of the bisection stopping tolerance $\tau_{\scriptscriptstyle
\lambda}$ has been set to $10^{-11}$.
Notice that, due to the positive semidefiniteness of $K_i$, the update in step 3(b) is always non-negative and thus the lower-bound constraint in the original problem (\[eq:to\]) is automatically satisfied.
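To make the scheme concrete, here is a toy Python realization of Algorithm OC on a fabricated two-element, two-degree-of-freedom example; the matrices $K_1,K_2$, the load $f$ and all constants are invented for illustration and do not come from the finite element examples of Section \[sec:examples\]:

```python
import numpy as np

# Invented two-element, two-d.o.f. data; K_i are the element contributions.
K1 = np.array([[1.0, 0.0], [0.0, 0.1]])
K2 = np.array([[0.1, 0.0], [0.0, 1.0]])
f = np.array([1.0, 1.0])
V, x_bar = 1.0, 2.0

def oc_step(x):
    """One step of Algorithm OC: state solve, then bisection on lambda."""
    u = np.linalg.solve(x[0]*K1 + x[1]*K2, f)
    energy = np.array([u @ K1 @ u, u @ K2 @ u])   # u^T K_i u
    lo, hi = 0.0, 1e4
    while hi - lo > 1e-11:
        lam = 0.5*(lo + hi)
        xn = np.minimum(x*energy/lam, x_bar)
        if xn.sum() > V:
            lo = lam      # design too heavy: increase lambda
        else:
            hi = lam
    return xn, f @ u      # new design and current compliance

x = np.array([V/2, V/2])
c_prev = np.inf
for _ in range(50):
    x, c = oc_step(x)
    if abs(c_prev - c) <= 1e-8:
        break
    c_prev = c
```

On this symmetric toy data, the iteration settles at the symmetric design $x=(0.5,0.5)$ with the volume constraint satisfied to bisection accuracy.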
The basic version of the OC method converges (there are no known counterexamples) but is extremely slow. The reason is that, from the very first iterations, the method zig-zags between two clusters of points. However, the following two modifications lead to a substantial improvement. To the best of our knowledge, the second modification, called Averaged OC, is new.
Damped OC
---------
#### Algorithm DOC
Let $x\in\RR^m$ be given such that $\sum x_i = V$, $x\geq 0$. Repeat:
1. $u=(K(x))^{-1} f$
2. $x_i^{\rm NEW} = \min\left\{x_i \displaystyle\frac{(u^T K_i u)^q}{\lambda}\
, \overline{x}\right\}\,, \quad i=1,\ldots,m$
3. $x = x^{\rm NEW}$
Here $q$ is called the damping parameter; the typical choice is $q=1/2$. As in Algorithm OC, $\lambda$ is determined by bisection so that $\sum_{i=1}^m x_i^{\rm NEW} = V$. This version of the method is widely used among structural engineers.
Averaged OC
-----------
Let us define an operator $OC(\cdot)$ as a result of one step of the standard OC algorithm.
#### Algorithm AOC
Let $x\in\RR^m$ be given such that $\sum x_i = V$, $x\geq 0$. Repeat:
1. $x^{(1)} = OC(x)$
2. $x^{(2)} = OC(x^{(1)})$
3. $x = \frac{1}{2}(x^{(1)} + x^{(2)})$
Numerical experiments suggest that Algorithm AOC is slightly faster than Algorithm DOC. This modification seems to be new; at least, we did not find it in the existing literature.
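Why averaging helps can be seen on a caricature of the zig-zagging: a fixed-point map whose Jacobian has an eigenvalue close to $-1$. The toy map below is not the OC operator, it merely mimics its oscillatory behaviour; averaging two successive steps replaces the contraction factor $a$ by $(a+a^2)/2$, which is tiny for $a\approx -1$:

```python
# Toy illustration: a scalar affine map with slope a close to -1.
a, b = -0.95, 1.0
T = lambda x: a*x + b
x_star = b/(1 - a)              # fixed point of T

x_plain, x_avg = 0.0, 0.0
for _ in range(10):
    x_plain = T(x_plain)        # plain iteration: error shrinks by |a| = 0.95
    x1 = T(x_avg); x2 = T(x1)
    x_avg = 0.5*(x1 + x2)       # AOC-style averaging: factor |(a+a^2)/2| = 0.024

err_plain = abs(x_plain - x_star)
err_avg = abs(x_avg - x_star)
```

After ten iterations the plain sequence is still far from the fixed point, while the averaged one has converged to machine precision.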
Multigrid conjugate gradient method
===================================
In both optimization algorithms introduced above, we repeatedly need to solve systems of linear equations. In this section, we will introduce an efficient iterative method that seems to be most suitable for these problems. Throughout this section, we assume that we want to solve the problem $$\label{eq:lineq}
Az=b$$ where $b\in\RR^n$ and $A$ is an $n\times n$ symmetric positive definite matrix.
Multigrid method for linear systems
-----------------------------------
Recall first the Correction Scheme (CS) version of the multigrid algorithm (see, e.g., [@hackbusch]). Let $opt$ denote a (typically but not necessarily) convergent iterative algorithm for (\[eq:lineq\]): $$z_{\rm new} = opt(A,b;z,\epsilon,\nu)\,,$$ where, on input, $z$ is the initial approximation of the solution, $\epsilon$ is the required precision and $\nu$ the maximum number of iterations allowed. This will be called the *smoother*. A typical example is the Gauss-Seidel iterative method.
Assume that there exist $\ell$ linear operators $I_k^{k-1}:\RR^{n_k}\to\RR^{n_{k-1}}$, $k=2,\ldots,\ell$, with $n:=n_\ell>n_{\ell-1}>\cdots>n_2>n_1$ and let $I_{k-1}^k:=(I_k^{k-1})^T$. These are either constructed from finite element or finite difference refinements of some original coarse grid (geometric multigrid) or from the matrix $A$ (algebraic multigrid); see [@briggs2000multigrid] for details.
Define the “coarse level” problems $$A_k z_k=b_k,\quad k=1,\ldots,\ell-1$$ with $A_\ell:=A$, $b_\ell:=b$ and $$A_{k-1} = I_k^{k-1} (A_k) I_{k-1}^k,\quad b_{k-1} = I_k^{k-1} (b_k),\quad k=2,\ldots,\ell\,.$$
#### Algorithm MG
(V-cycle correction scheme multigrid)
Set $\epsilon,\epsilon_0$. Initialize $z^{(\ell)}$.
for $i=1:niter$
$z^{(\ell)} := mgm(\ell,z^{(\ell)},b_\ell)$
test convergence
end
function $z^{(k)}=mgm(k,z^{(k)},r_k)$
if $k=1$
$z^{(k)}:= opt(A_1,r_1;z^{(k)},\epsilon_0,\nu_0)$ (coarsest grid solution)
else
$z^{(k)}:= opt(A_k,r_k;z^{(k)},\epsilon,\nu_1)$ (pre-smoothing)
$r_{k-1} = I_k^{k-1} (r_k - A_k z^{(k)})$ (restricted residual)
$v^{(k-1)} = mgm(k-1,0_{n_{k-1}},r_{k-1})$ (coarse grid correction)
$z^{(k)} := z^{(k)} + I_{k-1}^k v^{(k-1)}$ (solution update)
$z^{(k)}:= opt(A_k,r_k;z^{(k)},\epsilon,\nu_2)$ (post-smoothing)
end
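For completeness, here is a compact Python transcription of Algorithm MG for a 1D Poisson model problem, with a damped Jacobi smoother in the role of $opt$, linear interpolation as $I_{k-1}^k$, its transpose as restriction, and Galerkin coarse operators. The model problem and all parameters are chosen for illustration only:

```python
import numpy as np

def poisson(n):
    """1D Poisson matrix on n interior points: tridiag(-1, 2, -1)."""
    return 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def prolongation(nc):
    """Linear interpolation I_{k-1}^k from nc coarse to 2*nc+1 fine points."""
    P = np.zeros((2*nc + 1, nc))
    for j in range(nc):
        P[2*j + 1, j] = 1.0
        P[2*j, j] = 0.5
        P[2*j + 2, j] = 0.5
    return P

def smooth(A, rhs, z, nu, omega=2/3):
    """Damped Jacobi sweeps, playing the role of opt(A, rhs; z, ., nu)."""
    d = np.diag(A)
    for _ in range(nu):
        z = z + omega*(rhs - A @ z)/d
    return z

def mgm(A, r, z):
    """One V-cycle for A z = r (coarse operators rebuilt on the fly)."""
    n = A.shape[0]
    if n <= 3:                             # coarsest grid: direct solve
        return np.linalg.solve(A, r)
    z = smooth(A, r, z, 2)                 # pre-smoothing
    P = prolongation((n - 1)//2)
    rc = P.T @ (r - A @ z)                 # restricted residual
    v = mgm(P.T @ A @ P, rc, np.zeros((n - 1)//2))  # coarse grid correction
    z = z + P @ v                          # solution update
    return smooth(A, r, z, 2)              # post-smoothing

n = 63
A, b = poisson(n), np.ones(n)
z = np.zeros(n)
for _ in range(12):                        # repeated V-cycles
    z = mgm(A, b, z)
```

A handful of V-cycles drives the relative residual down by many orders of magnitude, independently of the mesh size.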
Multigrid preconditioned conjugate gradient method
--------------------------------------------------
Although the multigrid method described above is very efficient, an even more efficient tool for solving (\[eq:lineq\]) may be the preconditioned conjugate gradient (CG) method, where the preconditioner consists of one step of the V-cycle multigrid method. The algorithm is described below (see, e.g., [@golub-vanloan]).
#### Algorithm PCG
Given initial $z$, set $r := Az-b$
$y := mgm(\ell,0_{n},r)$
Set $p:=-y$
for $i=1:niter$
$\alpha:=\displaystyle\frac{r^T y}{p^TAp}$
$z:= z + \alpha p$
$\tilde{r} := r + \alpha Ap$
$\tilde{y} := mgm(\ell,0_{n},\tilde{r})$
$\beta := \displaystyle\frac{\tilde{r}^T\,\tilde{y}}{r^T\, y}$
$p:= -\tilde{y} +\beta p$
$r:= \tilde{r}$, $y:= \tilde{y}$
test convergence
end
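A direct Python transcription of Algorithm PCG follows. The preconditioner is passed in as a function; in the paper it is one V-cycle of Algorithm MG, while here a simple Jacobi (diagonal) preconditioner serves as a self-contained stand-in, and the test matrix is synthetic:

```python
import numpy as np

def pcg(A, b, precond, tol=1e-10, maxit=500):
    """Algorithm PCG as listed above; precond(r) approximates A^{-1} r."""
    z = np.zeros(b.size)
    r = A @ z - b
    y = precond(r)
    p = -y
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ y)/(p @ Ap)
        z = z + alpha*p
        r_new = r + alpha*Ap
        if np.linalg.norm(r_new) <= tol*np.linalg.norm(b):
            break
        y_new = precond(r_new)
        beta = (r_new @ y_new)/(r @ y)
        p = -y_new + beta*p                # update uses the NEW residual y
        r, y = r_new, y_new
    return z

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
A = M.T @ M + n*np.eye(n)                  # SPD test matrix
b = rng.standard_normal(n)
z = pcg(A, b, lambda r: r/np.diag(A))      # Jacobi preconditioner as stand-in
```

Swapping the lambda for a V-cycle call turns this into the multigrid preconditioned CG method used in our experiments.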
Multigrid conjugate gradients for IP and OC methods
===================================================
The main goal of this section (and of the whole article) is to study the effect of the multigrid preconditioned CG method in the IP and OC algorithms. We will also compare them to their counterparts, IP and OC with direct solvers.
The details on discretization and the choice of prolongation and restriction operators will be given in Section \[sec:examples\].
Multigrid conjugate gradients for IP
------------------------------------
Our goal is to solve the linear systems arising in the Newton method, by the conjugate gradient method preconditioned by one V-type multigrid step. We can choose one of the three equivalent systems to solve, namely the full system (\[eq:nwt\]), the reduced saddle-point system (\[eq:nwtr\]) and the so-called augmented system (\[eq:nwtr2\]). We prefer the last one for the following reasons.
- The matrix $Z$ in (\[eq:nwtr2\]) is positive definite and we can thus readily apply the standard conjugate gradient method together with the standard V-cycle as a preconditioner. We could, of course, use GMRES or MINRES for the indefinite systems in (\[eq:nwt\]) and (\[eq:nwtr\]), however, the multigrid preconditioner, in particular the smoother, would become more complicated in this case; see [@maar-schulz], who used so-called transforming smoothers introduced by Wittum [@wittum].
- In order to use the multigrid preconditioner, we have to define prolongation/restriction operators for the involved variables. This can be easily done in case of the system (\[eq:nwtr2\]) that only involves the displacement variable $u\in\RR^n$ plus one additional variable $\lambda$, the Lagrangian multiplier associated with the volume constraint; see the next Section \[sec:examples\] for details.
If, on the other hand, we decided to solve (\[eq:nwt\]) or (\[eq:nwtr\]), we would have to select an additional restriction operator for the variables associated with the finite elements; this operator should then be “compatible” with the nodal-based restriction operator. This is a rather non-trivial task and can be simply avoided by choosing system (\[eq:nwtr2\]).
The matrix $Z$ from (\[eq:nwtr2\]) is positive definite, sparse and typically has an arrow-type sparsity structure: it is banded apart from the last full row and column; see Figure \[fig:matrix\]-left. The bandwidth grows, approximately, with the square root of the problem size, while the number of non-zeros in each row stays the same, regardless of the problem size.
#### Stopping rule
It is a big advantage of iterative methods, over direct solvers, that they allow us to control the precision of the approximate solution and stop whenever even a low required precision is reached. In our implementation, the PCG method is stopped whenever $$\label{eq:cgstop}
\frac{\|\rho\|}{\|b\|}\leq 10^{-2}$$ where $\rho$ is the residual and $b$ is the right-hand side of the linear system. In this way we only compute an approximate Newton direction; it is shown, e.g., in [@dembo] that the resulting inexact Newton method converges once the approximate direction is “close enough” to the exact solution of the Newton system (the required accuracy does not have to tend to zero). Furthermore, for convex quadratic programming problems, Gondzio [@gondzio] has shown that when the PCG method is stopped as soon as $\|\rho\|\leq 0.05\,s$ ($s$ being the barrier parameter), the theoretical complexity of the interior point method is the same as with an exact linear solver. Inexact iterative solvers in the context of other optimization problems and algorithms were further studied, e.g., in [@conn-gould-toint; @pennon-iter; @mizuno-jarre; @toh].
In our case, the value of $10^{-2}$ proved to be a good compromise between the overall number of Newton steps and the overall number of PCG iterations within the IP method. With this stopping criterion, the IP method typically requires 2–4 PCG iterations in the initial and in many subsequent IP steps. Only when we get close to the required accuracy, in the last 2–3 IP steps, does the conditioning of the matrix $Z$ increase significantly, and so does the number of PCG steps, typically to 10–30; see the next section for detailed numerical results.
Multigrid conjugate gradients for OC {#sec:CGOC}
------------------------------------
Within the OC algorithm, the multigrid CG method will be used to solve the discretized equilibrium equation $Ku=f$. Recall that $K$ is assumed to be a positive definite matrix. Moreover $K$ is very sparse and, if a reasonably good numbering of the nodes is used, banded. A typical non-zero structure of $K$ is shown in Figure \[fig:matrix\]-right: it is exactly the same as for the matrix in (\[eq:nwtr2\]) in the IP method, apart from the additional last column and row in the augmented matrix in (\[eq:nwtr2\]). The only degrees of freedom in the resulting algorithm are the stopping criteria for the OC method and for the multigrid CG method.
#### The overall stopping criterium
As the dual information (Lagrangian multipliers associated with the bound constraints) is not readily available, so far the only practical (and widely used) stopping criterion for the OC method is the difference of the objective function values in two subsequent iterations. Needless to say, unless we have an estimate of the rate of convergence, this criterion can be misleading and may terminate the iteration process long before the expected approximation of the optimum has been reached. Nevertheless, many numerical experiments suggest that this criterion is not as bad as it seems and serves its purpose for the OC method.
Hence the OC method is typically stopped as soon as $$\label{eq:OCstop}
|f^T u_k - f^T u_{k-1}| \leq \tau_{\scriptscriptstyle\rm OC}$$ where $k$ is the iteration index. In our numerical experiments we have used $\tau_{\scriptscriptstyle\rm OC}=10^{-5}$; this value has been chosen such that the OC results are comparable to the IP results, in the number of valid digits both in the objective function and in the variables; see Section \[sec:exact\] for more details.
#### Stopping criterion for the multigrid CG method
As already mentioned above, one of the advantages of an iterative method is the fact that an exact solution of the linear system is not always needed. In such a case, we can stop the iterative method after reaching a relatively low accuracy. The accuracy required to maintain overall convergence is well documented and theoretically supported in the case of the IP method; it is, however, unknown in the case of the OC method; see [@amir] for a detailed discussion. Clearly, if the linear systems in the OC method are solved too inaccurately, the whole method may diverge or just oscillate around a point away from the solution.
We have opted for the following heuristic that guarantees the (assumed) overall convergence of the OC method. Notice that the OC method is a feasible descent algorithm: every iterate is feasible and the objective function value in the $k$-th iteration is smaller than that in the $(k-1)$-st iteration. Hence
- we start with $\tau=10^{-4}$;
- if $f^T u_k > f^T u_{k-1}$, we update $\tau := 0.1\, \tau$.
In our numerical tests, the update had to be done only in a few cases and the smallest value of $\tau$ needed was $\tau=10^{-6}$. Recall that this is due to our relatively mild overall stopping criterion (\[eq:OCstop\]). In the next section, we will see that this heuristic serves its purpose, as the number of OC iterations is almost always the same whether we use an iterative or a direct solver for the linear systems.
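The heuristic itself is a one-liner; a sketch (the function name is ours, chosen for illustration):

```python
def update_cg_tol(tol, c_new, c_prev):
    """Tighten the inner CG tolerance whenever monotone descent is lost."""
    return 0.1*tol if c_new > c_prev else tol

tol = 1e-4                                          # initial value
tol = update_cg_tol(tol, c_new=5.01, c_prev=5.00)   # descent lost: tighten
```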
Numerical experiments {#sec:examples}
=====================
This section contains detailed results of three numerical examples. All codes were written entirely in MATLAB. Notice, however, that when we refer to a direct solver for the solution of linear systems, we mean the backslash operator in MATLAB which, for our symmetric positive definite systems, calls the CHOLMOD implementation of the Cholesky method [@cholmod]. This implementation is highly tuned, very efficient and written in the C language. So whenever we compare CPU times of the iterative solver with the direct solver, we should keep this in mind. These comparisons are given solely to show the tendency in the CPU time when increasing the problem size. All problems were solved on an Intel Core i5-3570 CPU at 3.4GHz with 8GB RAM, using MATLAB version 8.0.0 (2012b) running in 64-bit Windows 7.
In all examples, we use square finite elements with bilinear basis functions for the displacement variable $u$ and constant basis functions for the thickness variable $x$, as it is standard in topology optimization. The prolongation operators $I_{k-1}^k$ for the variable $u$ are based on the nine-point interpolation scheme defined by the stencil $\begin{pmatrix}
\frac{1}{4}&\frac{1}{2}&\frac{1}{4}\\
\frac{1}{2}&1&\frac{1}{2}\\
\frac{1}{4}&\frac{1}{2}&\frac{1}{4}\\
\end{pmatrix}$; see, e.g., [@hackbusch]. When solving the linear system (\[eq:nwtr2\]) in the interior point method, we also need to prolong and restrict the single additional variable $\lambda$; here we simply use the identity.
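The nine-point stencil is just the tensor product of the 1D linear interpolation stencil $(\frac12, 1, \frac12)$ with itself, which is also how it can be assembled in code. A small Python check (the matrix sizes are illustrative, not those of our meshes):

```python
import numpy as np

def prolongation_1d(nc):
    """1D linear interpolation with stencil (1/2, 1, 1/2)."""
    P = np.zeros((2*nc + 1, nc))
    for j in range(nc):
        P[2*j + 1, j] = 1.0
        P[2*j, j] = 0.5
        P[2*j + 2, j] = 0.5
    return P

P1 = prolongation_1d(3)      # 1D operator: 3 coarse -> 7 fine points
P2 = np.kron(P1, P1)         # 2D operator realizing the nine-point stencil
col = P2[:, 4]               # weights spread from the middle coarse node
```

Each interior coarse node distributes its value to nine fine nodes with weights $1$, $\frac12$ and $\frac14$, exactly as in the stencil above.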
The examples are solved with an isotropic material with Young’s modulus equal to 1 and Poisson’s ratio 0.3. The physical dimensions of the computational domain are given by the coarsest mesh, where each coarse level element has dimension $1\times 1$. The upper bound on the variable $x$ is set to $\overline{x}=2$. The load is always defined on the finest discretization level, on the edges of the two elements sharing a node on the boundary specified in each example, and it always acts in the vertical direction. Thus the non-zero elements of the discretized load vector will be $(-\frac{1}{2}, -1, -\frac{1}{2})$, associated with the vertical components of the specified boundary node and its two immediate neighbours on this boundary.
The meaning of the captions in the following tables:
: [**problem**]{}…the first two numbers describe the dimension of the computational domain, the last number is the number of mesh refinements
: [**variables**]{}…number of variables in the linear systems
: [**feval**]{}…total number of function evaluations (equal to the number of linear systems solved)
: [**total CG iters**]{}…total number of CG iterations in the optimization process
: [**solver CPU time**]{}…total CPU time spent in the solution of linear systems
: [**average CG iters**]{}…average number of CG iterations per one linear system
Example 1
---------
We consider a square computational domain with the coarsest mesh consisting of $2\times 2$ elements. All nodes on the left-hand side are fixed and the right-hand middle node is subject to a vertical load; see Figure \[fig:11\]. We use up to nine refinement levels, with the finest mesh having 262144 elements and 525312 nodal variables (after elimination of the fixed nodes).
Table \[tab:1\] presents the results of the interior point method. We can see that, with increasing size of the problem, the total number of CG iterations actually decreases. This is due to our specific stopping criterion explained in the previous section. We also observe that the average number of CG iterations per linear system is very low and, in particular, *is not increasing with the problem size*, a result of the multigrid preconditioner.
--------- ----------- ------- ---------- ---------- ----------
total solver average
problem variables feval CG iters CPU time CG iters
223 145 31 253 0.18 8.16
224 545 30 281 0.44 9.37
225 2113 29 197 0.91 6.79
226 8321 28 139 2.79 4.96
227 33025 27 119 12.7 4.41
228 131585 25 104 45.8 4.16
229 525313 27 85 156.0 3.15
--------- ----------- ------- ---------- ---------- ----------
: Example 1, interior point method with iterative solver
\[tab:1\]
Let us now compare these results with those for the OC method, where the linear system is just the equilibrium equation; see Table \[tab:2\]. As expected, the number of OC iterations (and thus the number of linear systems solved and the total number of CG iterations) *grows* with the size of the problem. Also in this case, the average number of CG iterations is almost constant, regardless of the problem size.
--------- ----------- ------- ---------- ---------- ----------
total solver average
problem variables feval CG iters CPU time CG iters
223 144 19 56 0.04 2.95
224 544 33 100 0.14 3.03
225 2112 55 164 0.65 2.98
226 8320 85 254 4.84 2.99
227 33024 111 332 30.8 2.99
228 131584 119 362 133.0 3.04
229 525312 123 368 636.0 2.99
--------- ----------- ------- ---------- ---------- ----------
: Example 1, OC method with iterative solver
\[tab:2\]
The comparison of the interior point method with the OC method is graphically presented in Figure \[fig:12\] (left). Here we can see, in log-log scale, how the total CPU time spent in the linear solver grows with the size of the problem. While initially slower than the OC method, the interior point method's time grows more slowly, so it soon catches up with and overtakes the OC method. For both methods, the growth is almost linear for the larger problems, so we can estimate the growth of the CPU time by a polynomial function $cn^d$ of the problem dimension $n$. For the interior point method the degree is $d=0.907$, while for the OC method $d=1.09$. This means that the overall computational complexity of the IP method with inexact Newton and inexact multigrid CG methods is slightly *sublinear*; for the OC method, it is just a bit worse than linear.
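The exponent $d$ can be obtained, e.g., by a least-squares fit of $\log t \approx d\log n + \log c$ over the larger problems. A sketch of the fit on synthetic data (the numbers below are made up for illustration, not our measured times):

```python
import numpy as np

n = np.array([1e4, 4e4, 1.6e5, 6.4e5])   # problem sizes (synthetic)
t = 3e-6 * n**0.9                        # timings following t = c*n^d, d = 0.9
d, log_c = np.polyfit(np.log(n), np.log(t), 1)
```

On exact power-law data the fit recovers the exponent to machine precision.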
In Figure \[fig:12\] (right) we compare the iterative solver used in the interior point method with a direct Cholesky solver (see the warning at beginning of this section!). We can clearly see that the time for the (C coded) direct solver grows quicker than for the (MATLAB coded) iterative solver.
Example 2
---------
The next example is similar to the previous one, only the computational domain is “longer” in the horizontal direction; the coarsest mesh consists of $4\times 2$ elements. It is well known that the conditioning of this kind of example grows with the slenderness of the domain. As before, all nodes on the left-hand side are fixed and the right-hand middle node is subject to a vertical load; see Figure \[fig:21\]. Again, we use up to nine refinement levels, with the finest mesh having 524288 elements and 1050624 nodal variables (after elimination of the fixed nodes).
We first show the results of the interior point method in Table \[tab:3\]. Just as in the previous example, the total number of CG iterations is decreasing with the increasing size of the problem. Again, the average number of CG iterations per linear system is very low and not increasing.
--------- ----------- ------- ---------- ---------- ----------
total solver average
problem variables feval CG iters CPU time CG iters
423 288 33 265 0.24 8.03
424 1088 32 342 0.87 10.69
425 4224 31 207 1.89 6.68
426 16640 30 160 7.77 5.33
427 66048 29 139 31.1 4.79
428 263168 27 123 119.0 4.56
429 1050624 27 101 385.0 3.74
--------- ----------- ------- ---------- ---------- ----------
: Example 2, IP method with iterative solver
\[tab:3\]
Compare this with the OC solver results in Table \[tab:4\]. In this case, we only consider eight refinement levels, as the largest problem would take too much time on our computer. Contrary to the previous example, the average number of CG iterations is slightly increasing due to the worse conditioning.
--------- ----------- ------- ---------- ---------- ---------- --
total solver average
problem variables feval CG iters CPU time CG iters
423 288 39 117 0.13 3.00
424 1088 45 144 0.34 3.20
425 4224 77 262 2.10 3.40
426 16640 123 423 16.2 3.44
427 66048 157 542 97.1 3.45
428 263168 165 739 552 4.48
--------- ----------- ------- ---------- ---------- ---------- --
: Example 2, OC method with iterative solver
\[tab:4\]
Figure \[fig:22\] (left) gives the comparison of the interior point method with the OC method. We can see, even more clearly than in the previous example, the faster growth of the OC method. When we calculate the degree of the assumed polynomial function $cn^d$ of the problem dimension $n$ from the larger examples, we obtain $d=0.944$ for the interior point method (roughly linear growth) and $d=1.28$ for the OC method.
Figure \[fig:22\] (right) compares the iterative solver used in the interior point method with the Cholesky solver (see the beginning of this section), giving the same picture as in the previous example.
Finally, in Figure \[fig:24\] we compare the average number of CG steps per linear system in the interior point and the OC solver. We can see that while the graph is decreasing for the IP method, it is slowly increasing in the case of the OC method. The reason is that, in this example, we had to tighten the stopping tolerance for the CG solver in the OC method in order to guarantee its convergence (see Section \[sec:CGOC\] for an explanation).
Example 3
---------
The computational domain for our final example is a rectangle, initially discretized by $8\times 2$ finite elements. The two corner points on the lower edge are fixed and a vertical load is applied in the middle point of this edge; see Figure \[fig:31\]. We use up to eight refinement levels with the finest mesh having 262144 elements and 568850 nodal variables (after elimination of the fixed nodes).
\
The results of the interior point method are shown in Table \[tab:5\]. Yet again, the total number of CG iterations is decreasing with the increasing size of the problem and the average number of CG iterations per linear system is very low and not increasing. The negative complexity factor is caused by the exceptional difficulties of the CG method in the last interior point step in problem 823.
--------- ----------- ------- ---------- ---------- ---------- --
total solver average
problem variables feval CG iters CPU time CG iters
822 170 33 284 0.21 8.61
823 594 31 383 0.78 12.35
824 2210 32 121 0.60 3.78
825 8514 31 166 3.41 5.35
826 33410 26 140 14.8 5.38
827 132354 26 133 78.8 5.12
828 526850 25 121 217.0 4.84
--------- ----------- ------- ---------- ---------- ---------- --
: Example 3, IP method with iterative solver
\[tab:5\]
Table \[tab:6\] presents the results of the OC method. As in Example 2, the average number of CG iterations is increasing due to the worse conditioning.
--------- ----------- ------- ---------- ---------- ---------- --
total solver average
problem variables feval CG iters CPU time CG iters
822 170 23 69 0.04 3.00
823 594 37 147 0.21 3.97
824 2210 57 267 1.16 4.68
825 8514 75 374 7.40 4.99
826 33410 99 495 51.8 5.00
827 132354 111 665 290.0 5.99
828 526850 113 677 1250.0 5.99
--------- ----------- ------- ---------- ---------- ---------- --
: Example 3, OC method with iterative solver
\[tab:6\]
Figure \[fig:32\] (left) compares the interior point method with the OC method. Yet again, the interior point method is a clear winner, both in absolute timing and in growth tendency. Calculating the degree of the assumed polynomial function $cn^d$ of the problem dimension $n$ from the larger examples, we get $d=1.09$ for the interior point method and $d=1.24$ for the OC method.
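The complexity exponents quoted above come from fitting the model $t=cn^d$ to the measured CPU times. A minimal sketch of such a fit in log–log coordinates (with synthetic timings rather than the table data, since the exact set of runs entering the published fit is not specified here):

```python
import numpy as np

def complexity_exponent(sizes, times):
    """Fit log(t) = log(c) + d*log(n) by least squares and
    return the exponent d of the assumed model t = c * n**d."""
    d, log_c = np.polyfit(np.log(sizes), np.log(times), 1)
    return d

# Synthetic check: timings generated exactly from t = 2e-6 * n**1.2.
n = np.array([8514.0, 33410.0, 132354.0, 526850.0])
t = 2e-6 * n**1.2
print(round(complexity_exponent(n, t), 3))  # -> 1.2
```

The same recipe can be applied to the solver CPU times of the larger rows of Tables \[tab:5\] and \[tab:6\].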
In Figure \[fig:32\] (right) we compare the iterative solver used in the interior point method with the Cholesky solver (see the beginning of this section). Finally, in Figure \[fig:34\] we compare the average number of CG steps per linear system in the interior point and the OC solver. We can see that while the graph for the IP method has a decreasing tendency, it is increasing for the OC method. As before, the reason is that we had to decrease the stopping criterion for the CG solver in order to guarantee its convergence (see Section \[sec:CGOC\]).
How exact is ‘exact’? {#sec:exact}
=====================
Interior point method {#sec:exactIP}
---------------------
In this article we are using slightly nonstandard stopping criteria within the interior point method. In particular, with the decreasing barrier parameters $r,s$ we do *not* decrease the stopping tolerances $\tau_{\scriptscriptstyle\rm NWT}$ and $\tau_{\scriptscriptstyle\rm CG}$ for the Newton method and for the conjugate gradients, respectively, although both are required for the theoretical convergence proof. Figure \[fig:41\] gives a schematic explanation. Here we depict the feasible region and three points $x_1,x_2,x_3$ on the central path, corresponding to three values of the barrier parameter $r_1>r_2>r_3$. The exact solution lies in the corner of the feasible region. The circle around each of these points depicts the region of stopping tolerance of the Newton method; once we get within it, the Newton method stops. The radii of these circles decrease, even though $\tau_{\scriptscriptstyle\rm NWT}$ is kept constant.
The idea is now obvious: it is “better” to stay within the tolerance circle of $x_3$ rather than to get very close to $x_2$.
In the lemma below, $x^*$ is a point on the central path corresponding to a barrier parameter $s$, and $x$ is an approximation of $x^*$ resulting from the inexact Newton method. We will show that, even with a fixed stopping criterion for the Newton method, $x$ must converge to $x^*$ as $s$ goes to zero. For simplicity of notation, we only verify this for the lower bound complementarity part of $\widetilde{\rm Res}^{(3)}$.
Let $x^*>0$ satisfy the perturbed scaled complementarity condition $$\label{eq:111}
\frac{\varphi_i x^*_i - s}{x_i} = 0,\quad i=1,\ldots,m$$ and let $x>0$ be an approximation of $x^*$ satisfying $$\label{eq:112}
\|z\|\leq \tau,\quad z_i=\frac{\varphi_i x_i - s}{x_i}$$ with some $\tau>0$. Then there is an $\varepsilon>0$ depending on $s$ and $\tau$ such that $\|x^*-x\|\leq\varepsilon$. Moreover, if $s$ tends to zero then also $\varepsilon$ tends to zero.
From (\[eq:111\]) we have $\varphi_i = \frac{s}{x_i^*}$ and thus (\[eq:112\]) can be written as $$\sum_{i=1}^m\left(\frac{s}{x_i^*} - \frac{s}{x_i}\right)^2\leq \tau^2$$ which, in particular, means that $$\left|\frac{s}{x_i^*} - \frac{s}{x_i}\right| \leq \tau,\quad i=1,\ldots,m\,,$$ i.e., $$\frac{s\,|x_i^*-x_i|}{x^*_ix_i} \leq \tau,\quad i=1,\ldots,m\,.$$ Clearly, when $s$ tends to zero, $x$ must tend to $x^*$.
How good a solution can we get when replacing the (“exact”) direct solver by an inexact iterative method for the solution of the Newton systems? We may expect that, with the ever decreasing barrier parameter, the inexact version will get into numerical difficulties sooner than the exact one. Table \[tab:IP\] answers this question. In topology optimization, the important variable is $x$, the “density”. With a lower bound equal to zero, the quality of the solution may be characterized by the closeness of components of $x$ to this lower bound (that is, in examples where the lower bound is expected to be reached, such as Example 1 with a sufficiently fine discretization). In Table \[tab:IP\] we display the smallest component of $x$, denoted $x_{\rm min}$, for Example 1 with 6 refinement levels, i.e., example 226 from Table \[tab:2\]. The meaning of the other columns in Table \[tab:IP\] is the following:
: [**barrier**]{}…the smallest value of the barrier parameters $s,r$ before the interior point algorithm was terminated;
: [**IP,NWT,CG**]{}…the total number of iterations of the interior point method, the Newton method and conjugate gradients, respectively;
: [**Cholesky**]{}…the linear system was solved by the CHOLMOD implementation of the Cholesky method;
: [**CG tol fixed**]{}… the linear system was solved by the multigrid preconditioned conjugate gradient method with a fixed stopping criterion $\|r\|/\|b\|\leq 10^{-2}$; see (\[eq:cgstop\]);
: [**CG tol decreasing**]{}… as above but with a variable stopping criterion $\|r\|/\|b\|\leq\tau_{\scriptscriptstyle\rm CG}$, where $\tau_{\scriptscriptstyle\rm CG}$ is initially equal to $10^{-2}$ and is then multiplied by 0.5 after each major iteration of the interior point method.
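For concreteness, the fixed-tolerance stopping rule can be sketched as follows. This is a generic preconditioned CG loop in which a simple Jacobi (diagonal) preconditioner stands in for the multigrid preconditioner used in the paper, and the test problem is random rather than a topology optimization system:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-2, max_iter=1000):
    """Preconditioned conjugate gradients with the fixed relative-residual
    stopping rule ||r|| / ||b|| <= tol. M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, k

# Example: well-conditioned SPD system with a Jacobi preconditioner.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x, its = pcg(A, b, M_inv=lambda r: r / np.diag(A))
print(np.linalg.norm(b - A @ x) <= 1e-2 * np.linalg.norm(b))  # True
```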
----------------------------------- ---- ----- --------------------- ----- ------- --------------------- ----- ------- ---------------------
$\tau_{\scriptscriptstyle\rm IP}$ IP NWT $x_{\rm min}$ NWT CG $x_{\rm min}$ NWT CG $x_{\rm min}$
$10^{-8}$ 12 28 $1.6\cdot 10^{-5}$ 28 139 $1.8\cdot 10^{-5}$ 28 587 $1.6\cdot 10^{-5}$
$10^{-10}$ 15 34 $1.0\cdot 10^{-7}$ 35 291 $2.7\cdot 10^{-7}$ 34 4285 $1.0\cdot 10^{-7}$
$10^{-12}$ 18 40 $1.0\cdot 10^{-9}$ 72 2832 $2.4\cdot 10^{-9}$ 40 10042 $1.5\cdot 10^{-9}$
$10^{-14}$ 21 46 $6.4\cdot 10^{-12}$ 296 63674 $1.9\cdot 10^{-11}$ 53 23042 $1.9\cdot 10^{-11}$
$10^{-16}$ 24 52 $6.2\cdot 10^{-14}$ 489 88684 $1.4\cdot 10^{-13}$ 82 52042 $1.4\cdot 10^{-13}$
----------------------------------- ---- ----- --------------------- ----- ------- --------------------- ----- ------- ---------------------
: Number of iterations and error in the IP solution for different values of $\tau_{\scriptscriptstyle\rm IP}$ and three different linear solvers.
\[tab:IP\]
We can see that all three algorithms were able to solve the problem to very high accuracy. However, both versions of the CG method had problems with very small values of the barrier parameter. The “CG tol fixed” version needed a very high number of Newton steps, while the “CG tol decreasing” version needed a very high number of CG steps to reach the increased accuracy. (Notice that the maximum number of CG iterations for one system was limited to 1000.) On the other hand, for a barrier parameter equal to $10^{-8}$ (our choice in the numerical examples above), both inexact solvers were on par with the exact one and, due to the lower accuracy required and thus lower number of CG steps, the “CG tol fixed” version is the method of choice.
OC method
---------
In the OC method, we have to solve the equilibrium problem with the stiffness matrix $K(x)$; that means $K(x)$ must not be singular. A common way to approach this is to assume that $x$ is strictly positive, though possibly very small. Typically, one would modify the lower bound constraint to $0<\underline{x}\leq x_i$, $i=1,\ldots,m$ with $\underline{x}=10^{-6}$, for instance. Once the OC method is terminated, all values of $x$ with $x_i=\underline{x}$ are set to zero. This is usually considered a weakness of the OC method, because we do not exactly solve the original problem, only its approximation (see [@achtziger]). Somewhat surprisingly, in the examples we solved using our MATLAB code, the value of $\underline{x}$ could actually be set very low, such as $\underline{x}=10^{-30}$. The stiffness matrix $K(x)$ will, consequently, become extremely ill-conditioned (in the above case the condition number will be of order $10^{30}$); nevertheless, CHOLMOD does not seem to have a problem with that and the OC method converges in about the same number of iterations as if we set $\underline{x}=10^{-6}$.
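The claim that the Cholesky factorization copes with such extreme conditioning can be illustrated on a toy problem. The element matrices below are artificial (each one stiffens a single degree of freedom, so that $K(x)={\rm diag}(x)$) and only mimic the structure $K(x)=\sum_i x_i K_i$:

```python
import numpy as np

# Minimal illustration: element matrices that each stiffen one dof,
# so K(x) = diag(x) and cond(K) = max(x) / min(x).
m = 6
K_i = [np.outer(np.eye(m)[i], np.eye(m)[i]) for i in range(m)]

def K(x):
    return sum(xi * Ki for xi, Ki in zip(x, K_i))

x = np.ones(m)
x[:3] = 1e-30                    # elements sitting at the tiny lower bound
Kx = K(x)
print(f"cond(K) = {np.linalg.cond(Kx):.0e}")   # ~1e+30
L = np.linalg.cholesky(Kx)                     # factorization still succeeds
print(np.allclose(L @ L.T, Kx))                # True
```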
The main question is how the quality of the solution depends on the heuristic stopping criterion (\[eq:OCstop\]). Our next Table \[tab:exact1\] sheds some light on this. We solve example 226 from Table \[tab:2\] for various values of the stopping criterion $\tau_{\scriptscriptstyle\rm OC}$ and two different values of the lower bound $\underline{x}$. We then compute, pair-wise, the norms of the differences of these solutions. The notation $\tau_{\scriptscriptstyle\rm OC}=10^{-\inf}$ is used for the case when the stopping criterion (\[eq:OCstop\]) is ignored and the OC method is terminated after a very high number of iterations, in this case 5000 (i.e., 10000 solutions of the linear system). The resulting solution serves as the best approximation of the exact solution that can be obtained within the computational framework. Looking at Table \[tab:exact1\], we can see that, for instance, the maximum norm of the difference between the solutions with $\tau_{\scriptscriptstyle\rm OC}=10^{-5}$ and $\tau_{\scriptscriptstyle\rm OC}=10^{-\inf}$ is $\|x_{-5}-x_{-\inf}\|_\infty=0.126$, while $\|x_{-9}-x_{-\inf}\|_\infty=0.016$. Notice that the norm is not scaled, e.g., by the dimension of $x$, hence the numbers are relatively large. Also, to get a clearer picture, we used a direct linear system solver.
lower bound
------------- -- ------- ------- ------------------ ------------------ ------------------ ------- ------- ------------------
-5 -7 -9 ${-\inf}$ -5 -7 -9 ${-\inf}$
0 1.09 1.19 1.26 $2\cdot 10^{-6}$ 1.10 1.18 1.26
0.114 0 0.116 0.281 1.09 0.013 0.110 0.281
0.123 0.01 0 0.190 1.19 0.103 0.009 0.190
0.126 0.025 0.016 0 1.26 0.271 0.198 $4\cdot 10^{-6}$
2e-7 0.114 0.123 0.126 0 1.10 1.19 1.26
0.116 0.001 0.009 0.024 0.116 0 0.094 0.271
0.123 0.009 $8\cdot 10^{-4}$ 0.017 0.123 0.008 0 0.198
0.126 0.025 0.016 $3\cdot 10^{-7}$ 0.126 0.024 0.017 0
: The norm of the difference of two OC solutions $x$ for various values of the stopping criterion $\tau_{\scriptscriptstyle\rm OC}=10^{-5},10^{-7},10^{-9},10^{-\inf}$, and for two values of the lower bound $\underline{x}=10^{-7}$ and $\underline{x}=10^{-17}$. The upper triangle shows the 2-norm, the lower triangle the infinity norm.
\[tab:exact1\]
Interior point versus OC method
-------------------------------
We again solve example 226 from Table \[tab:2\], this time by the interior point method with an exact linear solver and various stopping parameters $\tau_{\scriptscriptstyle\rm IP}$. In Table \[tab:exact2\], these solutions are compared (in two different norms) to the ‘exact’ solution obtained in the previous section by 5000 iterations of the OC method with $\underline{x}=10^{-17}$. Comparing these numbers to those in Table \[tab:exact1\], we can see that the IP method delivers a very good solution already for our standard value $\tau_{\scriptscriptstyle\rm IP}=10^{-8}$; this is comparable to the OC solution with $\tau_{\scriptscriptstyle\rm OC}=10^{-7}$. Moreover, decreasing $\tau_{\scriptscriptstyle\rm IP}$ leads to a rapid decrease of the error, unlike in the OC method.
$\tau_{\scriptscriptstyle\rm IP}$ $\|x-x^*\|_2$ $\|x-x^*\|_{\infty}$
----------------------------------- --------------------- ----------------------
$10^{-8}$ $2.47\cdot 10^{-1}$ $2.49\cdot 10^{-2}$
$10^{-10}$ $9.60\cdot 10^{-3}$ $1.50\cdot 10^{-3}$
$10^{-12}$ $2.07\cdot 10^{-4}$ $4.95\cdot 10^{-5}$
$10^{-14}$ $1.70\cdot 10^{-6}$ $4.66\cdot 10^{-7}$
: Two different norms of the error of the IP method in variable $x$ for different values of the stopping parameter $\tau_{\scriptscriptstyle\rm IP}$. As an ‘exact’ solution $x^*$ we take the OC solution after 5000 iterations with lower bound $\underline{x}=10^{-17}$.
\[tab:exact2\]
The SIMP model of topology optimization
=======================================
A natural question arises about the applicability of the presented approach to a more popular model of topology optimization, namely, the Solid Isotropic Material with Penalisation (SIMP) [@bendsoe-sigmund]. When used with suitable filtering, one can guarantee at least the existence of a solution of the infinite-dimensional problem and convergence of the finite element discretization to this solution [@bourdin2001filters]. However, the problem is non-convex and exhibits many local minima, as demonstrated in Figure \[fig:101\]. There we show solutions obtained by the code [top88]{} [@top88] from various initial points (with a load vector modified with respect to the original code). The calling sequence of the code was [top88(512,64,1,3,1.5,2)]{}. The ordering of the plots in Figure \[fig:101\] is $\small\begin{bmatrix}(a) &(b)\\(c)&(d)
\end{bmatrix}$ and the respective values of the computed optimal compliance are (a) 128.402 (from the default starting point), (b) 126.773, (c) 134.359, (d) 129.002. Notice that we strengthened the stopping criterion of [top88]{} from $10^{-2}$ to $10^{-3}$.
\
It is thus difficult to compare the efficiency of various algorithms, as each may converge to a different local solution (see also [@rojas-stolpe]). Moreover, starting from two different initial points, an algorithm may converge to two different solutions and the number of iterations needed to find the respective solutions can differ substantially. For these reasons, we only give a brief comparison of the IP method with an exact solver and with an iterative solver, demonstrating that the iterative method is still a viable and efficient option. (A similar comparison for the OC method can be found in [@amir].)
The SIMP model with the so-called density filter consists in a modification of our original problem (\[eq:to\]) where we replace the equilibrium equation $
K(x) u = f
$ by $$\widehat{K}(\tilde{x}) u = f\qquad\mbox{with} \qquad \widehat{K}(\tilde{x}) = \sum_{i=1}^m \tilde{x}_i^p K_i\,,$$ where $\tilde{x}_i$ is computed as a weighted average of the values of $x$ in a close neighborhood of the $i$-th element. More precisely, $$\tilde{x} = W x \quad{\rm with}\quad
W_{:i} = \frac{1}{\sum_{j=1}^m \widehat{W}_{ij}} \widehat{W}_{:i}
\quad{\rm and}\quad
\widehat{W}_{ij} = \max(0,r_{\rm min} - {\rm dist}(i,j))\,,$$ where $W_{ij}$ is the $(i,j)$-th element of matrix $W$ and $W_{:i}$ denotes the $i$-th column of $W$. Here ${\rm dist}(i,j)$ is a function measuring the distance between the $i$-th and $j$-th element (e.g. Manhattan distance or Euclidean distance of element centers), and $r_{\rm min}$ is a given radius of the density filter. The typical choice of $p$ is $p=3$.
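The weight construction above is easy to sketch on a regular grid. Note one deliberate deviation: the matrix below is row-normalized (the convention of the top88 code), whereas the text writes a column-wise normalization; grid size and radius are arbitrary choices for the example.

```python
import numpy as np

def density_filter(nelx, nely, r_min):
    """Build a row-normalized density-filter matrix W on a regular
    nelx-by-nely grid, with hat weights max(0, r_min - dist(i, j)) and
    Manhattan distance between element centers (in element units)."""
    m = nelx * nely
    cx, cy = np.divmod(np.arange(m), nely)      # element center coordinates
    W_hat = np.maximum(0.0,
        r_min - (np.abs(cx[:, None] - cx[None, :])
               + np.abs(cy[:, None] - cy[None, :])))
    return W_hat / W_hat.sum(axis=1, keepdims=True)

W = density_filter(8, 4, r_min=2)
x = np.random.default_rng(2).uniform(0, 1, 8 * 4)
x_tilde = W @ x                                 # filtered densities
# A uniform density field is invariant under the filter:
print(np.allclose(W @ np.ones(32), 1.0))        # True
```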
The interior point method from Sections 2 and 3 has to be adapted to the SIMP model. In particular, the KKT condition (\[eq:KKT3\]) will change to $$-\frac{1}{2}u^T \left[ p\sum_{j=1}^m W_{ji}(Wx)_j^{p-1}K_j\right] u
- \lambda - \varphi_i + \psi_i = 0,\quad i=1,\ldots,m$$ which means that the matrix $B(u)$ in the linear system (\[eq:nwtr\]) (and thus (\[eq:nwtr2\])) will be replaced by $$\widehat{B}(u)=\left(p\sum_{j=1}^m W_{j1}(Wx)_j^{p-1}K_j u, \ldots,
p\sum_{j=1}^m W_{jm}(Wx)_j^{p-1}K_j u\right)\,.$$ The consequence of using the density filter is that the matrix $\widehat{B}(u)\widehat{B}(u)^T$ (and thus the matrix $Z$ in (\[eq:nwtr2\])) will have bigger band-width and thus the Cholesky factors will have more non-zero elements.
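Since column $i$ of $\widehat{B}(u)$ is $p\sum_j W_{ji}(Wx)_j^{p-1}K_ju$, the whole matrix can be assembled as a single product $p\,GW$ with $G=[\,\tilde{x}_1^{p-1}K_1u,\ldots,\tilde{x}_m^{p-1}K_mu\,]$. A small random check of this vectorized form against the definition (all data below are made up; this is not the paper's implementation):

```python
import numpy as np

def B_hat(W, x, u, K_list, p=3):
    """Columns of the SIMP sensitivity matrix:
    B_hat[:, i] = p * sum_j W[j, i] * (W x)_j**(p-1) * K_j @ u."""
    x_t = W @ x                                   # filtered densities
    G = np.column_stack([x_t[j]**(p - 1) * (K_list[j] @ u)
                         for j in range(len(K_list))])
    return p * (G @ W)

# Tiny random check against the definition, column by column.
rng = np.random.default_rng(3)
m, ndof = 5, 4
K_list = [rng.standard_normal((ndof, ndof)) for _ in range(m)]
W = rng.uniform(0, 1, (m, m)); W /= W.sum(axis=1, keepdims=True)
x, u = rng.uniform(0.1, 1, m), rng.standard_normal(ndof)
B = B_hat(W, x, u, K_list)
i = 2
direct = 3 * sum(W[j, i] * (W @ x)[j]**2 * (K_list[j] @ u) for j in range(m))
print(np.allclose(B[:, i], direct))  # True
```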
The purpose of the next example is merely to show that the iterative solver is still a viable alternative to the direct one, even for the non-convex problem. The reader should not be too concerned with the behaviour of the interior point method, as it still uses the vanilla algorithm and explicit choice of step length that were suitable for the convex problem but would have to be upgraded for the non-convex one; such an upgrade is, however, beyond the scope of this paper.
Example 4
---------
Consider again the problem from Example 2 with an upper bound $\overline{x}=3$, this time solved using the SIMP model with density filter. The filter uses Manhattan distance with $r_{\rm min}=2$. Figure \[fig:102\] shows the optimal results for 8 refinement levels (problem [428]{}) when using the iterative solver (left) and the Cholesky method (right). We can see that also in this example the two versions of the code converge to different local minima with almost identical objective function values (see Table \[tab:201\]).
The numbers presented in Table \[tab:201\] for problems [427]{} and [428]{} show the total number of Newton steps (feval), the CPU time needed by the linear solver alone, and the optimal objective function value (obj). The number of Newton steps needed by the IP method is not significantly influenced by the choice of the solver. Also, the number of CG iterations is still kept very low, with an average of 3.5 per linear system; demonstrating this was the purpose of the example.
--------- ----------- ------- ---------- ------ -------- ------- ------ --------
problem variables feval CG iters time obj feval time obj
427 66048 151 514 316 3.5659 128 146 3.5681
428 263168 262 934 2410 3.4709 231 1500 3.4762
--------- ----------- ------- ---------- ------ -------- ------- ------ --------
: Example 4, IP method with iterative and Cholesky solver
\[tab:201\]
Finally, in Table \[tab:202\] we present the number of Newton steps needed in every major IP iteration. The first row shows the value of the barrier parameter $s$ (reduced in every iteration by a factor of 0.2). The next two rows refer to problem [427]{} and show the number of Newton steps first when using the Cholesky method and then for the iterative solver. The final two rows are for problem [428]{}. As we can see, most effort is spent in the last iterations, unlike in the convex case, where the number of Newton steps was almost constant. As mentioned before, a more sophisticated version of the IP method would be needed for the non-convex case to avoid this behaviour.
$s$ $5^0$ $5^{-1}$ $5^{-2}$ $5^{-3}$ $5^{-4}$ $5^{-5}$ $5^{-6}$ $5^{-7}$ $5^{-8}$ $5^{-9}$ $5^{-10}$ $5^{-11}$
-- ------ ------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------- -----------
Chol 2 2 2 4 4 6 9 11 24 22 21 21
CG 2 2 3 3 4 6 9 12 24 23 25 38
Chol 2 2 2 3 3 4 7 11 21 44 65 67
CG 2 2 2 3 3 4 7 11 21 53 68 86
: Example 4, the barrier parameter $s$ and the corresponding number of Newton steps needed at every major IP iteration with the Cholesky and the CG solver, respectively.
\[tab:202\]
Conclusions
===========
Based on the results of our numerical experiments, we draw the following conclusions; they concern only the convex problem.
- The interior point method clearly outperforms the OC method on large-scale problems; the larger the problem, the bigger the difference. This holds independently of whether a direct or an iterative solver is used for the linear systems, and of whether these systems (in both methods) are solved exactly or inexactly.
- The inexact multigrid preconditioned CG method outperforms even a very sophisticated direct solver, at least for large to very large scale problems. This holds for both the interior point and the OC method.
- The behaviour of the interior point method is very predictable. More surprisingly, the behaviour of the chosen iterative method, the multigrid preconditioned conjugate gradients, is also very predictable and *independent of the size of the problem*.
- Also in the OC method, the multigrid preconditioned CG algorithm is predictable and very stable, both with respect to the size of the problem and to the OC iteration (and thus to the condition number of the stiffness matrix). Perhaps rather surprisingly, no more than 10 CG iterations are needed, even when high precision of the OC method is required. This is the effect of the multigrid preconditioner: in [@wang] the authors report that about 100–200 CG steps are needed (with a different preconditioner) and therefore propose so-called recycling of Krylov subspaces to accelerate CG convergence. Given the very low number of CG steps observed here, this is simply not needed.
- The OC method has one noticeable advantage over the interior point method. It can quickly identify the “very strongly” active constraints, those with a large Lagrangian multiplier. Due to the projection of variables on the feasible set, the active variables are then exactly equal to the bounds. Contrary to that, the interior point method only approaches the boundary. This may be particularly significant in the case of lower bounds, when the user has to decide which values are cut off and considered zero (and thus interpreted as void). Clearly, the lower bound for the OC method has to be positive but it can be set very low (e.g., $10^{-17}$) and is then exactly reached.
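The exact-bound behaviour of the projection can be seen in a few lines. The sketch below is the classic OC update in the style of the top88 code (bisection on the volume multiplier, with made-up sensitivity data), not the paper's implementation; after clipping, the active variables sit exactly on the bound:

```python
import numpy as np

def oc_update(x, dc, dv, vol_frac, lb=1e-17, ub=1.0, move=0.5):
    """One OC step: bisection on the volume-constraint multiplier, with the
    candidate point projected onto the box [lb, ub] and the move limits."""
    l1, l2 = 1e-12, 1e12
    while (l2 - l1) / (l1 + l2) > 1e-9:
        lmid = 0.5 * (l1 + l2)
        x_new = np.clip(x * np.sqrt(-dc / (dv * lmid)),
                        np.maximum(lb, x - move), np.minimum(ub, x + move))
        if x_new.mean() > vol_frac:
            l1 = lmid
        else:
            l2 = lmid
    return x_new

x = np.full(100, 0.5)
dc = -np.linspace(0.1, 1.0, 100)**2   # made-up compliance sensitivities
dv = np.ones(100)
x_new = oc_update(x, dc, dv, vol_frac=0.8)
# Projected variables land *exactly* on the bound, not merely close to it:
print((x_new == 1.0).sum() > 0)  # True
```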
From the above, the obvious recommendation is the interior point method with a multigrid preconditioned CG solver as the method of choice for large-scale topology optimization problems. However, we should keep in mind that the use of multigrid is rather restricted by the assumed existence of regularly refined finite element meshes. This is easily accomplished when using “academic” examples with regular computational domains such as squares, rectangles, prisms and unions of these. For geometrically complex domains appearing in practical examples, multigrid may not be so suitable or may even be unusable. In these cases, we can resort to domain decomposition preconditioners. In [@turner-kocvara-loghin] it was shown that, in connection with the interior point method, these also lead to very efficient techniques for topology optimization problems.
[^1]: School of Mathematics, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK, and Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod vodárenskou věží 4, 18208 Praha 8, Czech Republic
[^2]: School of Mathematics, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK, on leave from Department of Mathematics, University of Kirkuk, Iraq
[^3]: This work has been partly supported by Iraqi Ministry of Higher Education and Scientific Research, Republic of Iraq and by the EU FP7 project no. 313781 AMAZE.
---
abstract: 'We demonstrate an experimental method for measuring energy-time entanglement over almost $80\rm\,nm$ spectral bandwidth in a single shot with a quantum bit error rate below 0.5%. Our scheme is extremely cost-effective and efficient in terms of resources as it employs only one source of entangled photons and one fixed unbalanced interferometer per phase-coded analysis basis. We show that the maximum analysis spectral bandwidth is obtained when the analysis interferometers are properly unbalanced, a strategy which can be straightforwardly applied to most of today’s experiments based on energy-time and time-bin entanglement. Our scheme has therefore a great potential for boosting bit rates and reducing the resource overhead of future entanglement-based quantum key distribution systems.'
author:
- 'F. Kaiser, D. Aktas, B. Fedrici, T. Lunghi, L. Labonté, and S. Tanzilli'
title: 'Optimal analysis of ultra broadband energy-time entanglement for high bit-rate dense wavelength division multiplexed quantum networks'
---
Introduction
============
To date, quantum key distribution (QKD) provides a powerful means to establish provably secure communication [@Gisin02_QC; @Scarani09_securityQKD; @Vazirani_PRL_14]. In this perspective, QKD systems have already been commercialized, and laboratory demonstrations have achieved bit rates up to $\sim 1\rm\,Mbit/s$ at a distance of 50km [@Lucamarini_50km_1Mbit_2013; @Zhong_50km_1Mbit_2015], extendible up to 307km [@Korzh_QKD300km_2015]. Most of the reported approaches are based on laser pulses attenuated down to the single photon level.
In order to increase these rates, several multiplexing techniques can be exploited [@Walborn_hyper_2008; @Lucamarini_50km_1Mbit_2013; @DWDM-QKD-Proposal; @Ghalbouni_DWDM_OL_2013; @Aktas_source_2016; @Reimer_comb_2016]. Here, we focus on dense wavelength division multiplexing (DWDM), where similarly to today’s classical telecommunication systems, $N$ signals at separate wavelengths can be multiplexed in, and demultiplexed out of, a fiber link, thus increasing the achievable bit rate by a factor $N$. However, for protocols based on faint laser pulses or single photons, this requires employing $N$ sources, and, depending on the protocol, up to $N$ analyzers, which strongly increases the resource overhead [@Faint_Pulses_DWDM].
In this perspective, entanglement-based DWDM QKD has the potential of increasing bit rates with significantly less technological resources. Actually, particular *single* sources of entangled photon pairs can naturally generate a broadband flux of wavelength correlated photon pairs which can be demultiplexed into $N$ correlated pairs of wavelength channels [@DWDM-QKD-Proposal; @Ghalbouni_DWDM_OL_2013; @Aktas_source_2016; @Reimer_comb_2016]. Additionally, in comparison to QKD schemes based on laser pulses, entanglement-based approaches are compatible with device-independent network-enabling protocols [@Brunner_Bell_14; @SimonRepeater11; @Kocsis_Heralded_Amplification; @AnthoAmpli13; @Simon07]. Moreover, this architecture is immune to side channel attacks, making it more robust for secure communication [@Vazirani_PRL_14].
Although long distance entanglement distribution has already been demonstrated [@Takesue13_300km; @Aktas_source_2016], only a few experiments have considered DWDM QKD, with, to date, up to eight channel pairs [@Aktas_source_2016]. However, in all of these realizations, the optimal analyzer settings showed a strong wavelength dependence, such that entanglement has been measured in multiple channel pairs sequentially [@Lim10; @Monolithic_DWDM; @Aktas_source_2016], *i.e.* the analyzer settings had to be adapted for each individual channel pair. Considering that QKD implementations require measuring entanglement in two orthogonal bases, this implies that each user has to employ $2 \times N$ long-term stable individual analyzers, which is both impractical and resource demanding.
In principle, entanglement can be distributed using any observable. However, long-distance implementations often rely on energy-time or time-bin entanglement [@Tittel_balanced_1999; @Cuevas_balanced_2013; @Antho12_CrossTB; @Franson; @Kwiat_EnergyTime; @TimeBin_50km], due to their immunity against polarization mode dispersion and drifts [@hubel_high-fidelity_2007].
In this paper, we demonstrate a scheme that requires only one entangled photon pair source and one analyzer per user and analysis basis to measure energy-time entanglement with less than 0.5% error rate in a single shot over a spectral bandwidth of $\sim 80\rm\,nm$, corresponding to $N=46$ standard 100GHz telecommunication channel pairs. We find that, compared to the standard configuration with identically unbalanced interferometers [@Kwiat_EnergyTime; @DWDM-QKD-Proposal], the number of exploitable channel pairs can be augmented by three times when properly detuning one of the analyzers. This represents a significant step towards cost-effective entanglement-based high bit rate QKD in DWDM networks.
Experimental setup
==================
The experimental setup is shown in Figure \[Setup\]. A continuous-wave laser operating at $\lambda_{\rm p}=770\, \rm nm$ with a coherence length of $\sim 250\rm\,m$ pumps a periodically poled lithium niobate waveguide (PPLN/W) in which energy-time entangled photon pairs are generated around the degenerate wavelength of 1540nm by spontaneous parametric down-conversion (SPDC). The emitted photon pairs are directly collected into a butt-coupled single-mode fiber. The corresponding emission spectrum is shown in Figure \[Spectrum\], for which quasi phase-matching engineering leads to a bandwidth of 55nm $(\leftrightarrow 7\rm \, THz)$ which fully covers the commonly used telecom C-band ($1530-1565\rm\,nm)$, as well as parts of the adjacent S- ($1460-1530\rm\,nm$) and L-bands ($1565-1625\rm\,nm$).
![**Experimental setup based on the Franson configuration[@Franson].** The energy-time entangled photon pair source is made of a pump laser and a PPLN/W. Long (short) wavelength photons are sent to Alice (Bob) using two FBGs. Each user employs an unbalanced fiber interferometer for entanglement analysis. The interferometers’ path length differences can be fine tuned using piezoelectric fiber stretchers. \[Setup\]](Setup.eps){width="8cm"}
![**Emission spectrum of the PPLN/W.** The 55nm broad emission spectrum covers the full telecom C-band, as well as parts of the adjacent S- and L-bands. Photon pairs are generated pairwise symmetrically apart from the degenerate wavelength of 1540nm. Photons above and below 1540nm are sent to Alice and Bob, respectively. \[Spectrum\]](Spectrum2b.eps){width="7cm"}
Due to energy conservation during the SPDC process, the wavelengths of the paired photons $(\lambda_{\rm A,B})$ are related to the pump laser wavelength through the following relation: $$\frac{1}{\lambda_{\rm p}} = \frac{1}{\lambda_{\rm A}} + \frac{1}{\lambda_{\rm B}}.\label{Energy}$$ In other words, the photons are generated pairwise symmetrically apart from the degenerate wavelength. The pairs are deterministically separated by sending long (short) wavelength photons to Alice (Bob) using a set of two broadband fiber Bragg gratings (FBG) and associated circulators (C). Further dynamic wavelength filtering is achieved using two tunable filters with a 0.8nm ($\leftrightarrow 100\rm\,GHz$) flat-top transmission profile, mimicking standard 100GHz dense wavelength division multiplexers. To reveal energy-time entanglement, Alice and Bob each use an unbalanced fiber interferometer (Franson configuration [@Franson]), made of a beam-splitter and two Faraday mirrors. Both interferometers have a path length difference of $\Delta L_{\rm A,B} \approx 6.7\rm\,cm$, can be fine tuned, and are actively stabilized using piezoelectric fiber stretchers [@TheKaiser14Long]. At the interferometer output, Alice detects her photons using a free-running InGaAs single photon detector (SPD, IDQ id220, 20% detection efficiency). At Bob’s site, we use an additional circulator through which we can detect photons at both interferometer outputs using gated InGaAs SPDs (IDQ id201, 25% detection efficiency).\
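Equation \[Energy\] pairs up the demultiplexer channels: given the pump at 770 nm and Alice's wavelength, Bob's partner wavelength follows directly. A two-line helper (wavelengths in nm):

```python
# Energy conservation pairs channels symmetrically around 2*lambda_p = 1540 nm:
# 1/lambda_p = 1/lambda_A + 1/lambda_B, solved here for lambda_B.
lambda_p = 770.0  # pump wavelength in nm

def partner_wavelength(lambda_A):
    return 1.0 / (1.0 / lambda_p - 1.0 / lambda_A)

print(round(partner_wavelength(1540.0), 1))  # 1540.0 (degenerate case)
print(round(partner_wavelength(1560.0), 1))  # 1520.5
```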
Concerning the quantum state of the photon pairs at the interferometers’ outputs, four contributions have to be considered. Either both photons take the short or long paths ($s_{\rm A}-s_{\rm B}$ or $l_{\rm A}-l_{\rm B}$), or both photons take opposite paths ($s_{\rm A}-l_{\rm B}$ or $l_{\rm A}-s_{\rm B}$). Due to the spontaneous character of the photon pair generation process, the pair creation time in the PPLN/W remains unknown. This makes the contributions $s_{\rm A}-s_{\rm B}$ and $l_{\rm A}-l_{\rm B}$ indistinguishable, which leads to the observation of entanglement [@Kwiat_EnergyTime]. These contributions are selected using a fast coincidence logic, leading to a reduced quantum state $$|\psi \rangle_{\rm post} = \frac{1}{\sqrt{2}} \left( | s_{\rm A} \rangle | s_{\rm B} \rangle + {\rm e}^{{\rm i} \, \phi} | l_{\rm A} \rangle | l_{\rm B} \rangle\right),$$ where $\phi = \phi_{\rm A} + \phi_{\rm B}$ stands as the two-photon phase. The individual contributions, $\phi_{\rm A,B}$, are related to the interferometers’ path length differences by $$\phi_{\rm A,B} = \frac{2\pi\,\Delta L_{\rm A,B} \, n(\lambda_{\rm A,B})}{\lambda_{\rm A,B}}.\label{Phases}$$ Here, $n(\lambda_{\rm A,B})$ is the wavelength-dependent refractive index of the fibers in the interferometers. According to reference [@Kwiat_EnergyTime], the rate of coincidences between detectors $\rm SPD_A$ and $\rm SPD_{B_1}$ follows $R_{\rm AB_1} \propto 1 + \cos \phi$, while the rate between detectors $\rm SPD_A$ and $\rm SPD_{B_2}$ follows $R_{\rm AB_2} \propto 1 - \cos \phi$. For entanglement-based QKD using the Ekert protocol [@Ekert91], the analysis bases are defined by the following settings: $$\begin{aligned}
\phi_{\rm A} + \phi_{\rm B} &=& 0\label{ZeroPhase}\\
\phi'_{\rm A} + \phi'_{\rm B} &=& \pi \qquad {\rm with} \qquad \phi'_{\rm A} = \phi_{\rm A} + \frac{\pi}{2}.\label{SecondBasis}\end{aligned}$$ In general, these conditions cannot be fulfilled over a large spectral bandwidth for fixed path length differences ($\Delta L_{\rm A,B}$), which results in a wavelength-dependent two-photon phase shift. Considering the setting given in equation \[ZeroPhase\], this leads to an undesired non-zero anti-coincidence rate $R_{\rm AB_2}$. The associated quantum bit error rate (QBER) of the QKD link is then given by $${\rm QBER} = \frac{R_{\rm AB_2}}{R_{\rm AB_1}+R_{\rm AB_2}} = \sin^2 \left( \frac{\phi}{2}\right).\label{QBER}$$ Although there exist several algorithms for QBER correction [@Gisin02_QC; @Scarani09_securityQKD], they usually require additional resources, having repercussions on the attainable bit rate of the QKD link. Therefore, it is common practice to keep the QBER as low as possible [@Gisin02_QC; @Scarani09_securityQKD; @Vazirani_PRL_14]. Additionally, it has been demonstrated that high-dimensional QKD is only efficient at a QBER very close to zero [@Lucamarini_50km_1Mbit_2013]. For this reason, we fix the maximum allowed QBER induced by improper interferometer settings to a stringent value of 0.5%, corresponding to an acceptable two-photon phase shift of $\phi = \pm 0.14\,\rm rad$.\
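The quoted threshold can be checked directly from equation \[QBER\]; a minimal sketch:

```python
import math

# QBER induced by a residual two-photon phase (equation "QBER").
def qber(phi_rad: float) -> float:
    return math.sin(phi_rad / 2.0) ** 2

# phi = +/-0.14 rad indeed corresponds to a QBER of about 0.49%,
# i.e. just below the 0.5% target.
threshold_qber = qber(0.14)
```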
**Spectral dependence of the two-photon phase $\phi$**
To calculate the spectral dependence of $\phi$, we first express $n(\lambda)$ by a second-order Taylor series, which reads $$n(\lambda) \approx n_0 + \frac{dn}{d \lambda} \cdot \Delta \lambda + \frac{1}{2} \frac{d^2n}{d \lambda^2} \cdot (\Delta \lambda)^2.\label{RefIndex}$$ Here, $n_0$, $\frac{dn}{d \lambda}$, and $\frac{d^2n}{d \lambda^2}$ are the fiber refractive index and its first- and second-order derivatives, respectively, at the degenerate wavelength ($2\lambda_{\rm p}$). All coefficients can be inferred from Sellmeier equations [@leviton_temperature-dependent_2008]. By inserting equation \[RefIndex\] into equation \[Phases\], and respecting equation \[Energy\], we obtain $$\phi = \phi \left(n_0,\frac{dn}{d \lambda},\frac{d^2n}{d \lambda^2},\Delta L_{\rm A},\Delta L_{\rm B},\lambda_{\rm A},\lambda_{\rm p} \right).\label{TotalPhase}$$ It is often assumed that the QBER is minimized for $\Delta L_{\rm A} = \Delta L_{\rm B}$ [@Tittel_balanced_1999; @Cuevas_balanced_2013]. However, in section \[Non\_identical\_theory\] we show that the optimal settings are rather obtained for identical path travel time differences. In the balanced case $\Delta L_{\rm A} = \Delta L_{\rm B}$, equation \[TotalPhase\] simplifies to $$\phi = \frac{d^2n}{d \lambda^2} \cdot \frac{\pi (\lambda_{\rm A} - 2\lambda_{\rm p})^2}{\lambda_{\rm A} - \lambda_{\rm p}} \cdot \Delta L_{\rm A} + \mathcal{C},\label{BalancedPhase}$$ in which $\mathcal{C} = \frac{2 \pi \cdot n_0 \cdot \Delta L_{\rm A}}{\lambda_{\rm p}}$ is a constant as it is independent of the wavelengths of the paired photons ($\lambda_{\rm A,B}$).
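Equation \[BalancedPhase\] can be evaluated numerically. The sketch below is illustrative only: instead of Sellmeier data it *assumes* a typical single-mode-fiber dispersion parameter $D \sim 17$ ps/(nm km) and converts it via $d^2n/d\lambda^2 = -Dc/\lambda$; the constant $\mathcal{C}$ is dropped since only the wavelength-dependent part matters here.

```python
import math

# Assumed fiber parameters (typical SMF values, not taken from the paper):
c = 2.998e8            # m/s
D = 17e-6              # s/m^2  (= 17 ps/(nm km)), assumed
lam_ref = 1.55e-6      # m, reference wavelength
d2n = -D * c / lam_ref # d^2 n / d lambda^2, roughly -3.3e9 m^-2

lam_p = 770e-9         # pump wavelength (m), degeneracy at 1540 nm
dL = 6.7e-2            # interferometer path length difference (m)

def phi(lam_a: float) -> float:
    """Dispersive part of the two-photon phase (constant C dropped)."""
    return d2n * math.pi * (lam_a - 2 * lam_p) ** 2 / (lam_a - lam_p) * dL

# At lambda_A ~ 1553 nm this reproduces a phase of order the 0.14 rad
# threshold, matching the measured exploitable bandwidth.
```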
Results with identical analyzers
================================
For different wavelengths $\lambda_{\rm A}$ (and symmetrically associated wavelengths $\lambda_{\rm B}$), we infer the two-photon phase $\phi$ by measuring the QBER and solving equation \[QBER\]. We first align both interferometers to exactly $\Delta L_{\rm A} = \Delta L_{\rm B}$. This is done by using an iterative procedure that amounts to inferring the wavelength dependence of the QBER for different $\Delta L_{\rm A}$ until a flat distribution is found around 1540nm. Experimental results for $\Delta L_{\rm A} = \Delta L_{\rm B}$ are shown in \[ResultsEqual\]. Starting with $\phi=0$ at $\lambda_{\rm A} = 1540\rm\,nm$, we reach the threshold phase shift $\phi=-0.14\,\rm rad$ at $\lambda_{\rm A} \sim 1553\rm\,nm$ ($\lambda_{\rm B} \sim 1527\rm\,nm$). Consequently, these interferometers can be used to analyse entanglement with a $\rm QBER<0.5\%$ for $1540\,{\rm nm} < \lambda_{\rm A} < 1553\,{\rm nm}$ ($1527\,{\rm nm} < \lambda_{\rm B} < 1540\,{\rm nm}$) simultaneously, which corresponds to 16 pairs of standard 100GHz telecommunication channels [@ITU; @Aktas_source_2016].
![**Two-photon phase shift for identical analysis interferometers.** The yellow shaded area indicates the region in which the QBER stays below 0.5%. The exploitable bandwidth covers 16 pairs of standard 100GHz telecommunication channels. Error bars assume poissonian photon number statistics. The fit to the data is obtained with equation \[BalancedPhase\].\[ResultsEqual\]](Equilibrated_Interferometersb.eps){width="7cm"}
Note that fully exploiting the emission bandwidth of our photon pair source requires analysing entanglement in at least 40 pairs of 100GHz channels simultaneously. A straightforward solution would be to employ analysis interferometers made of custom-made components to shift or compensate dispersion. For example, by employing hybrid interferometers, made of single-mode and dispersion compensation fibers, instead of fully single-mode fiber interferometers, a 1.4% increase in the interference visibility was observed over a spectral bandwidth of 1.6nm [@Zhong_nonlocal_cancellation_2013].
Results with optimal analyzers \[Non\_identical\_theory\]
=========================================================
In the following, we demonstrate a much cheaper and simpler approach which can be applied without changing any of the components in the standard setup. We note that our strategy is not limited to fiber-based analysis interferometers only. For example, planar lightwave circuits [@Korzh_QKD300km_2015], where dispersion compensation is not straightforward, could also benefit from the proposed method. Let us consider the different central wavelengths of Alice’s and Bob’s photons, $\lambda^*_{\rm A} \sim 1560\,\rm nm$ and $\lambda^*_{\rm B} \sim 1521\,\rm nm$. Note that these spectral contributions show different group velocities $$v_{\rm A,B} = \frac{c}{n_0-\dfrac{d n}{d \lambda}\cdot \lambda^*_{A,B}}.$$ As a consequence, the wavepackets in Alice’s and Bob’s interferometers show non-identical path travel time differences between short and long arms. To match these time differences, the following equation needs to be fulfilled $$\dfrac{\Delta L_{\rm A}}{v_{\rm A}} = \dfrac{\Delta L_{\rm B}}{v_{\rm B}}. \label{GroupMatchedInterferometers}$$ Using Sellmeier equations [@leviton_temperature-dependent_2008] to infer $n_0$ and $\frac{dn}{d \lambda}$, and $\Delta L_{\rm B} = 6.7\rm\,cm$, we fulfill equation \[GroupMatchedInterferometers\] by setting the path length difference of Alice’s interferometer to $\Delta L_{\rm A} = \left(\Delta L_{\rm B} - 12\rm\,\mu m \right)$.
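The required arm-length correction can be estimated without full Sellmeier data. The sketch below *assumes* a typical dispersion parameter $D$ and group index $n_g$ to approximate the group-index difference between the two bands as $\Delta n_g \approx c\,D\,(\lambda^*_{\rm A}-\lambda^*_{\rm B})$; it reproduces the order of magnitude of the $\sim 12\rm\,\mu m$ correction.

```python
# Estimate of the path-length correction from equation
# "GroupMatchedInterferometers". All fiber parameters are assumed
# typical SMF values, not taken from the paper.

c = 2.998e8                 # m/s
D = 17e-6                   # s/m^2 (= 17 ps/(nm km)), assumed
n_g = 1.468                 # group index near 1540 nm, assumed
dlam = 1560e-9 - 1521e-9    # separation of the two central wavelengths (m)

dn_g = c * D * dlam         # group-index difference between the two bands
dL_B = 6.7e-2               # Bob's interferometer imbalance (m)
delta = dL_B * dn_g / n_g   # required shortening of Alice's arm
# delta comes out at roughly 10 micrometres, the same order as the
# ~12 um correction applied in the experiment.
```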
![**Two-photon phase shift for non-identical analysis interferometers.** The path length difference of Alice’s interferometer has been reduced by $\sim 12\rm\,\mu m$ compared to Bob’s. Now, the region in which the QBER is below 0.5% covers 46 pairs of standard telecommunication channels, meaning that the full telecom C-band, as well as some parts of the adjacent S- and L-bands, can be exploited simultaneously for entanglement analysis.\[ResultsUnequal\]](Unequilibrated_Interferometersb.eps){width="7cm"}
The associated experimental results are shown in \[ResultsUnequal\]. We have essentially shifted the curve in \[ResultsEqual\] by about $20\,\rm nm$ to $\lambda^*_{\rm A} \sim 1560\,\rm nm$ ($\lambda^*_{\rm B} \sim 1521\,\rm nm$). This way, we are now able to keep the two-photon phase within $\phi = \pm 0.14\,\rm rad$ for the full emission bandwidth of our source ($1541\,{\rm nm} < \lambda_{\rm A} < 1579\,{\rm nm}$ and $1503\,{\rm nm} < \lambda_{\rm B} < 1539\,{\rm nm}$), allowing us to analyze entanglement with a QBER below 0.5% in 46 pairs of standard 100GHz telecommunication channels in a single shot. We note that the improved bandwidth fully covers the most commonly used telecom C-band, as well as some parts of the adjacent S- and L-bands.
We envision the following configuration for realizing future DWDM QKD links. The tunable bandpass filters will be removed, and Alice will also be supplied with an interferometer having two outputs (as Bob’s in our current configuration, see \[Setup\]). Wavelength division multiplexing is performed after the interferometers using standard telecom $N$-channel DWDMs. Guaranteeing the security of DWDM QKD protocols requires using a second analysis basis (see equation \[SecondBasis\]). This can be implemented by providing both Alice and Bob with a second interferometer for which the path length differences are set to $\Delta L'_{\rm A,B} = \Delta L_{\rm A,B} + \frac{\lambda_{\rm p}}{2\cdot n_0}$. In this case, group velocity dispersion causes a slight additional error in $\phi'$ over the full bandwidth of the source. From equation \[Phases\] we calculate that it will be below $\frac{2\pi}{300}$, which is essentially negligible for choosing two complementary bases. In a long-distance scenario, group velocity dispersion in a standard fiber distribution link causes a broadening of the coincidence peaks such that the contributions $s_{\rm A} - s_{\rm B}$ and $l_{\rm A} - l_{\rm B}$ cannot be properly post-selected. However, this problem can be conveniently overcome using dispersion compensation [@Fasel_30km_2004; @Aktas_source_2016] and/or dispersion shifted fibers [@Marcikic_Tele_2003].
Finally, we stress that our approach can also be applied to optimize DWDM QKD with polarization entangled photon pairs where wavelength dependent birefringence in the half-wave plates is an issue [@Lim10]. Analogously to the strategy for energy-time entangled photon pairs, this problem could be partially compensated by either using thicker/thinner half-wave plates (optimized for longer/shorter wavelengths), or by simply tilting the existing ones.
Conclusion
==========
In conclusion, we analysed energy-time entanglement of a broadband photon pair source using fixed unbalanced fiber interferometers, in the perspective of DWDM QKD. In the standard configuration, with identical analysis interferometers at Alice’s and Bob’s sites, group velocity dispersion limits the analysis bandwidth to 16 standard 100GHz channel pairs at a QBER threshold of 0.5%.
Without replacing any components in the experimental setup, solely by properly unbalancing one of the two interferometers, we improved the analysis bandwidth to 46 channel pairs, covering not only the commonly used telecom C-band, but also some of the adjacent S- and L-bands.
We stress that the number of channel pairs could be increased to 368 by using the 12.5GHz ultra-dense DWDM grid, which underlines the tremendous potential for high bit rate entanglement-based DWDM QKD.
Therefore, we believe that our work will have a great impact on the optimal exploitation of current and future high bit rate DWDM QKD systems, especially in combination with other multiplexing techniques.
Acknowledgments
===============
The authors acknowledge financial support from the Foundation Simone & Cino Del Duca, the European Commission for the FP7-ITN PICQUE project (grant agreement No 608062), l’Agence Nationale de la Recherche (ANR) for the CONNEQT and SPOCQ projects (grants ANR-EMMA-002-01 and ANR-14- CE32-0019, respectively), and the iXCore Research Foundation.
References
==========

- doi:10.1103/RevModPhys.74.145
- doi:10.1103/RevModPhys.81.1301
- doi:10.1103/PhysRevLett.113.140501
- doi:10.1364/OE.21.024550
- http://stacks.iop.org/1367-2630/17/i=2/a=022002
- doi:10.1038/nphoton.2014.327
- doi:10.1364/OL.38.000034
- doi:10.1126/science.aad8532
- doi:10.1109/JQE.2012.2187327
- doi:10.1103/RevModPhys.86.419
- doi:10.1103/RevModPhys.83.33
- doi:10.1088/1367-2630/15/9/093002
- doi:10.1103/PhysRevLett.98.190503
- doi:10.1364/OE.21.023241
- doi:10.1364/OE.16.016052
- doi:10.1103/PhysRevA.59.4150
- doi:10.1103/PhysRevA.87.020301
- doi:10.1103/PhysRevLett.62.2205
- doi:10.1103/PhysRevA.47.R2472
- doi:10.1103/PhysRevLett.93.180502
- doi:10.1364/OE.15.007853
- doi:10.1016/j.optcom.2014.03.056
- doi:10.1103/PhysRevLett.67.661
- arXiv:0805.0091
- doi:10.1103/PhysRevA.88.020103
- doi:10.1140/epjd/e2004-00080-8
---
abstract: 'In this paper, using a generalized k-fractional integral operator (in terms of the Gauss hypergeometric function), we establish new results on generalized k-fractional integral inequalities by considering the extended Chebyshev functional in the case of synchronous functions, together with some other inequalities.'
---
On Chebyshev type Inequalities using Generalized k-Fractional Integral Operator
Vaijanath L. Chinchane
Department of Mathematics,\
Deogiri Institute of Engineering and Management\
Studies Aurangabad-431005, INDIA\
chinchane85@gmail.com
**Keywords :** Chebyshev inequality, generalized k-fractional integral.\
**Mathematics Subject Classification :** 26D10, 26A33.
Introduction
=============
In recent years, many authors have worked on fractional integral inequalities using different fractional integral operators such as the Riemann-Liouville, Hadamard, Saigo and Erdelyi-Kober ones, see [@A; @BA; @BE; @C1; @C2; @C3; @C4; @C5; @D1; @KA; @P1; @YI]. In [@KI2], S. Kilinc and H. Yildirim established new generalized k-fractional integral inequalities involving the Gauss hypergeometric function related to the Chebyshev functional. In [@C2; @D2], the authors gave the following fractional integral inequalities for the extended Chebyshev functional, using the Hadamard and Riemann-Liouville fractional integrals.
Let $f$ and $g$ be two synchronous functions on $[0,\infty[$, and $r,p,q:[0,\infty)\rightarrow [0,\infty)$. Then for all $t>0$, $\alpha>0$, we have $$\begin{split}
&2_{H}D_{1,t}^{-\alpha}r(t) \left[_{H}D_{1,t}^{-\alpha}p(t) _{H}D_{1,t}^{-\alpha}(qfg)(t)+
_{H}D_{1,t}^{-\alpha}q(t)_{H}D_{1,t}^{-\alpha}(pfg)(t)\right]+\\
&2 _{H}D_{1,t}^{-\alpha}p(t)_{H}D_{1,t}^{-\alpha}q(t)_{H}D_{1,t}^{-\alpha}(rfg)(t)\geq\\
&_{H}D_{1,t}^{-\alpha}r(t) \left[_{H}D_{1,t}^{-\alpha}(pf)(t)_{H}D_{1,t}^{-\alpha}(qg)(t)+_{H}D_{1,t}^{-\alpha}(qf)(t)_{H}D_{1,t}^{-\alpha}(pg)(t)\right]+\\
&_{H}D_{1,t}^{-\alpha}p(t)\left[_{H}D_{1,t}^{-\alpha}(rf)(t)_{H}D_{1,t}^{-\alpha}(qg)(t)+_{H}D_{1,t}^{-\alpha}(qf)(t)_{H}D_{1,t}^{-\alpha}(rg)(t)\right]+\\
&_{H}D_{1,t}^{-\alpha}q(t)\left[_{H}D_{1,t}^{-\alpha}(rf)(t)_{H}D_{1,t}^{-\alpha}(pg)(t)+_{H}D_{1,t}^{-\alpha}(pf)(t)_{H}D_{1,t}^{-\alpha}(rg)(t)\right]
\end{split}$$
Let $f$ and $g$ be two synchronous functions on $[0,\infty[$, and $r,p,q:[0,\infty)\rightarrow [0,\infty)$. Then for all $t>0$, $\alpha>0$, $\beta>0$, we have: $$\begin{split}
&_{H}D_{1,t}^{-\alpha}r(t)\times\\
& \left[_{H}D_{1,t}^{-\alpha}q(t) _{H}D_{1,t}^{-\beta}(pfg)(t)+2
_{H}D_{1,t}^{-\alpha}p(t)_{H}D_{1,t}^{-\beta}(qfg)(t)+_{H}D_{1,t}^{-\beta}q(t)_{H}D_{1,t}^{-\alpha}(pfg)(t)\right]\\
&+\left[_{H}D_{1,t}^{-\alpha}p(t)_{H}D_{1,t}^{-\beta}q(t)+_{H}D_{1,t}^{-\beta}p(t)_{H}D_{1,t}^{-\alpha}q(t)\right]_{H}D_{1,t}^{-\alpha}(rfg)(t)\geq\\
&_{H}D_{1,t}^{-\alpha}r(t) \left[_{H}D_{1,t}^{-\alpha}(pf)(t)_{H}D_{1,t}^{-\beta}(qg)(t)+_{H}D_{1,t}^{-\beta}(qf)(t)_{H}D_{1,t}^{-\alpha}(pg)(t)\right]+\\
&_{H}D_{1,t}^{-\alpha}p(t)\left[_{H}D_{1,t}^{-\alpha}(rf)(t)_{H}D_{1,t}^{-\beta}(qg)(t)+_{H}D_{1,t}^{-\beta}(qf)(t)_{H}D_{1,t}^{-\alpha}(rg)(t)\right]+\\
&_{H}D_{1,t}^{-\alpha}q(t)\left[_{H}D_{1,t}^{-\alpha}(rf)(t)_{H}D_{1,t}^{-\beta}(pg)(t)+_{H}D_{1,t}^{-\beta}(pf)(t)_{H}D_{1,t}^{-\alpha}(rg)(t)\right].
\end{split}$$
The main objective of this paper is to establish some Chebyshev type inequalities and some other inequalities using the generalized k-fractional integral operator. The paper is organized as follows. In Section 2, we give the basic definitions related to the generalized k-fractional integral operator. In Section 3, we obtain Chebyshev type inequalities using the generalized k-fractional integral. In Section 4, we prove some inequalities for positive continuous functions.
Preliminaries
==============
In this section, we present some definitions which will be used in the subsequent discussion.
Two functions $f$ and $g$ are said to be synchronous (asynchronous) on $[a,b]$ if $$\left((f(u)-f(v))(g(u)-g(v))\right)\geq (\leq)0,$$ for all $ u, v \in [a,b]$.
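A brute-force numerical illustration of this definition (not part of any proof; the example functions are ours): two functions increasing in the same sense are synchronous, while an increasing and a decreasing function are not.

```python
# Grid check of the synchronicity condition from the definition above.
def synchronous(f, g, grid):
    """True if (f(u)-f(v))*(g(u)-g(v)) >= 0 for all grid points u, v."""
    return all((f(u) - f(v)) * (g(u) - g(v)) >= 0 for u in grid for v in grid)

grid = [i / 10.0 for i in range(21)]  # points in [0, 2]

# u -> u^2 and u -> u^3 are both increasing on [0, 2], hence synchronous:
assert synchronous(lambda u: u**2, lambda u: u**3, grid)
# u -> u and u -> 1/(1+u) vary in opposite senses, hence not synchronous:
assert not synchronous(lambda u: u, lambda u: 1.0 / (1.0 + u), grid)
```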
[@KI2; @YI] A function $f(x)$, $x>0$, is said to be in the space $L_{p,k}[0,\infty)$ if $$L_{p,k}[0,\infty)=\left\{f: \|f\|_{L_{p,k}[0,\infty)}=\left(\int_{0}^{\infty}|f(x)|^{p}x^{k}dx\right)^{\frac{1}{p}} < \infty, \ 1 \leq p < \infty, \ k \geq 0\right\}.$$
[@KI2; @SAO; @YI] Let $f \in L_{1,k}[0,\infty)$. The generalized Riemann-Liouville fractional integral $I^{\alpha,k}f(x)$ of order $\alpha>0$, $k \geq 0$, is defined by $$I^{\alpha,k}f(x)= \frac{(k+1)^{1-\alpha}}{\Gamma (\alpha)}\int_{0}^{x}(x^{k+1}-t^{k+1})^{\alpha-1}t^{k} f(t)dt.$$
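This operator can be evaluated numerically. The following sketch (an illustration only, with an assumed midpoint-rule discretization) removes the endpoint singularity via the substitution $u=(x^{k+1}-t^{k+1})^{\alpha}$ and reproduces the closed form $I^{\alpha,k}1(x)=(k+1)^{-\alpha}x^{(k+1)\alpha}/\Gamma(\alpha+1)$:

```python
import math

def gen_rl_integral(f, x, alpha, k, n=20000):
    """Numerically evaluate I^{alpha,k} f(x) from the definition above.

    The substitution u = (x^{k+1} - t^{k+1})^alpha absorbs the integrable
    singularity at t = x, leaving a smooth integrand for the midpoint rule:
      I = (k+1)^{-alpha} / (alpha * Gamma(alpha)) *
          int_0^{x^{(k+1)alpha}} f((x^{k+1} - u^{1/alpha})^{1/(k+1)}) du
    """
    X = x ** (k + 1)
    U = X ** alpha
    h = U / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        t = (X - u ** (1.0 / alpha)) ** (1.0 / (k + 1))
        total += f(t)
    return (k + 1) ** (-alpha) / (alpha * math.gamma(alpha)) * total * h
```

For example, with $f\equiv 1$, $k=0$ and $\alpha=1/2$ this returns $1/\Gamma(3/2)\approx 1.128$ at $x=1$, matching the closed form.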
[@KI2; @YI] Let $k\geq0$, $\alpha>0$, $\mu >-1$ and $\beta, \eta \in R$. The generalized k-fractional integral $I^{\alpha,\beta,\eta,\mu}_{t,k}$ (in terms of the Gauss hypergeometric function) of order $\alpha$ for a real-valued continuous function $f(t)$ is defined by $$\begin{split}
I^{\alpha,\beta,\eta,\mu}_{t,k}[f(t)]&
= \frac{(k+1)^{\mu+\beta+1}t^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{t}\tau^{(k+1)\mu}(t^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{t})^{k+1})\tau^{k} f(\tau)d\tau.
\end{split}$$
where the function $_{2}F_{1}(-)$ on the right-hand side of (2.4) is the Gaussian hypergeometric function defined by $$_{2}F_{1} (a, b; c; t)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}} \frac{t^{n}}{n!},$$ and $(a)_{n}$ is the Pochhammer symbol\
$$(a)_{n}=a(a+1)...(a+n-1)=\frac{\Gamma(a+n)}{\Gamma(a)}, \,\,\,(a)_{0}=1.$$ Consider the function $$\begin{split}
F(t,\tau)&= \frac{(k+1)^{\mu+\beta+1}t^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\tau^{(k+1)\mu}\\
&(t^{k+1}-\tau^{k+1})^{\alpha-1} \times _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{t})^{k+1})\\
&=\sum_{n=0}^{\infty}\frac{(\alpha+\beta+\mu)_{n}(-\eta)_{n}}{\Gamma(\alpha+n)n!}t^{(k+1)(-\alpha-\beta-2\mu-n)}\tau^{(k+1)\mu}(t^{k+1}-\tau^{k+1})^{\alpha-1+n}(k+1)^{\mu+\beta+1}\\
&=\frac{\tau^{(k+1)\mu}(t^{k+1}-\tau^{k+1})^{\alpha-1}(k+1)^{\mu+\beta+1}}{t^{(k+1)(\alpha+\beta+2\mu)}\Gamma(\alpha)}+\\
&\frac{\tau^{(k+1)\mu}(t^{k+1}-\tau^{k+1})^{\alpha}(k+1)^{\mu+\beta+1}(\alpha+\beta+\mu)(-\eta)}{t^{(k+1)(\alpha+\beta+2\mu+1)}\Gamma(\alpha+1)}+\\
&\frac{\tau^{(k+1)\mu}(t^{k+1}-\tau^{k+1})^{\alpha+1}(k+1)^{\mu+\beta+1}(\alpha+\beta+\mu)(\alpha+\beta+\mu+1)(-\eta)(-\eta+1)}{t^{(k+1)(\alpha+\beta+2\mu+2)}\Gamma(\alpha+2)2!}+...
\end{split}$$ It is clear that $F(t,\tau)$ is positive for all $\tau \in (0, t)$, $t>0$, since each term in (2.6) is positive.
Fractional Integral Inequalities for Extended Chebyshev Functional
==================================================================
In this section, we establish some Chebyshev type fractional integral inequalities by using the generalized k-fractional integral (in terms of the Gauss hypergeometric function) operator. The following lemma is used for our main result.
Let $f$ and $g$ be two synchronous functions on $[0,\infty[,$ and $x,y:[0,\infty)\rightarrow$ $[0,\infty)$ be two nonnegative functions. Then for all $k \geq 0,$ $t>0$, $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0,$ we have, $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}x(t) I^{\alpha,\beta,\eta,\mu}_{t,k}(yfg)(t)+ I^{\alpha,\beta,\eta,\mu}_{t,k}y(t) I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)\geq \\
&I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(yg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(yf)(t) I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t).
\end{split}$$
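The inequality can be sanity-checked numerically in a degenerate special case (illustration only, not part of the proof): formally taking $k=\beta=\mu=\eta=0$, so that $_{2}F_{1}(\alpha,0;\alpha;z)=1$, the operator reduces to $I f(t)=t^{-\alpha}\Gamma(\alpha)^{-1}\int_0^t (t-\tau)^{\alpha-1}f(\tau)\,d\tau$, and a crude quadrature confirms (3.1) for sample synchronous functions.

```python
import math

# Reduced operator for k = beta = mu = eta = 0 (illustrative boundary case):
#   I f(t) = t^{-alpha}/Gamma(alpha) * int_0^t (t-tau)^{alpha-1} f(tau) dtau
def op(f, t=1.0, alpha=0.5, n=20000):
    # substitution u = (t - tau)^alpha removes the endpoint singularity
    U = t ** alpha
    h = U / n
    s = sum(f(t - ((i + 0.5) * h) ** (1.0 / alpha)) for i in range(n))
    return t ** (-alpha) / (alpha * math.gamma(alpha)) * s * h

f = lambda s: s          # increasing
g = lambda s: s * s      # increasing, so f and g are synchronous on [0, 1]
x = lambda s: 1.0        # nonnegative weight
y = lambda s: s          # nonnegative weight

lhs = op(lambda s: x(s)) * op(lambda s: y(s) * f(s) * g(s)) \
    + op(lambda s: y(s)) * op(lambda s: x(s) * f(s) * g(s))
rhs = op(lambda s: x(s) * f(s)) * op(lambda s: y(s) * g(s)) \
    + op(lambda s: y(s) * f(s)) * op(lambda s: x(s) * g(s))
assert lhs >= rhs   # inequality (3.1) in this special case
```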
**Proof**: Since $f$ and $g$ are synchronous on $[0,\infty[$, for all $\tau \geq 0$, $\rho\geq 0$ we have $$(f(\tau)-f(\rho)) (g(\tau)-g(\rho))\geq 0.$$ From (3.2), $$f(\tau)g(\tau)+f(\rho)g(\rho)\geq f(\tau)g(\rho)+f(\rho)g(\tau).$$ Now, multiplying both sides of (3.3) by $ \tau^{k}x(\tau)F(t,\tau)$, $\tau \in (0,t)$, $t>0$, and integrating the resulting identity with respect to $\tau$ from $0$ to $t$, we obtain by definition (2.4) $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)+f(\rho)g(\rho) I^{\alpha,\beta,\eta,\mu}_{t,k}(x)(t)\\
&\geq g(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)+f(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t).
\end{split}$$ Now, multiplying both sides of (3.4) by $ \rho^{k}y(\rho)F(t,\rho)$, $\rho \in (0,t)$, $t>0$, where $F(t,\rho)$ is defined in view of (2.6), and integrating the resulting identity with respect to $\rho$ from $0$ to $t$, we obtain by definition (2.4) $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}y(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)+ I^{\alpha,\beta,\eta,\mu}_{t,k}(yfg)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(x)(t)\\
&\geq I^{\alpha,\beta,\eta,\mu}_{t,k}(yg)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(yf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t).
\end{split}$$ This completes the proof of (3.1).\
Now, we give our main result.
Let $f$ and $g$ be two synchronous functions on $[0,\infty[$, and $r,p,q:[0,\infty)\rightarrow [0,\infty)$. Then for all $k \geq 0,$ $t>0$, $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0,$ we have, $$\begin{split}
&2I^{\alpha,\beta,\eta,\mu}_{t,k}r(t) \left[I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(qfg)(t)+
I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pfg)(t)\right]+\\
&2 I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rfg)(t)\geq\\
&I^{\alpha,\beta,\eta,\mu}_{t,k}r(t) \left[I^{\alpha,\beta,\eta,\mu}_{t,k}(pf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(qg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pg)(t)\right]+\\
&I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(qg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t)\right]+\\
&I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(pf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t)\right]
\end{split}$$
**Proof**: To prove the above theorem, we put $x=p, \ y=q$ in Lemma 3.1, obtaining $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(qfg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pfg)(t)\geq \\ &I^{\alpha,\beta,\eta,\mu}_{t,k}(pf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(qg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pg)(t).
\end{split}$$ Now, multiplying both sides of (3.7) by $I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)$, we have $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}p(t) I^{\alpha,\beta,\eta,\mu}_{t,k}(qfg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pfg)(t)\right]\geq \\ &I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(pf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(qg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pg)(t)\right],
\end{split}$$ Next, putting $x=r, y=q$ in Lemma 3.1, we get $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}r(t) I^{\alpha,\beta,\eta,\mu}_{t,k}(qfg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rfg)(t)\geq \\
&I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(qg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t),
\end{split}$$ and multiplying both sides of (3.9) by $I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)$, we have $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}r(t) I^{\alpha,\beta,\eta,\mu}_{t,k}(qfg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rfg)(t) \right]\geq\\
&I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(qg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t)\right].
\end{split}$$ With the same arguments as before, we can write $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pfg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rfg)(t)\right]\geq\\
&I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pg)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(pf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t)\right].
\end{split}$$ Adding the inequalities (3.8), (3.10) and (3.11), we get the required inequality (3.6).\
Here, we give a lemma which is useful for proving our second main result.
Let $f$ and $g$ be two synchronous functions on $[0,\infty[$, and $x,y:[0,\infty[\rightarrow$ $[0,\infty[$. Then for all $k \geq 0,$ $t>0$, $\alpha > \max\{0,-\beta-\mu\}$, $\gamma> \max\{0,-\delta-\upsilon\}$, $\beta,\delta < 1,$ $\upsilon,\mu >-1,$ $\beta -1< \eta <0,$ $\delta-1<\zeta <0,$ we have, $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}x(t) I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yfg)(t)+ I^{\gamma,\delta,\zeta,\upsilon}_{t,k}y(t) I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)\geq \\
&I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t) I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yf)(t) I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t).
\end{split}$$
**Proof**: Now, multiplying both sides of (3.4) by $$\begin{split}
&\frac{(k+1)^{\upsilon+\delta+1}t^{(k+1)(-\delta-\gamma-2\upsilon)}}{\Gamma (\gamma)}\rho^{(k+1)\upsilon}y(\rho)\\
&(t^{k+1}-\rho^{k+1})^{\gamma-1} \times _{2}F_{1} (\gamma+ \delta+\upsilon, -\zeta; \gamma; 1-(\frac{\rho}{t})^{k+1})\rho^{k}
\end{split}$$ which remains positive in view of the condition stated in (3.12), $\rho \in (0,t)$, $t>0$, we obtain $$\begin{split}
&\frac{(k+1)^{\upsilon+\delta+1}t^{(k+1)(-\delta-\gamma-2\upsilon)}}{\Gamma (\gamma)}\rho^{(k+1)\upsilon}y(\rho)\\
&(t^{k+1}-\rho^{k+1})^{\gamma-1} \times _{2}F_{1} (\gamma+ \delta+\upsilon, -\zeta; \gamma; 1-(\frac{\rho}{t})^{k+1})\rho^{k}
I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)\\
&+\frac{(k+1)^{\upsilon+\delta+1}t^{(k+1)(-\delta-\gamma-2\upsilon)}}{\Gamma (\gamma)}\rho^{(k+1)\upsilon}y(\rho)f(\rho)g(\rho)\\
&(t^{k+1}-\rho^{k+1})^{\gamma-1} \times _{2}F_{1} (\gamma+ \delta+\upsilon, -\zeta; \gamma; 1-(\frac{\rho}{t})^{k+1})\rho^{k}
I^{\alpha,\beta,\eta,\mu}_{t,k}x(t)\geq \\
&\frac{(k+1)^{\upsilon+\delta+1}t^{(k+1)(-\delta-\gamma-2\upsilon)}}{\Gamma (\gamma)}\rho^{(k+1)\upsilon}y(\rho)g(\rho)\\
&(t^{k+1}-\rho^{k+1})^{\gamma-1} \times _{2}F_{1} (\gamma+ \delta+\upsilon, -\zeta; \gamma; 1-(\frac{\rho}{t})^{k+1})\rho^{k}
I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)\\
&+\frac{(k+1)^{\upsilon+\delta+1}t^{(k+1)(-\delta-\gamma-2\upsilon)}}{\Gamma (\gamma)}\rho^{(k+1)\upsilon}y(\rho)f(\rho)\\
&(t^{k+1}-\rho^{k+1})^{\gamma-1} \times _{2}F_{1} (\gamma+ \delta+\upsilon, -\zeta; \gamma; 1-(\frac{\rho}{t})^{k+1})\rho^{k}
I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t),
\end{split}$$ then integrating (3.14) over (0,t), we obtain $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}y(t)+
I^{\alpha,\beta,\eta,\mu}_{t,k}(x)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yfg)(t)\\
&\geq I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}yg(t)
+I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}yf(t),
\end{split}$$ this ends the proof of inequality (3.12).
Let $f$ and $g$ be two synchronous functions on $[0,\infty[$, and $r,p,q:[0,\infty)\rightarrow [0,\infty)$. Then for all $k \geq 0,$ $t>0$, $\alpha > \max\{0,-\beta-\mu\}$, $\gamma> \max\{0,-\delta-\upsilon\}$, $\beta,\delta < 1,$ $\upsilon,\mu >-1,$ $\beta -1< \eta <0,$ $\delta-1<\zeta <0,$ we have: $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)\times\\
& \left[I^{\alpha,\beta,\eta,\mu}_{t,k}q(t) I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(pfg)(t)+2
I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qfg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pfg)(t)\right]\\
&+\left[I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}q(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}p(t)I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)\right]I^{\alpha,\beta,\eta,\mu}_{t,k}(rfg)(t)\geq\\
&I^{\alpha,\beta,\eta,\mu}_{t,k}r(t) \left[I^{\alpha,\beta,\eta,\mu}_{t,k}(pf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pg)(t)\right]+\\
&I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t)\right]+\\
&I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(pg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(pf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t)\right].
\end{split}$$
**Proof**: To prove the above theorem, we put $x=p, \ y=q$ in Lemma 3.3, obtaining $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}p(t) I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qfg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pfg)(t)\geq \\ &I^{\alpha,\beta,\eta,\mu}_{t,k}(pf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pg)(t).
\end{split}$$ Now, multiplying both sides of (3.17) by $I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)$, we have $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}p(t) I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qfg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pfg)(t)\right]\geq \\ &I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(pf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(pg)(t)\right],
\end{split}$$ Next, putting $x=r, \ y=q$ in Lemma 3.3, we get $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}r(t) I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qfg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rfg)(t)\geq \\
&I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t),
\end{split}$$ and multiplying both sides of (3.19) by $I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)$, we have $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}r(t) I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qfg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}q(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rfg)(t)\right]\geq \\
&I^{\alpha,\beta,\eta,\mu}_{t,k}p(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(qf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t)\right].
\end{split}$$ With the same argument as before, we obtain $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}r(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(pfg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}p(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rfg)(t)\right]\geq \\
&I^{\alpha,\beta,\eta,\mu}_{t,k}q(t)\left[I^{\alpha,\beta,\eta,\mu}_{t,k}(rf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(pg)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(pf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(rg)(t)\right].
\end{split}$$ Adding the inequalities (3.18), (3.20) and (3.21), we obtain the inequality (3.16).
If $f, g, r, p$ and $q$ satisfy one of the following conditions:

1. The functions $f$ and $g$ are asynchronous on $[0,\infty)$.

2. The functions $r, p, q$ are negative on $[0,\infty)$.

3. Two of the functions $r, p, q$ are positive and the third is negative on $[0,\infty)$.

then the inequalities (3.6) and (3.16) are reversed.
Other fractional integral inequalities
======================================
In this section, we prove some fractional integral inequalities for positive and continuous functions, as follows.
Let $f$, $g$ and $h$ be three positive and continuous functions on $[0,\infty)$ such that $$(f(\tau)-f(\rho))(g(\tau)-g(\rho))(h(\tau)+h(\rho))\geq 0; \ \tau, \rho \in(0,t),\ \ t>0,$$ and let $x$ be a nonnegative function on $[0,\infty)$. Then for all $k \geq 0,$ $t>0$, $\alpha > \max\{0,-\beta-\mu\}$, $\gamma> \max\{0,-\delta-\upsilon\}$, $\beta,\delta < 1,$ $\upsilon,\mu >-1,$ $\beta -1< \eta <0,$ $\delta-1<\zeta <0,$ we have $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}(x)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xfgh)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(xh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xfg)(t)\\
&+I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xh)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(xfgh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(x)(t)\\
& \geq I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xgh)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xfh)(t)\\
&+I^{\alpha,\beta,\eta,\mu}_{t,k}(xgh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xf)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(xfh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xg)(t).
\end{split}$$
**Proof**: Since $f$, $g$ and $h$ are three positive and continuous functions on $[0,\infty)$ satisfying (4.1), we can write $$\begin{split}
&f(\tau)g(\tau)h(\tau)+f(\rho)g(\rho)h(\rho)+f(\tau)g(\tau)h(\rho)+f(\rho)g(\rho)h(\tau)\\
&\geq f(\tau)g(\rho)h(\tau)+f(\tau)g(\rho)h(\rho)+f(\rho)g(\tau)h(\tau)+f(\rho)g(\tau)h(\rho).
\end{split}$$ Now, multiplying both sides of (4.3) by $ \tau^{k}x(\tau)F(t,\tau)$, $\tau \in (0,t)$, $t>0$, and integrating the resulting identity with respect to $\tau$ from $0$ to $t$, we obtain by definition (2.4) $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}(xfgh)(t)+f(\rho)g(\rho)h(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}x(t)+h(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)\\
&+f(\rho)g(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xh)(t)\geq g(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xfh)(t)+g(\rho)h(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)\\
&+f(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xgh)(t)+f(\rho)h(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t).
\end{split}$$
Now, multiplying both sides of (4.4) by $$\begin{split}
&\frac{(k+1)^{\upsilon+\delta+1}t^{(k+1)(-\delta-\gamma-2\upsilon)}}{\Gamma (\gamma)}\rho^{(k+1)\upsilon}x(\rho)\\
&(t^{k+1}-\rho^{k+1})^{\gamma-1} \times _{2}F_{1} (\gamma+ \delta+\upsilon, -\zeta; \gamma; 1-(\frac{\rho}{t})^{k+1})\rho^{k}
\end{split}$$ which remains positive in view of the condition stated in (4.2), $\rho \in (0,t)$, $t>0$, and integrating the resulting identity with respect to $\rho$ from $0$ to $t$, we obtain $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}(xfgh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}x(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xfgh)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}x(t)\\
&+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xh)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xgf)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xfg)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xh)(t)\\
&\geq I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xg)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xfh)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xgh)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)\\
&+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xgh)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(xfh)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t).\end{split}$$ which completes the proof of inequality (4.2).\
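As a numerical sanity check, the inequality can be evaluated for a concrete special case. The sketch below implements the operator by midpoint quadrature, keeping only the kernel $\tau^{(k+1)\mu+k}(t^{k+1}-\tau^{k+1})^{\alpha-1}\,{}_2F_1(\alpha+\beta+\mu,-\eta;\alpha;1-(\tau/t)^{k+1})$ and dropping the positive normalisation constant, which cancels between the two sides of the inequality. The parameter values and test functions are illustrative choices satisfying the stated restrictions; we take $x\equiv 1$, with $f$, $g$ increasing and $h>0$, so that condition (4.1) holds.

```python
import math

def hyp2f1(a, b, c, z, tol=1e-12, max_terms=100000):
    # Gauss series  sum_n (a)_n (b)_n / ((c)_n n!) z^n,  valid for |z| < 1
    term, total, n = 1.0, 1.0, 0
    while n < max_terms:
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
        total += term
        if abs(term) < tol * abs(total):
            break
        n += 1
    return total

def weights(t, alpha, beta, eta, mu, k, nodes=200):
    # midpoint-rule weights for I^{alpha,beta,eta,mu}_{t,k}, dropping the
    # positive normalisation constant (it cancels between the two sides)
    dt = t / nodes
    w = []
    for i in range(nodes):
        tau = (i + 0.5) * dt
        z = 1.0 - (tau / t) ** (k + 1)
        w.append(dt * tau ** ((k + 1) * mu + k)
                 * (t ** (k + 1) - tau ** (k + 1)) ** (alpha - 1)
                 * hyp2f1(alpha + beta + mu, -eta, alpha, z))
    return w

def apply_op(w, func, t):
    dt = t / len(w)
    return sum(wi * func((i + 0.5) * dt) for i, wi in enumerate(w))

# illustrative parameters obeying the stated restrictions
t, k = 1.0, 0.0
wa = weights(t, alpha=0.8, beta=0.5, eta=-0.25, mu=0.5, k=k)  # I^{alpha,...}
wg = weights(t, alpha=1.2, beta=0.5, eta=-0.25, mu=0.5, k=k)  # I^{gamma,...}

# f, g increasing and h > 0, so condition (4.1) holds; x is taken to be 1
f = lambda s: s
g = lambda s: s * s
h = lambda s: 1.0 + s
x = lambda s: 1.0

def Ia(fn): return apply_op(wa, fn, t)
def Ig(fn): return apply_op(wg, fn, t)

lhs = (Ia(x) * Ig(lambda s: f(s) * g(s) * h(s))
       + Ia(h) * Ig(lambda s: f(s) * g(s))
       + Ia(lambda s: f(s) * g(s)) * Ig(h)
       + Ia(lambda s: f(s) * g(s) * h(s)) * Ig(x))
rhs = (Ia(f) * Ig(lambda s: g(s) * h(s))
       + Ia(g) * Ig(lambda s: f(s) * h(s))
       + Ia(lambda s: g(s) * h(s)) * Ig(f)
       + Ia(lambda s: f(s) * h(s)) * Ig(g))
```

Since all kernel factors are positive under the stated parameter restrictions, every quadrature weight is positive, and the comparison of `lhs` and `rhs` is unaffected by the dropped constant.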
We now give another inequality.
Let $f$, $g$ and $h$ be three positive and continuous functions on $[0,\infty)$ satisfying condition (4.1), and let $x$ and $y$ be two nonnegative functions on $[0,\infty)$. Then for all $k \geq 0,$ $t>0$, $\alpha > \max\{0,-\beta-\mu\}$, $\gamma> \max\{0,-\delta-\upsilon\}$, $\beta,\delta < 1,$ $\upsilon,\mu >-1,$ $\beta -1< \eta <0,$ $\delta-1<\zeta <0,$ we have $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}(x)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yfgh)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(xh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yfg)(t)\\
&+I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yh)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(xfgh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}y(t)\\
& \geq I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(ygh)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yfh)(t)\\
&+I^{\alpha,\beta,\eta,\mu}_{t,k}(xgh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yf)(t)+I^{\alpha,\beta,\eta,\mu}_{t,k}(xfh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yg)(t).
\end{split}$$
**Proof**: Multiplying both sides of (4.3) by $ \tau^{k}x(\tau)F(t,\tau)$, $\tau \in (0,t)$, $t>0$, where $F(t,\tau)$ is defined by (2.6), and integrating the resulting identity with respect to $\tau$ from $0$ to $t$, we obtain by definition (2.4) $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}(xfgh)(t)+f(\rho)g(\rho)h(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}x(t)+h(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xfg)(t)\\
&+f(\rho)g(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xh)(t)\geq g(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xfh)(t)+g(\rho)h(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)\\
&+f(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xgh)(t)+f(\rho)h(\rho)I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t).
\end{split}$$
Now, multiplying both sides of (4.8) by $$\begin{split}
&\frac{(k+1)^{\upsilon+\delta+1}t^{(k+1)(-\delta-\gamma-2\upsilon)}}{\Gamma (\gamma)}\rho^{(k+1)\upsilon}y(\rho)\\
&(t^{k+1}-\rho^{k+1})^{\gamma-1} \times _{2}F_{1} (\gamma+ \delta+\upsilon, -\zeta; \gamma; 1-(\frac{\rho}{t})^{k+1})\rho^{k}
\end{split}$$ which remains positive in view of the condition stated in (4.7), $\rho \in (0,t)$, $t>0$, and integrating the resulting identity with respect to $\rho$ from $0$ to $t$, we obtain $$\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{t,k}(xfgh)(t)I^{\gamma,\delta,\zeta,\upsilon}_{t,k}y(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yfgh)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}x(t)\\
&+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yh)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xgf)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yfg)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xh)(t)\\
&\geq I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yg)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xfh)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(ygh)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xf)(t)\\
&+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yf)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xgh)(t)+I^{\gamma,\delta,\zeta,\upsilon}_{t,k}(yfh)(t)I^{\alpha,\beta,\eta,\mu}_{t,k}(xg)(t).
\end{split}$$ which completes the proof.
---
abstract: 'We describe the TreePM method for carrying out large N-Body simulations to study the formation and evolution of large scale structure in the Universe. This method is a combination of the Barnes and Hut tree code and the Particle-Mesh code. It combines the automatic inclusion of periodic boundary conditions of PM simulations with the high resolution of tree codes. This is done by splitting the gravitational force into a short range and a long range component. We describe the splitting of the force between these two parts. We outline the key differences between TreePM and some other N-Body methods.'
author:
- |
J.S.Bagla\
Harish-Chandra Research Institute, Chhatnag Road, Jhunsi,\
Allahabad 211019, INDIA\
e-mail:jasjeet@mri.ernet.in
date: 'Received 2002 June 13; accepted 2002 November 14'
title: 'TreePM: A code for Cosmological N-Body Simulations'
---
\[firstpage\]
gravitation, methods: numerical, cosmology: large scale structure of the universe
Introduction
============
Observations suggest that the present universe is populated by very large structures like galaxies, clusters of galaxies, etc. Current models for the formation of these structures are based on the assumption that gravitational amplification of density perturbations resulted in the formation of large scale structures. In the absence of analytical methods for computing quantities of interest, numerical simulations are the only tool available for the study of clustering in the non-linear regime. The last two decades have seen a rapid development of techniques and computing power for cosmological simulations, and the results of these simulations have provided valuable insight into the study of structure formation.
The simplest N-Body method that has been used for studying clustering of large scale structure is the Particle Mesh method (PM hereafter). The genesis of this method is in the realisation that the Poisson equation is an algebraic equation in Fourier space, hence if we have a tool for switching to Fourier space and back, we can calculate the gravitational potential and the force with very little effort. It has two elegant features in that it provides periodic boundary conditions by default, and the force is softened naturally so as to ensure collisionless evolution of the particle distribution. However, softening of the force at grid scale implies that the force resolution is very poor. This limits the dynamic range over which we can trust the results of the code to between a few grid cells and about a quarter of the simulation box (Bouchet and Kandrup, 1985; Bagla and Padmanabhan, 1997). Many efforts have been made to get around this problem, mainly in the form of P$^3$M (Particle-Particle Particle Mesh) codes (Efstathiou et al, 1985; Couchman 1991). In these codes, the force computed by the particle mesh part of the code is supplemented by adding the short range contribution of nearby particles, to improve force resolution. The main problem with this approach is that the particle-particle summation of the short range force takes a lot of time in highly clustered situations. Another, more subtle problem is that the force computed using the PM method has anisotropies and errors in force at grid scale; these errors are still present in the force calculated by combining the PM force with short range corrections (Bouchet and Kandrup, 1985).
A completely different approach to the problem of computing the force is taken by codes based on the tree method. In this approach we consider groups of particles at a large distance to be a single entity and compute the force due to the group rather than sum over individual particles. There are different ways of defining a group, but by far the most popular method is that due to Barnes and Hut (1986). Applications of this method to cosmological simulations require including periodic boundary conditions. This has been done using Ewald’s method (Ewald, 1921; Rybicki, 1986; Hernquist, Bouchet and Suto, 1991; Springel, Yoshida and White, 2001). Ewald’s method is used to tabulate the correction to the force due to periodic boundary conditions. This correction term is stored on a grid (in relative separation of a pair of particles) and the interpolated value is added to the pairwise force.
Some attempts have been made to combine the high resolution of a tree code with the natural inclusion of periodic boundary conditions in a PM code by simply extending the P$^3$M method and replacing the particle-particle part for short range correction with a local tree (Xu, 1995).
In this paper we present a hybrid N-Body method that attempts to combine the good features of the PM and the tree method, while avoiding the problems of the P$^3$M and the TPM methods. Our approach is to divide force into long and short range components using partitioning of unity, instead of taking the PM force as given. This allows us greater control over errors, as we shall see below.
The plan of the paper is as follows: §[2]{} introduces the basic formalism of both the tree and PM codes. §[2.3]{} gives the mathematical model for the TreePM code. We analyse errors in force for the TreePM code in §[3]{}. Computational requirements of our implementation of the TreePM code are discussed in §[4]{}. A discussion of the relative merits of the TreePM method with respect to other N-Body methods follows in §[5]{}.
The TreePM Method
=================
Tree Code
---------
We use the approach followed by Barnes and Hut (1986). In this, the simulation volume is taken to be a cube. The tree structure is built out of cells and particles. Cells may contain smaller cells (subcells) within them. Subcells can have even smaller cells within them, or they can contain a particle. We start with the simulation volume and add particles to it. If two particles end up in the same subcell, the subcell is geometrically divided into smaller subcells until each subcell contains either subcells or at most one particle. The cubic simulation volume is the root cell. In three dimensions, each cubic cell is divided into eight cubic subcells. Cells, as structures, have attributes like total mass, location of centre of mass and pointers to subcells. Particles, on the other hand have the traditional attributes like position, velocity and mass. More details can be found in the original paper (Barnes and Hut, 1986).
The force on a particle is computed by adding the contributions of other particles or of cells. A cell that is sufficiently far away can be considered as a single entity and we can just add the force due to the total mass contained in the cell from its centre of mass. If the cell is not sufficiently far away then we must consider its constituents, subcells and particles. Whether a cell can be accepted as a single entity for force calculation is decided by the cell acceptance criterion (CAC). We compute the ratio of the size of the cell $d$ and the distance $r$ from the particle in question to its centre of mass and compare it with a threshold value $$\theta = \frac{d}{r} \leq \theta_c \label{trwalk}$$ The error in force increases with $\theta_c$. There are some potentially serious problems associated with using $\theta_c \geq 1/\sqrt{3}$; a discussion of these is given in Salmon and Warren (1994). One can also work with completely different definitions of the CAC (Salmon and Warren, 1994; Springel, Yoshida and White, 2001). Irrespective of the criterion used, the number of terms that contribute to the force on a particle is much smaller than the total number of particles, and this is where a tree code gains in terms of speed over direct summation.
We will use the Barnes and Hut tree code and we include periodic boundary conditions for computing the short range force of particles near the boundaries of the simulation cube. Another change to the standard tree walk is that we do not consider cells that do not have any spatial overlap with the region within which the short range force is calculated. We also use an optimisation technique to speed up force calculation (Barnes, 1990).
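The tree construction and walk described above can be sketched as follows. This is a minimal monopole-only toy (open boundaries, $G=1$, distinct particle positions assumed, illustrative names), not the implementation used in the code described here:

```python
import math
import random

class Cell:
    def __init__(self, center, size):
        self.center, self.size = center, size  # cube centre and edge length
        self.mass = 0.0
        self.com = [0.0, 0.0, 0.0]             # centre of mass
        self.children = {}                     # octant index -> Cell
        self.body = None                       # (pos, mass) for a leaf cell

def insert(cell, pos, mass):
    # update the running monopole (total mass, centre of mass) on the way down
    m = cell.mass + mass
    cell.com = [(cell.com[i] * cell.mass + pos[i] * mass) / m for i in range(3)]
    cell.mass = m
    if not cell.children and cell.body is None:
        cell.body = (pos, mass)
        return
    if cell.body is not None:                  # geometrically split the leaf
        old, cell.body = cell.body, None
        _descend(cell, *old)
    _descend(cell, pos, mass)

def _descend(cell, pos, mass):
    k = sum(1 << i for i in range(3) if pos[i] > cell.center[i])
    if k not in cell.children:
        c = [cell.center[i] + cell.size / 4 * (1 if (k >> i) & 1 else -1)
             for i in range(3)]
        cell.children[k] = Cell(c, cell.size / 2)
    insert(cell.children[k], pos, mass)

def force(cell, pos, theta_c):
    # cell acceptance criterion: accept the cell as one entity if d/r <= theta_c
    d = [cell.com[i] - pos[i] for i in range(3)]
    r = math.sqrt(sum(u * u for u in d))
    if r < 1e-12:                              # skip self-interaction
        return [0.0, 0.0, 0.0]
    if cell.body is not None or cell.size / r <= theta_c:
        f = cell.mass / r ** 3                 # monopole term, G = 1
        return [f * u for u in d]
    tot = [0.0, 0.0, 0.0]
    for sub in cell.children.values():
        fs = force(sub, pos, theta_c)
        tot = [tot[i] + fs[i] for i in range(3)]
    return tot

# demo: random particles in a unit box, compared against direct summation
random.seed(1)
particles = [([random.random() for _ in range(3)], 1.0) for _ in range(200)]
root = Cell([0.5, 0.5, 0.5], 1.0)
for p, m in particles:
    insert(root, p, m)

def direct_force(pos):
    tot = [0.0, 0.0, 0.0]
    for p, m in particles:
        d = [p[i] - pos[i] for i in range(3)]
        r = math.sqrt(sum(u * u for u in d))
        if r < 1e-12:
            continue
        tot = [tot[i] + m * d[i] / r ** 3 for i in range(3)]
    return tot
```

With $\theta_c=0$ every cell is opened down to the leaves and the walk reproduces direct summation; a finite $\theta_c$ trades a small force error for a much cheaper walk.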
Particle Mesh Code
------------------
A PM code is the obvious choice for computing long range interactions. Much has been written about the use of these in cosmological simulations (e.g., see Hockney and Eastwood, 1988) so we will not go into details here. PM codes solve for the gravitational potential in Fourier space. They use Fast Fourier Transforms (FFT) to compute Fourier transforms, and as the FFT requires data to be defined on a regular grid, the concept of a mesh is introduced. The density field represented by particles is interpolated onto the mesh. The Poisson equation is solved in Fourier space and an inverse transform gives the potential (or force) on the grid. This is then differentiated and interpolated to the position of each particle in order to calculate the displacements. Use of a grid implies that forces are not accurate at scales smaller than the grid cells. A discussion of errors in force in a PM code can be found in Efstathiou et al (1985) and elsewhere (Bouchet and Kandrup, 1985; Bagla and Padmanabhan, 1997). The error in force can be very large at small scales but it drops to an acceptable level beyond a few grid cells, and is negligible at large scales.
We use the Cloud-in-Cell weight function for interpolation. We solve the Poisson equation using the natural kernel, $-1/k^2$; this is called the poor man’s Poisson solver (Hockney and Eastwood, 1988). We compute the gradient of the potential in Fourier space.
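The cycle above (CIC deposit, Fourier-space solve with the natural $-1/k^2$ kernel, inverse transform) can be sketched in one dimension; the slow $O(N^2)$ DFT stands in for a library FFT, and all names, units ($4\pi G=1$, grid spacing 1) and grid sizes are illustrative:

```python
import cmath
import math

def dft(a):
    # discrete Fourier transform (slow O(N^2) form, standing in for an FFT)
    N = len(a)
    return [sum(a[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(A):
    N = len(A)
    return [sum(A[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def cic_deposit(grid, x, mass=1.0):
    # Cloud-in-Cell: share the mass linearly between the two nearest cells
    N = len(grid)
    i = int(math.floor(x))
    frac = x - i
    grid[i % N] += mass * (1.0 - frac)
    grid[(i + 1) % N] += mass * frac

def solve_poisson(rho):
    # "poor man's Poisson solver": phi_k = -rho_k / k^2 (units with 4*pi*G = 1)
    N = len(rho)
    rho_k = dft(rho)
    phi_k = [0j] * N
    for k in range(1, N):                  # k = 0 (mean density) is dropped
        kk = k if k <= N // 2 else k - N   # signed wavenumber
        w = 2.0 * math.pi * kk / N         # in units of the grid spacing
        phi_k[k] = -rho_k[k] / (w * w)
    return idft(phi_k)

# demo: a single Fourier mode, where the answer is known in closed form
N = 64
w1 = 2.0 * math.pi / N
rho = [math.cos(w1 * n) for n in range(N)]
phi = solve_poisson(rho)                   # expected: -cos(w1*n) / w1**2
```

A single-mode density is a convenient check because the algebraic relation $\varphi_k = -\varrho_k/k^2$ can be verified against the analytic potential directly.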
TreePM Code
-----------
We now turn to the question of combining the tree and the PM code. We wish to split the inverse square force into a long range force and a short range force. The gravitational potential can be split into two parts in Fourier space (Ewald, 1921). $$\begin{aligned}
\varphi_k &=& - \frac{4 \pi G \varrho_k}{k^2} \label{pm_std}\\
&=& - \frac{4 \pi G \varrho_k}{k^2} \exp\left(-k^2 r_s^2\right) -
\frac{4 \pi G \varrho_k}{k^2} \left(1 - \exp\left(-k^2
r_s^2\right)\right)\nonumber\\
&=& \varphi_k^l + \varphi_k^s \nonumber \\
\varphi_k^l &=& - \frac{4 \pi G \varrho_k}{k^2} \exp\left(-k^2
r_s^2\right) \label{longr}\\
\varphi_k^s &=& - \frac{4 \pi G \varrho_k}{k^2} \left(1 - \exp\left(-k^2
r_s^2\right)\right) \label{shortr}\end{aligned}$$ where $\varphi^l$ and $\varphi^s$ are the long range and the short range potentials, respectively. The splitting is done at the scale $r_s$. $G$ is the gravitational coupling constant and $\varrho$ is density. The expression for the short range force in real space is: $${\bf f}^s({\bf r}) = - \frac{G m {\bf r}}{r^3} \left({\rm
erfc}\left(\frac{r}{2 r_s}\right) + \frac{r}{r_s \sqrt{\pi}}
\exp\left(-\frac{r^2}{4 r_s^2}\right)\right) \label{fshort}$$ Here, ${\rm erfc}$ is the complementary error function. These equations describe the mathematical model for force in the TreePM code. The long range potential is computed in the Fourier space, just as in a PM code, but using eqn.(\[longr\]) instead of eqn.(\[pm\_std\]). This potential is then used to compute the long range force. The short range force is computed directly in real space using eqn.(\[fshort\]). In the TreePM method this is computed using the tree approximation. The short range force falls rapidly at scales $r \gg r_s$, and hence we need to take this into account only in a small region around each particle.
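The two components can be checked against each other numerically: by construction, the short range force of eqn.(\[fshort\]) and the long range remainder must add up to the exact inverse-square force. The closed form for the long range part below follows from ${\rm erfc}(x)=1-{\rm erf}(x)$; units with $Gm=1$ are an illustrative choice:

```python
import math

RS = 1.0  # splitting scale r_s

def f_total(r):
    # inverse-square force for G = m = 1
    return 1.0 / (r * r)

def f_short(r, rs=RS):
    # short range force, eqn (f^s) above
    return (math.erfc(r / (2.0 * rs))
            + (r / (rs * math.sqrt(math.pi)))
            * math.exp(-r * r / (4.0 * rs * rs))) / (r * r)

def f_long(r, rs=RS):
    # long range remainder; closed form via erfc(x) = 1 - erf(x)
    return (math.erf(r / (2.0 * rs))
            - (r / (rs * math.sqrt(math.pi)))
            * math.exp(-r * r / (4.0 * rs * rs))) / (r * r)
```

The split reproduces the behaviour described below: the short range part is under $1\%$ of the total at $5 r_s$, and the long range part peaks near $2 r_s$.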
We have plotted the long range and the short range force (eqn.(\[fshort\])) as a function of $r$ in fig.1 to show their dependence on scale. We have chosen $r_s=1$ here. The short range force closely follows the total force up to about $2 r_s$ and then falls rapidly, its magnitude falls below $1\%$ of the total force by $5 r_s$. The long range force reaches a peak around $2 r_s$. It makes up most of the total force beyond $3.5 r_s$. It falls with scale below $2 r_s$, becoming negligible below $r_s / 2$.
Evaluation of special functions for calculating the short range force can be time consuming. To save time, we compute an array containing the magnitude of the short range force. The force between any two objects, particle-cell or particle-particle, is computed by linearly interpolating between the nearby array elements multiplied by the unit vector ${\bf r}$. It is necessary for the array to sample the force at sufficiently closely spaced values of $r$ in order to keep error in interpolation small.
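The tabulation-and-interpolation scheme can be sketched as follows; the grid spacing and range here are illustrative choices, not the values used in the code described in this paper:

```python
import math

def f_short(r, rs=1.0):
    # magnitude of the short range force for unit G*m, from eqn (f^s)
    return (math.erfc(r / (2.0 * rs))
            + (r / (rs * math.sqrt(math.pi)))
            * math.exp(-r * r / (4.0 * rs * rs))) / (r * r)

DR = 0.01                                          # table spacing (illustrative)
TABLE = [f_short((i + 1) * DR) for i in range(1000)]  # r from DR to 10 r_s

def f_short_interp(r):
    # linear interpolation between the two nearest table entries
    s = r / DR - 1.0
    i = int(s)
    frac = s - i
    return TABLE[i] * (1.0 - frac) + TABLE[i + 1] * frac
```

With this spacing the relative interpolation error stays well below the force errors discussed in the next section, illustrating the requirement that the table sample the force sufficiently finely.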
Error Estimation
================
In this section we will study errors in force introduced by various components of the TreePM code. We will only list salient points here and the reader is referred to a more comprehensive study for details (Bagla and Ray, 2002).
We start by estimating the error in force due to one particle. The long range force of a particle is calculated using the PM method, but using eqn.(\[longr\]) instead of eqn.(\[pm\_std\]). The cutoff at high wave numbers largely removes the effect of the grid and we find that the dispersion in the long range force is very small, e.g. for $r_s \geq 1$ grid length the dispersion is smaller than $1\%$ of the total force at all scales. There is a systematic offset in the long range force that is larger than the dispersion. This offset is induced by the interpolating function, and can be corrected (White, 2000; Bagla and Ray, 2002) by de-convolving the square of the interpolating function (we need to interpolate twice). This deconvolution does not affect the dispersion in any significant manner.
There are no errors in computing the short range force for one particle, hence the only source of errors is in the calculation of the long range force in this case. All the errors arise due to anisotropies in the long range force. The errors in the long range force increase as we approach small scales, but the contribution of the long range force to the total force falls sharply below $2r_s$ and hence the errors also drop rapidly. There is a peak in errors around $2r_s$–$3r_s$, and for $r_s=1$ maximum rms error in force of one particle is $1\%$ of the total force.
In calculating the total force, we added the short range force to the long range force at all scales. However, this is not necessary: beyond some scale, the contribution of the short range force to the total force drops to a negligible fraction. We call the scale up to which we add the short range force $r_{cut}$. The short range force is just below $1\%$ of the total force at $r_{cut}=5r_s$. We choose this value of $r_{cut}$ for the TreePM code.
The other source of error is the tree approximation that we use for computing the short range force. The first correction term is due to the quadrupole moment of the particle distribution in the cell; however, the magnitude of this error is larger than for the inverse square force, due to a more rapid variation of the force with distance. In the worst case, this error can be more than twice the error in the corresponding case of the inverse square force (Bagla and Ray, 2002). In more generic cases, errors due to this effect tend to cancel out and the net error is small.
Apart from this effect, there is also a dispersion introduced by the tree approximation. The magnitude of this dispersion varies monotonically with $\theta_c$.
One factor that we have to weigh in is that the execution time is small for large $\theta_c$ and small $r_{cut}$. Given these considerations, the obvious solution is to choose the smallest $r_s$ and the largest $\theta_c$ that gives us a sufficiently accurate force field.
It is important to estimate the errors in a realistic situation, even though we do not expect errors to add up coherently in most situations. We test errors for two distributions of particles: a homogeneous distribution and a clumpy distribution. For the homogeneous distribution, we use randomly distributed particles in a box. We use $262144$ particles in a $64^3$ box for this distribution. We compute the force using a reference setup ($r_s=4$, $r_{cut}=6
r_s$, $\theta_c=0$) and the setup we wish to test ($r_s=1$, $r_{cut}=5
r_s$, $\theta_c=0.5$). It can be shown that the errors in the reference setup are well below $0.5\%$ for the entire range of scales (Bagla and Ray, 2002). We compute the fractional error in force acting on each particle, defined as $$\epsilon = \frac{\left\vert {\bf f} - {\bf f}_{ref}
\right\vert}{\left\vert {\bf f}_{ref} \right\vert} .$$ Fig.2 shows the cumulative distribution of fractional errors. The curves show the fraction of particles with error greater than $\epsilon$. The thick line shows this for the homogeneous distribution. Error $\epsilon$ for $99\%$ of particles is less than $3.5\%$. Results for the clumpy distribution of particles are shown by the dashed line. We used the output of a CDM simulation (fig.3a) run with the TreePM code. Errors in this case are much smaller, as compared to the homogeneous distribution, as in the case of tree code (Hernquist, Bouchet and Suto, 1991). Error $\epsilon$ for $99\%$ of particles is around $2\%$, as compared to $3.5\%$ for the homogeneous distribution.
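The error measure used above is straightforward to compute; a minimal helper (names are illustrative) is:

```python
import math

def fractional_error(f, f_ref):
    # epsilon = |f - f_ref| / |f_ref| for one particle's force vectors
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(f, f_ref)))
    return diff / math.sqrt(sum(b * b for b in f_ref))

def fraction_above(errors, eps):
    # fraction of particles with fractional error greater than eps,
    # i.e. one point of the cumulative distribution plotted in fig.2
    return sum(1 for e in errors if e > eps) / len(errors)
```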
There are two noteworthy features of this figure. One is that the error for the homogeneous distribution is higher. The main reason for this is similar to that in tree codes, though the effect is much smaller here. When we are dealing with a homogeneous distribution, the total force on each particle is very small because forces due to nearly identical mass distributions on opposite sides cancel out. This near cancellation of large numbers gives rise to errors that decrease as the net result of these cancellations grows. In a tree code, we calculate the force due to all the particles in the simulation box whereas in the TreePM method we add up the contribution of only those within a sphere of radius $r_{cut}$. This is the reason for the difference in these two curves being much less pronounced than the corresponding curves for the tree code (Hernquist, Bouchet and Suto, 1991).
The other feature is that the shape of the curves for the homogeneous distribution and the clumpy distribution is different. This is because we begin to see the effect of the error due to tree approximation in case of clumpy distribution. In case of the homogeneous distribution, the distribution of particles is close to isotropic around any given particle and hence the error cancels out. This error can be controlled by reducing $\theta_c$.
We end this section with a brief comparison of the TreePM code with a PM code. We ran a simulation of the sCDM model ($262144$ particles, $64$h$^{-1}$Mpc box) with a PM code (Bagla and Padmanabhan, 1997) and with the TreePM code discussed here. Fig.3 shows a slice from these simulations; fig.3a shows the simulation with the TreePM code and fig.3b shows the same for a PM code. The large scale structures are the same in the two but there are significant differences at small scales. The halos are much more compact in the TreePM simulation, and large halos show more substructure. These differences are also clear in the two point correlation function $\bar\xi(r)$ plotted in fig.4. The thick line shows the correlation from the TreePM simulation and the dashed line shows the same for the PM simulation. As expected from fig.3 and from general considerations, the correlation function in the TreePM simulation matches with that from the PM simulation at large scales, but at small scales, the TreePM simulation has a higher correlation function.
We have checked the accuracy of evolution by checking the rate of growth for the correlation function in the linear regime and also by looking for scale invariance of the correlation function for power law models. For more details please see (Bagla and Ray, 2002).
Computational Resources
=======================
In this section, we describe the computational resources required for the present implementation of the TreePM code. Given that we have combined the tree and the PM code, the memory requirement is obviously greater than that for either code alone. We need four arrays for the PM part: the potential and the three components of the force. The rest is exactly the same as a standard Barnes and Hut tree code. With efficient memory management, we need less than $160$MB of RAM for a simulation with $128^3$ particles in a $128^3$ mesh for most of the run. In the absence of memory management, this requirement can go up to 250MB. These numbers are for single precision floating point variables; if we use double precision variables, the requirement goes up by a factor of two.
  ---------------- ------------- ------------- ------------- ------------- -------------
   $N_{particle}$   time (ms)     time (ms)     time (ms)     time (ms)     time (ms)
                    TreePM        TreePM        TreePM        TreePM        tree
                    unclustered   unclustered   unclustered   clustered     unclustered
                    P-4           PIII          Alpha         Alpha         Alpha
   $32768$          --            --            0.57          0.59          2.94
   $262144$         --            --            0.78          0.80          3.75
   $2097152$        0.34          0.89          1.22          1.28          6.03
  ---------------- ------------- ------------- ------------- ------------- -------------
: Time taken by the code, per time step per particle. Column 1 lists the number of particles. Column 2, 3, 4 and 5 list the time taken (per time step per particle) by the TreePM code for an unclustered and a clustered particle distribution. Column 6 lists the same number for a tree code for an unclustered distribution of particles. All the times are in milli seconds.
Table 1 lists the time required per time step per particle for three values of the number of particles. The runs were carried out on a 533MHz Alpha workstation (EV5), where the code was compiled with the native F90 compiler, and on a $1$GHz Pentium III desktop and a $1.6$GHz P-4, where the code was compiled with the Intel F90 compiler. Column 1 lists the number of particles and col.2, 3 and 4 list the time per step per particle for an unclustered distribution. This number increases much more slowly than the total number of particles, as expected from the theoretical scaling of $O(N\ln{N})$.
Column 5 of the table gives the same number for a highly clustered particle distribution, similar in clustering strength to that shown in fig.3. Column 6 lists the time per step per particle taken by the tree code for the particle distribution used in col.4. It is clear that the TreePM code is faster than the tree code by a factor of about $4.5$. It is also clear that this code performs well even on inexpensive hardware.
The performance of this code can be improved further by including features like individual time steps for particles. It is expected that adding individual time steps will improve the performance by a factor of two or more.
Comparison with other Methods
=============================
Amongst other codes that try to augment the performance of PM codes are the P$^3$M (Efstathiou et al, 1985; Couchman, 1991) codes and the TPM code (Xu, 1995). The following subsections compare the TreePM code with these codes.
P$^3$M and AP$^3$M
------------------
There are two main differences between P$^3$M codes (Efstathiou et al, 1985; Couchman, 1991) and the TreePM code presented here. One is that most P$^3$M codes use the natural cutoff provided by the grid for the long range force, i.e. these take the PM force to be the long range force. Hence errors in the PM force are present in the P$^3$M force. In contrast, the TreePM code uses an explicit cutoff that allows us to limit errors near the grid scale.
The second difference is the time taken to add the short range correction as a function of clustering. In both cases, the short range force is added for particles within a fixed radius $r_{cut}$. This process is of order $O(N n r_{cut}^3 (1 + \bar\xi(r_{cut})))$ for the P$^3$M method, where $N$ is the number of particles in the simulation, $n$ is the number density of particles and $\bar\xi(r_{cut})$ is the average number of excess particles around a particle, the excess being measured relative to a homogeneous distribution of particles with the same number density. At early times this reduces to $O(N n r_{cut}^3)$, but at late times, when the density field has become highly non-linear ($\bar\xi(r_{cut}) \gg 1$), it becomes $O(N n r_{cut}^3 \bar\xi(r_{cut}))$. As the density field becomes more and more clumpy, the number of operations required for computing the short range force increases rapidly. This is to be compared with the number of operations required for adding the short range correction in the TreePM code: $O(N \log(n r_{cut}^3 (1 + \bar\xi(r_{cut}))))$. The linear and non-linear limits of this expression are $O(N \log(n r_{cut}^3))$ and $O(N \log(n r_{cut}^3 \bar\xi(r_{cut})))$, respectively. Thus the variation in the number of operations with increasing clustering is much smaller for the TreePM code than for a P$^3$M code. The problem is not as severe for the Adaptive P$^3$M code (Couchman, 1991), but it still persists. Therefore the TreePM code has a clear advantage over the P$^3$M and AP$^3$M codes for simulations of models where $\bar\xi(r_{cut})$ is very large.
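The scaling comparison above can be sketched numerically. The snippet below is only an illustration of the two operation-count formulas, not part of the TreePM code itself; the particle numbers and clustering strengths are arbitrary choices.

```python
import math

def p3m_ops(N, n, r_cut, xi_bar):
    """Short-range work for a P^3M code: each particle interacts with all
    neighbours inside r_cut, so the cost grows linearly with clustering."""
    return N * n * r_cut**3 * (1.0 + xi_bar)

def treepm_ops(N, n, r_cut, xi_bar):
    """Short-range work for the TreePM code: the tree walk costs only the
    logarithm of the number of neighbours inside r_cut."""
    return N * math.log(n * r_cut**3 * (1.0 + xi_bar))

# Illustrative numbers: unclustered (xi = 0) vs highly clustered (xi = 100)
N, n, r_cut = 2_097_152, 1.0, 3.5
for xi in (0.0, 100.0):
    ratio = p3m_ops(N, n, r_cut, xi) / treepm_ops(N, n, r_cut, xi)
    print(f"xi = {xi:5.0f}: P^3M / TreePM op-count ratio ~ {ratio:.1f}")
```

The P$^3$M cost grows in proportion to $1+\bar\xi$, while the TreePM cost grows only logarithmically, which is the advantage claimed in the text.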
In turn, P$^3$M codes have one significant advantage over TreePM: they require much less memory. This gives P$^3$M codes an advantage on small machines and for simulations of models where $\bar\xi(r_{cut})$ is not much larger than unity.
TPM
---
Before we go into the differences between the TreePM and TPM methods, we would like to summarise the TPM method (Xu, 1995) here.
The TPM method is an extension of the P$^3$M method in that the PM force is taken to be the long range force and a short range force is added to it. The tree method is used for adding the short range correction instead of the particle-particle method. There are some further differences, e.g., the correction is added only for particles in high density regions, implying that the resolution is non-uniform. At each time step, high density regions are identified and a local tree is constructed in each of these regions for computing the short range correction. Thus, there are two clear differences between the TreePM and the TPM method:
- The TPM code uses the usual PM force to describe the long range component. In contrast, the TreePM code uses an explicit cutoff ($r_s$).
- TreePM treats all particles on an equal footing: we compute the short range (eqn(\[fshort\])) and the long range force for each particle. In the TPM code, the short range force is computed only for particles in the high density regions.
Discussion
==========
The preceding sections show that we have developed a new method for doing cosmological N-Body simulations with a clean mathematical model. The model splits the force into long and short range forces using a parameter $r_s$. By choosing this parameter judiciously, in conjunction with the two other parameters that arise in the implementation of this model ($r_{cut}$ and $\theta_c$), we can obtain a configuration that matches our requirements for the error budget.
It is possible to devise a more complex scheme for splitting the force into two parts but the one we have chosen seems to be the optimal scheme from the point of view of errors in force calculation as well as CPU time (Bagla and Ray, 2002).
Apart from improving control over errors, the TreePM code also leads to a significant gain in speed over the traditional tree code.
The TreePM code is also amenable to parallelisation along the lines of Dubinski (1996), and is likely to scale well because the communication overhead is much more limited. Work in this direction is in progress and will be reported elsewhere (Bagla, 2002).
Acknowledgement {#acknowledgement .unnumbered}
===============
I would like to thank Rupert Croft, Lars Hernquist, Suryadeep Ray, Volker Springel and Martin White for insightful comments and discussions. Part of the work reported in this paper was done while the author was at the Harvard-Smithsonian Center for Astrophysics.
Bagla J.S. and Padmanabhan T. 1997, Pramana – Journal of Physics 49, 161
Bagla J.S. and Ray S. 2002, Manuscript in Preparation.
Bagla J.S. 2002, To appear in proceedings of [*Numerical Simulations in Astrophysics $2002$*]{}.
Barnes, J.E. 1990, J.Comp.Phys. 87, 161
Barnes J. and Hut P. 1986, Nature 324, 446
Bouchet F.R. and Kandrup H.E. 1985, ApJ 299, 1
Couchman H.M.P. 1991, ApJL 368, L23
Dubinski J. 1996, New Astronomy 1, 133
Efstathiou G., Davis M., Frenk C.S. and White S.D.M. 1985, ApJS 57, 241
Ewald P.P. 1921, Ann.Physik 64, 253
Hernquist L. 1987, ApJS 64, 715
Hernquist L., Bouchet F.R. and Suto Y. 1991, ApJS 75, 231
Hockney R.W. and Eastwood J.W. 1988, [*Computer Simulation using Particles*]{}, (New York: McGraw Hill)
Rybicki G.B. 1986, in [*The Use of Supercomputers in Stellar Dynamics*]{}, ed. P.Hut and S.McMillan (Berlin: Springer), p.181
Salmon J.K. and Warren M.S. 1994, J.Comp.Phys. 111, 136
Springel V., Yoshida N. and White S.D.M. 2001, New Astronomy 6, 79
White M. 2000, Private communication.
Xu G. 1995, ApJS 98, 355
---
abstract: 'We present a new variant of the Växjö interpretation: the contextualistic statistical realistic interpretation. The basic ideas of the Växjö interpretation-2001 are essentially clarified. We also discuss applications to biology, psychology, sociology, economy,...'
author:
- |
Andrei Khrennikov[^1]\
International Center for Mathematical\
Modeling in Physics and Cognitive Sciences,\
MSI, University of Växjö, S-35195, Sweden\
Email: Andrei.Khrennikov@msi.vxu.se
title: 'Växjö interpretation-2003: realism of contexts'
---
The first version of the [*Växjö interpretation*]{} of quantum mechanics was presented in \[1\], see also \[2\], after the conference “Quantum Theory: Reconsideration of foundations”, Växjö, June-2001, on the basis of numerous exciting discussions with participants. I was really surprised that, in spite of the formal acceptance of the official [*Copenhagen interpretation,*]{} many people (with top qualifications in quantum physics) still have doubts of various kinds, and many of them are still looking for a [*realistic interpretation.*]{} This dream of [*quantum realism*]{} was the main stimulus for my attempt to present a new version of the realistic interpretation of quantum mechanics. The main problem was to create an interpretation in which realism would coexist with the rather strange (people like to say “nonclassical”) behaviour of quantum probabilities – [*Born’s rule and interference of probabilities.*]{}[^2]
In 2000 it was demonstrated \[5\], see also \[6\]-\[10\], that Born’s rule and interference of probabilities can be (quite easily) derived in the realistic framework.[^3] The only thing that should not be neglected is [*contextuality of probabilities*]{} – the dependence of probabilities on complexes of physical conditions (physical contexts). My investigations were motivated by an interest in frequency probability theory, see R. von Mises \[20\] and also \[21\]. In the frequency approach probabilities directly depend on collectives (random sequences) which are associated with concrete complexes of physical conditions. This probability theory is contextual from the very beginning.
Since 2001 I have organized a series of conferences on foundations of probability and quantum mechanics [^4] and through intensive discussions (and, in particular, hard critique from some of my colleagues, see \[25\], \[26\]) my ideas on interpretations of quantum mechanics have become essentially clearer.
First of all, I understood the difference between my contextualism and Bohr’s contextualism, see Remark 1.
Then I understood the difference between my contextualism and the contextualism of the operational (empiricist) interpretation.
Another new issue is the understanding of the role of the special [*reference observables*]{} which are used in a concrete model for the probabilistic representation of contexts (e.g., in classical and quantum physical models we use the [*position*]{} and [*momentum*]{} observables).
Finally, it became clear that, in fact, in \[1\], \[2\] I discussed not an interpretation of quantum mechanics, but a model – statistical and contextual – of physical reality. The corresponding interpretation of quantum mechanics is obtained automatically for those models in which statistical data can be represented by complex amplitudes. This is the Växjö interpretation of quantum mechanics: the [*contextualistic statistical realistic interpretation.*]{}
By starting not from the formalism of quantum mechanics (a calculus of probabilities in a complex Hilbert space), but from the general [*contextual statistical model of reality,*]{} we gain the possibility of applying contextual statistical methods in many domains of science: [*biology, psychology, sociology, economy,...*]{} In some special cases we can even use the quantum probabilistic formalism. Such new applications of the powerful mathematical methods developed in quantum theory can induce revolutionary changes in many sciences. But for us the quantum formalism is not the starting point. We should start with the general Växjö model of reality (physical, biological, psychological, social,...) and then test statistical data to find an appropriate mathematical formalism. In general there are no reasons to hope to obtain the complex [*quantum-like representation.*]{} For example, there might appear models in which data cannot be represented by complex amplitudes, but by hyperbolic ones. In this case we should use the formalism of [*hyperbolic quantum mechanics,*]{} \[10\]. [^5]
**1. Realism of contexts**
We start with the basic definition:
[**Definition 1.**]{} [*A physical [*context*]{} $C$ is a complex of physical conditions.*]{}
In principle, the notion of context can be considered as a generalization of the widely used notion of a [*preparation procedure*]{} \[15\]. I prefer to use contextualistic rather than preparation terminology. By using the preparation terminology we presuppose the presence of an experimenter preparing physical systems for a measurement. By using the contextualistic terminology we need not appeal to experimental preparations; the experimenter should appear only at the stage of a measurement. Moreover, a context need not be macroscopic. Of course, there exist [*experimental contexts*]{} – preparation procedures. However, in general contexts are not coupled to preparation procedures. I consider contexts as [*elements of physical reality*]{} which exist independently of experimenters.[^6] This is the cornerstone of my contextualistic viewpoint on physics (quantum as well as classical):
[**Contexts are elements of reality**]{}
To construct a concrete model $M$ of reality, we should fix some set of contexts ${\cal C},$ see definition 2.
[**Remark 1.**]{} (Copenhagen and Växjö contextualisms) [Bohr’s interpretation of quantum mechanics is considered contextualistic, see \[28\] for a detailed analysis. However, we should sharply distinguish two types of contextualism: Copenhagen and Växjö contextualism. For N. Bohr “context” meant “context of a measurement”. For example, in his answer to the EPR challenge N. Bohr pointed out that position can be determined only in the [*context of a position measurement.*]{} For me “context” means a complex of physical conditions. As was underlined, a context is an element of physical reality and it has no direct relation to measurements (or to the existence of experimenters at all).[^7] For example, there exist contextualistic statistical models which cannot be represented in a complex Hilbert space – so called hyperbolic quantum-like models, \[10\].]{} Moreover, a Bohrian measurement context is always [*macroscopic*]{}; our context – a complex of physical conditions – need not be macroscopic.
**2. Observables**
Suppose that a set of observables[^8] ${\cal O}$ is fixed such that any observable $a\in {\cal O}$ can be measured under a complex of physical conditions $C$ for any $C\in {\cal C}.$
In principle, other observables which do not belong to the system ${\cal O}$ can be defined on the contexts ${\cal C},$ but they would define another contextual model of reality, see definition 2.
We remark that our general Växjö-representation of reality does not contain physical systems, cf. footnote 5. At the moment we do not (and need not) consider observables as observables on physical systems. It is only supposed that if a context $C$ is fixed, then at any instant of time $t$ we can perform a measurement of any observable $a \in {\cal O}.$
We do not assume that all these observables can be measured simultaneously; so they need not be compatible. The sets of observables $ {\cal O}$ and contexts ${\cal C}$ are coupled through
[**Axiom 1:**]{} [*For any observable $a \in {\cal O},$ there are well defined contexts $C_\alpha$ corresponding to $\alpha$-filtrations: if we perform a measurement of $a$ under the complex of physical conditions $C_\alpha,$ then we obtain the value $a=\alpha$ with probability 1. It is supposed that the set of contexts ${\cal C}$ contains filtration-contexts $C_\alpha$ for all observables $a\in {\cal O}.$*]{}
**3. Probabilistic representation of contexts**
[**Axiom 2:**]{} [*Contextual probabilities ${\bf P}(a=\alpha/C)$ are defined for any context $C \in {\cal C}$ and any observable $a \in {\cal O}.$*]{}
At the moment we do not fix a definition of probability. Depending on a choice of probability theory we can obtain different models. For any $C\in {\cal C},$ there is defined the set of probabilities: $$E({\cal O}, C)= \{ {\bf P}(a=\alpha/C): a \in {\cal O}\}$$ We complete this probabilistic data by $C_\alpha$-contextual probabilities: $$D({\cal O}, C)= \{ {\bf P}(a=\alpha/C),...,
{\bf P}(a=\alpha/C_\beta), {\bf P}(b=\beta/C_\alpha),...: a,b,... \in {\cal O}\}$$ (we remark that $D({\cal O}, C)$ does not contain the simultaneous probability distribution of observables ${\cal O}).$ Data $D({\cal O}, C)$ gives a probabilistic image of the context $C$ through the system of observables ${\cal O}.$ Probabilities ${\bf P}(a=\alpha/C_\beta),...$ play the role of [*structural constants*]{} of a model. We denote by the symbol ${\cal D}({\cal O}, {\cal C})$ the collection of probabilistic data $D({\cal O}, C)$ for all contexts $C\in {\cal C}.$ There is defined the map: $$\label{MP}
\pi :{\cal C} \to {\cal D}({\cal O}, {\cal C}), \; \; \pi(C)= D({\cal O}, C).$$ In general this map is not one-to-one. Thus the $\pi$-image of contextualistic reality is very rough: [*not all contexts can be distinguished with the aid of probabilistic data produced by the class of observables ${\cal O}.$* ]{}
Mathematically such probabilistic data can be represented in various ways. In some special cases it is possible to represent data by complex amplitudes. A complex amplitude (wave function) $\phi\equiv
\phi_{D({\cal O}, C)}$ is constructed by using a formula of total probability with a $\cos$-interference term, see \[6\]-\[10\] for an extended exposition. In this way we obtain the probabilistic formalism of quantum mechanics. In other cases it is possible to represent the data by hyperbolic amplitudes[^9] and we obtain the probabilistic formalism of “hyperbolic quantum mechanics,” \[9\], \[10\].
**4. Contextualistic statistical model (Växjö model)**
[**Definition 2.**]{} [*A contextualistic statistical model of reality is a triple $$\label{VM}
M =({\cal C}, {\cal O}, {\cal D}({\cal O}, {\cal C}))$$ where ${\cal C}$ is a set of contexts and ${\cal O}$ is a set of observables which satisfy axioms 1 and 2, and ${\cal D}({\cal O}, {\cal C})$ is the probabilistic data about the contexts ${\cal C}$ obtained with the aid of the observables ${\cal O}.$*]{}
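Definition 2 can be sketched as a small data structure. The code below is only an illustrative toy of my own: the class name, the dichotomous values and the probability numbers are arbitrary choices, not part of the Växjö model as defined in the text.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple, Set

# A toy Vaxjo model M = (C, O, D(O, C)): contexts and observables are labels,
# and the probabilistic data maps (observable, value, context) -> probability.
@dataclass
class VaxjoModel:
    contexts: Set[str]
    observables: Set[str]
    data: Dict[Tuple[str, str, str], float] = field(default_factory=dict)

    def prob(self, observable: str, value: str, context: str) -> float:
        """Contextual probability P(a = alpha / C) from Axiom 2."""
        return self.data[(observable, value, context)]

# Two dichotomous reference observables a, b under one context C
M = VaxjoModel(
    contexts={"C"},
    observables={"a", "b"},
    data={("a", "+", "C"): 0.5, ("a", "-", "C"): 0.5,
          ("b", "+", "C"): 0.7, ("b", "-", "C"): 0.3},
)
print(M.prob("b", "+", "C"))  # -> 0.7
```

The point of the sketch is that probabilities are indexed by the context as well as by the observable and its value: the same observable may have different distributions under different contexts.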
We call observables belonging to the set ${\cal O}\equiv {\cal O}(M)$ [*reference observables.*]{} Within a model $M$, the observables belonging to ${\cal O}$ give the only possible references about a context $C\in {\cal C}.$
**5. Realistic interpretation of reference observables**
Our general model can (but, in principle, need not) be completed by some interpretation of reference observables $a\in {\cal O}.$ By the Växjö interpretation reference observables are interpreted as [*properties of contexts:*]{}
“If an observation of $a$ under a complex of physical conditions $C \in {\cal C}$ gives the result $a=\alpha,$ then this value is interpreted as the objective property of the context $C$ (at the moment of the observation).”
As always, a model is not sensitive to interpretation. Therefore, instead of the realistic Växjö interpretation, we might use the Bohrian measurement-contextualistic interpretation, see Remark 1. However, by assuming the reality of contexts it would be natural to assume also the reality of observables which are used for the statistical representation of contexts. Thus we use the realistic interpretation both for contexts and reference observables. This is Växjö realism.
**6. On the role of reference observables**
The reader will already have noticed that the reference observables play a special role in our model. I interpret the set ${\cal O}$ as a family of observables which represent some fixed class of properties of the contexts belonging to ${\cal C}.$ For example, such a family can be chosen by some class of cognitive systems ${\it Z}_{\rm{cogn}}$ – “observers” – which are interested only in the ${\cal O}$-properties of the contexts ${\cal C}$ (and in the process of evolution have developed the ability to “feel” these and only these properties of contexts). The latter does not mean that the observables ${\cal O}$ are not realistic. I would just like to say that observers $\tau \in {\it Z}_{\rm{cogn}}$ use only the observables ${\cal O}.$
We remark again that there can exist other properties of contexts ${\cal C}$ which are not represented by observables ${\cal O}.$ The same set of contexts ${\cal C}$ can be the basis of various models of contextual reality: $M_i= ({\cal C}, {\cal O}_i, {\cal D}({\cal O}_i, {\cal C})), i=1,2,....$ For example, such models can be created by various classes of cognitive systems ${\it Z}_{\rm{cogn},i}.$
Moreover, we may exclude the spiritual element from observables. By considering “observation” as the “feeling” of a context $C$ by some system $\tau$ we need not presuppose that $\tau$ is a cognitive system. Such a $\tau$ can be, e.g., a physical system (e.g., an electron) which “feels” a context $C$ (e.g., an electromagnetic context).
[**Remark 2.**]{} (Number of reference observables) In both most important physical models – in classical and quantum models – the set ${\cal O}$ of reference observables consists of [**two observables:**]{} [*position and momentum.*]{} I think that this number “two” of reference observables plays the crucial role (at least in the quantum model).
**7. Växjö model outside physics**
Our contextual statistical realistic models of reality can be used not only in physics, but in any domain of natural and social sciences. Instead of complexes of physical conditions, we can consider complexes of biological, social, economic,... conditions – contexts – as elements of reality. Such elements of reality are represented by probabilistic data obtained with the aid of reference observables (biological, social, economic,...).
In the same way as in physics in some special cases it is possible to encode such data by complex amplitudes. In this way we obtain representations of some biological, social, economic,.... models in complex Hilbert spaces. We call them [*complex quantum-like models.*]{} These models describe the usual $\cos$-interference of probabilities.
Thus, when we speak, e.g., about a quantum-like mental model, this has nothing to do with quantum mechanics for electrons, photons, ... contained in the brain, see \[29\] for details. A quantum-like mental model is a contextualistic probabilistic model of the brain and nothing more, \[\]. At least preliminary experimental evidence has been found that quantum-like (i.e., represented by complex probability amplitudes) statistical data can be obtained in psychology, see \[30\]; such data can also be generated by some games, \[31\] (which have been called “quantum-like games” in \[31\]).
**8. Choice of a probability model**
As was mentioned, any Växjö model $M$ should be combined with some concrete probabilistic model describing the probabilistic data ${\cal D}({\cal O}, {\cal C}).$ Of course, the Kolmogorov measure-theoretical model dominates in modern physics. However, this is not the only possible model for probability, see \[21\]. In particular, I strongly support the use of the frequency model \[20\], \[21\]. Here we shall use this model to describe probabilistic data. It does not mean that other models which are used in physics cannot be combined with some Växjö models. Of course, such a combination is not straightforward; see \[8\] on the use of the contextual extension of the Kolmogorov model. We now present the frequency probabilistic description of the data ${\cal D}({\cal O}, C)$ for some $C \in {\cal C}.$
**9. Frequency description of probability distributions**
By taking into account Remark 2, we consider a set of reference observables ${\cal O}= \{ a, b \}$ consisting of two observables $a$ and $b.$ We denote the sets of values (“spectra”) of the reference observables by the symbols $X_a$ and $X_b,$ respectively.
Let $C$ be some context. In a series of observations of $b$ (which can be infinite in a mathematical model) we obtain a sequence of values of $b:$ $$\label{KOL1}
x\equiv x(b/C) = (x_1, x_2,..., x_N,...), \;\; x_j\in X_b.$$ In a series of observations of $a$ we obtain a sequence of values of $a:$ $$\label{KOL2}
y\equiv y(a/C) = (y_1, y_2,..., y_N,...), \;\; y_j\in X_a.$$ We suppose that the principle of statistical stabilization for relative frequencies holds true and the frequency probabilities are well defined: $$\label{KOL3}
p^b(\beta) \equiv {\bf P}_x( b=\beta)= \lim_{N\to \infty} \nu_N(\beta; x), \;\; \beta \in X_b;$$ $$\label{KOL3a}
p^a(\alpha) \equiv {\bf P}_y( a=\alpha)= \lim_{N\to \infty} \nu_N(\alpha; y), \;\; \alpha\in X_a.$$ Here $\nu_N(\beta; x)$ and $ \nu_N(\alpha; y)$ are frequencies of observations of values $b=\beta$ and $a=\alpha,$ respectively (under the complex of conditions $C).$
Let $C_{\alpha}, \alpha\in X_a,$ be contexts corresponding to $\alpha$-filtrations, see Axiom 1. By observation of $b$ under the context $C_\alpha$ we obtain a sequence: $$\label{KOL4}
x^{\alpha} \equiv x(b/C_\alpha) = (x_1, x_2,..., x_{N},...), \;\; x_j \in X_b.$$ It is also assumed that for sequences of observations $x^{\alpha}, \alpha\in X_a,$ the principle of statistical stabilization for relative frequencies holds true and the frequency probabilities are well defined: $$\label{KOL5}
p^{b/a}(\beta/\alpha) \equiv {\bf P}_{x^{\alpha}}(b=\beta)= \lim_{N \to \infty} \nu_{N}(\beta; x^{\alpha}), \;\;
\beta \in X_b.$$ Here $\nu_N(\beta; x^\alpha), \alpha\in X_a,$ are frequencies of observations of value $b=\beta$ under the complex of conditions $C_\alpha.$ We obtain probability distributions: $$\label{KKK4}
{\bf P}_x(\beta), \;\; {\bf P}_y (\alpha), \;
{\bf P}_{x^{\alpha}}(\beta),\;\;\alpha\in X_a, \beta \in X_b.$$ We can repeat all previous considerations by changing $b/a$-conditioning to $a/b$-conditioning. We consider contexts $C_\beta, \beta \in X_b,$ corresponding to selections with respect to values of the observable $b$ and the corresponding collectives $y^{\beta}\equiv y(a/C_\beta)$ induced by observations of $a$ in contexts $C_\beta.$ There can be defined probabilities $p^{a/b}(\alpha/\beta)\equiv {\bf P}_{y^{\beta}}(\alpha).$ Combining these data with data (\[KKK4\]) we obtain $$D({\cal O}, C)= \{ p^a(\alpha), p^b(\beta), p^{b/a}(\beta/\alpha), p^{a/b}(\alpha/\beta): \alpha\in X_a, \beta \in X_b\}$$
This data gives a statistical contextual image of reality based on reference observables $a$ and $b.$ As was remarked, there exist various mathematical methods for encoding of data $D({\cal O}, C),$ e.g., in some cases by complex amplitudes – complex quantum-like representations.
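The principle of statistical stabilization used above can be illustrated with simulated observations. This is only a sketch: the 0.7 bias, the seed and the sample size are arbitrary illustrative choices, not data from the text.

```python
import random

def frequency_probability(sequence, value):
    """Relative frequency nu_N(value; x) for a finite observed sequence,
    the quantity whose large-N limit defines the frequency probability."""
    return sequence.count(value) / len(sequence)

# Simulated observations of a dichotomous observable b under a context C
random.seed(0)
x = ["+" if random.random() < 0.7 else "-" for _ in range(100_000)]
print(frequency_probability(x, "+"))  # stabilizes near 0.7 as N grows
```

In von Mises's frequency approach the probability ${\bf P}_x(b=\beta)$ is, by definition, the limit of such relative frequencies along the collective $x$ generated by the context.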
**10. Representation in a complex Hilbert space**
Let $M$ be a contextualistic statistical model such that ${\cal O}$ contains only two observables $a$ and $b.$ For any context $C\in {\cal C},$ by using the statistical data $D(a,b, C)$ we can compute a quantity $\lambda(\beta/\alpha, C), \alpha \in X_a, \beta\in X_b,$ see \[6\]-\[10\]. This quantity was called in \[6\]-\[10\] a [*measure of statistical disturbance*]{} (of the $b$-observable by the $a$-observations under the context $C).$ If $$\label{EN}
\vert \lambda(\beta/\alpha, C)\vert \leq 1$$ for all $\alpha \in X_a, \beta\in X_b,$ then the data $D({\cal O}, C)$ can be represented (by using the formula of total probability with interference term) by a complex amplitude $\phi_C$ or, in the abstract framework, by an element of the unit sphere $U_1$ of the complex Hilbert space $H.$ Denote the family of all contexts which satisfy (\[EN\]) by the symbol ${\cal C}^{\rm{tr}}.$ We have the map: $$\label{M1}
J : {\cal C}^{\rm{tr}} \to U_1$$ We emphasize that $J$ is determined by the reference observables $a$ and $b.$ Thus (\[M1\]) is a Hilbert space representation of contexts determined by these concrete reference observables. The map $J$ is not one-to-one. Thus by representing contexts by complex amplitudes we lose a lot of information about them.
The map (\[M1\]) induces \[8\] a map: $$\label{M2}
L: {\cal O} \to L (H),$$ where $L (H)$ is the set of self-adjoint operators. Probability distributions of operators $\hat{a}= L(a)$ and $\hat{b}= L(b)$ (calculated by using quantum Hilbert space framework) in the state $\phi_C$ coincide with $ p^a(\alpha)$ and $p^b(\beta).$
If for a context $C$ we find that $$\label{EN1}
\vert \lambda(\beta/\alpha, C)\vert \geq 1$$ then $C$ can be represented by a hyperbolic amplitude.
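For a dichotomous reference observable $a$, one common form of the measure of statistical disturbance in \[6\]-\[10\] can be sketched as below. The normalisation used here is an assumption on my part; see the cited papers for the precise definition.

```python
import math

def statistical_disturbance(p_b, p_a, p_b_given_a):
    """Assumed form of Khrennikov's measure of statistical disturbance for a
    dichotomous observable a:
      lambda = (p^b(beta) - sum_alpha p^a(alpha) p^{b/a}(beta/alpha))
               / (2 * sqrt(prod_alpha p^a(alpha) p^{b/a}(beta/alpha)))
    i.e. the deviation from the classical formula of total probability,
    normalised so that |lambda| <= 1 admits a cos-interference reading."""
    classical = sum(p_a[a] * p_b_given_a[a] for a in p_a)
    norm = 2.0 * math.sqrt(math.prod(p_a[a] * p_b_given_a[a] for a in p_a))
    return (p_b - classical) / norm

# Illustrative numbers for one value beta of b under a context C
p_a = {"+": 0.5, "-": 0.5}
p_b_given_a = {"+": 0.4, "-": 0.6}
lam = statistical_disturbance(p_b=0.55, p_a=p_a, p_b_given_a=p_b_given_a)
rep = "complex (trigonometric)" if abs(lam) <= 1 else "hyperbolic"
print(lam, rep)
```

When the data obey the classical formula of total probability the measure vanishes; when it exceeds 1 in absolute value the context falls into the hyperbolic regime described above.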
**11. Systems, ensemble representation**
We now complete the contextualistic statistical model by considering systems $\omega$ (e.g., physical, cognitive, or social). Systems are also [**elements of reality.** ]{} In our model a context $C \in {\cal C}$ is represented by an ensemble $S_C$ of systems which have interacted with $C.$ For such systems we shall use the notation: $$\omega \hookleftarrow C$$ The set of all (e.g., physical, cognitive, or social) systems which are used to represent all contexts $C\in {\cal C}$ is denoted by the symbol $\Omega\equiv \Omega({\cal C}).$ Thus we have a map: $$\label{VMM}
C \to S_C=\{ \omega\in \Omega: \omega \hookleftarrow C \}.$$ This is the ensemble representation of contexts. We set $${\cal S}\equiv {\cal S}({\cal C})=\{S: S=S_C, C \in {\cal C}\}.$$ The ensemble representation of contexts is given by the map (\[VMM\]) $$I: {\cal C} \to {\cal S}$$ Reference observables ${\cal O}$ are now interpreted as observables on systems $\omega\in \Omega.$ In principle, we can interpret values of observables as [*objective properties*]{} of systems. Contrary to a very common opinion, such models (with realistic observables) can have nontrivial quantum-like representations (in complex and hyperbolic Hilbert spaces) which are based on the formula of total probability with interference terms.
Probabilities are defined as ensemble probabilities, see \[21\].
[**Definition 3.**]{} [*The ensemble representation of a contextualistic statistical model $M =({\cal C}, {\cal O}, {\cal D}({\cal O}, {\cal C}))$ is a triple $$\label{VM1}
S(M) =({\cal S}, {\cal O}, {\cal D}({\cal O}, {\cal C}))$$ where ${\cal S}$ is a set of ensembles representing contexts ${\cal C}$, ${\cal O}$ is a set of observables, and ${\cal D}({\cal O}, {\cal C})$ is probabilistic data about ensembles ${\cal S}$ obtained with the aid of observables ${\cal O}.$*]{}
[**12. Algebraic structure on the set of reference observables**]{}
We do not assume the presence of any algebraic structure on ${\cal O}.$ Even if these observables take values in some set endowed with an algebraic structure, e.g., in ${\bf R},$ we do not assume that this structure induces (in the standard way) the corresponding algebraic structure on ${\cal O}.$ If $a,b \in {\cal O}$ take values in ${\bf R},$ it does not imply that $d=a+b$ is well defined as an observable on every context $C\in {\cal C}.$ In the general contextual approach it is very clear why we cannot do this. If $a$ and $b$ are not compatible, then we cannot measure them simultaneously under a context $C$ at a fixed instant of time and form $d=a+b.$ But a reader may say:
“You use the realistic interpretation of the reference observables in a model $M.$ Thus one can form the sum $d=a+b.$”
a). By the realistic contextualistic interpretation, $a(t)$ and $b(t)$ are objective properties of a context $C$ at the instant of time $t.$ There is defined $d(t)=a(t) + b(t)$.
b). By the realistic interpretation of the model with systems, $a(\omega)$ and $b(\omega)$ are objective properties of a system $\omega.$ There is defined $d(\omega)=a(\omega) + b(\omega)$.
However, this is the ontic or “hidden” sum, and the representation (\[M2\]) cannot be extended to such sums. Quantum theory cannot tell us anything about $d=a+b$ as a pointwise observable. Of course, we can define the sum of operators $\hat{d}= \hat{a}+\hat{b},$ but in general this operator would represent not the ontic observable $d,$ but another observable $d_{\rm{quant}}.$ The observables $d$ and $d_{\rm{quant}}$ can have different probability distributions! (see \[8\]). Nevertheless (and this seems to be crucial for the use of quantum theory), the averages of these observables coincide: $$\label{SUM}
<d_{\rm{quant}}>\equiv <\hat{d}>= <d>$$ This is a consequence of the linearity of both quantum (Hilbert space) and classical probabilistic averages, and of the coincidence of the probability distributions of the reference observables and their representatives in the Hilbert space.
**13. Realist and empiricist interpretations of quantum mechanics**
We emphasize again that up to now we have not been considering an interpretation of quantum mechanics. We have proposed a contextualistic statistical model of physical reality. Sometimes this model can be mathematically described by using the formalism of classical mechanics, sometimes quantum, sometimes hyperbolic, and so on. However, it is useful to discuss the relation of our model to the models of physical reality corresponding to various interpretations of quantum mechanics. Here we follow P. Busch, M. Grabowski and P. J. Lahti \[15\], de Muynck, De Baere and Martens \[32\], and L. Ballentine \[33\].
[**13.1. Empiricist interpretation.**]{} In this interpretation the formalism of quantum mechanics does not describe reality as such. It only serves to calculate probabilities (relative frequencies) of certain phenomena that can be interpreted as corresponding to the results of a quantum measurement. The probabilities are conditioned on certain procedures, to be interpreted as quantum mechanical preparation procedures. Thus, the wave function or density operator can be interpreted as symbolizing a preparation procedure; in the same way a hermitian operator describes symbolically a quantum mechanical measurement. Wave function and hermitian operator are not thought to correspond to something existing in microscopic reality. They are just labels of (macroscopic) instruments that can be found in the laboratory. QM is thought to describe only (cor)relations of preparation acts and measurement phenomena. It is also important for our further considerations to underline that in an empiricist interpretation of QM the eigenvalues of the hermitian operator do not play a significant role, because these eigenvalues do not correspond to properties of the microscopic object. The empiricist interpretation has achieved great popularity because of its antimetaphysical flavor: physics must be about observables only, and about nothing else. Hence, in this interpretation neither the wave function nor the observable is to be taken as a property of the microscopic object system.
[**13.2. Realist interpretation.**]{} In this interpretation the values of physical observables are considered as objective properties – properties of objects (physical systems).
[**13.3. Växjö interpretation: realism of contexts.**]{} In the Växjö approach quantum mechanics (as a physical theory) is a particular contextualistic statistical model of reality in which the probabilistic data ${\cal D}({\cal O}, {\cal C})$ can be encoded by complex amplitudes. This view of the quantum formalism induces the Växjö interpretation of quantum mechanics. This is a [*contextualistic statistical realistic interpretation*]{} of quantum mechanics, and Växjö realism is realism of contexts and reference observables.
The Växjö interpretation of quantum mechanics is quite close to the empiricist interpretation. The crucial difference is that under the Växjö interpretation quantum mechanics is about reality – the reality of contexts – and not about preparation and measurement procedures. Contexts exist independently of our measurement activity, and values of reference observables $a\in {\cal O}$ are objective properties of contexts. The spatial scale plays no role, because the description of reality is purely probabilistic. Quantum probabilistic behaviour is a consequence of the complementarity of information for reference observables. Such complementarity of information can take place at microscopic as well as macroscopic scales and, moreover, not only in physics but in any domain of the natural and social sciences.
We remark that by considering context as an element of reality we eliminated the important difference between the realist and empiricist interpretations – [*the wave function is considered as a description of the result of preparation rather than as a symbolic representation of the preparation itself.*]{} If a model $M$ has a quantum(-like) complex representation, then the wave function represents a context – a complex of physical (or biological, ...) conditions.
[**13.4. Växjö interpretation: realism of contexts, systems and observables.**]{} Let us now consider the completed Växjö model, which contains physical systems; contexts are represented by ensembles of systems. Physical observables are considered as objective properties of systems. As in the general contextualistic model, quantum mechanics (as a physical theory) is about a rather special class of contexts ${\cal C}^{\rm{tr}}$ such that the probabilistic data ${\cal D}({\cal O}, {\cal C}^{\rm{tr}})$ can be encoded by complex amplitudes. The only difference is that probabilities are defined as ensemble probabilities. This interpretation of quantum mechanics is very close to the well-known ensemble interpretation, which was strongly supported by A. Einstein (see the introduction); L. Ballentine called it the statistical interpretation, see \[33\]. A difference is that in our model we start with the reality of contexts, which can be (but need not be) represented by ensembles.
But this is not the main difference. The main difference is that we did not start with an interpretation of one special mathematical formalism, the calculus of probabilities in complex Hilbert spaces. We started with a general contextual statistical model of reality and then demonstrated that some special contexts can be represented by quantum-like complex amplitudes. The interpretation of such amplitudes follows automatically from the basic contextual statistical model.
[**References**]{}
1\. A. Yu. Khrennikov, Växjö interpretation of quantum mechanics, http://xxx.lanl.gov/abs/quant-ph/0202107.
2\. A. Yu. Khrennikov, On foundations of quantum theory. Proc. Conf. [*Quantum Theory: Reconsideration of Foundations,*]{} ed. A. Yu. Khrennikov. Ser. Math. Modelling, [**2**]{}, 163-196, Växjö Univ. Press (2002).
3\. A. Fine, [*The Shaky game.*]{} Univ. Chicago Press, Chicago/London (1988).
4\. M. Lockwood, What Schrödinger should have learned from his cat? In [*Erwin Schrödinger: Philosophy and the birth of quantum mechanics,*]{} eds. M. Bitbol, O. Darrigol. Editions Frontieres, Gif-sur-Yvette (1992).
5\. A. Yu. Khrennikov, Origin of quantum probabilities. Proc. Conf. [*Foundations of Probability and Physics.*]{} In: [*Q. Probability and White Noise Analysis*]{}, [**13**]{}, 180-200, WSP, Singapore (2001).
6\. A. Yu. Khrennikov, [*Linear representations of probabilistic transformations induced by context transitions.*]{} [*J. Phys. A: Math. Gen.,*]{} [**34**]{}, 9965-9981 (2001); http://xxx.lanl.gov/abs/quant-ph/0105059
7\. A. Yu. Khrennikov, Contextual viewpoint to quantum stochastics. [*J. Math. Phys.*]{}, [**44**]{}, 2471- 2478 (2003).
8\. A. Yu. Khrennikov, Representation of the Kolmogorov model having all distinguishing features of quantum probabilistic model. [*Phys. Lett. A*]{}, [**316**]{}, 279-296 (2003).
9\. A. Yu. Khrennikov, Hyperbolic quantum mechanics. [*Advances in Applied Clifford Algebras,*]{} [**13**]{}(1), 1-9 (2003).
10\. A. Yu. Khrennikov, Interference of probabilities and number field structure of quantum models. [*Annalen der Physik,*]{} [**12**]{}, 575-585 (2003).
11\. G. W. Mackey, [*Mathematical foundations of quantum mechanics.*]{} W. A. Benjamin Inc., New York (1963).
12\. A. Lande, [*Foundations of quantum theory.*]{} Yale Univ. Press (1955).
A. Lande, [*New foundations of quantum mechanics.*]{} Cambridge Univ. Press, Cambridge (1968).
13\. G. Ludwig, [*Foundations of quantum mechanics.*]{} Springer, Berlin (1983).
14\. K. Kraus, [*States, effects and operations.*]{} Springer-Verlag, Berlin (1983).
15\. P. Busch, M. Grabowski, P. Lahti, Operational Quantum Physics, Springer Verlag (1995).
16\. S. P. Gudder, [*Axiomatic quantum mechanics and generalized probability theory.*]{} Academic Press, New York (1970).
17\. A. S. Holevo, [*Probabilistic and statistical aspects of quantum theory.*]{} North-Holland, Amsterdam (1982).
A. S. Holevo, [*Statistical structure of quantum theory.*]{} Springer, Berlin-Heidelberg (2001).
18\. P. A. M. Dirac, [*The Principles of Quantum Mechanics.*]{} Oxford Univ. Press (1930).
19\. R. Feynman and A. Hibbs, [*Quantum Mechanics and Path Integrals.*]{} McGraw-Hill, New-York (1965).
20\. R. von Mises, [*The mathematical theory of probability and statistics.*]{} Academic Press, London (1964).
21\. A. Yu. Khrennikov, [*Interpretations of Probability.*]{} VSP Int. Sc. Publishers, Utrecht/Tokyo (1999).
22\. A. Yu. Khrennikov, ed., Proc. Int. Conf. [*Quantum Theory: Reconsideration of Foundations.*]{} Ser. Math. Modelling, [**2,**]{} Växjö Univ. Press (2002).
23\. A. Yu. Khrennikov, ed., Proc. Conf. [*Foundations of Probability and Physics-2,*]{} Ser. Math. Modelling, [**5,**]{} Växjö Univ. Press (2002).
24\. A. Yu. Khrennikov, ed., Proc. Int. Conf. [*Quantum Theory: Reconsideration of Foundations-2.*]{} Ser. Math. Modelling, [**10,**]{} Växjö Univ. Press (2004).
25\. C. Fuchs, The anti-Växjö interpretation of quantum mechanics. Proc. Int. Conf. [*Quantum Theory: Reconsideration of Foundations.*]{} ed. A. Yu. Khrennikov, Ser. Math. Modelling, [**2**]{}, 99-116, Växjö Univ. Press, 2002; http://www.msi.vxu.se/forskn/quantum.pdf
26\. A. Plotnitsky, The spirit and the letter of Copenhagen: a response to Andrei Khrennikov, http://xxx.lanl.gov/abs/quant-ph/0206026.
27\. A. Einstein, B. Podolsky, N. Rosen, [*Phys. Rev.*]{}, [**47**]{}, 777–780 (1935).
28\. A. Plotnitsky, Quantum atomicity and quantum information: Bohr, Heisenberg, and quantum mechanics as an information theory, Proc. Conf. [*Quantum theory: reconsideration of foundations,*]{} ed: A. Yu. Khrennikov, Ser. Math. Modelling, [**2**]{}, 309-343, Växjö Univ. Press (2002).
A. Plotnitsky, Reading Bohr: Complementarity, Epistemology, Entanglement, and Decoherence. Proc. NATO Workshop ”Decoherence and its Implications for Quantum Computations”, Eds. A.Gonis and P.Turchi, p.3–37, IOS Press, Amsterdam, 2001.
29\. A. Yu. Khrennikov, On cognitive experiments to test quantum-like behaviour of mind. [*Rep. Växjö Univ.: Math. Nat. Sc. Tech.,*]{} N 7 (2002); http://xxx.lanl.gov/abs/quant-ph/0205092.
A. Yu. Khrennikov, Quantum-like formalism for cognitive measurements. [*Biosystems,*]{} [**70**]{}, 211-233 (2003).
30\. E. Conte, O. Todarello, A. Federici, T. Vitiello, M. Lopane, A. Yu. Khrennikov, A Preliminary Evidence of Quantum Like Behavior in Measurements of Mental States. [*Reports from MSI, Växjö Univ.,*]{} N. 03090, 2003;\
http://xxx.lanl.gov/abs/quant-ph/0307201.
31\. A. Grib, A. Khrennikov, K.Starkov, Probability amplitude in quantum like games. [*Reports from MSI, Växjö Univ.,*]{} N. 03088, 2003;\
http://xxx.lanl.gov/abs/quant-ph/0308074.
32\. W. M. de Muynck, W. De Baere, and H. Martens, Interpretations of quantum mechanics, joint measurement of incompatible observables, counterfactual definiteness. [*Found. Physics,*]{} [**24**]{}, N. 12 (1994).
33\. L. E. Ballentine, [*Rev. Mod. Phys.*]{}, [**42**]{}, 358 (1970).
L. E. Ballentine, [*Quantum mechanics.*]{} Englewood Cliffs, New Jersey (1989).
[^1]: Supported in part by the EU Human Potential Programme, contact HPRN–CT–2002–00279 (Network on Quantum Probability and Applications) and Profile Math. Modelling in Physics and Cogn. Sc. of Växjö University.
[^2]: This problem was well known to the founders of quantum theory; see, for example, the correspondence between A. Einstein and E. Schrödinger (see \[3\] for English translation and comments; see also \[4\]). E. Schrödinger did not like the Copenhagen interpretation; in particular, he created his cat just to demonstrate the absurdity of this interpretation. Unfortunately, people have practically forgotten about this, see \[4\] for details. We also recall that Schrödinger’s cat was just a modification of Einstein’s example “involving a charge of gunpowder in a state of unstable chemical equilibrium”, see \[4\] (letter of Einstein to Schrödinger, 8 August 1935, see \[3\], p. 78). But neither Einstein nor Schrödinger could combine a realistic ensemble model with quantum statistics. In particular, Schrödinger wrote to Einstein that he would accept the realistic statistical interpretation of quantum mechanics if the interference of probabilities could be explained, see \[4\]. Consequently the Copenhagen interpretation has preserved its status as the official quantum ideology until the present time.
[^3]: And this is one of the advantages of my contextual statistical approach, cf. G. Mackey \[11\], A. Lande \[12\], G. Ludwig \[13\], K. Kraus \[14\]; see also P. Busch, M. Grabowski, P. Lahti \[15\], S. Gudder \[16\], and A. Holevo \[17\]. In our approach everything is trivial: complex amplitudes are constructed automatically on the basis of the formula of total probability with an interference term, see \[6\]-\[10\]. Moreover, besides the ordinary complex Hilbert space representation, there exist the hyperbolic one and a mixed hyper-trigonometric representation, see \[10\]. I emphasize that the study of former approaches, especially the investigations of A. Lande \[12\] and G. Mackey \[11\], was very important for me. However, the starting point was the books of P. Dirac \[18\] and R. Feynman \[19\], in which they paid attention to the mystery of quantum interference of probabilities.
[^4]: See http://www.msi.vxu.se/aktuellt/konferens/index.html and \[22\]–\[24\].
[^5]: In particular, our approach implies that quantum mechanics is not complete.
[^6]: We use the notion “elements of physical reality” in common sense. There is no direct coupling with the EPR sufficient condition for values of physical observables to be elements of physical reality, see \[27\]. Moreover, in general the Växjö model need not contain physical systems. Thus even the formulation of the question: “Can values of observables be considered as objective properties of physical systems?” – is in general meaningless. We shall come to the problem of reality of quantum observables as observables on physical systems (i.e., classical or EPR reality) in section 11. There we shall present the Växjö model completed by physical systems.
[^7]: We remark that so far we do not speak about an interpretation of quantum mechanics. We are presenting an approach to modeling of physical reality. The quantum representation is possible only for some class of models, ${\cal M}_{\rm{quantum}}.$ The class ${\cal M}_{\rm{quantum}}$ is a very special subclass of the class of contextualistic statistical models.
[^8]: We shall denote observables by Latin letters, $a,b,...,$ and their values by Greek letters, $\alpha, \beta,...$
[^9]: Such amplitudes are constructed by using a formula of total probability with $\cosh$-interference term (“hyperbolic interference”), see \[10\].
---
abstract: |
Calibration is a basic property for prediction systems, and algorithms for achieving it are well-studied in both statistics and machine learning. In many applications, however, the predictions are used to make decisions that select which observations are made. This makes calibration difficult, as adjusting predictions to achieve calibration changes future data. We focus on click-through-rate (CTR) prediction for search ad auctions. Here, CTR predictions are used by an auction that determines which ads are shown, and we want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to achieve, depending on the details of the auction. We also show that it can be impossible to maximize auction efficiency while using calibrated predictions. Finally, we give conditions under which calibration is achievable and simultaneously maximizes auction efficiency: roughly speaking, bids and queries must not contain information about CTRs that is not already captured by the predictions.
author:
- |
H. Brendan McMahan\
`mcmahan@google.com`
- |
Omkar Muralidharan\
`omuralidharan@google.com`
bibliography:
- '../new.bib'
- '../my\_pubs.bib'
- 'calibration\_references.bib'
date: 'Google, Inc.'
title: |
On Calibrated Predictions\
for Auction Selection Mechanisms
---
Introduction
============
Calibration is a fundamental measure of accuracy in prediction problems: if we group all the events a predictor says happen with probability $p$, about a $p$ fraction should occur. This property has been extensively studied in the stochastic and online settings.
We study problems where the predictions themselves partially determine which events occur. Our general approach applies to many problems where predictions are used to make decisions, but we are motivated in particular by the application to search engine advertising. Over the past decade, this business has grown to tens of billions of dollars, and prediction systems play a fundamental role.
In a typical interaction, first a user issues a query (say “flowers”) on a search engine. Then, the search engine selects a set of candidate ads that can be shown on the given query, based on keywords provided by advertisers. This process of queries and candidate sets can be reasonably approximated as IID. A prediction is made for each candidate ad, and an auction ranks the ads based on the prediction and the bid of the advertiser. Typically, the bid indicates the value of a click to the advertiser, and the score is simply the product of the bid and the prediction, giving an estimate of the value generated by showing the ad. Finally, some of these ads are shown to the user (we consider two models: the single top-ranked ad is shown, or all the ads with scores above a certain threshold are shown). This auction selection mechanism has been extensively studied, and has many nice properties [@varian07auctions; @edelman07internet].
In this setting, an important measure of the quality of the predictions is how much value the auction generates (equivalently, how *efficient* are the allocations produced by the auction). The auction mechanisms we consider are in fact designed to maximize the combined value to the search engine and advertiser if bids accurately reflect value and the true click-through-rates (CTRs) are known.
The algorithm used to predict CTRs for such a system faces many constraints already, for example, the need to process enormous volumes of data quickly and produce predictions with extremely low latency (e.g., [@graepel10webscale]). Thus, rather than advocating new algorithms, we focus on applying a post-correction via a prediction map to the outputs of an existing system in order to improve the quality of the predictions.
We consider two main questions. Informally stated: 1) Do efficiency-maximizing prediction maps with calibration properties exist, and can they be found in a computationally efficient way? 2) If we iteratively calibrate our predictions so they match observed CTRs, does the process converge? And if so, is the resulting prediction map efficiency-maximizing?
#### Outline and Summary of Results
We formalize our model and questions in Section \[sec:formalization\], where we introduce two primary variants of the selection mechanism that lead to different properties; Section \[sec:all\] and \[sec:one\] investigate these mechanisms in the general case. We demonstrate that without further assumptions, in both our models it may be impossible for a deterministic prediction map to produce calibrated predictions on the ads it serves, and iterative calibration procedures can fail badly. Since some deterministic map always maximizes value, this is unfortunate. When all ads above a certain threshold are shown, we give an algorithm for finding this value-maximizing map in polynomial time, but when the single highest-rated ad is shown, we prove finding the value-maximizing map is NP-hard (even if we knew the true CTRs).
In Section \[sec:cond\] we introduce additional assumptions that are sufficient to guarantee calibration procedures are well-behaved. While these assumptions are fairly strong, they are not unreasonable for real systems. Our strongest assumption is essentially that in all cases bid and query provide no more information than the raw prediction about average CTRs; under this assumption, we can show in both selection models a value-maximizing and calibrated prediction map exists. Under threshold selection, somewhat weaker conditions are in fact sufficient.
#### Related Work
Calibration has been extensively studied. Much of the earliest work is in the probabilistic forecasting literature [@brier50verification; @dawid82well; @Ranjan2010]. Calibration is particularly important when comparing predictors, since two sets of calibrated predictions can be fairly evaluated by how concentrated they are on observed outcomes [@DeGroot1983; @Gneiting2007; @Gneiting2007a]. Calibration also makes it easier to use predictions. For example, it is easier to threshold the output of a calibrated classifier to minimize weighted classification error [@Cohen2004].
Not all prediction systems are naturally calibrated. However, when examples are drawn IID, if we have a good but uncalibrated predictor, we can calibrate it by applying a prediction map. For example, boosted trees are uncalibrated, but become excellent probability estimators after calibration [@Niculescu-Mizil2005; @Caruana2006]. The two most common methods for calibration are Platt scaling, which is equivalent to logistic regression, and isotonic regression [@Platt1999; @Zadrozny2002; @caruana05prob; @caruana06compare].
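As an illustration (ours, not from the paper), isotonic-regression calibration is typically fit with the pool-adjacent-violators (PAV) algorithm; a minimal Python sketch, with hypothetical function names, assuming IID `(score, outcome)` pairs with 0/1 click outcomes:

```python
def pav_calibrate(raw_scores, outcomes):
    """Pool Adjacent Violators: fit a monotone (isotonic) map from raw
    scores to calibrated probabilities. Assumes IID (score, outcome)
    pairs; outcomes are 0/1 click indicators."""
    pairs = sorted(zip(raw_scores, outcomes))
    merged = []  # blocks of [sum_of_outcomes, count]; block means end up non-decreasing
    for _, y in pairs:
        merged.append([y, 1])
        # Pool adjacent blocks whose means violate monotonicity.
        while len(merged) >= 2 and \
                merged[-2][0] / merged[-2][1] > merged[-1][0] / merged[-1][1]:
            s, n = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += n
    # Expand pooled block means back to one fitted value per sorted example.
    fitted = []
    for s, n in merged:
        fitted.extend([s / n] * n)
    return [x for x, _ in pairs], fitted
```

The fitted values define a step function that can then be applied to new raw scores, e.g. by interpolation or nearest-neighbor lookup.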
Calibration is also studied in the online setting, where no stochastic assumptions are made on the sequence of examples; in the worst case, they could be chosen by an adversary that sees our predictions. It is easy to see that in this setting, no deterministic classifier (or prediction map) can produce calibrated predictions for all sequences. However, if the system is allowed to use randomness (that is, predict a distribution), then calibration can be achieved ([@foster96calibration; @Foster1999] and [@cesabianchi06plg Sec 4.5]).
Problem Formalization {#sec:formalization}
=====================
The interaction of calibration and selection has received little direct attention in the literature, so constructing a suitable model requires some care: we require a formulation that is theoretically tractable but still captures the key characteristics of the real-world problems of interest.
We begin by defining our units of prediction (queries and ads) and the mechanism used to select them (auctions). We assume a fixed, existing prediction system provides a raw prediction for each ad; our study will then concern prediction maps, functions that attempt to map these raw predictions to calibrated probabilities. Once this framework is established, we can formally state the questions we study.
We model the interaction between a search engine’s users and advertising system. There is a fixed finite set of queries ${\mathcal{Q}}$ (strings like “flowers” or “car insurance” typed into the search engine), which are chosen according to distribution ${{\text{Pr}}^{\mathcal{Q}}}(q)$ for $q
\in {\mathcal{Q}}$. There is also a fixed finite set of ads ${\mathcal{C}}$ which can be shown alongside queries. Each ad $i \in {\mathcal{C}}$ is defined by tuple $(p_i,
b_i, z_i, q_i)$ where $q_i \in {\mathcal{Q}}$ is the (only) query for which ad $i$ can show,[^1] $p_i$ is the true probability of a click, $b_i$ is the bid (the maximum amount the advertiser is willing to pay for a click), and $z_i \in {\ensuremath{\left\{1, \dots, K\right\}}}$ is a bucketed estimate of $p_i$ (we call $z_i$ the raw prediction). That is, we assume the predictions of the underlying prediction system have been discretized into $K$ buckets. We drop the $q$ (and sometimes $z$) from the ad tuples when those values are clear from context. Each ad can show for a single query $q$, so we define ${\mathcal{C}}(q) \equiv {\ensuremath{\left\{i {\!\mid\!}q_i = q\right\}}}$, the indexes of the candidate ads for query $q$.
Our goal is to find good prediction maps $f : {\ensuremath{\left\{1, \dots, K\right\}}}
\rightarrow [0, 1]$. The prediction map will be used in the auction selection mechanism: First, a query is sampled from ${{\text{Pr}}^{\mathcal{Q}}}$, and then the candidate ads for that query are ranked by $b \cdot f(z)$ (we drop the subscripts when we mean an arbitrary ad). We consider two models for which ads show:
[`ONE`]{}:
: We only show a single ad. If multiple ads achieve the highest value of $b \cdot f(z)$, we pick one uniformly at random.
[`ALL`]{}:
: We show all ads where $b \cdot f(z) - 1 \ge 0$.
Mechanism [`ONE`]{} models the case of an oversold auction, where ads with different raw predictions $z$ must compete for a single position. Mechanism [`ALL`]{} models the case where all eligible ads with positive predicted value can be shown. In general, mechanism [`ALL`]{} is much easier to work with theoretically, because for $z_1 \neq z_2$, changing $f(z_1)$ does not change which ads with prediction $z_2$ are shown. In either case, we assume any candidate $(p, b, z)$ which is shown is clicked with probability $p$.[^2]
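The two selection mechanisms can be sketched as follows (a hypothetical Python rendering, not from the paper; ads are `(p, b, z)` tuples and `f` maps a raw prediction bucket to $[0,1]$):

```python
import random

def select_ALL(candidates, f):
    """Mechanism ALL: show every ad with b * f(z) - 1 >= 0.
    Candidates are (p, b, z) tuples; f maps raw prediction z to [0, 1]."""
    return [ad for ad in candidates if ad[1] * f(ad[2]) >= 1.0]

def select_ONE(candidates, f):
    """Mechanism ONE: show the single ad maximizing b * f(z),
    breaking ties uniformly at random."""
    if not candidates:
        return []
    best = max(b * f(z) for (_p, b, z) in candidates)
    tied = [ad for ad in candidates if ad[1] * f(ad[2]) == best]
    return [random.choice(tied)]
```

Note the structural difference: under `select_ALL`, whether an ad with bucket $z_2$ shows is unaffected by changes to $f(z_1)$, while under `select_ONE` all buckets compete.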
#### Distributions on Ads
Other than the distribution ${{\text{Pr}}^{\mathcal{Q}}}$, all probabilities and expectations will be with respect to some distribution on the set of candidate ads ${\mathcal{C}}$. Two distributions will be of particular importance: ${{\text{Pr}}_{{\mathcal{C}}}}$, the uniform distribution over candidate ads, and ${\text{Pr}}_f$, the distribution of ads shown by a prediction map $f$. We formalize these as follows:
${{\text{Pr}}_{{\mathcal{C}}}}$ is the distribution on ads where ${{\text{Pr}}_{{\mathcal{C}}}}(i)$ is proportional to ${{\text{Pr}}^{\mathcal{Q}}}(q_i)$. That is, letting $
C \equiv \sum_{i \in {\mathcal{C}}} {{\text{Pr}}^{\mathcal{Q}}}(q_i),
$ we have $
{{\text{Pr}}_{{\mathcal{C}}}}(i) = \frac{{{\text{Pr}}^{\mathcal{Q}}}(q_i)}{C}.
$ This is not the same as choosing a random query $q$ from ${{\text{Pr}}^{\mathcal{Q}}}$ and then choosing a random candidate. For example, suppose there are two queries $q_1$ and $q_2$, with ${{\text{Pr}}^{\mathcal{Q}}}(q_1) = {\frac{1}{2}}$ and ${{\text{Pr}}^{\mathcal{Q}}}(q_2) =
{\frac{1}{2}}$. There is one candidate $a_1$ for query $q_1$, and two candidates, $a_2$ and $a_3$ for query $q_2$. Then, ${{\text{Pr}}_{{\mathcal{C}}}}(a_i) =
1/3$ for each ad, which means the marginal probability ${{\text{Pr}}_{{\mathcal{C}}}}(q_1) =
\frac{1}{3}$ and ${{\text{Pr}}_{{\mathcal{C}}}}(q_2) = \frac{2}{3}$. One can think of ${{\text{Pr}}_{{\mathcal{C}}}}$ as the distribution on ads shown if we showed all the eligible candidates for each query that occurs.
${\text{Pr}}_f$ for a prediction map $f$ is the distribution on ads where ${\text{Pr}}_f(i)$ is proportional to $w_i \equiv {{\text{Pr}}^{\mathcal{Q}}}(q_i)
{\text{Pr}}(\text{ad $i$ shows} {\!\mid\!}q_i, f)$. The second term is actually only random in the case of selection mechanism [`ONE`]{}, when randomness is used to break ties. The distribution ${\text{Pr}}_f$ is thus the distribution on ads shown when serving using prediction map $f$. Using this notation, ${\text{Pr}}_f(i {\!\mid\!}q) ={\text{Pr}}(\text{ad $i$ shows} \mid
q_i, f)$.
We use ${\mathbb{E}_{{\mathcal{C}}}}[\cdot]$ and ${\mathbb{E}}_f[\cdot]$ for the corresponding expectations.
#### Calibration
We say a prediction map $f$ is calibrated on a distribution on ads $D$ if $$\forall z, \quad
\underbrace{{\mathbb{E}}_{(p,b,z,q) \sim D}[ p {\!\mid\!}z]}_{ \text{Average CTR
given $z$}} = \underbrace{f(z).}_{ \text{Predicted CTR given $z$}}$$
The choice of the distribution $D$ in the above definition is critical; a single $f$ will in general not be able to achieve calibration for multiple $D$. For the auction selection problem, the natural distribution to consider is ${\text{Pr}}_f$. Thus, we will be particularly concerned with finding *self-calibrated* prediction maps $f$, which satisfy $$\forall z, \quad {\mathbb{E}}_f[ p {\!\mid\!}z] = f(z).$$
In general one may not be able to estimate ${\mathbb{E}}_f[\, p\! \mid\! z \,]$ exactly, and so calibration will only be approximately achievable. This issue is orthogonal to our results, so we assume that the necessary expected quantities can be estimated exactly. Thus, we emphasize that our negative results are a fundamental limitation, rather than a byproduct of insufficient data.
#### Auction Efficiency
In addition to calibration, we are concerned with how the choice of $f$ impacts the auction mechanism. The expected value of showing ad $(p, b)$ is $p \cdot b -
\text{cost}$, where we take cost = 1 for selection mechanism [`ALL`]{}, and cost = 0 for [`ONE`]{}. We assume the bid $b$ reflects the true value to the advertiser of a click, which is justified by the incentives of the auction under a suitable pricing scheme [@varian07auctions]. The cost can be viewed as the cost per impression of showing the ad (either a cost incurred by the user doing the query or incurred by the search engine itself). In practice such costs might be different for clicked versus unclicked ad impressions, and might vary depending on the ad and query. Extending our results to such models would add a significant notational burden, so we focus on the simplest interesting cost models.
For a given query $q$, the expected value generated is $$\sum_{i \in C(q)} {\text{Pr}}(\text{ad $i$ shows} {\!\mid\!}f, q) (p_i b_i - \text{cost}).$$ The expected value per query is just $$\begin{aligned}
{\text{EV}}(f) &= \sum_{q \in Q} {{\text{Pr}}^{\mathcal{Q}}}(q) \sum_{i \in C(q)} {\text{Pr}}_f(i {\!\mid\!}q) (p_i b_i - \text{cost})\\
&= \sum_{i \in C} w_i (p_i b_i - \text{cost}).\end{aligned}$$ We say an $f^* \in \operatorname*{arg\,max}_f {\text{EV}}(f)$ is *efficiency maximizing*. Our goal is to find an $f$ that transforms the $z$ into the best possible predictions in terms of efficiency. Note that if it was possible to predict exactly $p_i$ for ad $i$, these predictions would maximize efficiency.
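As a single-query sketch under mechanism [`ALL`]{} (our own illustration; with one query, the weights $w_i$ reduce to 0/1 show indicators), the expected value per query might be computed as:

```python
def expected_value_ALL(candidates, f, cost=1.0):
    """Expected value per query under mechanism ALL for a single query:
    sum of (p * b - cost) over the ads that show, i.e. those with
    b * f(z) >= 1. Candidates are (p, b, z) tuples."""
    return sum(p * b - cost
               for (p, b, z) in candidates
               if b * f(z) >= 1.0)
```

With multiple queries, one would weight each query's contribution by its probability under ${{\text{Pr}}^{\mathcal{Q}}}$, matching the displayed formula for ${\text{EV}}(f)$.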
#### Questions
Ideally, we would like to use prediction maps that are self-calibrated and efficiency-maximizing; we say such prediction maps are *nice*, and say a problem instance is *nice* if such a map exists.
First, we consider questions relating to the offline problem where we have access to all the problem data. Note that there must exist an efficiency-maximizing prediction map.[^3]
[Q1]{}
: Are all problem instances nice? That is, do self-calibrated efficiency-maximizing prediction maps always exist?
[Q2]{}
: Can an efficiency-maximizing prediction map, even one that is not self-calibrated, be found in polynomial time?
In practice, we are further concerned with learning a good prediction map from observed data. Suppose we start with some $f_0$, for example the function that gives the predictions of the underlying system. Then, we serve some large number of queries with this $f_0$, and observe the results. We would like to then train an improved $f_1$ from this data, serve another large batch of queries ranked using $f_1$, then train an $f_2$, etc.
A natural procedure is to choose $f_t$ so that the predictions on the ads shown in batch $t-1$ would have been calibrated under $f_t$. Of course, when we then select ads using $f_t$ on the next batch, we may show different ads. Formally, define $T: [0,1]^K \rightarrow [0,1]^K$ (a function from prediction maps to prediction maps) by $T(f) = f'$ where $$f'(z) =
\begin{cases}
{\mathbb{E}}_{f} [ p {\!\mid\!}z] & \text{when ${\text{Pr}}_f(z) > 0$} \\
f(z) & \text{otherwise}.
\end{cases}$$ We assume we have enough data in each batch so that we can calculate ${\mathbb{E}}_{f_{t-1}} [ p {\!\mid\!}z]$ exactly. Then, we ask:
[Q3]{}
: Does $T$ always have at most a small (polynomial) number of fixed points?
[Q4]{}
: Does $T$ always have at least one fixed point where ads are shown?
[Q3]{} is important, because with an affirmative answer we could potentially enumerate the fixed points and find the best one from an efficiency perspective. A negative answer to [Q4]{} implies the iterative calibration procedure will cycle. To see this, note that for a given starting point $f_0$, subsequent $f_t(z)$ can only take on finitely many values: ${\mathbb{E}}[p {\!\mid\!}z]$ for some distribution of ads that show (finitely many such values), or $f_0(z)$. Thus $T$ maps a finite set of calibration maps into itself, so the iterates must eventually repeat and enter a cycle; since $T$ has no fixed points, the cycle has length at least two.
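One round of the operator $T$ can be sketched for mechanism [`ALL`]{} as follows (our own hypothetical rendering; `f` is a dict from raw prediction bucket to predicted CTR, and we assume exact expectations as in the text):

```python
def calibrate_step(candidates, f, K):
    """One application of T under mechanism ALL: for each bucket z whose
    ads showed under f, set f'(z) to their average true CTR; buckets with
    no shown ads keep their old value. Candidates are (p, b, z) tuples."""
    shown = [(p, b, z) for (p, b, z) in candidates if b * f[z] >= 1.0]
    f_new = dict(f)
    for z in range(K):
        ctrs = [p for (p, _b, zz) in shown if zz == z]
        if ctrs:
            f_new[z] = sum(ctrs) / len(ctrs)
    return f_new
```

Iterating `calibrate_step` need not converge; the counterexamples of the next section exhibit instances where it cycles forever.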
In the next two sections, we address these questions in the general case (putting no additional restrictions on the problem instances).
Mechanism [`ALL`]{}: Threshold Selection {#sec:all}
========================================
In this section, we consider the case where we select ads by mechanism [`ALL`]{}, that is, we show all ads where $b \cdot f(z) -1 \ge 0$.
We will show that an efficiency-maximizing prediction map can be found efficiently ([Q2]{}), but without further assumptions, [Q1]{}, [Q3]{}, and [Q4]{} are answered in the negative. We prove the negative results first; for this purpose, it is sufficient to construct counter-examples.
In this section, the examples we construct all require only a single query where all of the candidates have the same raw prediction $z$. Thus, choosing a prediction map reduces to choosing a single value ${\hat{p}}\in [0, 1]$. The selection rule simply shows all candidates where $b
\cdot f(z) = b \cdot {\hat{p}}\ge 1$.
Ad CTR bid min ${\hat{p}}$ EV cumulative CTR
---- ----- ----------------------------------- ----------------- ------ ----------------
1 0.1 $1 / (0.1) = 10.0$ 0.10 0.00 0.10
2 0.2 $2 / (0.1 + 0.2) \approx\ 6.7$ 0.15 0.33 0.15
3 0.3 $3 / (0.1 + 0.2 + 0.3) =\ 5.0$ 0.20 0.50 0.20
4 0.4 $4 / (0.1 + \dots + 0.4) =\ 4.0$ 0.25 0.60 0.25
#### [Q1]{}: All fixed points can have bad efficiency
Consider an example with $2n + 1$ candidate ads, divided into three classes, with ads given as $(p, b)$ tuples:
A) $1$ ad is $(0.5, 2.0)$, shown if ${\hat{p}}\ge 0.5$
B) $n$ ads are $(1, 1.9)$, shown if ${\hat{p}}\ge 1/1.9 \approx 0.53$
C) $n$ ads are $(0, 1.8)$, shown if ${\hat{p}}\ge 1/1.8 \approx 0.56$
We either show no ads, $A$, $A \!+\! B$, or $A \!+\! B \!+\! C$. Choosing ${\hat{p}}= 0.5$ is a fixed point (it only shows the first ad) which generates value $0.5 \cdot 2 - 1 = 0$. Using ${\hat{p}}= 0.54$ shows $A + B$, and generates value $0.9n$. But, this is not a fixed point: the observed CTR is near one (for large $n$). Showing all the ads (which occurs for any ${\hat{p}}> 1/1.8$) is not a fixed point, and generates negative value, since ads from class $C$ generate value $-n$.
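This counterexample is easy to check numerically; the sketch below (ours, not the paper's) evaluates the value and observed CTR at the two thresholds discussed, for $n = 100$:

```python
def shown_value_and_ctr(ads, p_hat, cost=1.0):
    """Show every (p, b) ad with b * p_hat >= 1; return the total expected
    value and the average CTR of the shown ads (None if nothing shows)."""
    shown = [(p, b) for (p, b) in ads if b * p_hat >= 1.0]
    value = sum(p * b - cost for p, b in shown)
    ctr = sum(p for p, _ in shown) / len(shown) if shown else None
    return value, ctr

n = 100
# Classes A, B, C from the text.
ads = [(0.5, 2.0)] + [(1.0, 1.9)] * n + [(0.0, 1.8)] * n
```

At ${\hat{p}}= 0.5$ this returns value $0$ and CTR $0.5$ (the bad fixed point); at ${\hat{p}}= 0.54$ it returns value $0.9n = 90$ with an observed CTR near $1$, confirming the high-value choice is not self-calibrated.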
#### [Q3]{}: An example with exponentially many fixed points
Suppose there are $n$ candidates $(p_i, b_i)$ where the $p_i$ are distinct, and we have indexed by $i$ so that $p_i$ is strictly increasing. Further, suppose $b_i = \frac{i}{p_{1:i}}$, a decreasing sequence (using the shorthand $p_{1:i} \equiv \sum_{j=1}^i p_j$). Pick any $i \in {\ensuremath{\left\{1, \dots, n\right\}}}$, and let ${\hat{p}}= \frac{1}{b_i}$. We show candidate $j$ if $b_j {\hat{p}}= \frac{b_j}{b_i} \geq 1$. Since the bids are decreasing, we show candidate $j$ if and only if $j \le i$. Thus, serving with ${\hat{p}}= \frac{1}{b_i} =\frac{p_{1:i}}{i}$ we show candidates $1, \dots, i$, and so the average CTR is in fact ${\hat{p}}$. Thus, for any $i \in {\ensuremath{\left\{1, \dots, n\right\}}}$, there is a fixed-point ${\hat{p}}$ that shows ads $\{1, \dots, i\}$. Figure \[fig:many\] shows an example of this construction. If we have $m$ queries each with a distinct fixed raw prediction $z$ and $n$ candidates constructed in this manner, we can choose a per-query fixed point independently for each query, for $n^m$ distinct fixed points.
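This construction can be checked directly; the script below is a sketch with illustrative $p_i$ values (any strictly increasing sequence works), and with these particular values it reproduces the bids from the table at the start of this section:

```python
p = [0.1, 0.2, 0.3, 0.4]  # any strictly increasing CTRs work
n = len(p)
prefix = [sum(p[:i + 1]) for i in range(n)]
b = [(i + 1) / prefix[i] for i in range(n)]  # b_i = i / p_{1:i}, decreasing

for i in range(n):
    p_hat = 1 / b[i]  # = p_{1:i} / i
    shown = [j for j in range(n) if b[j] * p_hat >= 1 - 1e-12]
    assert shown == list(range(i + 1))  # exactly candidates 1..i show
    avg_ctr = sum(p[j] for j in shown) / len(shown)
    assert avg_ctr - p_hat < 1e-9      # observed CTR equals p_hat: a fixed point
print(b)  # approximately [10.0, 6.67, 5.0, 4.0] — the bids from the table
```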
#### [Q4]{}: An example with no fixed points
Consider a single query with two candidates, $(p_1 = 0.7, b_1 = 4, z)$ and $(p_2=0.1, b_2 = 2, z)$. For any ${\hat{p}}\geq 0.5$, both ads show and we observe a click-through-rate of $0.4$, so no such ${\hat{p}}$ can be self-calibrated. For any ${\hat{p}}\in [0.25, 0.5)$, only ad 1 shows, and we observe a click-through rate of $0.7$. For ${\hat{p}}\in [0, 0.25)$, we don’t show any ads. Thus, there is no non-trivial fixed point; assuming we start with ${\hat{p}}\ge 0.25$, the calibration procedure will cycle between $0.7$ and $0.4$.
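A minimal simulation exhibits the cycle, assuming an iterative procedure that repeatedly sets ${\hat{p}}$ to the last observed CTR (this iterative rule is our own stand-in for "the calibration procedure"):

```python
ads = [(0.7, 4.0), (0.1, 2.0)]  # (p, b), both on the same query

def observed_ctr(p_hat):
    s = [p for (p, b) in ads if b * p_hat >= 1]  # mechanism ALL
    return sum(s) / len(s) if s else 0.0

p_hat, trace = 0.5, []
for _ in range(6):
    p_hat = observed_ctr(p_hat)  # recalibrate to the observed CTR
    trace.append(round(p_hat, 2))
print(trace)  # [0.4, 0.7, 0.4, 0.7, 0.4, 0.7] — a 2-cycle, never a fixed point
```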
#### [Q2]{}: Calculating the efficiency-maximizing $f$
The above examples show that self-calibrated prediction maps may not exist, and that even if they do, they need not maximize efficiency.
Nevertheless, given access to the full problem data (including true click-through rates) one might be interested in calculating an efficiency maximizing prediction map. The following algorithm accomplishes this in polynomial time.
We define $f^*$ by considering each $z' \in {\ensuremath{\left\{1, 2, \dots, K\right\}}}$ independently:
1. Consider the set of candidates $(p, b, z, q)$ where $z = z'$, and sort these candidates in decreasing order of bid, indexed $j = 1, \dots, n_{z'}$. We must show some prefix of this list. In particular, if we set ${\hat{p}}= 1/b_j$ and $b_{j+1} < b_j$, then we will show exactly ads $1, \dots, j$.
2. For each $j$ where $b_{j+1} < b_j$, compute the expected value per query of using ${\hat{p}}_j = 1/b_j$ (which shows ads 1, …, j). This can be computed as $${\text{EV}}({\hat{p}}_j) = \sum_{i=1}^j {{\text{Pr}}^{\mathcal{Q}}}(q_i) (p_i \cdot b_i - 1).$$
3. Let $f^*(z') = {\hat{p}}_{j^*}$, where $j^*$ is the index that maximizes ${\text{EV}}({\hat{p}}_j)$.
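The three steps above can be sketched as follows. The representation of candidates as `(p, b, z, pr_q)` tuples and the function name are our own illustrative choices, not notation from the text:

```python
from collections import defaultdict

def optimal_map(candidates):
    """candidates: list of (p, b, z, pr_q); returns the map z -> p_hat."""
    by_z = defaultdict(list)
    for c in candidates:
        by_z[c[2]].append(c)
    f = {}
    for z, cs in by_z.items():
        cs.sort(key=lambda c: -c[1])            # step 1: decreasing bid
        best_ev, best_phat, ev = 0.0, 0.0, 0.0  # showing no ads gives EV 0
        for j, (p, b, _, pr_q) in enumerate(cs):
            ev += pr_q * (p * b - 1)            # step 2: EV of the prefix 1..j
            end_of_tie = j + 1 == len(cs) or cs[j + 1][1] < b
            if end_of_tie and ev > best_ev:
                best_ev, best_phat = ev, 1 / b  # step 3: keep the best prefix
        f[z] = best_phat
    return f

# Example: the four candidates from the table above, all on one query (pr_q = 1):
ads = [(0.1, 10.0, 1, 1.0), (0.2, 2 / 0.3, 1, 1.0),
       (0.3, 5.0, 1, 1.0), (0.4, 4.0, 1, 1.0)]
print(optimal_map(ads))  # {1: 0.25} — showing all four ads maximizes EV
```

Each distinct bid is considered at most once, so the whole procedure is dominated by the sort, i.e. polynomial time.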
While this result is interesting theoretically (especially in contrast to results in the next section), we note it is not likely to be useful in practice: if it were possible to estimate $p_i$ accurately for each ad, then one could simply throw out the coarser-grained predictions $z_i$ and use these estimates.
Mechanism [`ONE`]{}: Selecting One Ad {#sec:one}
=====================================
In this section, we consider results for selection mechanism [`ONE`]{}. When there is only a single query, or only a single raw prediction, selection mechanism [`ONE`]{} can be quickly analyzed, and our questions are answered in the affirmative, except for [Q3]{}. But in non-trivial cases, we again show negative answers to all four questions.
#### Single query, multiple raw predictions
Selection mechanism [`ONE`]{} becomes rather degenerate under a single query. We show how to construct a nice $f$, answering [Q1]{}, [Q2]{}, and [Q4]{} in the affirmative.
For each raw prediction $z' \in {\ensuremath{\left\{1, \dots, K\right\}}}$, observe that if an ad with $z_i = z'$ shows, it must be an ad that has bid $b(z') \equiv
\max_{j:z_j = z'} b_j$. Thus, if an ad with $z'$ shows, the expected value generated is $b(z') \cdot {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z', b(z')]$, where ${\mathbb{E}_{{\mathcal{C}}}}[p
\mid z', b(z')]$ is the average click-through-rate of ads with $z =
z', b = b(z')$. We can guarantee we obtain this value by simply setting $f(z') = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z', b(z')]$ and $f(z) = 0$ for all $z
\neq z'$. Note that this $f$ is self-calibrated because ties are broken uniformly at random under selection mechanism [`ONE`]{}, answering [Q4]{}in the affirmative. We obtain maximum efficiency by using the $f$ that only shows ads with raw prediction $$z^* = \operatorname*{arg\,max}_z b(z) \cdot {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z, b(z)].$$
Let $f_z$ be the $f$ function that only shows candidates with the given $z$ value. Thus, $f_{z^*}$ is nice. However, we can define a more satisfying $f^*$ by $$f^*(z) = {\mathbb{E}}_{f_z}[p {\!\mid\!}z].$$ We only show ads $(b, z)$ where $b \cdot f^*(z)$ achieves the argmax value over candidates, and in fact $$b \cdot f^*(z) = b(z) \cdot {\mathbb{E}}_{f_z}[p {\!\mid\!}z],$$ and so we still maximize efficiency.
The answer to [Q3]{}is negative: iterative calibration can have exponentially many fixed points. Suppose each ad $i$ has a distinct $z_i$, and $p_i = b_i ^ {-1}$. Let ${\mathcal{I}}$ be any subset of the ads and define $f_{\mathcal{I}}$: $f_{\mathcal{I}}(z_i) = p_i$ for $i \in {\mathcal{I}}$, $f_{\mathcal{I}}(z_i) = 0$ for $i \not\in {\mathcal{I}}$. Then, under $f_{\mathcal{I}}$ all ads in ${\mathcal{I}}$ tie, so we show them randomly. Each of the $2^{{|{\mathcal{C}}|}}$ subsets of ${\mathcal{C}}$ thus corresponds to a self-calibrated prediction map that shows a different set of ads.
#### Multiple queries, single raw prediction
Under mechanism [`ONE`]{}, if there is a single raw prediction $z$ made for all candidates (on all queries), then the ads that show are in fact independent of the value ${\hat{p}}= f(z) > 0$: for each query, we always randomly pick one of the candidates with the highest bid. Thus, any ${\hat{p}}> 0$ is efficiency-maximizing, and we can choose ${\hat{p}}$ equal to the average observed CTR to obtain self-calibration. Thus, in this case we answer [Q1]{}–[Q4]{} in the affirmative.
#### [Q2]{}: NP-hardness in general
In general (with at least two distinct raw predictions and at least two queries), under selection mechanism [`ONE`]{}, the offline problem of finding the efficiency-maximizing prediction map $f$ is NP-hard, even if all bids are $1$. We show this using a reduction from the minimum feedback arc set (MFAS) problem on tournaments (see, for example, @kleinberg10sleeping).
In this problem, there are $n$ players, ${\ensuremath{\left\{1, \dots, n\right\}}}$, that have just completed a tournament where every pair of players has played. The MFAS for this problem is a ranking of the players that minimizes the number of upsets; that is, if $\mu_i$ is the rank of player $i$, we want a ranking $\mu$ that minimizes the number of times $\mu_i >
\mu_j$, but player $j$ beat player $i$.
We encode this problem as an auction efficiency maximization problem as follows: There are $n$ distinct $z$ values, $1, \dots, n$, one for each player, and there are ${\frac{1}{2}}n(n-1)$ queries (each equally likely), one for each $(i, j)$ pair with $i < j$. The query for the pair $(i,j)$ (where $i$ beat $j$ without loss of generality) has two candidates $(p, z)$, namely $(1, i)$ and $(0, j)$. Thus, if we show the ad corresponding to the winner (with $z = i$), we have $p=1$, and the bid is $1$, so we get value 1; if we show ad with $z=j$, we have $p=0$, we get no value. It is then clear that the efficiency-maximizing ranking of the raw predictions $z$ exactly corresponds to the solution to the MFAS problem.
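As a hedged illustration of the reduction, consider the smallest tournament with an unavoidable upset, the 3-cycle. Brute force over rankings (feasible only at this toy scale; MFAS on tournaments is NP-hard in general) confirms that the best achievable efficiency loses exactly the minimum number of upsets:

```python
from itertools import permutations

# A 3-player cyclic tournament: 1 beats 2, 2 beats 3, 3 beats 1.
wins = [(1, 2), (2, 3), (3, 1)]                    # (winner, loser) per query
queries = [[(1.0, w), (0.0, l)] for w, l in wins]  # (p, z) candidates, bids 1

best = 0.0
for ranking in permutations([1, 2, 3]):
    # Any f with f(z) strictly decreasing in rank induces this ordering.
    score = {z: -ranking.index(z) for z in ranking}
    ev = sum(max(ads, key=lambda a: score[a[1]])[0] for ads in queries)
    best = max(best, ev)
print(best)  # 2.0 — one upset is unavoidable (the minimum feedback arc set is 1)
```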
#### Negative results for [Q1]{}, [Q3]{}, and [Q4]{}in general
We also show negative results for [Q1]{}, [Q3]{}, and [Q4]{}in general.
For [Q1]{}, observe that in the NP-hardness construction when there is a perfect ranking, we observe a CTR of 1.0, and so the efficiency-maximizing prediction map cannot be self-calibrated. We can illustrate this directly with the following example: there are four ads on two queries, given as $(p, b, z)$ tuples:

  $q_1$               $q_2$
  ------------------- --------------------
  A $(1.0, 2, z_1)$   C $(1.0, 2, z_2)$
  B $(0.0, 2, z_2)$   D $(0.0, 1, z_1)$
  ------------------- --------------------
We need $f(z_1) > f(z_2)$ in order to guarantee we show Ad A on $q_1$; we also need $f(z_2) > {\frac{1}{2}}f(z_1)$ in order to show Ad C on $q_2$. We will observe a 1.0 CTR on both $z_1$ and $z_2$ on any such efficiency maximizing $f$, but we are constrained to pick $f(z_2) < f(z_1) \le
1$, and so no such $f$ can be self-calibrated.
For [Q3]{}, we have already shown multiple fixed points in the single-query case. If we consider multiple queries, where each query has a single distinct raw prediction, we immediately arrive at a problem with exponentially many fixed points.
For [Q4]{}, it is straightforward to construct an example with cycles, but constructing one with no fixed point is a bit trickier. In particular, any time there is some prediction $z$ where each query has at least one ad with prediction $z$, we can always find a fixed point by setting $f(z') = 0$ for $z' \ne z$ and $f(z) > 0$. The set of ads shown will be independent of the non-zero value $f(z)$, so we can set it equal to the observed CTR, achieving self-calibration (except in the degenerate case where all the ads with prediction $z$ have zero CTR).
However, it is still possible to construct problems with no fixed points without resorting to such degeneracy, as the following example illustrates. There are four queries, each equally likely, all the bids are $1$, and the $(p,z)$ ad tuples are:

  $q_1$            $q_2$            $q_3$            $q_4$
  ---------------- ---------------- ---------------- ----------------
  A $(0.5, z_1)$   B $(0.6, z_2)$   C $(0.5, z_1)$   E $(0.2, z_2)$
                                    D $(0.6, z_2)$   F $(0.3, z_1)$
  ---------------- ---------------- ---------------- ----------------
If $f(z_1) > f(z_2)$, then we show ads A,B, C, and F. In this case, we observe a CTR of $(0.5 + 0.5 + 0.3)/3 = 0.433$ for $z_1$, and 0.6 for $z_2$, so we cannot be self-calibrated. If $f(z_1) < f(z_2)$, we show ads A, B, D, and E, and observe a CTR of $(0.6 + 0.6 + 0.2)/3 = 0.467$ for $z_2$, and $0.5$ for $z_1$, and so again we cannot be self-calibrated. Finally, if $f(z_1) = f(z_2)$, we always show $A$ and $B$, and show the other ads half of the time. Thus, we observe a CTR of $(3/4) 0.5
+ (1/4) 0.3 = 0.45$ for $z_1$, and a CTR of $(3/4) 0.6 + (1/4) 0.2 =
0.5$ for $z_2$, and so again we cannot be self-calibrated. Thus, no self-calibrated $f$ exists for this problem.
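The three cases can be verified numerically. The query grouping below ($q_1 = \{A\}$, $q_2 = \{B\}$, $q_3 = \{C, D\}$, $q_4 = \{E, F\}$) is our reconstruction from the CTR arithmetic above; since all bids are $1$, only the ordering of the $f$ values matters, and exact rational arithmetic avoids floating-point noise:

```python
from fractions import Fraction as F

# Queries: q1 = {A}, q2 = {B}, q3 = {C, D}, q4 = {E, F}; entries are (p, z).
queries = [[(F(5, 10), 1)],
           [(F(6, 10), 2)],
           [(F(5, 10), 1), (F(6, 10), 2)],
           [(F(2, 10), 2), (F(3, 10), 1)]]

def observed_ctrs(f):
    mass = {1: F(0), 2: F(0)}    # show-probability mass per raw prediction
    clicks = {1: F(0), 2: F(0)}  # CTR-weighted mass per raw prediction
    for ads in queries:
        top = max(f[z] for _, z in ads)
        winners = [a for a in ads if f[a[1]] == top]
        for p, z in winners:     # ties are broken uniformly at random
            mass[z] += F(1, len(winners))
            clicks[z] += p * F(1, len(winners))
    return {z: clicks[z] / mass[z] for z in (1, 2)}

assert observed_ctrs({1: 2, 2: 1}) == {1: F(13, 30), 2: F(3, 5)}  # 0.433, 0.6
assert observed_ctrs({1: 1, 2: 2}) == {1: F(1, 2), 2: F(7, 15)}   # 0.5, 0.467
assert observed_ctrs({1: 1, 2: 1}) == {1: F(9, 20), 2: F(1, 2)}   # 0.45, 0.5
```

In every case the observed CTR disagrees with $f$ on at least one raw prediction, so no self-calibrated $f$ exists.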
Sufficient Conditions {#sec:cond}
=====================
----------- --------------- --------------------------------- --------------- ------------------------
both [Prop `E1`]{} [$\Longrightarrow$]{} [Prop `E2`]{} (immediate)
[`ALL`]{} [Prop `E2`]{} $\iff$ [Prop `SI`]{} Thm \[thm:equivall\]
[`ALL`]{} [Prop `E2`]{} [$\Longrightarrow$]{} nice Thm \[thm:e2ev\]
[`ALL`]{} [Prop `E1`]{} [$\Longrightarrow$]{} nice (from above)
[`ONE`]{} [Prop `E1`]{} [$\Longrightarrow$]{} nice Thm \[thm:onezgqev\]
both [Prop `SI`]{} [$\centernot\Longrightarrow$]{} [Prop `E1`]{} Sec \[sec:negresults\]
[`ONE`]{} [Prop `E2`]{} [$\centernot\Longrightarrow$]{} [Prop `SI`]{} Sec \[sec:negresults\]
[`ONE`]{} [Prop `SI`]{} [$\centernot\Longrightarrow$]{} nice Sec \[sec:negresults\]
----------- --------------- --------------------------------- --------------- ------------------------
: Relationships between problem properties. A “nice” problem instance is one where a self-calibrated efficiency-maximizing prediction map exists. []{data-label="tab:results"}
As the previous two sections show, without additional assumptions significant problems arise if one tries to achieve both calibration and auction efficiency. In this section, we introduce additional assumptions that are sufficient to guarantee nice prediction maps exist. Table \[tab:results\] summarizes our results.
The intuition behind our results is a basic property of conditional probability. Calibration depends on the conditional expectation ${\mathbb{E}}[p {\!\mid\!}z]$. In general, selection changes the distribution this expectation is with respect to. But if selection is *only* a function of $z$, it does not change the conditional distribution of $p$ given $z$, since the latter is already conditioned on $z$.
For example, suppose we have a single query, and that all bids are 1, so all selection decisions are functions of $z$. This means that ${\mathbb{E}}[p
{\!\mid\!}z]$ does not change under selection, and thus defines an efficiency-maximizing self-calibrated prediction map. To extend this intuition to more realistic auctions, we need to make sure that the query and the bid do not add any information about $p$, so that selection does not change ${\mathbb{E}}[p {\!\mid\!}z]$ and the different ${\mathbb{E}}[p
{\!\mid\!}z]$ for each query can be reconciled. We now state these properties formally:
#### [Prop `E1`]{}
For each $z$ there exists a value ${\bar{p}}(z)$ such that for each query $q$ with ${\text{Pr}}_{\mathcal{C}}(q {\!\mid\!}z) > 0$, and for each $b$ with ${\text{Pr}}_{\mathcal{C}}(b {\!\mid\!}q, z) > 0$, $$\label{eq:pzbindep}
{\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z,b,q] = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z,q] = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z] \equiv {\bar{p}}(z).$$ That is, in all cases the bid and query provide no more information than the raw prediction about average click-through rates.[^4] For this assumption, the natural prediction map to consider is $f(z) = {\bar{p}}(z)$.
#### [Prop `E2`]{}
A weaker assumption is that $${\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z,b] = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z]$$ whenever both expectations are defined. This essentially marginalizes over queries, rather than holding simultaneously for all $q$.
#### [Prop `SI`]{}
A problem instance is **selection-invariant** if for all $f, f'$, for any $z$ where both ${\mathbb{E}}_f[p {\!\mid\!}z]$ and ${\mathbb{E}}_{f'}[p {\!\mid\!}z]$ are defined, we have $${\mathbb{E}}_f[p {\!\mid\!}z] = {\mathbb{E}}_{f'}[p {\!\mid\!}z].$$ Selection invariance says that the observed CTR for a given raw prediction $z$ is independent of the prediction map used for selection. Under this assumption, the natural calibration map to consider is $
f^*(z) = {\mathbb{E}}_{f_z}[p {\!\mid\!}z],
$ where $f_z$ is any prediction map that shows some ads with raw prediction $z$.
It is easy to show that [Prop `E1`]{}implies [Prop `E2`]{}.
A weak per-query variant of [Prop `E1`]{}is that, for all $z$, $b$, and $q$ (when defined), ${\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z,b,q] = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z, q]$. We can dismiss this assumption as insufficient, as we can take the negative examples of Section \[sec:all\] and re-state them where each candidate occurs on a distinct query, each equally likely. Thus, the above property holds trivially, but the pathological behaviors still occur.
Properties that Imply Nice Maps Exist
-------------------------------------
First, we show that under mechanism [`ALL`]{}, [Prop `E2`]{}and [Prop `SI`]{}are equivalent; we then show that [Prop `E2`]{}(and hence also [Prop `SI`]{}) imply a nice problem.
\[thm:equivall\] Under selection mechanism [`ALL`]{}, [Prop `E2`]{}is equivalent to [Prop `SI`]{}(selection invariance).
Suppose [Prop `E2`]{}holds. Selection mechanism [`ALL`]{}must show either all of the candidates with a given $(z, b)$ combination, or none of them. Thus, for any $f$ where ${\text{Pr}}_f(z, b) > 0$, we must have $$\label{eq:sa}
{\mathbb{E}}_f[p {\!\mid\!}z,b] = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z,b].$$ Then, for any $f$, assuming ${\mathbb{E}}_f[p {\!\mid\!}z]$ is defined, $$\begin{aligned}
{\mathbb{E}}_f[p {\!\mid\!}z]
&= {\mathbb{E}}_f[{\mathbb{E}}_f[p {\!\mid\!}z,b]] \\
&= {\mathbb{E}}_f[{\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z,b]] && \text{{Eq.~\eqref{eq:sa}}} \\
&= {\mathbb{E}}_f[{\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z]] && \text{{Prop \texttt{E2}\xspace}} \\
&= {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z].
\end{aligned}$$ For the other direction, suppose we have selection invariance ([Prop `SI`]{}). It is sufficient to consider a fixed raw prediction $z$ (if there are multiple $z$, we can consider them independently). Also, we can assume candidates have distinct bids: if multiple candidates have the same bid and raw prediction, mechanism [`ALL`]{} treats them all the same, so we can simply average over them.
Index the bids $(b_1, b_2, \dots)$ in decreasing order. Then, depending on the chosen ${\hat{p}}= f(z)$, we either show (when the appropriate queries occur) ad $1$, or ads $1$ and $2$, etc. [Prop `SI`]{} says that no matter what ${\hat{p}}$ is, the average CTR of the ads we show is the same. Suppose that all the ads are on the same query. Then [Prop `SI`]{} implies $p_1 = {\frac{1}{2}}(p_1 + p_2)$, so $p_1 = p_2$; then ${\frac{1}{2}}(p_1 + p_2) = \frac{1}{3} (p_1 + p_2 + p_3)$, so $p_1 = p_2 = p_3$; and so on. When the ads are on different queries, the weights in the above equalities change to reflect the query distribution, but are still all positive and sum to 1, so the same inductive reasoning holds.
This result implies that under selection mechanism [`ALL`]{}, when [Prop `E2`]{}holds the prediction map $f^*(z) = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z]$ is self-calibrated. Next, we show this map is in fact also efficiency-maximizing:
\[thm:e2ev\] Under selection mechanism [`ALL`]{}, [Prop `E2`]{}implies $f^*$ is efficiency maximizing, where $f^*(z) = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z]$.
Recall we need to show $f^*$ maximizes $$EV(f) = \sum_{i \in {\mathcal{C}}} {{\text{Pr}}^{\mathcal{Q}}}(q_i) {\text{Pr}}(i {\!\mid\!}q_i, f) (p_i b_i - 1).$$ Since selection decisions for one $z$ value do not impact others, it suffices to consider a single $z$ value. We can decompose the sum over ${\mathcal{C}}$ over the partition that associates all the ads that share a common bid and raw prediction. Let $B = {\ensuremath{\left\{i {\!\mid\!}b_i = b, z_i =
z\right\}}} \subseteq {\mathcal{C}}$ be the element of this partition for $(b,z)$. For a given $f(z) = {\hat{p}}$, either all the ads in $B$ show (when their respective queries occur), or none of them do; thus, if we can show that $f^*$ shows these ads if and only if they increase EV, we are done. The expected value per query of showing these ads is: $$\sum_{i \in B} {{\text{Pr}}^{\mathcal{Q}}}(q_i) {\text{Pr}}(i {\!\mid\!}q_i, f) (p_i b_i - 1).$$ Since ${\text{Pr}}(i {\!\mid\!}q_i, f) \in {\ensuremath{\left\{0, 1\right\}}}$ must be the same for all these ads, this quantity is non-negative if and only if $\sum_{i
\in B} {{\text{Pr}}^{\mathcal{Q}}}(q_i) (p_i b_i - 1) \geq 0$.
Recall ${{\text{Pr}}_{{\mathcal{C}}}}(i) = {{\text{Pr}}^{\mathcal{Q}}}(q_i)/C$ where $C = \sum_{i \in {\mathcal{C}}} {{\text{Pr}}^{\mathcal{Q}}}(q_i)$. We have ${{\text{Pr}}_{{\mathcal{C}}}}(i \wedge b \wedge z) = {{\text{Pr}}_{{\mathcal{C}}}}(i)$ if $i \in B$, and 0 otherwise. Letting $C_B = \sum_{i \in B} {{\text{Pr}}^{\mathcal{Q}}}(q_i)$, then ${{\text{Pr}}_{{\mathcal{C}}}}(b \wedge z) =
\frac{C_B}{C}$, and so $$\label{eq:ii}
{{\text{Pr}}_{{\mathcal{C}}}}(i {\!\mid\!}b, z)
= \frac{{{\text{Pr}}_{{\mathcal{C}}}}(i)}{C_B/C}
= \frac{{{\text{Pr}}^{\mathcal{Q}}}(q_i)/C}{C_B/C}
= \frac{{{\text{Pr}}^{\mathcal{Q}}}(q_i)}{C_B}$$ for $i \in B$, and 0 otherwise. Then, $$\begin{aligned}
{\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z]
&= {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}b, z] && \text{{Prop \texttt{E2}\xspace}}\\
&= \sum_{i \in {\mathcal{C}}} {{\text{Pr}}_{{\mathcal{C}}}}(i {\!\mid\!}b, z) p_i \\
&= \frac{1}{C_B}\sum_{i \in B} {{\text{Pr}}^{\mathcal{Q}}}(q_i) p_i. && \text{{Eq.~\eqref{eq:ii}}}\end{aligned}$$ Using this result, we have $$\begin{aligned}
\sum_{i \in B} {{\text{Pr}}^{\mathcal{Q}}}(q_i) (p_i b_i - 1)
&= b \left(\sum_{i \in B} {{\text{Pr}}^{\mathcal{Q}}}(q_i) p_i\right) - C_B \\
&= C_B (b {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z] - 1).\end{aligned}$$ This quantity is non-negative if and only if $b f^*(z) - 1 \geq 0$; since this is exactly the condition we use to decide whether or not to show the ads in $B$, we are done.
It is not hard to directly prove that under selection mechanism [`ALL`]{}, [Prop `SI`]{}implies $f^*$ is efficiency-maximizing: the idea is to consider again a single $z$, sort the ads by bid into blocks, and show by induction that each block has average CTR $f^*(z)$.
In Section \[sec:one\] we saw that the problem of finding an efficiency-maximizing $f$ is NP-hard under mechanism [`ONE`]{}, even under the assumption of a single bid. Under [Prop `E1`]{}, fortunately the situation is much easier:
\[thm:onezgqev\] Under selection mechanism [`ONE`]{}, if [Prop `E1`]{}holds then the prediction map $f^*$ where $f^*(z) = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z]$ is efficiency-maximizing and self-calibrated.
For a query $q$, consider a partition ${\mathfrak{B}^q}$ of ${\mathcal{C}}(q)$ into sets of ads that share a common $b$ and $z$, so the elements of the partition are $${B^q_{b,z}}= {\ensuremath{\left\{i {\!\mid\!}b_i = b, z_i = z, q_i = q\right\}}} \subseteq {\mathcal{C}}(q)$$ for each $(b,z)$ pair.
All $i \in B$ for some $B$ must share a common value ${\text{Pr}}_f(i {\!\mid\!}q)$. We also use $B$ as the event that some $i \in B$ shows; so for example ${\text{Pr}}_f(B {\!\mid\!}q)$ is the probability that *some* ad from $B$ shows. Under selection mechanism [`ONE`]{}, for each $i \in B$, we have ${\text{Pr}}_f(i {\!\mid\!}B, q) = \frac{1}{{|B|}}$ (since ties are broken at random). Also, $$\label{eq:ECpz}
{\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}b, z, q] = \frac{1}{{|{B^q_{b,z}}|}} \sum_{i \in {B^q_{b,z}}} p_i.$$ Recalling cost is zero under [`ONE`]{}, for any $f$, $$\begin{aligned}
&{\text{EV}}(f) \\
&= \sum_{q \in {\mathcal{Q}}} {{\text{Pr}}^{\mathcal{Q}}}(q) \sum_{i \in C(q)} {\text{Pr}}_f(i {\!\mid\!}q) p_i b_i\\
&= \sum_{q \in {\mathcal{Q}}} {{\text{Pr}}^{\mathcal{Q}}}(q) \sum_{{B^q_{b,z}}\in {\mathfrak{B}^q}}
\sum_{i \in {B^q_{b,z}}} {\text{Pr}}_f(i {\!\mid\!}q) p_i b_i \\
&= \sum_{q \in {\mathcal{Q}}} {{\text{Pr}}^{\mathcal{Q}}}(q) \sum_{{B^q_{b,z}}\in {\mathfrak{B}^q}}
{\text{Pr}}_f({B^q_{b,z}}{\!\mid\!}q) \frac{1}{{|{B^q_{b,z}}|}} \sum_{i \in {B^q_{b,z}}} p_i b \\
\intertext{and using {Eq.~\eqref{eq:ECpz}},}
&= \sum_{q \in {\mathcal{Q}}} {{\text{Pr}}^{\mathcal{Q}}}(q) \sum_{{B^q_{b,z}}\in {\mathfrak{B}^q}}
{\text{Pr}}_f({B^q_{b,z}}{\!\mid\!}q) b\, {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}b, z, q] \\
&\leq \sum_{q \in {\mathcal{Q}}} {{\text{Pr}}^{\mathcal{Q}}}(q) \max_{{B^q_{b,z}}\in {\mathfrak{B}^q}}
b\, {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}b, z, q].
\end{aligned}$$ Thus, it is sufficient to show that selecting ads using $f^*$ produces the expected value in the last line of the above inequality. For each query, we rank the ads using $b \cdot f^*(z) =
b\, {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}b, z, q]$, and so this is exactly the expected value that $f^*$ obtains.
To see that $f^*$ is self-calibrated, observe that when ${\text{Pr}}_f(z, b, q) > 0$, $${\mathbb{E}}_f[p {\!\mid\!}z, b, q] = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z, b, q] = f^*(z),$$ and so $${\mathbb{E}}_f[p {\!\mid\!}z]
= \sum_{b,q} {\text{Pr}}_f(b, q {\!\mid\!}z) {\mathbb{E}}_f[p {\!\mid\!}z, b, q]
= f^*(z).$$
Negative Results {#sec:negresults}
----------------
We show several negative results relating to the assumptions considered in the previous section.
#### [`ONE`]{}and [`ALL`]{}: [Prop `SI`]{}does not imply [Prop `E1`]{}
Consider an example with two queries, each equally likely. Each query has two candidates, given as the following $(p, b)$ tuples (they all share a common $z$):
------------ ------------
A (0.1, 1) C (0.1, 2)
B (0.2, 2) D (0.2, 1)
------------ ------------
Because of the symmetry between these queries, under any $f$ (and either selection mechanism), ad A must show with the same probability as ad D, as must ads B and C. Thus, for any $f$, ${\mathbb{E}}_f[p {\!\mid\!}b=1, z]
= 0.15$, and similarly ${\mathbb{E}}_f[p {\!\mid\!}b=2, z] = 0.15$. Thus, selection invariance holds, as does [Prop `E2`]{}. However, ${\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z, b=1, q_1]
= 0.1 \neq {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z, q_1] = 0.15$.
#### [`ONE`]{}: [Prop `E2`]{}does not imply [Prop `SI`]{}
Consider the following example, with two equally likely queries and two distinct raw predictions:
------------------- -------------------
A $(0.2, 2, z_1)$   C $(0.1, 2, z_1)$
B $(0.1, 1, z_1)$ D $(0.2, 1, z_1)$
E $(1.0, 9, z_2)$
------------------- -------------------
Note that ${\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z_1, b=1] = {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z_1, b=2] = 0.15$. However, if we consider two prediction maps $f(z_1) = 0.5, f(z_2) = 1$ and $f'(z_1)
= 1, f'(z_2) = 0$, under selection mechanism [`ONE`]{}, we have ${\mathbb{E}}_f[p {\!\mid\!}z_1]
= 0.1$, but ${\mathbb{E}}_{f'}[p {\!\mid\!}z_1] = 0.15$.
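A quick numeric check, assuming the query grouping $q_1 = \{A, B, E\}$ and $q_2 = \{C, D\}$ implied by the table's columns:

```python
q1 = [(0.2, 2, 'z1'), (0.1, 1, 'z1'), (1.0, 9, 'z2')]  # (p, b, z) tuples
q2 = [(0.1, 2, 'z1'), (0.2, 1, 'z1')]

def ctr_z1(f):
    shown = []
    for ads in (q1, q2):
        winner = max(ads, key=lambda a: a[1] * f[a[2]])  # highest b * f(z) wins
        if winner[2] == 'z1':
            shown.append(winner[0])
    return sum(shown) / len(shown)

print(round(ctr_z1({'z1': 0.5, 'z2': 1.0}), 3))  # 0.1  (E wins q1, C wins q2)
print(round(ctr_z1({'z1': 1.0, 'z2': 0.0}), 3))  # 0.15 (A wins q1, C wins q2)
```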
#### [`ONE`]{}: [Prop `SI`]{}does not imply a nice problem
We have four queries, each equally likely; the bids for the ads on $q_3$ and $q_4$ are defined in terms of some small $\eps > 0$, with $(p,b,z)$ tuples:
----------------------- ----------------------- ---------------------------- ----------------------------
A [$(1,2,z_1)\!\!$]{} C [$(1,2,z_2)\!\!$]{} A’ [$(0,2\eps,z_1)\!\!$]{} C’ [$(0,2\eps,z_2)\!\!$]{}
B [$(0,2,z_2)\!\!$]{} D [$(0,1,z_1)\!\!$]{} B’ [$(1,2\eps,z_2)\!\!$]{} D’ [$(1,1\eps,z_1)\!\!$]{}
----------------------- ----------------------- ---------------------------- ----------------------------
Note that $q_3$ and $q_4$ mirror $q_1$ and $q_2$, except that the bids are scaled by $\eps$, and the CTRs are reversed. Under any $f$, ads $A$ and $A'$ show with the same probability, as do $B$ and $B'$, and the other two pairs. Thus, under selection by any $f$, we have ${\mathbb{E}}_f[p {\!\mid\!}z_1] = {\mathbb{E}}_f[p {\!\mid\!}z_2]= 0.5$ whenever the expectation is defined, and so [Prop `SI`]{} holds. However, as $\eps \rightarrow 0$, only $q_1$ and $q_2$ have any impact on efficiency. Thus, as before we have the constraints $f(z_1) > f(z_2) > {\frac{1}{2}}f(z_1)$ on any efficiency-maximizing solution. Hence the prediction map $f^*$ with $f^*(z_1) = 0.5$ and $f^*(z_2) = 0.5$ is not efficiency-maximizing, as it shows ad A on $q_1$ only half the time.
Discussion and Future Work
==========================
Our sufficient conditions are quite strong, but not unrealistic. They require that the bid and query not add any information about the CTR, conditional on the raw prediction. CTR estimation systems normally use queries as features (e.g., [@graepel10webscale]), so it is reasonable to hope that the query does not add extra information. Bids are set by advertisers for query-ad pairs, which are already used by CTR estimation systems, so any systematic patterns in bids are likely to be accounted for. Since advertisers have much less information than the auctioneer, it seems unlikely that they can add extra information about CTRs through fine-grained bid manipulation. We can test if our sufficient conditions hold by running randomization experiments that change the mix of ads shown.
Since randomized predictions cannot in general lead to maximum efficiency, it is natural to first consider deterministic prediction maps. Nevertheless, given the negative results in the current work, it would be interesting to also study randomized calibration strategies that provide calibration guarantees without needing IID assumptions. The natural question then becomes: how much efficiency is lost by using a randomized calibration strategy, versus a deterministic efficiency-maximizing prediction map that is not self-calibrated?
[^1]: This is without loss of generality, as we can always replicate ads for each query to which the advertiser has targeted the ad.
[^2]: This ignores the well-known issue of position normalization; this aspect of the problem is largely orthogonal to our work.
[^3]: Note ${\text{EV}}$ depends only on the ordering of the ads for each query induced by $f$, and so over all possible $f$, ${\text{EV}}$ takes on only a finite number of distinct values.
[^4]: Note that this does not hold under the NP-Hardness reduction for [`ONE`]{}in the previous section, as ${\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z, q]
\neq {\mathbb{E}_{{\mathcal{C}}}}[p {\!\mid\!}z]$.
---
abstract: 'We formulate and study a novel multi-armed bandit problem called the *qualitative dueling bandit (QDB)* problem, where an agent observes not numeric but qualitative feedback by pulling each arm. We employ the same regret as the *dueling bandit* (DB) problem where the duel is carried out by comparing the qualitative feedback. Although we can naively use classic DB algorithms for solving the QDB problem, this reduction significantly worsens the performance—actually, in the QDB problem, the probability that one arm wins the duel over another arm can be *directly* estimated without carrying out actual duels. In this paper, we propose such direct algorithms for the QDB problem. Our theoretical analysis shows that the proposed algorithms significantly outperform DB algorithms by incorporating the qualitative feedback, and experimental results also demonstrate vast improvement over the existing DB algorithms.'
author:
- 'Liyuan Xu${}^{\dagger\ddagger}$'
- 'Junya Honda${}^{\dagger\ddagger}$'
- |
Masashi Sugiyama${}^{\ddagger\dagger}$\
\
$\dagger$:The University of Tokyo $\ddagger$:RIKEN
bibliography:
- 'reference.bib'
title: Dueling Bandits with Qualitative Feedback
---
Introduction
============
Related Work
============
Problem Formulation {#sec:problem-formulation}
===================
Qualitative Dueling Bandit with the Condorcet Winner
====================================================
Qualitative Dueling Bandit with the Borda Winner {#sec:Borda_winner}
================================================
Experiments {#sec:Experiments}
===========
Conclusions
===========
Acknowledgements
================
LX utilized the facility provided by Masason Foundation. JH acknowledges support by KAKENHI 18K17998, and MS acknowledges support by KAKENHI 17H00757.
Preliminaries {#sec:Preliminaries}
=============
Proof of Theorem \[thm:thompson-condorcet-regret\] {#sec:Proof-of-thompson-condorcet-regret}
==================================================
Proof of Theorem \[thm:thompson-fails-borda\] {#sec:Proof-of-thompson-fails-borda}
=============================================
Proof of Theorem \[thm:UCB-borda-regret\] {#sec:Proof-of-Borda-regret}
=========================================
Proof of Theorem \[thm:borda-lower-bound\] {#sec:Proof-of-borda-lower-bound}
==========================================
**THE “PISCO” SPECKLE CAMERA AT PIC DU MIDI OBSERVATORY[^1]**
**J.-L. Prieur$^*$, L. Koechlin$^*$, C. André$^*$, G. Gallou$^*$, C. Lucuix$^*$**
*$^*$Observatoire Midi-Pyrénées, 14, Avenue E. Belin, F-31400 Toulouse, France.*
[**Abstract:**]{}
We present a new speckle camera designed and built at Observatoire Midi-Pyrénées. This focal instrument has been used for two years with the 2-meter Bernard Lyot Telescope of Pic du Midi observatory. It can be set in various operating modes: full pupil imaging, masked-pupil imaging, spectroscopy, wave-front sensor and stellar coronagraphy, hence its name “PISCO” (“Pupil Interferometry Speckle COronagraph”). Restored images of double and triple stars have demonstrated its capabilities in providing close to diffraction-limited images ($0.06''$ in V). PISCO has been fully tested and is now ready to be used by the whole astronomical community.

[**1. Introduction**]{}

This speckle camera was designed and built between 1991 and 1994 by the “Aperture synthesis team” of Observatoire Midi Pyrénées (OMP), as a new focal instrument for the 2 meter Télescope Bernard Lyot (TBL) at Pic du Midi. The aim was to take advantage of the good seeing quality of that site.
The optical design was chosen such that a pupil plane would be accessible and pupil masks could be put into it, to allow for aperture synthesis experiments and to simulate telescope arrays such as the ESO VLTI (European Southern Observatory Very Large Telescope in the Interferometric mode) or others. This speckle camera thus provides appropriate experimental tools for the investigation of image restoration techniques with the optical telescope interferometric networks that are currently being built and operated around the world (CHARA, COAST, GI3T, IOTA, NPOI, PTI, SUSI, etc.; see [*f.i.*]{} [*Astronomical Interferometry,*]{} Proceedings of SPIE, Vol. 3350, Kona, 20-28/03/98).
In the following we present an overview of PISCO and its optical concept (§2). We then introduce the various operating modes and illustrate them with observational results (§3). The detectors that have been used with PISCO are presented in §4. In §5, we discuss the current performance of the instrument in relation to the physical limitations of the speckle techniques and the image restoration methods. In §6 we briefly describe the on-going scientific programs which use PISCO, and conclude on the use of a speckle camera in the current context of new high angular resolution techniques.
[**2. The PISCO speckle camera**]{}
Compared to other speckle cameras (Blazit [*et al.*]{}, 1977, Breckinridge [*et al.*]{} 1979, Strittmatter, 1980, Beckers [*et al.*]{}, 1983, Foy, 1988a), our instrument offers the advantage of versatility and full remote control of all its operating modes. PISCO thus offers a wide range of possibilities with fast switching between modes (less than one minute), allowing an optimal use of the seeing conditions.
[*2.1. Description*]{}
The general layout is shown in Fig. . The external mechanical structure is a rectangular box of 100$\times$40$\times$36 cm$^3$ which is mounted at the Cassegrain focus of the telescope (Fig. ).
The input image plane (I1) of the telescope is located 200 mm downstream from the front flange of PISCO (this value is easily adaptable for the 3.6 m CFH or ESO telescopes). The converging input beam is transformed into a parallel beam by the collimating lens (L2), then focused into the image plane (I2) by the lens (L3).
Lens (L4) magnifies this image and projects it onto the detector faceplate. The focal length of (L4) can be selected with the wheel GR that bears a series of eyepieces and microscope objectives. A magnification of at least 20 mas/pixel is needed to obtain good sampling for speckle observations at the TBL, while a lower magnification is used for field acquisition.
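As a quick check of the sampling figure quoted above, the sketch below (Python, illustrative only, not part of the instrument software) compares the TBL diffraction limit with the Nyquist criterion; a scale of 20 mas/pixel indeed samples the $\lambda/D$ pattern at better than two pixels per resolution element in V:

```python
import math

def diffraction_limit_mas(wavelength_m, diameter_m):
    """Angular diffraction limit lambda/D, converted to milliarcseconds."""
    rad = wavelength_m / diameter_m
    return rad * (180.0 / math.pi) * 3600.0 * 1000.0

# TBL: 2 m aperture, V band taken here as ~500 nm
limit = diffraction_limit_mas(500e-9, 2.0)   # ~51.6 mas
nyquist = limit / 2.0                        # coarsest acceptable pixel scale, ~25.8 mas
print(f"lambda/D = {limit:.1f} mas, Nyquist pixel <= {nyquist:.1f} mas/pixel")
```

A 20 mas/pixel scale is therefore a slight over-sampling of the Nyquist requirement, which leaves margin for the shorter wavelengths.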
Figure 1: Optical diagram of PISCO.
Filters (wheels FA and FB) allow selecting the desired wavelength range, while neutral densities (wheels DA and DB) are used to adjust the light level to the (generally poor) dynamic range of the photon-counting detectors (see §4). As will be explained in §2.2, a set of Risley prisms corrects for the atmospheric chromatic dispersion.
Figure 2: PISCO and photon-counting detector CP40 at the TBL.
When a field lens is selected (wheel CH), the pupil plane is located in the plane of the wheel MA, where pupil masks are available for coronagraphy (§3.3) or multi-aperture interferometry (§3.2). If low dispersion spectroscopy (§3.4) is wanted, a grism can be put into the parallel beam (wheel FA). Wavefront analysis is also possible with a Hartmann sensor, by selecting a microlens array in the wheel MA and the pupil imaging mode in wheel GR.
PISCO can then be seen as an optical bench on which mounts and wheels can freely move and rotate. This concept allows a great flexibility for future instrumental developments. All instrumental functions, including wheel positioning and control of the Risley prisms are monitored by a microprocessor and remotely accessible via a RS232 link.
One of us (J.-L. Prieur) has developed a program in the PC/Windows environment to facilitate the remote control of PISCO. All the basic functions are available (Fig. ) with mouse-driven menus. The program controls in real time the atmospheric dispersion correction according to the telescope position, the filter and the atmospheric parameters (§2.2). A log file is also produced at the end of each night with the PISCO setup parameters of all the exposures taken during that night.
PISCO was primarily designed to be used at the Cassegrain focus of TBL, but it has also been made mechanically and optically compatible with the Cassegrain foci of the Canada-France-Hawaii (CFH) and European Southern Observatory (ESO) 3.6-m telescopes.
[*2.2. Atmospheric dispersion correction with Risley prisms*]{}
For an astronomical object observed from the ground at an elevation different from zenith, the atmosphere behaves as a dispersive prism (see f.i., Simon, 1966). Polychromatic images are spread into a small vertical spectrum. For instance, for a 250 nm bandwidth centered at 500 nm, the typical atmospheric dispersion is $1''$ for an elevation $h = 60^\circ$ and $2''$ for $h = 30^\circ$.
In PISCO, the atmospheric dispersion is corrected with “Risley prisms,” which consist of two identical sets of prisms (Breckinridge [*et al.*]{} 1979, Wallner, 1990) that can be rotated to produce a chromatic dispersion tunable both in amplitude and direction (Fig. 4a).
Each set is made of two prisms of different dispersion laws and roof angles, placed in an upside-down position. These prisms have been designed to have a null mean deviation, and a dispersion allowing atmospheric correction from the zenith down to an elevation of 30$^\circ$ in the blue domain, which is the most unfavorable (B filter, centered at 450 nm with a 70 nm bandwidth). We used the same combination of Schott glasses (F4, SK10) as the one used for the Kitt Peak speckle camera (Breckinridge [*et al.*]{}, 1979). Wallner (1990) found other combinations which are closer to the atmospheric dispersion curve, but we chose the Kitt Peak combination because of its low cost and sufficient efficiency for our purpose. Our Risley prisms reduce the residual dispersion down to a level smaller than $0.01''$ for every location of the object in the sky above an elevation of 30$^\circ$ with the 70 nm bandpass B filter; this residual is small compared to the diffraction limit of $0.05''$ in B at the TBL.
During the observations, a specially designed program (already mentioned in §2.1) computes the elevation of the star and the corresponding atmospheric dispersion using J.C. Owens’ model of the atmosphere (Owens, 1967, formulae 29–31). The Risley prisms are then dynamically rotated during data acquisition to compensate for the atmospheric dispersion.
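To give a feeling for the magnitudes involved, the sketch below reproduces the order of the dispersion values quoted above with a deliberately crude plane-parallel model (Cauchy-type air refractivity with illustrative constants; the actual PISCO control program uses Owens' formulae, which are not reproduced here):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

def n_minus_one(lam_um):
    """Crude Cauchy-type refractivity of air (illustrative constants only)."""
    return 2.88e-4 * (1.0 + 5.67e-3 / lam_um**2)

def dispersion_arcsec(lam1_um, lam2_um, elevation_deg):
    """Differential refraction between two wavelengths in the
    plane-parallel approximation: R(lam) ~ (n - 1) * tan(z)."""
    z = math.radians(90.0 - elevation_deg)
    return abs(n_minus_one(lam1_um) - n_minus_one(lam2_um)) * math.tan(z) * ARCSEC_PER_RAD

# Band edges of a 250 nm bandwidth centred at 500 nm
d30 = dispersion_arcsec(0.375, 0.625, 30.0)   # roughly the "2 arcsec" case
d60 = dispersion_arcsec(0.375, 0.625, 60.0)   # roughly the "1 arcsec" case
print(f"h=30deg: {d30:.2f} arcsec   h=60deg: {d60:.2f} arcsec")
```

Even this rough model shows why a correction is mandatory: the uncorrected dispersion at $h = 30^\circ$ is some fifty times the $0.05''$ diffraction limit in B.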
[**3. The observation modes**]{}
PISCO can be used in various modes which are selected during the observations. Switching from one mode to another takes a few mouse clicks and less than one minute for the motors to set the wheels.
[*3.1. Full pupil speckle imaging*]{}
In this mode, no pupil masks are used and a high magnification is selected with wheel GR (Fig. ). It corresponds to the “conventional” way of observing in speckle interferometry. Most speckle cameras (Breckinridge [*et al.*]{} 1979, Strittmatter, 1980, Foy, 1988a) offer only this possibility – or would require many optical changes with a full re-calibration of the instrument for other operating modes. By applying bispectral techniques (§5.2), we have obtained images with an angular resolution close to the diffraction limit of the telescope (Fig. 4b).
The full pupil is used and the optical transfer function (OTF) corresponds to that of the telescope; we shall see in the following that another OTF may be preferred. For instance, the diffraction pattern of the telescope spider may pollute the final image and hinder the detection of faint objects in the vicinity of a bright one.
Another drawback of this OTF is that low spatial frequencies dominate the transfer function. In photon-counting mode, the few available photons are then mainly spread in the (less useful) low frequencies.
As the limited dynamic range of detectors such as the CP40 imposes the use of neutral densities to reduce the photon flux from bright objects to only a few hundred photons per frame (§4), these detectors always work in photon-counting mode. This limits the performance of the restoration methods even for bright objects, which is why the next mode may be preferred in that case.
[*3.2. Masked pupil speckle imaging and aperture synthesis*]{}
By inserting masks into the pupil plane (P1) (cf. Fig. ), the pupil function (and thus the OTF) can be modified as desired. For instance, the spider diffraction pattern can be removed by placing a Lyot mask or a four-hole mask which carefully avoids the shadow of the spider (see §3.3), and telescope arrays can be simulated by placing a mask with appropriately located small holes.
Pupil masks make it possible to select a sub-sample of spatial frequencies and to measure the corresponding complex visibilities more accurately, since they are less attenuated (the overlap of the fringes is lower); hence a better use of the maximum number of photons allowed by the detector. The price to pay is an interpolation in Fourier space (aperture synthesis, see methods in §5.1) and complementary observations to make the process more robust in the case of complex objects.
We made 3 pupil masks by drilling 0.7 mm holes into a 5-cm metallic disk, according to some of the complementary non-redundant networks from Golay (1971). They are displayed in Fig. with the corresponding $(u,v)$ coverage. A successful image of HR 8652 was restored using these masks and the aperture synthesis method from Lannes (1989, 1991) and Anterrieu (1992). This method would work with any other configuration (there is no need for complementary networks). We chose these masks because the corresponding $(u,v)$ coverage (Fig. ) was rather compact with only small gaps, which makes image restoration more robust.
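The non-redundancy condition behind such Golay-type masks (each baseline vector measured by exactly one pair of holes) is easy to state in code. The sketch below uses a hypothetical 4-hole layout, not the actual PISCO mask coordinates:

```python
from itertools import combinations

def baselines(holes):
    """All pairwise baseline vectors (u, v) from a list of hole positions."""
    return [(xb - xa, yb - ya) for (xa, ya), (xb, yb) in combinations(holes, 2)]

def is_non_redundant(holes, tol=1e-9):
    """A mask is non-redundant if no two baselines coincide (up to sign),
    so every (u, v) point carries an independent visibility measurement."""
    seen = []
    for u, v in baselines(holes):
        for su, sv in seen:
            if (abs(u - su) < tol and abs(v - sv) < tol) or \
               (abs(u + su) < tol and abs(v + sv) < tol):
                return False
        seen.append((u, v))
    return True

# Hypothetical 4-hole layout (illustrative, not the Golay networks used on PISCO)
mask = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
print(is_non_redundant(mask), len(baselines(mask)))
```

A square layout, by contrast, repeats its side vector twice and fails the test; non-redundancy is what allows each fringe visibility to be calibrated unambiguously.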
The main drawback is a lower limiting magnitude in the case of small holes, and this method can only be used for objects with $V < 8$ at the TBL.
[*3.3. Coronagraphic mode*]{}
PISCO can be used as a Lyot coronagraph by putting adequate masks $m_1$ in the entrance image plane (I1) and $m_2$ in the pupil plane (P1) (wheels EN and MA of Fig. ). This mode was successfully tested in 1994 with long integrations on a conventional CCD detector. Speckle imaging from short-exposure frames has little interest in this mode, since the occulting mask $m_1$ in (I1) needs to be quite large (a few times the seeing FWHM) to hide most of the brightness of the central target. This mode would reach its full potential with adaptive optics and a small occulting mask $m_1$, allowing investigations closer to the target (see f.i. some recent developments with a phase mask in the image plane, Roddier & Roddier, 1997).
A four-hole pupil mask can also be used to suppress the diffraction image of the spider of the telescope. This reduces the scattered light from a bright object and allows the detection of a possible faint close companion or stellar envelopes, as seen in Fig. . The stellar profile is more concentrated with this mask in place: if we normalize the profiles to the central value, the level of the wings is reduced by a factor larger than 3.
[*3.4. High angular resolution spectroscopy*]{}
Some authors have already shown the feasibility of speckle spectroscopy (Weigelt [*et al.*]{}, 1991, Kuwamura [*et al.*]{}, 1992), which is of great interest for the individual study of binary stars or for determining the physical nature of fine details found in speckle imaging. Two possibilities have been used: spectroscopy with or without a slit in the entrance image plane.
– Both the Hokudai speckle camera at the Okayama 188 cm telescope and the Steward Observatory speckle camera with the spectroscopy module at KPNO used by Kuwamura [*et al.*]{}, 1992, worked in [*objective prism spectroscopy mode*]{}, i.e., slitless spectroscopy. In this mode, the resolution is not fixed and changes as the seeing varies. The spectral calibration is rather difficult to perform since it depends upon the position of the object in the field. But the main advantage is that all the incoming light is used, without any loss.
– Weigelt [*et al.*]{}, 1991, proposed a slit spectroscopy setup which allows a high spectral resolution and a fixed spectral calibration. This is the option we chose because we wanted to be able to do stellar classification of the components of binary stars and work with a good spectral calibration. The main drawback is a loss of sensitivity due to the rejection of light by the entrance slit. PISCO can easily be converted to a spectrograph by selecting the grism in the wheel FA and a slit in the wheel EN in the entrance image plane (I1) (Fig. ). It then provides a low dispersion spectrographic mode with a spectral range of 350–500 nm, and a spectral resolution of $\sim$300 with a slit of $0.7''$. This range was chosen to allow stellar classification of close binaries with the hydrogen Balmer series. Unfortunately, the atmospheric turbulence is stronger in the blue domain, which makes images more difficult to restore. The wavelength calibration can be performed with calibration spectral lamps in wheel AS (Halogen, Argon, Neon and Xenon lamps).
Quick switching between imaging and spectroscopy is possible since this mode can be remotely selected by rotating the wheels. The observing procedure is the following:
– obtain the autocorrelation of the binary star in the full pupil mode (cf., §3.1) and measure the rotation angle to align the slit on the direction of the two components.
– rotate the telescope flange supporting PISCO and center the object on the slit.
– switch to the spectroscopic mode and record the data.
Restoration of high angular resolution information in the direction of the entrance slit is then done by applying one-dimensional speckle imaging techniques to each monochromatic image of the slit (see §5.2).
The first spectroscopic observations were made in 1995. Unfortunately the poor seeing conditions and the low dynamic range of the detector did not allow us to restore high resolution images. We simply obtained long integration spectra (Fig. 7) and calibrated the whole instrument with known stars.
Figure 7: First spectrum obtained with PISCO (HD 5774).
[*3.5. Wavefront analyzer*]{}
The atmospheric wavefront can be sensed with the Shack-Hartmann method (Roggemann [*et al.*]{}, 1997), by putting a microlens array into the pupil plane (P1) and a specific imaging lens in the GR wheel (cf. Fig. ). Each microlens has a diameter of 0.7 mm, which corresponds to 20 cm on the pupil at the TBL and 10 cm at the 3.6 m CFH or ESO telescopes.
The SCIDAR (SCintillation Detection And Ranging) technique (Vernin & Roddier, 1973), which consists in analysing images of the pupil brightness lit by two stars to measure the wind speed and the altitude of the turbulent layers, could also be applied with PISCO. In that case, fast detectors operating at frequencies larger than 200 Hz (such as the PAPA or the RANICON cameras, cf. §4) are needed to “freeze” the turbulence.
[**4. The detectors**]{}
PISCO has been used with a wide range of detectors. The characteristics of the detector largely determine the performance of the image restoration process, and a good knowledge of the limitations of the detector is essential to elaborate an observing strategy and obtain valid measurements. Here we describe the detectors that we have already tested on PISCO. For more information about the detectors used in the field of optical speckle interferometry, see for instance the reviews in Cuby (1988), Richard [*et al.*]{} (1990) or Cuby [*et al.*]{} (1990).
[*4.1. The CP40-INSU detector*]{}
The first CP40 detector was designed at CERGA for interferometry observations by A. Blazit (Blazit, 1976, 1987). It is a two-stage intensified CCD camera followed by a photon analyzer which computes the coordinates of the photo-events (Foy, 1988b). The field is covered by a mosaic of 4 CCDs of 288$\times$384 pixels each. We used a duplicate of this prototype, financed by INSU to make it available to the French astronomical community and in particular to the instruments of the TBL.
The exposure time is set to 20 msec, which may be too large for speckle applications when the coherence time is smaller than this value. To circumvent this difficulty a rotating shutter was implemented which reduces the exposure time to 5 or 10 msec. This shutter interrupts the light beam in the speckle camera with a rotating opaque sector synchronized with the frame signal of the CP40 (phase-locked motor).
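The geometry of such a synchronized sector is straightforward: at one turn per 20 msec frame, the open angle of the rotating sector sets the effective exposure. A minimal sketch (illustrative geometry, not the actual shutter design):

```python
def sector_open_angle_deg(exposure_ms, frame_period_ms=20.0):
    """Open-sector angle of a rotating shutter synchronised at one
    revolution per detector frame, giving the requested exposure time."""
    return 360.0 * exposure_ms / frame_period_ms

print(sector_open_angle_deg(5.0))    # 5 ms exposure
print(sector_open_angle_deg(10.0))   # 10 ms exposure
```

A 90-degree opening thus yields the 5 msec exposure, at the cost of discarding three quarters of the incoming light.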
Because of a dead zone between the 4 image quadrants of the CP40, we decided to use only one quadrant and made a special “off-axis” mechanical interface to align the center of the selected quadrant with the optical axis of PISCO.
The geometrical distortion caused by the two-stage amplifier is rather large, of the order of 20% at the edges (Thiebault, 1994).
Another problem is the strongly non-uniform sensitivity of the photo-cathode within a single quadrant (falling to nearly zero at one edge), which can hardly be corrected by a flat-field map and causes a large non-uniformity of the signal-to-noise ratio within the elementary frames. The photometry of the image restoration process is also badly affected for intrinsically large objects that spread over the whole image.
The electronic device which computes the coordinates of the photo-events produces an artifact which affects the photometry of the images.
When two photo-events are very close in the image, they merge into a single spot. The photon centering device is unable to identify it properly and discards such an event. This causes a depletion of high spatial frequencies in the power spectrum. A “hole” can be seen in the center of the mean auto-correlation, which becomes larger when the photon flux increases. This problem also affects the photometry since many photons are not recorded in the high intensity regions of the image.
To reduce this effect during our observations, the photon flux had to be limited to around 10,000 photons/sec and a high magnification was used to over-sample by a factor of 3.
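The flux dependence of this “photon-counting hole” can be illustrated with a simple estimate: for events spread uniformly over the frame, the chance that a given event has a neighbour within the merging radius, and is therefore discarded, grows with the number of photons per frame. The numbers below (frame size, merging radius) are illustrative, not measured CP40 parameters:

```python
import math

def merged_fraction(n_photons, frame_side_px, merge_radius_px):
    """Approximate fraction of photo-events lost to pair merging in one
    frame: the probability that at least one other event lands within the
    merge radius, for uniformly distributed events (Poisson approximation)."""
    area = frame_side_px ** 2
    blind = math.pi * merge_radius_px ** 2
    return 1.0 - math.exp(-(n_photons - 1) * blind / area)

# 20 ms frames at ~10,000 photons/s give ~200 photons per frame
for n in (50, 200, 800):
    print(n, round(merged_fraction(n, 288, 2.0), 4))
```

Since bright image regions are locally denser than this uniform estimate, the losses concentrate there, which is precisely why the artifact biases the photometry.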
[*4.2. The Ranicon*]{}
The Ranicon (“Resistive Anode camera”, described in Clampin and Paresce (1989)) was built by the Space Telescope Science Institute (Baltimore). The model we used was lent by the Observatoire de la Côte d’Azur (OCA) for some observing runs between 1993 and 1996.
This detector has an S20 photo-cathode and a single microchannel amplifier operated in saturated mode (Gen II). The position analysis of the detected photo-events is made with a resistive anode. Each cloud of amplified electrons, resulting from the impact of a photon on the photo-cathode, produces a charge drift towards the four electrodes which surround the resistive anode. The location of the impact is deduced from the voltage variations at the electrodes, at event rates up to about 10 kHz.
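One common charge-division convention for reading out a four-electrode resistive anode is sketched below; the actual Ranicon electronics may differ in detail:

```python
def anode_position(qa, qb, qc, qd):
    """Photo-event position from the four electrode charges of a
    resistive anode, in one common charge-division convention
    (illustrative; not the exact Ranicon readout). Returns (x, y)
    as fractions of the anode size, in [0, 1]."""
    total = qa + qb + qc + qd
    x = (qb + qc) / total
    y = (qa + qb) / total
    return x, y

# A charge cloud splitting equally among the four electrodes
# corresponds to an event at the centre of the anode.
print(anode_position(1.0, 1.0, 1.0, 1.0))
```

Because the position is a ratio of charges, it is insensitive to gain fluctuations of the microchannel plate, which is what makes the scheme practical at these event rates.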
Compared to the CP40, the quantum efficiency of the Ranicon is smaller by a factor of $\sim$3. This is due to the lower efficiency of the GEN II compared to the GEN I intensifiers.
Although the micro-channel amplification does not introduce geometric distortion, we have noticed a small distortion: the X and Y axes are not perfectly perpendicular. A small variation of the geometric scale was also noticed, and calibration was needed during the night.
Another unexpected defect was the presence of a small “hole” at the center of the autocorrelation function, similar in some way to that of the CP40 (cf. §4.1), but with a smaller amplitude. This is caused by the depletion of electrons in a micro-channel after a photon detection: a delay of a few tenths of milliseconds is needed to recover its full charge and efficiency. To reduce this defect, the photon flux had to be lowered to about 8000 photons/sec.
[*4.3. Other detectors*]{}
Two other detectors have been used with PISCO: the ICCD (Intensified Charge Coupled Device) belonging to C. Aime and E. Aristidi from Nice University, and P. Nisenson’s PAPA camera from the Harvard-Smithsonian Center for Astrophysics (CfA):
– The ICCD has a single-stage intensifier. It cannot operate in true photon-counting mode and is thus limited to objects brighter than V$_{lim} \sim$10. This detector has no significant geometric distortion nor non-linearity problems which would affect the photometry measurements. The exposure time can be set between 64 $\mu$sec and 16 msec and the gain of the micro-channel amplifier can be tuned, thus allowing a wide range of input luminosities. The output is an analog video signal, recorded on SVHS video cassettes. Bispectral image restoration with this detector has been very promising, and the first attempts led to the restoration of a triple star (Aristidi [*et al.*]{}, 1997).
– The principle of the PAPA camera was described in Papaliolios and Mertz (1982) and Papaliolios [*et al.*]{}, (1985). It features a two-stage electrostatic amplifier, and a fast (P46) phosphor. Amplified photon impacts are analyzed by a set of binary masks which act as an optical computer to instantly digitize the position of photons in the field. The version we used was new and not fully operational, with a new binary mask setup and a refurbished image intensifier jointly made by P. Nisenson, D. Gezari (CfA) and L. Koechlin (OMP). The first observations in June 1997 have shown that the quantum efficiency was very good, slightly larger than that of the CP40. The maximum photon rate per second was as high as 100,000 but a small “hole” at the center of the autocorrelation function was also noticed. The geometric distortion caused by the image intensifier was large and an overall scale variation during the night imposed quasi permanent scale calibrations. To solve this problem, the image intensifier was changed after these observations.
[*4.4. Comparison of the detectors*]{}
Here is a summary of the characteristics of the detectors used with PISCO.
1. The CP40 has a good quantum efficiency but a non-uniform sensitivity and a strong geometric distortion, with a fixed integration time of 20 msec. The “photon-counting hole” affects the photometry and limits the photon flux to around 10000 ph/sec (limiting magnitude at the TBL: V$_{lim}\sim$12).
2. The Ranicon has a very low geometric distortion, but a poor quantum efficiency and a limitation of the usable photon flux to around 8000 ph/sec (V$_{lim}\sim$11). It generates a chronologically ordered list of photon coordinates.
3. The PAPA exhibits a flat-field pattern and geometric distortion. The photon flux is limited to around 100,000 ph/sec, and V$_{lim}\sim$12. It also generates a chronologically ordered list of photon coordinates.
4. The ICCD of Nice Univ. has a lower gain than the previous detectors, no geometric distortion, virtually no limitation of the photon flux for normal astrophysical use, and V$_{lim}\sim$10. The image rate is 50 Hz, with an electronic shutter able to reduce the integration time to 0.06 msec.
Hence the detector should be chosen according to the observing program, since some defects may be incompatible with the observation requirements. A good detector for high resolution imaging is still awaited, and some technical developments are under way in our team to contribute to this problem. A prototype of a new photon-counting camera that would allow a high photon rate and a direct digitization of photon coordinates is being tested (DELTA camera, Koechlin & Morel, 1998).
[**5. Performances and limitations**]{}
[*5.1. Physical limitations*]{}
The main effect of seeing on speckle observations is a strong reduction of the limiting magnitude under bad seeing conditions, whereas the attainable angular resolution degrades more slowly from the theoretical limit of $\lambda/D$, where $\lambda$ is the wavelength and $D$ the telescope diameter (Roddier, 1981).
As solar observations have demonstrated, the Pic du Midi site sometimes features slow seeing variations and an extended isoplanatic patch, which indicates that the TBL is potentially well suited to speckle and adaptive optics observations at short wavelengths, despite its modest size. With a diffraction limit of $0.06''$ in V, the TBL can provide high quality data from which many astrophysical programs could benefit.
Due to the necessarily short exposure times, photon noise is the most severe limiting factor in speckle imaging (Dainty and Greenaway, 1979, Beletic and Goody, 1992). The limiting magnitude depends on the atmospheric seeing, the spectral bandpass, the angular resolution to be achieved, and the quantum efficiency of the detector (Dainty and Greenaway, 1979). This limit is reached at the TBL with detectors such as the CP40 and the PAPA and a bandpass of 70 nm. Without filters, the expected limiting magnitude would increase. In that case, the wavelength range is determined by the product of the sensitivity response of the detector with that of the photo-cathode, and the resulting bandwidth is a few hundred nanometers. It was shown both experimentally (Hege [*et al.*]{}, 1981) and with numerical simulations (Ziad [*et al.*]{}, 1994) that such extreme observing conditions can still be used to detect the duplicity of faint objects.
[*5.2. Data processing and image restoration*]{}
10000 Whereas data quality is of paramount importance and obviously limits the angular resolution that can be ultimately obtained in the reconstructed images, the nature of the data reduction methods subsequently employed to extract the scientific information contained in these images plays a key role in ensuring the overall success of the scientific programs.
We have written the software to process data from the various detectors used (cf. §4), and the different observing modes: speckle imaging, aperture synthesis with pupil masks, speckle spectroscopy, and coronagraphy. The analysis of the wavefront is not yet implemented.
For speckle imaging, a few observers have independently reached the conclusion that “the bispectrum combined with a constrained iterative deconvolution of amplitudes produces the highest quality imagery” (Beletic and Goody, 1992). Nevertheless, we have used various programs ranging from Knox-Thompson (1974) to full bispectrum methods (Weigelt, 1977, Roddier, 1986, Lannes, 1989), and even partial bispectrum methods (i.e. using only a subset of all possible closure relations), and found little difference in the restored images of double stars. The pre-processing of the original data (correction of geometric distortion, flat-fielding, and various calibrations) is for us the crucial step in the whole image restoration process.
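At the heart of the bispectrum methods cited above is the phase closure relation, which lets the object's Fourier phase be rebuilt recursively from bispectrum phases. The 1-D sketch below demonstrates the recursion on a single noise-free spectrum; in real speckle reduction the bispectrum is first averaged over many short exposures, which is what cancels the random atmospheric phases:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (sufficient for a tiny demo)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * u * k / n) for k in range(n))
            for u in range(n)]

def bispectrum_phases(spectrum):
    """Recover Fourier phases from the bispectrum subset
    B(u, 1) = F(u) F(1) F*(u+1), using the closure relation
    phi(u+1) = phi(u) + phi(1) - arg B(u, 1).
    Seeded here with the true phi(0), phi(1) for simplicity."""
    n = len(spectrum)
    phases = [cmath.phase(spectrum[0]), cmath.phase(spectrum[1])]
    for u in range(1, n - 1):
        b = spectrum[u] * spectrum[1] * spectrum[u + 1].conjugate()
        phases.append(phases[u] + phases[1] - cmath.phase(b))
    return phases

obj = [0.0, 3.0, 1.0, 0.5, 0.0, 0.0, 0.0, 2.0]   # toy 1-D "double star"
spec = dft(obj)
rec = bispectrum_phases(spec)
```

The recovered phases equal the true ones modulo $2\pi$; in practice the seed phase only fixes the (irrelevant) position of the object in the field.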
The “Aperture Synthesis” team at OMP has mainly been involved during the last few years in the theoretical aspects of aperture synthesis and related problems such as deconvolution, wavelets and multi-resolution methods, with applications to single-aperture interferometry and multi-aperture devices (Lannes [*et al.*]{} 1987; Lannes 1988, 1989, 1991). The approach to these problems is deterministic and based on a least-squares scheme that allows error analysis, hence a good understanding of the stability of the image restoration process. Note that the deconvolution method (Lannes [*et al.*]{} 1987) respects the photometry of the target, which is necessary for many applications, such as the determination of color indices of binary stars (cf. §6).
[**6. Astrophysical programs**]{}
In this section, we describe some of the scientific subjects which are being studied with PISCO at Pic du Midi. Other programs aimed at imaging complex objects (asteroids and stellar envelopes) could not be carried out because of bad weather conditions.
[*– Orbits of binary stars*]{}
The study of binaries is a well suited program for speckle cameras, as the bibliography of the last two decades easily shows (CHARA project, McAlister and Hartkopf, 1984, 1988, Hartkopf [*et al.*]{}, 1996).
A long term program aims at measuring the position of close binaries to determine the orbits and derive the masses of the components using the parallaxes measured by Hipparcos (Carbillet [*et al.*]{}, 1996, Aristidi [*et al.*]{}, 1997, Aristidi [*et al.*]{}, 1998). New orbital elements have already been recalculated for 8 double stars from these observations (Aristidi [*et al.*]{}, 1998).
We noticed that PISCO was very efficient for binary-star studies, even when the atmospheric conditions were poor and did not allow any other imaging program. Hence binary measurements have been used as a backup program for all our high angular resolution observations.
[*– Stellar classification of components of binary stars*]{}
When images have been restored in B, V, R with good photometry (cf. §5.2), color indices can be measured, which then allow the stellar classification of each of the two stars. This may prove essential for stars whose masses have been derived. Accurate orbit determinations (and hence masses) are easier to perform for short-period binaries, which are generally very close and for which only global color indices or spectra are available; the individual stellar classification is then poorly known. The paradox is that accurate masses are attributed to stars with large uncertainties in their stellar classification, while less accurate masses go to well identified stars, in the case of binaries with a large angular separation. Hence color indices, or even spectra, of individual stars are crucial for stellar studies.
A study of composite spectrum stars (coll. J.-M. Carquillat and N. Ginestet, OMP) associates the imaging and spectroscopic modes of PISCO. Some stars exhibit the signature of a composite spectrum which can be interpreted as the sum of (at least) two spectra of different types (Ginestet [*et al.*]{}, 1994). The aim of this program is to detect the possible presence of a companion and then to identify the spectral type of both components, either with color indices or (when possible) with a high angular resolution spectrum which would separate the spectra of the individual stars.
[*– Search for binarity and statistical studies*]{}
The influence of the presence of a companion on star formation (disk accretion) and stellar evolution (stellar winds, f.i.) is not yet fully understood. Hence surveys have been undertaken to determine the frequency of binarity among pre-main sequence or post-AGB stars, so as to constrain the theoretical models with these statistical results. Once binarity has been established, the next step is to identify the nature of each companion, either through its photometry or directly by spectroscopy.
A statistical study of pre-main sequence stars was started in 1996, with complementary high angular resolution observations made at ESO and CFHT with Adaptive Optics (AO) in the infra-red (Bouvier [*et al.*]{}, 1996).
Another program directed by E. Aristidi and B. Lopez (Nice Univ., France) aims at searching for binaries among Mira-type stars (for which binarity has been suspected by Hipparcos), and studying the interaction between the envelope of the Mira and the atmosphere of the companion (Lopez [*et al.*]{}, 1998).
[**7. Conclusion**]{}
The speckle camera of Observatoire Midi-Pyrénées has been tested in all its operating modes and is now qualified for routine scientific exploitation. Its versatility with multi-mode observational possibilities makes it particularly well suited to testing the new methods of image restoration and aperture synthesis. The experience gained with pupil masks may have direct applications for reducing data from optical interferometric arrays.
The good performance of speckle methods for binary star observations has led to numerous orbit measurements during the last twenty years all around the world, and PISCO has started to contribute to this effort (Carbillet [*et al.*]{}, 1996, Aristidi [*et al.*]{}, 1997, Aristidi [*et al.*]{}, 1998). This high efficiency makes speckle observation a “privileged” tool for binary studies. A new series of speckle programs has been prompted by the discovery by Hipparcos of thousands of binary candidates (confirmation of binarity, orbits, variability of companions, etc.). New space projects, such as space interferometers dedicated to parallax measurements (ESA GAIA), will also need ground-based follow-up observational programs in the future, to which PISCO and speckle techniques in general may significantly contribute.
[**Acknowledgements:**]{}
We are indebted to A. Blazit, D. Mourard, E. Aristidi, D. Gezari and P. Nisenson for lending us their detectors, and to A. Lannes, M. Festou, J.-M. Carquillat, N. Ginestet and M. Scardia for their fruitful collaboration in the scientific exploitation of PISCO.
We thank the Observatoire Midi-Pyrénées technical staff, especially the workshops of Toulouse, Bagnères de Bigorre and Pic du Midi, and the night assistants and operators of the TBL, for their participation in this project. We acknowledge the assistance of J. Cadaugade and S. Chastanet for the preparation of the photographs.
This instrument was financed by a grant from the [*Institut National des Sciences de l’Univers*]{} of the [*Centre National de la Recherche Scientifique (CNRS)*]{} to the TBL, with additional support from the [*Unités de Recherche Associées n$^{\circ}$1281*]{} and [*n$^{\circ}$285*]{} (now [*Unité Mixte de Recherche n$^{\circ}$5572*]{}) of CNRS.
**Bibliography**
[^1]: Based on observations made at Télescope Bernard Lyot, Pic du Midi, France
---
abstract: 'We address the question of the dependence of the fragility of glass-forming supercooled liquids on the “softness” of the interaction potential by performing numerical simulations of a binary mixture of soft spheres with different powers $n$ of the interparticle repulsive potential. We show that the temperature dependence of the diffusion coefficients for various $n$ collapses onto a universal curve, supporting the unexpected view that fragility is not related to the hard-core repulsion. We also find that the configurational entropy correlates with the slowing down of the dynamics for all studied $n$.'
author:
- 'Cristiano De Michele${}^{1,2}$'
- Francesco Sciortino
- Antonio Coniglio
title: 'Scaling in soft spheres: fragility invariance on the repulsive potential softness'
---
When a liquid is cooled below its melting temperature, if crystallization does not take place, it becomes [*supercooled*]{}. In this supercooled region, the viscosity increases by more than 15 orders of magnitude in a small $T$-range. When the viscosity $\eta$ reaches a value of about $10^{13}$ Poise the liquid can be treated as an amorphous solid, i.e., a glass [@stillnaturerev; @torquatorevglass; @stillglassreviewscience; @angellnatureliqland], and the corresponding temperature is defined as the glass transition temperature (labelled $T_g$). The $T$-dependence of the viscosity $\eta$ differs for different glass formers. Angell has proposed a classification based on the behaviour of $\eta(T)$. Glasses are said to be [*fragile*]{} if they show large deviations from an Arrhenius law ($\eta(T)\propto \exp[E/T]$) or [*strong*]{} otherwise [@angellfrag]. The fragility $m$ of a glass forming liquid can be quantified by the slope of $\log\eta(T)$ vs $T_g/T$, evaluated at $T_g$, i.e. as $$m = \frac{d\log \eta}{d(T_g/T)}\bigg|_{T=T_g}
\label{eq:fragdlog}$$ While the original definition of fragility is based on a purely dynamic quantity, correlations between $m$ and other physical properties of glass forming liquids, both dynamic and thermodynamic, have been reported. Recently, a remarkable correlation with vibrational properties of the glass state has been discovered [@tullioscience]. One of the main challenges in the physics of supercooled liquids and glasses is to understand the connection between dynamical properties of the liquid close to the glass transition, i.e. the fragility, and microscopic properties. Is the fragility most affected by the steepness of the repulsive potential or by the interparticle attraction? Is it controlled by other properties of the interaction potential? In the present Letter we address this question by calculating numerically the fragility of several models for liquids, differing only in the softness of the repulsive potential. We aim at understanding whether the fragility changes accordingly when the softness of the repulsive potential is changed. We show, more generally, that the diffusion coefficient $D$ can be scaled onto a universal master curve by changing the softness of the repulsive potential. This implies that the fragility does not depend on the softness of the interaction potential. We complement this dynamical study with the evaluation of the configurational entropy, to check the validity of the Adam-Gibbs[@AG; @antosconf; @schillingreview; @wolynes] relation. In this Letter we consider a simple glass former, a $80:20$ binary mixture of $N=1000$ soft spheres [@replicaBMLJPRL; @speedySSPEL; @depablosoft], that is, an ensemble of spheres interacting via the potential $$V_{\alpha\beta}(r) = 4\epsilon_{\alpha\beta} \left(\frac{\sigma_{\alpha\beta}}{r}\right)^n
\label{Eq:Vsoftsphere}$$ where $\alpha,\beta\in{A,B}$, $\sigma_{AA}=1.0$, $\sigma_{AB} = 0.8$, $\sigma_{BB} = 0.88$, $\epsilon_{AA}=1.0$, $\epsilon_{AB}=1.5$, $\epsilon_{BB}=0.5$ and $n$ is a parameter by which it is possible to tune the “softness” of the interaction [@hansenmcdonald]. This interaction potential is a Kob-Andersen potential [@kobandersenPRE] in which the attractive part of the potential has been dropped. In particular we investigate the values $n=6,8,12,18$. This choice for the binary mixture is motivated by the fact that such a system is not prone to crystallization, that is, it can easily be supercooled below its melting temperature. However, for $n < 6$, crystallization takes place within the simulation time, setting a lower limit to the range of investigated $n$ values. Reduced units will be used in the following: length in units of $\sigma_{AA}$, energy in units of $\epsilon_{AA}$ and time in units of $(M\sigma_{AA}^2/\epsilon_{AA})^{1/2}$, where $M$ is the mass of all particles. In physical units, assuming atom $A$ is Argon, these correspond to a length of $3.4\,\AA$, an energy of $120\,\mathrm{K}\,k_B$ and a time of $2.15\,\mathrm{ps}$.
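For concreteness, the pair potential of Eq. (\[Eq:Vsoftsphere\]) with the mixture parameters listed above can be evaluated as follows; this sketch is purely illustrative and is not part of the original study:

```python
# Soft-sphere pair potential V_ab(r) = 4*eps_ab*(sig_ab/r)**n for the
# 80:20 binary mixture, in reduced units (lengths in sigma_AA, energies
# in eps_AA).  Parameter values are those quoted in the text.
SIGMA = {("A", "A"): 1.0, ("A", "B"): 0.8, ("B", "B"): 0.88}
EPS = {("A", "A"): 1.0, ("A", "B"): 1.5, ("B", "B"): 0.5}

def pair_potential(r, a, b, n=12):
    """Purely repulsive soft-sphere interaction between species a and b."""
    key = (a, b) if (a, b) in SIGMA else (b, a)
    return 4.0 * EPS[key] * (SIGMA[key] / r) ** n

# At contact (r = sigma_ab) the potential equals 4*eps_ab for any n.
print(pair_potential(1.0, "A", "A"))        # 4.0
print(pair_potential(0.8, "B", "A", n=6))   # 6.0
```

Note that for $r>\sigma_{\alpha\beta}$ a larger $n$ gives a faster decaying, i.e. less soft, repulsion.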
The self-similar nature of the soft-sphere potential couples $T$ and $V$. It can be shown that all thermodynamic properties depend on the quantity $TV^{n/3}$[@softeos]. Dynamic properties can also be scaled accordingly [@japsoft]. Hence, it is sufficient to quantify either the $T$-dependence or the $V$-dependence of any observable to fully characterize the behavior of the system. As a consequence, the fragility does not change upon changing the density of the soft binary mixture.
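This scaling can be verified directly: rescaling all particle coordinates by a factor $\lambda$ (so that $V\rightarrow\lambda^3V$) while rescaling the temperature as $T\rightarrow T/\lambda^n$ leaves the reduced potential energy $\sum_{i<j}V(r_{ij})/k_BT$ of any configuration unchanged, and both state points share the same value of $TV^{n/3}$. A short illustrative sketch (not part of the original study, open boundaries, $\epsilon=\sigma=1$):

```python
import itertools
import random

def reduced_energy(coords, T, n):
    """beta * sum of soft-sphere pair energies (eps = sigma = 1, no cutoff)."""
    e = 0.0
    for p1, p2 in itertools.combinations(coords, 2):
        r = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
        e += 4.0 * r ** (-n)
    return e / T

random.seed(1)
n, box, T = 12, 5.0, 1.0
pts = [tuple(random.uniform(0, box) for _ in range(3)) for _ in range(20)]

# Rescale all lengths by lam (V -> lam**3 * V) and the temperature by
# lam**-n, which keeps T*V**(n/3) fixed.
lam = 1.3
scaled = [tuple(lam * c for c in p) for p in pts]
T2 = T / lam ** n

e1 = reduced_energy(pts, T, n)
e2 = reduced_energy(scaled, T2, n)
# e1 == e2: beta*U depends on the state point only through T*V**(n/3).
```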
Figure \[Fig:diffMCT\] shows the $T$-dependence of the diffusion coefficients, evaluated from the long time limit of the mean square displacement, for all $n$ investigated, covering a window of about four orders of magnitude. In order to compare the $n$-dependence of the diffusion coefficient, we report in Fig. \[Fig:scaleDT\] the data as a function of $T_n/T$, where $T_n$ is chosen in such a way as to maximize the overlap between data for different $n$, i.e. to collapse all data onto a single master curve. Figure \[Fig:scaleDT\] shows that all curves can be successfully scaled onto the master curve ${\cal D}$ by choosing a proper set of scaling parameters $T_n$ (whose $n$-dependence is plotted in the inset of this Figure). The very good quality of the resulting master curve $$D(T) = {\cal D}(T/T_n).
\label{Eq:Dscaling}$$ suggests that the $n$-dependence enters only via a rescaling of the temperature [@note3; @tarjusmossa].
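Since $D_n(T)={\cal D}(T/T_n)$, the scaling parameters $T_n$ can be extracted by reading off, for each $n$, the temperature at which the curve crosses a fixed reference value of $D$: these temperatures are proportional to $T_n$. A minimal sketch of this procedure on synthetic data (the master function and the $T_n$ values below are invented for illustration, not taken from the simulations):

```python
import math

def master(x):
    """An assumed VTF-like master curve D = exp(-B/(x - x0)), illustrative only."""
    B, x0 = 2.0, 0.3
    return math.exp(-B / (x - x0))

# Synthetic diffusion data D_n(T) = master(T/T_n) for three softness values.
true_Tn = {6: 0.8, 12: 1.0, 18: 1.25}
xs_grid = [0.5, 0.6, 0.8, 1.0, 1.5]
data = {n: [(Tn * x, master(x)) for x in xs_grid] for n, Tn in true_Tn.items()}

def temperature_at(curve, Dref):
    """Log-linear interpolation of the T at which D crosses Dref."""
    for (T1, D1), (T2, D2) in zip(curve, curve[1:]):
        if min(D1, D2) <= Dref <= max(D1, D2):
            f = (math.log(Dref) - math.log(D1)) / (math.log(D2) - math.log(D1))
            return T1 + f * (T2 - T1)
    raise ValueError("Dref outside data range")

Dref = master(0.7)  # a reference value crossed by every curve
Tn_est = {n: temperature_at(curve, Dref) for n, curve in data.items()}
# The estimated Tn reproduce the input ratios exactly, since the synthetic
# curves are strict rescalings of one another.
```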
The remarkable consequence of the latter result is that the fragility of the system does not depend on the repulsive interaction potential. In fact, according to Eq. (\[Eq:Dscaling\]) and from the definition of the liquid’s fragility $m$ given in Eq. (\[eq:fragdlog\]), assuming $D\propto\tau^{-1}$, we get: $$m = \frac{T_g(n)}{T_n}\; \frac{1}{{\cal D}[T_g(n)/T_n]}\;
\frac{d{\cal D}(x)}{dx}|_{x=T_g(n)/T_n}
\label{Eq:scaledfrag}$$ where $T_g(n)$ is the glass transition temperature for the system with softness $n$, which can be defined as the temperature at which diffusivity reaches an arbitrary small value $10^{\cal K}$[@note], i.e. $$-\log D[T_g(n)] = -\log {\cal D}\left [ \frac{T_g(n)}{T_n}\right ]={\cal K}
\label{Eq:BSdefTg}$$
Eq. \[Eq:scaledfrag\] shows that the fragility index $m$ is a function only of the scaled variable $\frac{T_g(n)}{T_n}$ and hence, as long as the scaling reported in Fig. \[Fig:scaleDT\] keeps holding even at temperatures lower than those we are able to equilibrate, the dynamic fragility $m$ is independent of $n$ as well. By fitting the master curve with a Vogel-Tammann-Fulcher (VTF) law, as shown in Fig. \[Fig:scaleDT\], an estimate of $\frac{T_g(n)}{T_n}$ can be obtained from the condition ${\cal D}[T_g(n)/T_n] = 10^{\cal K}$, resulting in an estimate of $m \approx 130$. This figure should be compared with the value $m=81$ for o-terphenyl (OTP), a typical fragile liquid, and $m=20$ for liquid silica ($SiO_2$), the prototypical strong glass.
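For a VTF parametrization of the master curve, ${\cal D}(x)={\cal D}_0\exp[-B/(x-x_0)]$ with $x=T/T_n$, the fragility index follows in closed form as $m=\log_{10}(e)\,B\,x_g/(x_g-x_0)^2$, where $x_g=T_g(n)/T_n$. The sketch below checks this against a direct numerical derivative; the parameter values are hypothetical, since the fitted VTF parameters are not quoted here.

```python
import math

# Hypothetical VTF parameters for the master curve D(x) = D0*exp(-B/(x - x0)),
# with x = T/T_n; these values are illustrative only.
B, x0 = 1.5, 0.25

def log10_D(x):
    return -B / (x - x0) / math.log(10)   # D0 set to 1

def fragility(xg, h=1e-6):
    """m = d log10(1/D) / d(xg/x) evaluated at x = xg, by central difference."""
    # With u = xg/x, differentiate -log10 D(xg/u) at u = 1.
    return -(log10_D(xg / (1 + h)) - log10_D(xg / (1 - h))) / (2 * h)

xg = 0.35   # hypothetical T_g(n)/T_n from the condition D(xg) = 10**K
closed_form = math.log10(math.e) * B * xg / (xg - x0) ** 2
# fragility(xg) and closed_form agree to numerical accuracy
```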
For completeness, we also report in Fig. \[Fig:scaleDT\] a fit of the master curve according to the prediction of mode-coupling theory (MCT), which has been shown to be consistent with numerical data for several models in the weakly supercooled region. The best fit procedure requires the exclusion of the low $T$ points, for which deviations from the power-law fit are observed [@note2].
Recently, evidence has been presented that kinetic fragility strongly correlates with thermodynamic fragility[@angelmartinez]. In this respect, it is worth examining whether the scaling observed in dynamical properties has a counterpart in thermodynamic properties. In particular, we evaluate the configurational entropy of the system within the potential energy landscape framework, as discussed in detail in Refs.[@crifraJCP; @fraPELPRL; @stillweberPRA; @emiliaOTP; @sastryPELform; @sastryBMLJnature]. In brief, we estimate $S_{c}$ as the difference between the liquid entropy (calculated via thermodynamic integration from the ideal gas) and the vibrational entropy (calculated via thermodynamic integration, including anharmonic corrections, from the very low temperature harmonic dynamics of the disordered solid associated with the liquid configuration). We then focus on the ability of the Adam-Gibbs (AG) relation — which states that $$D(T) = A_{AG} e^{-\frac{B_{AG}}{T S_c}},
\label{Eq:AdamGibbs}$$ — of modelling the temperature dependence of $D$. Fig. \[Fig:AGallANH\] shows the AG plot for the studied $n$ values. For all $n$, a satisfactory linear representation of $\log(D)$ vs. $1/TS_{c}(T)$ is observed. As discussed in more detail in Ref.[@jchemphysme], the simultaneous validity of the VTF description of $D$ and of the AG relation requires the identity of the kinetic and thermodynamic fragilities. In this respect, the independence of $n$ discussed above for the kinetic fragility carries over also to the thermodynamic fragility.
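Testing the AG relation amounts to a linear fit of $\log D$ versus $1/(TS_c)$. A self-contained sketch on synthetic data obeying the standard AG form $D=A_{AG}\exp[-B_{AG}/(TS_c)]$ (the entropy model and the coefficients below are invented for illustration):

```python
import math

A_AG, B_AG = 1.0, 2.0                 # hypothetical AG coefficients

def S_c(T):
    """Toy configurational entropy, vanishing at T = 0.2 (illustrative)."""
    return 0.5 * (T - 0.2)

temps = [0.5, 0.6, 0.8, 1.0, 1.5]
xs = [1.0 / (T * S_c(T)) for T in temps]        # abscissa of the AG plot
ys = [math.log(A_AG) - B_AG * x for x in xs]    # ln D from the AG relation

# Closed-form least squares: slope and intercept of ln D vs 1/(T*S_c).
npts = len(xs)
mx, my = sum(xs) / npts, sum(ys) / npts
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
# slope ~ -B_AG and intercept ~ ln A_AG recover the AG parameters
```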
A remarkable consequence of the validity of the AG relation (Eq. \[Eq:AdamGibbs\]), combined with the scaling of $D$ with $n$ (Eq. \[Eq:Dscaling\]), is that the configurational entropy can be written as $$S_c(T) = S_0(n) {\cal F}(T/T_n)
\label{Eq:Scscaled}$$ where ${\cal F}(x)$ is a scaling function and $S_0(n) = B_{AG}/T_n$. To support this proposition, we show in Figure \[Fig:Scscaled\] $S_c$ divided by the factor $S_0(n)=B_{AG}/T_n$ as a function of $T/T_n$, where $T_n$ are the values for which the scaling of $D$ is recovered (inset of Fig. \[Fig:scaleDT\]). Again, the quality of the data collapse stresses the validity of the scaling with $n$.
To conclude, the relevant result shown in this Letter is that, in the case of soft sphere potentials, the dynamic fragility is independent of the power $n$ of the short range repulsion. This conclusion is based on the hypothesis that the scaling observed in the range of $T$ where simulations are feasible extends also to lower temperatures, down to the glass transition temperature. Indeed, a particular effort has been made to equilibrate configurations at temperatures lower than the MCT temperature, where dynamical processes different from the ones captured by MCT are active. If the scaling is indeed valid, the results presented in this Letter strongly support the possibility that, contrary to our common understanding, fragility in liquids is mostly controlled by properties of the potential other than the hard core repulsion. Finally, we note that one could be tempted to attribute the fact that the diffusion coefficient data can be rescaled simply by changing the energy scale by $T_n$ to an overall rescaling of the potential energy landscape. The data in Fig. \[Fig:Scscaled\] suggest that this is not the case, since $S_c(T)$ is not just a function of $T/T_n$ but needs to be rescaled by a factor $S_0(n)$, and hence the number of distinct basins explored at the same $T/T_n$ changes with $n$. A non-trivial compensation mechanism between the scaling of the static properties ($S_c$) and the dependence of the kinetic coefficient $B_{AG}(n)$ (defined in Eq. (\[Eq:AdamGibbs\])) on $n$ must be present.
We thank L. Angelani and G. Ruocco for useful discussions. We acknowledge support from INFM Initiative Parallel Computing, Marie Curie Network and Miur FIRB and COFIN2002.
[40]{} P. G. Debenedetti and F. H. Stillinger, Nature [**410**]{}, 259-267 (2001). S. Torquato, Nature [**405**]{}, 521-523 (2000). F. H. Stillinger, Science [**267**]{}, 1935-1939 (1995). R. Böhmer, K. L. Ngai, C. A. Angell and D. J. Plazek, J. Chem. Phys. [**99**]{} (5), 4201-4209 (1993). T. Scopigno, G. Ruocco, F. Sette and G. Monaco, Science [**302**]{}, 849-852 (2003). G. Adam and J. H. Gibbs, J. Chem. Phys. [**43**]{}, 139 (1965). R. Schilling, cond-mat/0305565 (2003). A. Scala, F. W. Starr, E. La Nave, F. Sciortino and E. Stanley, Nature [**406**]{} 166 (2000). X. Xia and P. G. Wolynes, PNAS [**97**]{}, 2990-2994 (1999). R. J. Speedy, J. Phys.: Condens. Matter [**15**]{}, S1243-S1251 (2003). R. Faller and J. J. de Pablo, J. Chem. Phys., [**119**]{}, 4405 (2003). J. P. Hansen and I. R. McDonald, Theory of Simple Liquids (Academic Press, London - New York - San Francisco, 1976 ). W. Kob and H. C. Andersen, Phys. Rev. E [**51**]{}, 4626-4641 (1994); Phys. Rev. Lett. [**73**]{}, 1376-1379 (1994). W. G. Hoover, M. Ross, K. W. Johnson, D. Henderson, J. A. Barker and B. C. Brown, J. Chem. Phys. [**52**]{}, 4931 (1970). Y. Hiwatari, H, Matsuda, T. Ogawa, N. Ogita and A. Ueda, Prog. Theor. Phys. [**52**]{}, 1105 (1974). Note that linear extrapolation of $T_n$ with $n$ shows that $T_n$ goes to $0$ at same value $n\simeq 2$. This may suggest that below a critical value of the range of the potential the $T$-dependence of diffusivity coefficients exhibits a strong crossover to a different regime. Below this critical value the $T$-dependence of the diffusion coefficient should be weak. The scaling behaviour of temperature dependence of diffusion coefficients on varying the density for ortho-terphenyl has been studied in: G. Tarjus, D. Kivelson, S. Mossa and C. Alba-Simionesco,J. Chem. Phys. [**120**]{}, 6135 (2004). S. Sastry, P. G. Debenedetti and F. H. Stillinger, Nature [**393**]{}, 554-557 (1998). T. B. Schroeder, S. Sastry, J. C. Dyre and S. Glotzer, J. Chem. 
Phys [**112**]{} (22), 9834-9840 (2000). W. Götze, J. Phys.: Condens. Matter [**11**]{}, A1-A45 (1999). W. Götze, in Liquids, Freezing and Glass Transition, edited by J. P. Hansen, D. Levesque and J. Zinn-justin (North-Holland, Amsterdam) 1991. S. S. Ashwin and S. Sastry, J. Phys.: Condens. Matter [**15**]{}, S1253-S1258 (2003). In particular we have made the choice $D[T_g(n)]=10^{\cal K} = 5.75\times 10^{-16}$, this value for the diffusion coefficient ensures that at $T_g$ the relaxation time is about $100 s$. We also note that a Bassler form ($D(\xi) = A \exp (B/\xi^2)$) does not reproduce the data in a manner comparable to VFT and MCT. L.-M. Martinez and C. A. Angell, Nature [**410**]{}, 663-667 (2001). E. La Nave, F. Sciortino, P. Tartaglia, C. De Michele and S. Mossa, J. Phys.: Condens. Matter [**15**]{}, 1-10 (2003) . S. Sastry, Phase Transitions [**75**]{}, 507-515 (2002). F. H. Stillinger and T. A. Weber, Phys. Rev. A [**28**]{}, 2408 (1983); Science [**225**]{} (4666), 983-989 (1984). F. Sciortino, W. Kob and P. Tartaglia, Phys. Rev. Lett. [**83**]{}, 3214-3217 (1999). S. Sastry, Nature [**409**]{}, 164-167 (2001). S. Mossa, E. La Nave, H. E. Stanley, C. Donati, F. Sciortino and P. Tartaglia, Phys. Rev. E [**65**]{}, 041205 (2002). G. Ruocco, F. Sciortino, F. Zamponi, C. De Michele and T. Scopigno, J. Chem. Phys. (in press).
---
abstract: 'It is theoretically demonstrated that parallel weakly tunnel coupled quantum dots exhibit non-equilibrium blockade regimes caused by full occupation of the spin triplet state, in analogy with the Pauli spin blockade in serially weakly coupled quantum dots. Charge tends to accumulate in the two-electron triplet for bias voltages that support transitions between the singlet and three-electron states.'
author:
- 'J. Fransson'
title: 'Non-equilibrium triplet blockade in parallel coupled quantum dots'
---
Owing to fundamental aspects of spin and charge correlations, the two-level system in a double quantum dot (DQD) has recently become highly attractive. It has been demonstrated that spin correlations lead to Pauli spin blockade in serially coupled quantum dots (QDs), where the current is suppressed because of spin triplet correlations,[@ono2002; @rogge2004; @johnson2005; @franssoncm2005] something which may be applied in spin-qubit readout technologies.[@bandyopadhyay2003] Pauli spin blockade has also been reported for general DQDs with more than two electrons.[@liu2005] Recently, the Pauli spin blockade with nearly absent singlet-triplet splitting has been employed in studies of hyperfine couplings between electron and nuclear spins.[@ono2004; @johnson_nature2005; @erlingsson2005; @koppens2005; @petta2005] Besides being present in serially coupled QDs, it is relevant to ask whether an analog of the Pauli spin blockade is obtainable in parallel QDs.
The purpose of this paper is to demonstrate that parallel coupled QDs, see Fig. \[fig-system\], exhibit regimes of non-equilibrium triplet blockade. Here, only one of the QDs is tunnel coupled to the external leads, while the second QD functions as a perturbation to the first. Important requirements for finding the non-equilibrium triplet blockade regime are that the QDs are coupled through charge interactions, e.g. interdot Coulomb repulsion and exchange interaction, and only weakly through tunnelling. In the absence of interdot exchange interaction there may be regimes of usual Coulomb blockade in a finite bias voltage range around equilibrium.
In the presence of a sufficiently large ferromagnetic interdot exchange interaction the triplet states $\ket{\sigma}_A\ket{\sigma}_B,\ \sigma=\up,\down$ (one electron in each QD with equal spins) and $[\ket{\up}_A\ket{\down}_B+\ket{\down}_A\ket{\up}_B]/\sqrt{2}$ acquire a lower energy than the lowest two-electron singlet (the singlet states being superpositions of the Fock states $\{[\ket{\up}_A\ket{\down}_B-\ket{\down}_A\ket{\up}_B]/\sqrt{2},\ket{\up\down}_A\ket{0}_B,\ket{0}_A\ket{\up\down}_B\}$). The triplet then naturally becomes the equilibrium ground state with unit occupation probability, provided that the two-electron triplet state has a lower energy than all other states. The triplet persists in being fully occupied for bias voltages smaller than the energy separation between the triplet and singlet states, although transitions between the one-electron states and the singlets may open for conduction. For larger bias voltages, however, this low bias triplet blockade is lifted, as the transitions between the triplet and the one-electron states become resonant with the lower of the chemical potentials of the leads. At this lifting, the current through the system is mediated via transitions between the two-electron singlets and the one-electron states.
![(Colour online) Left panel: The coupled QDs of which only one is tunnel coupled to the leads. Right panel: Processes leading to the non-equilibrium triplet blockade. Faint and bold lines signify low and high transition probabilities, respectively. See text for notation.[]{data-label="fig-system"}](system_trans_cm.eps){width="8.5cm"}
The non-equilibrium triplet blockade regime is entered at bias voltages such that transitions between the three-electron states and, at least, one of the singlet states become resonant, see Fig. \[fig-system\], while transitions between the three-electron states and the triplet lie out of resonance. Under those conditions, an electron can enter the DQD from the lead with the higher chemical potential, through transitions between the singlet and the three-electron states. Transitions from the triplet to the three-electron states are suppressed, since the bias is lower than the energy barriers between those states. However, electron tunnelling from the three-electron states in the DQD to the lead with the lower chemical potential is supported through transitions from those states to the triplet, since the tunnelling barrier to this lead is sufficiently low. Moreover, the probability for such transitions is about unity, whereas the probability for transitions between the three-electron states and the singlet is at most one half. Finally, charge that ends up in the triplet through this process is trapped in this state, because of the negligible probability for transitions between the triplet and the one-electron states.
It is noticed that a finite (ferromagnetic) interdot exchange interaction is not a necessary condition for the existence of a non-equilibrium triplet blockade regime. Nevertheless, a ferromagnetic exchange yields a larger degree of freedom in the variation of the interdot tunnelling and also allows a higher temperature.
For quantitative purposes, consider two single level QDs $(\dote{A},\dote{B}$, spin-degenerate) with intradot charging energies $(U_A,U_B)$, which are coupled by interdot charging $(U')$, exchange $(J\geq0)$, and tunnelling $(t)$ interactions. Specifically, the DQD is modelled by[@sandalov1995; @inoshita2003; @cota2005] $\Hamil_{DQD}=\sum_{i=A,B}(\sum_\sigma\dote{i\sigma}\ddagger{i\sigma}\dc{i\sigma}+U_in_{i\up}n_{i\down})+(U'-J/2)(n_{A\up}+n_{A\down})(n_{B\up}+n_{B\down})-2J\bfs_A\cdot\bfs_B+\sum_\sigma(t\ddagger{A\sigma}\dc{B\sigma}+H.c.)$, where $\bfs_i=(1/2)\sum_{\sigma\sigma'}\ddagger{i\sigma}\hat{\sigma}_{\sigma\sigma'}\dc{i\sigma'}$, $\sigma,\sigma'=\up,\down$, $i=A,B$, are the spins of the two levels. In analogy with the Pauli spin blockade in serially coupled QDs,[@ono2002; @franssoncm2005] it is required that the lowest one-electron states, the triplet, and the two lowest singlets are nearly aligned, and that the lowest three-electron states lie below the equilibrium chemical potential $\mu$. Hence, $E_T\approx E_{S1}\approx E_{S2}\approx\min_{n=1}^4\{E_{1n}\}<\min_{n=1}^4\{E_{3n}\}<\mu<E_4$, where $E_T$ and $E_{Sn},\ n=1,2$, are the eigenenergies of the triplet and the two lowest singlet states, respectively, whereas $E_{1n}$, $E_{3n}$, and $E_4$ are the energies of the one-, three-, and four-electron states, respectively. This requires that $\mu-\dote{B}\approx\Delta\dote{}$, $U'\approx\Delta\dote{}$, and $U_A\approx2\Delta\dote{}\leq U_B$, where $\Delta\dote{}=\dote{B}-\dote{A}$. The inequality $U_A\leq U_B$ indicates that the QDs do not have to be identical, merely that the charging energy of the second QD should be bounded from below by the charging energy of the first. It should be emphasized, however, that the presence of the second QD is essential in order to obtain the effect discussed in this paper. Finally, weak interdot coupling, e.g.
$\xi=2t/\Delta\dote{}\ll1$, implies that the energies of the lowest one- and three-electron states acquire their main weight on QD$_A$. This condition yields a low (large) probability for transitions between the triplet and the lowest one-electron (three-electron) states.
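As a consistency check (not part of the paper's analysis), the two-electron sector of $\Hamil_{DQD}$ can be diagonalized directly: the triplet energy is $\dote{A}+\dote{B}+U'-J$, while the singlets form a $3\times3$ block in the basis $\{[\ket{\up}_A\ket{\down}_B-\ket{\down}_A\ket{\up}_B]/\sqrt{2},\ket{\up\down}_A\ket{0}_B,\ket{0}_A\ket{\up\down}_B\}$ with interdot tunnelling element $\sqrt{2}t$ (its sign does not affect the spectrum). The parameter values below are illustrative choices satisfying the conditions of the text:

```python
import math

def jacobi_eigs(A, tol=1e-12, sweeps=100):
    """Eigenvalues of a small real symmetric matrix by Jacobi rotations."""
    A = [row[:] for row in A]
    m = len(A)
    for _ in range(sweeps):
        # locate the largest off-diagonal element
        p, q, big = 0, 1, 0.0
        for i in range(m):
            for j in range(i + 1, m):
                if abs(A[i][j]) > big:
                    big, p, q = abs(A[i][j]), i, j
        if big < tol:
            break
        th = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(th), math.sin(th)
        for k in range(m):                      # A -> A G
            akp, akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
        for k in range(m):                      # A -> G^T A
            apk, aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
    return sorted(A[i][i] for i in range(m))

# Illustrative parameters in units of Delta_eps = eps_B - eps_A = 1:
# mu - eps_B ~ U' ~ 1, U_A ~ 2 <= U_B, weak tunnelling t.
eA, eB, Up, UA, UB, t = 1.0, 2.0, 1.0, 2.0, 2.0, 0.05

def two_electron_levels(J):
    E_T = eA + eB + Up - J                      # triplet (threefold degenerate)
    h = math.sqrt(2) * t                        # singlet-doublon tunnelling element
    H_S = [[eA + eB + Up + J, h, h],
           [h, 2 * eA + UA, 0.0],
           [h, 0.0, 2 * eB + UB]]
    return E_T, jacobi_eigs(H_S)

for J in (0.0, 0.1):
    E_T, singlets = two_electron_levels(J)
    print(J, E_T, singlets[0])
```

For these parameters the lowest singlet lies below the triplet at $J=0$, while the ordering is reversed at $J=0.1$, consistent with the requirement of a sufficiently large ferromagnetic exchange.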
In general there are 16 eigenstates of $\Hamil_{DQD}$, labelled $\{\ket{N,n},E_{Nn}\}$ denoting the $n$th state of the $N$-electron ($N=1,\ldots,4$) configuration at the energy $E_{Nn}$.[@franssoncm2005] In diagonal form, the DQD is thus described by $\Hamil_{DQD}=\sum_{Nn}E_{Nn}\ket{N,n}\bra{N,n}$. Taking the leads to be free-electron like metals and the (single-electron) tunnelling between the DQD with rate $v_{k\sigma}$, the full system can be written as[@franssoncm2005] $$\begin{aligned}
\lefteqn{
\Hamil=\sum_{k\sigma\in L,R}\leade{k}\cdagger{k}\cc{k}+\Hamil_{DQD}
}
\label{eq-Ham}\\&&
+\sum_{k\sigma,Nnn'}[v_{k\sigma}(\dc{A\sigma})_{NN+1}^{nn'}
\cdagger{k}\ket{N,n}\bra{N+1,n'}+H.c.],
\nonumber\end{aligned}$$ where $(\dc{A\sigma})_{NN+1}^{nn'}=\bra{N,n}\dc{A\sigma}\ket{N+1,n'}$ is the matrix element for the transitions $\ket{N,n}\bra{N+1,n'}$. The operator $\dc{A\sigma}$ signifies that electrons tunnel from molecular-like orbitals in the DQD through QD$_A$ to the leads, which appropriately describes the physical tunnelling processes.
Following the procedure in Ref. , the occupation of the eigenstates is described by a density matrix $\rho=\{\ket{N,n}\bra{N,n}\}$. In the Markovian approximation (sufficient for stationary processes) one thus derives that the equations for $P_{Nn}\equiv\av{\ket{N,n}\bra{N,n}}$, to first order in the couplings $\Gamma^{L/R}=2\pi\sum_{k\in L/R}|v_{k\sigma}|^2\delta(\omega-\leade{k})=\Gamma_0/2$ between the DQD and the leads, can be written as
$$\begin{aligned}
\ddt P_{Nn}&=&
\frac{1}{\hbar}\sum_{\alpha=L,R}\biggl(
\sum_{n'}\Gamma_{N-1n',Nn}^\alpha
[f^+_\alpha(\Delta_{Nn,N-1n'})P_{N-1n'}
-f^-_\alpha(\Delta_{Nn,N-1n'})P_{Nn}]
\nonumber\\&&
-\sum_{n'}\Gamma^\alpha_{Nn,N+1n'}
[f^+_\alpha(\Delta_{N+1n',Nn})P_{Nn}
-f^-_\alpha(\Delta_{N+1n',Nn})P_{N+1n'}]\biggr)=0,
\label{eq-dtN}\\
N&=&1,\ldots,4,
\nonumber\end{aligned}$$
where $P_{-1n}=P_{5n}\equiv0$. Here, $\Delta_{N+1n',Nn}=E_{N+1n'}-E_{Nn}$ denote the energies for the transitions $\ket{N,n}\bra{N+1,n'}$, while $\Gamma_{Nn,N+1n'}^{L/R}=\sum_\sigma\Gamma^{L/R}(\dc{A\sigma})_{NN+1}^{nn'}$. Also, $f^+_{L/R}(\omega)=f(\omega-\mu_{L/R})$ is the Fermi function at the chemical potential $\mu_{L/R}$ of the left/right $(L/R)$ lead, and $f^-_{L/R}(\omega)=1-f^+_{L/R}(\omega)$. Effects from off-diagonal occupation numbers $\av{\ket{N,n}\bra{N,n'}}$, which only appear in the second order (and higher) in the couplings, are neglected since these include off-diagonal transition matrix elements to the second order (or higher) which generally are small for $\xi\ll1$.
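The gain/loss structure of Eq. (\[eq-dtN\]) can be illustrated on its simplest instance, a single spinless level coupled to two leads; this toy reduction (not the full 16-state problem treated here) shows how the Fermi factors $f^\pm_\alpha$ enter the stationary occupation and the current:

```python
from math import exp

def fermi(E, mu, kT=0.025):
    """Fermi function f(E - mu) at temperature kT."""
    return 1.0 / (1.0 + exp((E - mu) / kT))

def stationary_single_level(E, muL, muR, GL=0.5, GR=0.5):
    """Stationary occupation and (dimensionless) current for a single
    spinless level: dP1/dt = gain*(1 - P1) - loss*P1 = 0, mirroring the
    gain/loss structure of the N-resolved master equation."""
    fL, fR = fermi(E, muL), fermi(E, muR)
    gain = GL * fL + GR * fR               # electrons entering from either lead
    loss = GL * (1 - fL) + GR * (1 - fR)   # electrons leaving to either lead
    P1 = gain / (gain + loss)
    # current through the left barrier, in units of e*Gamma_0/hbar
    I = GL * (fL * (1 - P1) - (1 - fL) * P1)
    return P1, I

# At zero bias the occupation reduces to the Fermi function and I = 0;
# at finite bias (muL > muR) a positive current flows.
P1_eq, I_eq = stationary_single_level(E=0.0, muL=0.0, muR=0.0)
P1_b, I_b = stationary_single_level(E=0.0, muL=0.2, muR=-0.2)
```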
Since the low bias triplet blockade can be found for weakly coupled QDs whenever $J>0$ is sufficiently large, the following derivation focuses on the non-equilibrium blockade. The non-equilibrium blockade discussed here is driven by opening transitions between the two- and three-electron states. For simplicity, assume that the bias voltage $V=(\mu_L-\mu_R)/e$ is applied such that $\mu_{L/R}=\mu\pm eV/2$. Then, for $|eV|<7\Delta\dote{}/4$, $k_BT<0.01U_A$, and $\xi<0.2$, which is sufficient for the present purposes, only the population numbers $P_{1n},\ n=1,2$, $N_T=P_{2n}/3,\ n=1,2,3$, $P_{24},\ P_{25}$, and $P_{3n},\ n=1,2$, are non-negligible. The other populations are negligible since the corresponding transition energies lie out of resonance. Because of spin-degeneracy it is noted that $P_{1n}=N_1/2,\ n=1,2$, and $P_{3n}=N_3/2,\ n=1,2$, which reduces the system to five equations for the population numbers. As discussed in the introduction, the non-equilibrium blockade arises when transitions between a singlet and the three-electron states are resonant. Therefore, the bias voltage is tuned into the regime where $\mu_L$ lies around these transition energies, e.g.[@Delta32] $\min_{nn'}\{\Delta_{3n',2n}\}<\mu_L<\max_{nn'}\{\Delta_{3n',2n}\}$, $n=1,\ldots,5$, $n'=1,2$ (here $eV>0$; the case $eV<0$ follows by symmetry of the system). For such biases it is clear that $f_L^+(\Delta_{2n,1n'})=f_R^-(\Delta_{2n,1n'})=1$, $n=1,\ldots,5$, $n'=1,2$, and that $f_R^+(\Delta_{3n',2n})=0$, $n=1,\ldots,5$, $n'=1,2$. It is also clear that the charge accumulation in the triplet is lifted for biases that support transitions from the triplet to the three-electron states; hence, the bias voltage has to be such that $f_{L/R}^+(\Delta_{3n',2n})\approx0$, that is, $\Delta_{3n',2n}=E_{3n'}-E_T>\mu_L+k_BT$, $n=1,2,3$, $n'=1,2$. Thus, the equations for the population numbers can be written as
\[eq-Pneq\] $$\begin{aligned}
N_1&=&\frac{1}{p}N_3=\frac{2/3}{1+2p(\kappa/\beta)^2}N_T
\label{eq-P1neq}\\
P_{2n}&=&\frac{1}{2}\frac{L_n^2
+\Lambda_n^2p\sum_\alpha f_\alpha^-(\Delta_{31,2n})}
{L_n^2+\Lambda_n^2f_L^+(\Delta_{31,2n})}N_1,\ n=4,5,
\label{eq-P2nneq}\\
p&=&\sum_{n=4}^5L_n^2
\frac{\Lambda_n^2f_L^+(\Delta_{31,2n})}
{L_n^2+\Lambda_n^2f_L^+(\Delta_{31,2n})}
\biggl\{3\kappa^2+\sum_{\alpha,n=4}^5\Lambda_n^2
\nonumber\\&&\times
f_\alpha^-(\Delta_{31,2n})\left[1-
\frac{\Lambda_n^2f_L^+(\Delta_{31,2n})}
{L_n^2+\Lambda_n^2f_L^+(\Delta_{31,2n})}
\right]\biggr\}^{-1}.
\label{eq-p}\end{aligned}$$
Here, ($n'=1,2$, $n=4,5$) $\beta^2\equiv\sum_\sigma|(\dc{A\sigma})^{n'1}_{12}|^2=\xi^2/[(1+\sqrt{1+\xi^2})^2+\xi^2]$, $L_n^2\equiv\sum_\sigma|(\dc{A\sigma})_{12}^{n'n}|^2$, $\kappa^2\equiv\sum_\sigma|(\dc{A\sigma})_{23}^{1n'}|^2=(1+\xi^2)/[(1+\sqrt{1+\xi^2})^2+\xi^2]$, and $\Lambda_n^2\equiv\sum_\sigma|(\dc{A\sigma})_{23}^{nn'}|^2$ are the matrix elements for the relevant transitions. The above relations are due to spin-degeneracy, e.g. $\Delta_{2n,11}=\Delta_{2n,12}$ and $\Delta_{31,2n}=\Delta_{32,2n}$, $n=1,\ldots,5$. Using Eq. (\[eq-Pneq\]), charge conservation ($1=\sum_{Nn}P_{Nn}=N_1+N_T+\sum_nP_{2n}+N_3$) thus implies that $$\begin{aligned}
N_\text{T}&=&\biggl\{1+\frac{2/3}{1+2p(\kappa/\beta)^2}
\Bigl(1+p
\nonumber\\&&
+\frac{1}{2}\sum_{n=4}^5\frac{L_n^2
+p\Lambda_n^2\sum_\alpha f_\alpha^-(\Delta_{31,2n})}
{L_n^2+\Lambda_n^2f_L^+(\Delta_{31,2n})}
\Bigr)\biggr\}^{-1}.
\label{eq-NTneq}\end{aligned}$$
Now, the matrix elements $L_n^2,\Lambda_n^2,\ n=4,5$, are finite and bounded; however, $L_4^2,2\Lambda_5^2\rightarrow1$ and $L_5^2,\Lambda_4^2\rightarrow0$ as $\xi\rightarrow0$. Hence, the last term in Eq. (\[eq-NTneq\]) is at most 1/2 for weakly coupled QDs, since $p\rightarrow0$ as $\xi\rightarrow0$ in the considered bias regime (see discussion below). However, the ratio $2p(\kappa/\beta)^2$ is finite for all $\xi$ and $J>0$, while it diverges as $\xi\rightarrow0$ for $J=0$, see main panel in Fig. \[fig-Jvar\]. For weakly coupled QDs one thus finds that $N_T\approx1/(1+[1+2p(\kappa/\beta)^2]^{-1})\approx1$ whenever $2p(\kappa/\beta)^2\gg1$. The inset of Fig. \[fig-Jvar\] illustrates a subset of $(t,J)$-space where this ratio is larger than $10^2$. Under this condition, the boundary is approximately given by $J(t)=J_0-15t^2[1+(10t)^2]$.
![(Colour online) Variation of the ratio $2p(\kappa/\beta)^2$ as function of $J$ for different $t$ at constant $\Delta\dote{},\ U',\ U_{A/B}$. The inset shows the region in $(t,J)$-space where $2p(\kappa/\beta)^2>10^2$.[]{data-label="fig-Jvar"}](Jvar_cm.eps){width="8.5cm"}
Using the transport equation derived in Ref. , identifying $G^<_{Nn,N+1n'}(\omega)=i2\pi P_{N+1n'}\delta(\omega-\Delta_{N+1n',Nn})$ and $G^>_{Nn,N+1n'}(\omega)=-i2\pi P_{Nn}\delta(\omega-\Delta_{N+1n',Nn})$, the current in the considered regime is given by $$\begin{aligned}
I&=&\frac{e\Gamma_0}{6\hbar}\biggl[3(\beta^2-\kappa^2)
+\sum_{n=4}^5[L_n^2-\Lambda_n^2f_L^-(\Delta_{31,2n})]
\nonumber\\&&
+2\sum_{n=4}^5\Lambda_n^2f_L^+(\Delta_{31,2n})
\frac{L_n^2+p\Lambda_n^2\sum_\alpha f_\alpha^-(\Delta_{31,2n})}
{L_n^2+\Lambda_n^2f_L^+(\Delta_{31,2n})}\biggr]
\nonumber\\&&\times
\frac{N_T}{1+2p(\kappa/\beta)^2}.
\label{eq-Jneq}\end{aligned}$$ This expression clearly shows that a large value of $2p(\kappa/\beta)^2$ yields a suppression of the current, that is, at the formation of a unit occupation in the triplet state. For biases such that $\mu_L<\min_{nn'}\{\Delta_{3n,2n}\}$ is follows that $f_L^+(\Delta_{3n',2n})\approx0\ \Rightarrow\ p\approx0$, which accounts for a lifting of the triplet blockade where the current is $\sim2p(\kappa/\beta)^2$ larger than in the blockaded regime.
The non-equilibrium triplet blockade depends on the interplay between $J$ and $t$. A reduced $t$ leads to a strong localisation of the odd number states in either of the QDs, which for $\Delta\dote{}>0$ implies that the lowest odd number states are strongly localised on QD$_A$. The probability for transitions between the triplet and the one-/three-electron states is then small/large ($\beta\rightarrow0$/$\kappa\rightarrow1$).
The singlets, on the other hand, are expanded in terms of the Fock states $\{[\ket{\up}_A\ket{\down}_B-\ket{\down}_A\ket{\up}_B]/\sqrt{2},\ket{\up\down}_A\ket{0}_B,\ket{0}_A\ket{\up\down}_B\}$ with weights that are slowly varying functions of $t$ but strongly dependent on $J$. A negligible $J$ implies that the two lowest singlets are almost equally weighted on the states $[\ket{\up}_A\ket{\down}_B-\ket{\down}_A\ket{\up}_B]/\sqrt{2}$ and $\ket{\up\down}_A\ket{0}_B$. Increasing $J>0$ redistributes the weights such that the lowest singlet ($\ket{2,4}$) acquires an increasing weight on $\ket{\up\down}_A\ket{0}_B$, whereas the second singlet ($\ket{2,5}$) becomes more strongly weighted on $[\ket{\up}_A\ket{\down}_B-\ket{\down}_A\ket{\up}_B]/\sqrt{2}$. Hence, for a finite $J>0$ and $t\rightarrow0$, this redistribution implies that transitions between the lowest one-electron states and $\ket{2,4}\ (\ket{2,5})$ occur with an enhanced (reduced) probability, e.g. $L_4^2\rightarrow1,\ (L_5^2\rightarrow0$), and oppositely for transitions between the singlets and the three-electron states, e.g. $\Lambda_4^2\rightarrow0,\ (\Lambda_5^2\rightarrow1/2$). This implies that $p\rightarrow0$ as $t\rightarrow0$ while $p(\kappa/\beta)^2$ remains almost constant. This constant, however, becomes larger (smaller) for smaller (larger) $J$.
![(Colour online) Variation of the triplet occupation number $N_\text{T}$ a) and the modulus of the current (units of $e\Gamma_0/h$) b) as function of the bias voltage and the equilibrium chemical potential $\mu$. Here, $\xi=0.01$, $k_BT=0.01U_A=4t$, and $J=0.2(U_A-U')/2$.[]{data-label="fig-NT"}](JV_NT_copper.eps){width="8.5cm"}
The typical variation of the triplet state occupation number $N_\text{T}$, calculated from Eq. (\[eq-dtN\]), as a function of the bias voltage and the equilibrium chemical potential for $0<J<J_0-15t^2[1+(10t)^2]$ and $t/(k_BT)<2$ is plotted in Fig. \[fig-NT\] a). Here, varying the equilibrium chemical potential mimics the effect of applying an external gate voltage $V_g$ by means of which the levels of the DQD are shifted relative to the equilibrium chemical potential. The extended diamond marks the region where the occupation of the triplet is nearly unity and where the transport through the DQD is blockaded. The calculated current is displayed in Fig. \[fig-NT\] b), from which it is evident that the triplet blockade regime is a subset of a larger domain of nearly vanishing current through the DQD. The two diamonds within the low current regime are caused by a lifting of the triplet blockade (see the introduction), where the current is mediated by transitions between the one-electron states and the singlets.
As is seen in Fig. \[fig-NT\], shifting $\mu$ in the range $\dote{B}+(\Delta\dote{}-J,2\Delta\dote{})$ causes an extension of the low bias triplet regime since the transitions between the triplet and the one-electron states become resonant at higher biases. On the other hand, the non-equilibrium triplet blockade is shifted to lower biases since $\mu$ lies closer to the transition between the three-electron states and the singlets. The two blockade regimes merge into a single one when $|\mu-\Delta_{3n',2n}|<|\mu-\Delta_{21,1n'}|$, $n=4,5$, $n'=1,2$, e.g. for $\mu-\dote{B}\in(3\Delta\dote{}/2,2\Delta\dote{})$, see Fig. \[fig-NT\]. Shifting $\mu$ in the interval $\dote{B}+(\Delta\dote{}/2,\Delta\dote{}-J)$ removes the low bias blockade since the one-particle states become the equilibrium ground state. The non-equilibrium blockade is shifted to lower biases, here caused by transitions between the one- and two-electron states which tend to accumulate the occupation in the triplet.
While the case $\Delta\dote{}>0$ is considered here, the non-equilibrium blockade is also found in the opposite case, e.g. $\Delta\dote{}<0$ and $\mu-\dote{A}\approx\Delta\dote{}$. In this case, however, the system has to be gated such that only the four-electron state lies above $\mu$, whereas the charge accumulation of the triplet state is governed by the same processes as described here.
It should be noted that higher order effects, as well as singlet-triplet relaxation, have been neglected in the equation for the population probabilities $P_{Nn}$. However, in many respects the situation discussed here corresponds to the experiment reported in Ref. , hence the effect considered should be measurable under much the same conditions. As in the case of the serially coupled DQD, the higher order effects give contributions that are at least two orders of magnitude smaller than the second order contributions, and can therefore be neglected in the present study. On the same basis as in the description of the serially coupled DQD,[@franssoncm2005] the singlet-triplet relaxation may be neglected here.
The conditions required for the existence of non-equilibrium triplet blockade, concerning the intra- and interdot charge interactions for weakly coupled QDs, have been experimentally obtained for serially coupled QDs.[@ono2002; @rogge2004; @johnson2005] The additional requirement, i.e. a ferromagnetic interdot exchange interaction which is larger than the interdot tunnelling and the thermal excitation energy, is accessible within the present state-of-the-art technology.[@kouwenhoven2001; @vanbeveren2005; @johnson2005; @petta2005]
Support from Carl Trygger’s Foundation is acknowledged. The Institute of Physics and the Deutsche Physikalische Gesellschaft are gratefully acknowledged for covering the publication costs.
[20]{} K. Ono, D. G. Austing, Y. Tokura, and S. Tarucha, Science, [**297**]{}, 1313 (2002). M. C. Rogge, C. Fühner, U. F. Keyser, and R. J. Haug, Appl. Phys. Lett. [**85**]{}, 606 (2004). A. C. Johnson, J. R. Petta, C. M. Marcus, M. P. Hanson, and A. C. Gossard, Phys. Rev. B, [**72**]{}, 165308 (2005). J. Fransson and M. Råsander, Phys. Rev. B, [**73**]{}, 205333 (2006). S. Bandyopadhyay, Phys. Rev. B, [**67**]{}, 193304 (2003). H. W. Liu, T. Fujisawa, T. Hayashi, and Y. Hirayama, Phys. Rev. B, [**72**]{}, 161305(R) (2005). M. Eto, T. Ashiwa, and M. Murata, J. Phys. Soc. Jap. [**73**]{}, 307 (2004). K. Ono and S. Tarucha, Phys. Rev. Lett. [**92**]{}, 256803 (2004). A. C. Johnson, J. R. Petta, J. M. Taylor, A. Yacoby, M. D. Lukin, C. M. Marcus, M. P. Hanson, and A. C. Gossard, Nature, [**435**]{}, 925 (2005). S. I. Erlingsson, O. N. Jouravlev, and Y. V. Nazarov, Phys. Rev. B, [**72**]{}, 033301 (2005). F. H. L. Koppens, J. A. Folk, J. M. Elzerman, R. Hanson, L. H. Willems van Beveren, I. T. Vink, H. P. Tranitz, W. Wegscheider, L. P. Kouwenhoven, and L. M. K. Vandersypen, Science, [**309**]{}, 1346 (2005). J. R. Petta, A. C. Johnson, A. Yacoby, C. M. Marcus, M. P. Hanson, and A. C. Gossard, Phys. Rev. B, [**72**]{}, 161301(R) (2005). I. S. Sandalov, O. Hjortstam, B. Johansson, and O. Eriksson, Phys. Rev. B, [**51**]{}, 13987 (1995). T. Inoshita, K. Ono, and S. Tarucha, J. Phys. Soc. Jpn. Suppl. A, [**72**]{}, 183 (2003). E. Cota, R. Aguado, and G. Platero, Phys. Rev. Lett. [**94**]{}, 107202 (2005). There is one (spin-degenerate) transition energy $(\Delta_{3n,26},\ n=3,4)$ in the vicinity of $\mu$ which is only relevant for $eV\geq U$ since $\min_{n=1}^4\{\Delta_{26,1n}-\mu\}\gtrsim U/2$. A. -P. Jauho, N. S. Wingreen, and Y. Meir, Phys. Rev. B, [**50**]{}, 5528 (1994). L. P. Kouwenhoven, D. G. Austing, and S. Tarucha, Rep. Prog. Phys. [**64**]{}, 701 (2001). L. H. Willems van Beveren, R. Hanson, I. T. Vink, F. H. L. Koppens, L. P. Kouwenhoven, and L. M. K. Vandersypen, New Journal of Physics, [**7**]{}, 182 (2005).
---
abstract: 'We introduce a procedure to infer the interactions among a set of binary variables, based on their sampled frequencies and pairwise correlations. The algorithm builds the clusters of variables contributing most to the entropy of the inferred Ising model, and rejects the small contributions due to the sampling noise. Our procedure successfully recovers benchmark Ising models even at criticality and in the low temperature phase, and is applied to neurobiological data.'
author:
- 'S. Cocco$^{1,2}$, R. Monasson$^{1,3}$'
title: Adaptive Cluster Expansion for Inferring Boltzmann Machines with Noisy Data
---
Understanding the correlated activity of complex, non-homogeneous multi-component systems is of fundamental importance in physics, biology, sociology, finance, ... A natural issue is to separate direct correlations (due to direct interactions) from network-mediated correlations. The Ising model, of ubiquitous importance in statistical physics, provides a natural framework to extract interactions from correlations [@maxent], and was recently used for the analysis of neurobiological data [@bialek; @marre; @noi]. It is indeed the least constrained model capable of reproducing the individual and pairwise frequencies of a set of, say, $N$ binary-valued variables, $\sigma_i=0, 1$. In practice, these frequencies, $p_i$ and $p_{ij}$, are often estimated through empirical averages over a number of sampled configurations $\{\sigma_1^b,\sigma_2^b,\ldots,\sigma_N^b\}$, $b=1,\ldots,B$. The task then consists in inferring the parameters (fields $h_i$ and interactions $J_{ij}$) of the Ising model reproducing those data. From a mathematical point of view, one has to solve the $\frac 12N(N+1)$ implicit equations $p_i=\langle \sigma_i\rangle$ and $p_{ij}= \langle\sigma _i \sigma_j\rangle$ for the fields and interactions, where $\langle\cdot\rangle$ denotes the Gibbs average with Boltzmann factor $\exp \big(\sum _i h_i\sigma_i+\sum_{i<j} J_{ij}\sigma_i\sigma_j\big)$.
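As a concrete illustration (our own sketch, not from the paper), the empirical inputs $p_i$ and $p_{ij}$ of the inverse problem are simple averages over the $B$ sampled configurations:

```python
import numpy as np

rng = np.random.default_rng(0)
B, N = 1000, 5
samples = rng.integers(0, 2, size=(B, N))   # B sampled binary configurations

p_i  = samples.mean(axis=0)                 # p_i  = <sigma_i>
p_ij = (samples.T @ samples) / B            # p_ij = <sigma_i sigma_j>
c_ij = p_ij - np.outer(p_i, p_i)            # connected correlations c_ij
# for 0/1 spins sigma_i^2 = sigma_i, so the diagonal of p_ij equals p_i
```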
Various approaches have been developed to solve the inverse Ising problem, called Boltzmann Machine (BM) in Machine Learning, including BM learning [@ackley], mean field [@opper] and message-passing [@peli; @mora] methods, and pseudo-likelihood algorithms [@wain]. Despite their specificities, these methods share a common feature: they are efficient when the correlations, $c_{ij}=p_{ij}-p_ip_j$, are weak, but perform badly when most pairs $(i,j)$ are strongly correlated, [*e.g.*]{} when the data are generated by a critical Ising model. Such examples seem to suggest that fast algorithms cannot infer BMs with long-range correlations [@montanari].
However, the existence of a relationship between the presence of strong correlations in the ’direct’ model and the intrinsic hardness of the inverse problem is questionable [@swendsen]. Let ${\bf p}=\{p_i,p_{ij}\}$, ${\bf J}=\{h_i,J_{ij}\}$, ${\bf \langle \boldsymbol\sigma \rangle}=\{\langle \sigma_i\rangle, \langle \sigma_i\sigma_j\rangle\}$ be the $\frac 12 N(N+1)$-dimensional vectors of, respectively, the measured frequencies, the interaction parameters and the Gibbs frequencies. We define the susceptibility and the inverse susceptibility matrices through, respectively, $$\label{definvchi}
\boldsymbol\chi = \left. \frac{\partial {\bf \langle \boldsymbol\sigma \rangle}}{\partial {\bf J}}\right|_{\bf J}
\quad \hbox{\rm and}\quad
\boldsymbol\chi ^{-1} = \left. \frac{\partial {\bf J}}{\partial {\bf \langle \boldsymbol\sigma \rangle}} \right|_{\bf \langle \boldsymbol\sigma \rangle}\ .$$ $\boldsymbol\chi$ is attached to the direct model, and quantifies how the frequencies respond to a small change in the interaction parameters. $\boldsymbol\chi^{-1}$, which gives the response of the BM interaction parameters to a small change in the frequencies, is a natural characterization for the inverse problem. An essential point, which has received little attention in the context of BM so far, is that ${\boldsymbol \chi}^{-1}$ is generally much sparser and shorter-range than ${\boldsymbol \chi}$; evidence for this claim is reported below. Even if strong responses (and correlations) pervade the system, each BM interaction parameter may mostly depend on a small (compared to $N$) number of frequencies $\bf p$. Interestingly, the short-rangedness of ${\boldsymbol \chi}^{-1}$ makes the inference not only possible but also meaningful, as experiments generally probe limited parts of larger systems.
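The short-rangedness of ${\boldsymbol \chi}^{-1}$ can be seen in a toy case (our own illustration, not from the paper). For the 1D Ising chain at zero field, the rescaled pair correlations are $\hat c_{ij}=\rho^{|i-j|}$ with $\rho=\tanh\beta J$: the matrix $(\rho^{|i-j|})$ is dense and long-range, yet its inverse is exactly tridiagonal, i.e. local.

```python
import numpy as np

# 1D Ising chain at zero field: correlations decay as rho**|i-j|,
# so the correlation matrix C is dense even though J is nearest-neighbour.
N, rho = 8, 0.9
C = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
Cinv = np.linalg.inv(C)

print(C[0, -1])                           # ≈ 0.48: still large across the chain
print(np.abs(np.triu(Cinv, k=2)).max())   # numerically zero: C^{-1} is tridiagonal
```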
In this letter, we present a method for inferring BM, exploiting this notion of limited dependence. The interaction network is progressively unveiled, through a recursive processing of larger and larger subsets of variables, which we call clusters. To each cluster $\Gamma$ is associated an entropy $\Delta S(\Gamma)$, which assesses how much the cluster is relevant to infer the BM. Clusters such that $|\Delta S(\Gamma)|<\Theta$, where $\Theta$ is a fixed threshold, are discarded; the other clusters are kept and recursively used to generate larger clusters. The threshold $\Theta$ must be large enough to avoid overfitting of the data corrupted by the sampling noise (finite $B$) and small enough in order not to miss important components of the interaction network. Contrary to conventional cluster expansions [@noi; @peli], the number, size, and composition of the clusters automatically adapt to the data, and, rather than the sole value of $N$, determine the running time of the algorithm. Pseudo-codes intended for the practical implementation of our algorithm are given in Supplemental Material [@si].
Our starting point is the Legendre transform of the partition function $Z({\bf J})$ (sum of the Boltzmann factors) of the Ising model, $$\label{entroising}
S ({\bf p})= \min_{\bf J} \big[\log Z({\bf J}) - {\bf p}\cdot {\bf J} \big] \ ,$$ where $\cdot$ denotes the dot product; it is the cross entropy between the sampled distribution and the best BM or, equivalently, the negative of the maximum log-likelihood of the parameters $\bf J$ given the data $\bf p$ [@cover]. Let us define $S_0({\bf p}) = \frac 12\log \hbox{\rm det} (\hat c_{ij})$, where $\hat c_{ij}=c_{ij}/[p_i(1-p_i)p_j(1-p_j)]^{\frac 12}$. We now formally write, for given $\bf p$, $$\label{recur-entro}
S-S_0= \sum _i \Delta S{(i)} +
\sum _{i<j} \Delta S{(i,j)} + \sum _{i<j<k} \Delta S{(i,j,k)} +\ldots\ ,
%\sum _{\Gamma \subset (1,2,\ldots ,N)} \Delta S_ \Gamma ({\bf p}) \ ,$$ where the sums run over every subset (cluster) of the $N$ variables. The choice of expanding $S-S_0$ rather than $S$ will be explained later. According to (\[recur-entro\]) for $N=1$, $\Delta S{(i)}$ is the entropy of a single spin with average value $p_i$. Using (\[recur-entro\]) again for $N=2$, we find that $\Delta S{(i,j)}$ equals the loss in entropy when imposing the constraint $\langle\sigma_i\sigma_j\rangle =p_{ij}$ to a system of 2 spins with fixed magnetizations, $\langle \sigma_i\rangle=p_i$, $\langle \sigma_j\rangle=p_j$, minus the contribution $\frac 12 \log (1-\hat c^2_{ij})$ coming from $S_0$. A recursive use of (\[entroising\]) and (\[recur-entro\]) for increasing $N$ allows us to calculate $\Delta S({\Gamma})$ for larger and larger clusters $\Gamma=(i_{1},i_{2},\ldots ,i_{K})$. The maximal cluster size, say, $K=20$, is set by the computational hardness of obtaining $S$ from (\[entroising\]). Note that $\Delta S(\Gamma)$ is a function of the individual and pairwise frequencies of the spins in $\Gamma$ only.
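For the smallest clusters these entropies have closed forms, since the joint law of two 0/1 spins is fully determined by $(p_i,p_j,p_{ij})$. A sketch of the resulting formulas (our own code, following the definitions above):

```python
import numpy as np

def H(ps):
    """Shannon entropy of a discrete distribution (natural log)."""
    ps = np.array([p for p in ps if p > 0])
    return float(-np.sum(ps * np.log(ps)))

def delta_S1(p):
    """Single-spin cluster entropy Delta S(i) for <sigma_i> = p."""
    return H([p, 1.0 - p])

def delta_S2(pi, pj, pij):
    """Two-spin cluster entropy Delta S(i,j); the joint distribution of two
    0/1 spins matching (pi, pj, pij) is unique."""
    S2 = H([pij, pi - pij, pj - pij, 1.0 - pi - pj + pij])
    chat2 = (pij - pi * pj) ** 2 / (pi * (1 - pi) * pj * (1 - pj))
    # subtract the single-spin terms and the S0 contribution (1/2)log(1 - c^2)
    return S2 - delta_S1(pi) - delta_S1(pj) - 0.5 * np.log(1.0 - chat2)

print(delta_S2(0.3, 0.6, 0.18))  # ≈ 0 for independent spins (pij = pi * pj)
```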
To illustrate the properties of the cluster expansion (\[recur-entro\]) consider the 2D-Ising model on a $M\times M$ grid (Fig. \[fig-grid\]), in the absence of sampling noise ($B=\infty$). Enumerations of the $2^{M^2}$ spin configurations allow us to calculate the frequencies ${\bf p}=\langle \boldsymbol\sigma\rangle$ and the cluster-entropies $\Delta S(\Gamma)$ exactly for small values of $M$ (Fig. \[fig-grid\]A). The entropy of the clusters $\Gamma$ decreases exponentially with the length $L(\Gamma)$ of the shortest closed interaction path joining the spins in $\Gamma$, [*e.g.*]{} $L(1,2)=2$, $L(1,3,6)=6$. The entropies of clusters sharing a common interaction path (and the same $L$) have alternating signs, depending on the parity of the cluster size (Fig. \[fig-grid\]A); their sum is much smaller (in absolute value) than any cluster-entropy taken separately [^1]. Figure \[fig-grid\]B shows the error $\epsilon _S$ on the entropy, when all cluster-entropies smaller than $\Theta$ are discarded. $\epsilon_S$ exhibits lower and lower plateaus, separated by higher barriers as the threshold $\Theta$ is decreased. The first low plateau, $\epsilon_S \simeq.002$, takes place at $\Theta^{*}_1= .012$, when all nearest-neighbor clusters ($L=2$) are selected. The second and lower plateau, $\epsilon_{S} \simeq 5\,10^{-6}$, is reached for $\Theta^{*}_2=0.002$, after all clusters with $L=4$ are taken into account. Barriers in between plateaus correspond to values of $\Theta$, for which the truncation interrupts the summation (and partial cancellation) of all the clusters sharing an interaction path; the error on the entropy is then $\epsilon_S\sim \Theta$.
Let us turn to the case of imperfect sampling (finite $B$). The measured correlations, $c_{ij}=p_{ij}-p_ip_j$, differ from the Gibbs correlations, $\langle \sigma_i\sigma_j\rangle - \langle \sigma_{i}\rangle\langle \sigma_{j}\rangle $, by random fluctuations of amplitude $\nu=O(B^{-\frac12})$. Those fluctuations do not much affect the largest correlations and the largest cluster-entropies. However, for the pairs $i,j$ with weak Gibbs correlations ($<\nu$ in absolute value), the measured correlations are dominated by the noise. This fact has two consequences. First, the norm of the 2-point susceptibility, $\displaystyle{|\chi _{2}| =\frac 1N \sum_{i,j} c_{ij}^2 \sim N \nu^2}$, is extensive: overfitting makes the inferred Ising model look critical. Secondly, the distribution of the cluster entropies is universal for $\Delta S \to 0$ and $N\to\infty$: it coincides with the distribution for a system of Independent Spins, with the same $p_i$’s as the original system, and the same number $B$ of sampled configurations (Fig. \[fig-histo\]). The presence of this universal, noisy peak justifies the introduction of a threshold $\Theta$ and sets a lower bound on its value. Figure \[fig-grid\]B shows that the error $\epsilon _S$ behaves as in the perfect sampling case for large $\Theta$ and saturates at low $\Theta$ as expected. Again, the entropy is accurately estimated by taking into account only the top cluster-entropies, associated to the dominant interaction paths on the lattice.
Systematic enumeration of clusters is not possible for large systems. The example above suggests a fast, recursive procedure to build up clusters of increasing sizes, whose principle is based on the existence of paths of strong interactions connecting the spins. First we calculate the entropies associated to the $N$ clusters with $K=1$ spin. Then, two clusters $\Gamma _1$ and $\Gamma _2$ of size $K$ can be merged to give birth to the cluster $\Gamma = \Gamma_1 \cup \Gamma_2$ of size $K+1$ if $\Gamma_1$ and $\Gamma _2$ have exactly $K-1$ common spins, and if $|\Delta S(\Gamma)| >\Theta$. The underlying principle is, again, that the building-up prescription should be compatible with the existence of a path of strong interactions connecting the spins, and that clusters with low entropies can be discarded. Each time a new cluster $\Gamma$ is created and selected we store its contributions to the entropy, $\Delta S(\Gamma)$ and to the interaction parameters, $\Delta {\bf J}({\Gamma}) = - \frac{d}{d {\bf p}} \Delta S(\Gamma)$. The procedure naturally stops when no cluster of larger size can be built through the recursion. The sums of $\Delta S(\Gamma)$ and $\Delta {\bf J}({\Gamma})$ over the selected clusters, added to, respectively, $S_{0}$ and ${\bf J}_{0}=- \frac{d}{d {\bf p}} S_0$, are our approximations for the entropy and the interactions of the BM.
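The recursion just described can be sketched as follows (function names and the toy `cluster_entropy` are ours; in practice $\Delta S(\Gamma)$ would be computed from Eqs. (\[entroising\]) and (\[recur-entro\])):

```python
from itertools import combinations

def build_clusters(N, theta, cluster_entropy, kmax=20):
    """Adaptive construction: keep a cluster iff |Delta S| > theta; merge
    surviving size-K clusters sharing K-1 spins into size-(K+1) candidates."""
    selected = {(i,): cluster_entropy((i,)) for i in range(N)}  # singletons kept
    current = list(selected)
    while current:
        candidates = set()
        for g1, g2 in combinations(current, 2):
            merged = tuple(sorted(set(g1) | set(g2)))
            # union of size K+1  <=>  g1 and g2 share exactly K-1 spins
            if len(merged) == len(g1) + 1 and len(merged) <= kmax:
                candidates.add(merged)
        current = []
        for g in candidates:
            ds = cluster_entropy(g)
            if abs(ds) > theta:
                selected[g] = ds
                current.append(g)
    return selected

# toy entropy: only contiguous blocks of spins carry weight
toy = lambda g: 1.0 if max(g) - min(g) < len(g) else 0.0
print(len(build_clusters(4, 0.5, toy)))  # 10 = all contiguous blocks of 4 spins
```

The recursion thus only explores clusters reachable through chains of significant sub-clusters, instead of enumerating all subsets.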
We now report the tests of the above inference algorithm on synthetic data generated from Ising models with known couplings. First we consider dilute ferromagnets on 2D-grids of sizes $M\times M$; BM learning is hindered by the huge thermalization time at low temperature, mean-field and message-passing methods are not expected to be efficient on such loopy lattices and the Pseudo-Likelihood (PL) algorithm of [@wain] fails outside the paramagnetic phase, even for $M=7$ [@montanari]. Our algorithm successfully retrieves the network of interactions at the critical point, in the low temperature phase, and for much larger sizes (Fig. \[fig-syn\]A). As $\Theta$ is lowered the error on $J_{ij}$ first decreases and then saturates to a value close to the Cramér-Rao bound, $\sqrt{\frac 1{B}\, {\boldsymbol\chi}^{-1}_{ij,ij}}$ [@cover] (Fig. \[fig-syn\]B). At the cross-over threshold the largest selected clusters have size 4, while $\xi\sim M$ as the system is critical (Fig. \[fig-syn\]B). The running time of the algorithm (at the cross-over $\Theta$) is $\sim 10$ millisec on one core of an AMD Opteron dual-core processor at 3 Ghz. The inference algorithm is also applied to glassy frustrated Ising models [@vb], of various sizes $N$ (Fig. \[fig-syn\]C). Performances do not seem to worsen as $N$ increases.
To better understand the saturation of the error and the quality of the inference we compare the difference $\delta {\bf p}$ between the frequencies calculated from the inferred BM, ${\bf \langle \boldsymbol\sigma \rangle}$ [^2], and the measurements, ${\bf p}$, to the fluctuations expected from the sampling of $B$ configurations at equilibrium. The variances of these fluctuations are the diagonal elements of $\boldsymbol\chi$, divided by $B$. An estimate of the relative error for the one-site frequencies is thus $\epsilon _p = \sqrt{\frac BN \sum _{i}\frac { (\delta p_{i})^2}{\chi _{i,i}}}$; a similar expression can be written for the error on the correlations, $\epsilon _c$. Values of $\epsilon \gg 1$ signal a poor inference, while overfitting corresponds to $\epsilon \ll 1$. This criterion is justified if the Gibbs fluctuations are comparable to the error bars that can be computed using statistical methods such as bootstrap. We find that $\epsilon_p$ and $\epsilon_c$ are close to 1 at the cross-over threshold for which the error on the couplings saturates (Insets of Fig. \[fig-syn\]B&C). Lowering $\Theta$ further reduces $\epsilon_p,\epsilon_c$, but does not increase the accuracy on the interactions and is merely an overfitting of the data.
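In code, the one-site criterion is a direct transcription of the formula (our own sketch; `chi_diag` holds the diagonal elements $\chi_{i,i}$):

```python
import numpy as np

def eps_p(delta_p, chi_diag, B):
    """Relative one-site error against the sampling noise expected
    from B equilibrium configurations."""
    N = len(delta_p)
    return np.sqrt(B / N * np.sum(delta_p ** 2 / chi_diag))

# if each |delta p_i| matches the expected noise sqrt(chi_ii / B), eps_p = 1
chi_diag = np.array([0.25, 0.16, 0.09])
delta_p = np.sqrt(chi_diag / 400)
print(eps_p(delta_p, chi_diag, 400))  # ≈ 1.0
```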
The running time of our algorithm depends on the complexity of the underlying interaction network rather than on the system size. We analyze in Fig. \[fig-errorvsT\] a 3180 second-long recording of the retinal activity of a salamander, previously studied in [@bialek] using BM learning ($N_1=40$ cells). As $\Theta$ is lowered, the number of selected clusters and their maximal size increase, and the entropy $S$ reaches a plateau (Fig. \[fig-errorvsT\]B&C). For $\Theta^*\simeq 6\, 10^{-6}$, the errors are $\epsilon \sim 1$, and the inferred Ising model reproduces the frequencies and correlations (Fig. \[fig-errorvsT\]D). We have also applied our algorithm to recordings of other neurobiological systems, including the cortical activity of $N_2=37$ cells in a behaving rat [@Pey09] (not shown). While the amplitudes of the interactions found with both data sets are similar, the maximal size of the selected clusters, which is a measure of the neighborhood of a cell on the interaction network, is much smaller for the cortical recording ($=3$) than for the retinal activity ($=8$). The lower complexity of the inferred network results in a lower CPU time ($t_2\simeq .1$ sec vs. $t_1\simeq 5$ min on the computer above), in spite of $N_2 \simeq N_1$.
While expanding $S$ alone in (\[recur-entro\]) would be possible, the cluster-entropies $|\Delta S_{\Gamma}|$ produced by the expansion of $S-S_{0}$ are generally smaller [@diag]. Therefore, fewer clusters are needed to achieve an accurate inference, and the fluctuations of $\epsilon_S$ (Fig. \[fig-grid\]B) and of $\epsilon_p,\epsilon_c$ (Figs. \[fig-syn\]B&C and \[fig-errorvsT\]A) are smaller, see the discussion about barriers above. As $S_{0}$ coincides with $S$ for mean field models when $N\to\infty$ [@opper], it is a good starting point for the expansion even for systems with rather dense and weak interaction networks. In the case of severe undersampling, regularized versions of $S_0$ including a penalty over the couplings based on the $L_2$ ([@si]) or the $L_1$ [@lamfan] norm can be used. Note that $({\bf J}_0)_{ij} \propto - (\hat {\bf c}^{-1})_{ij}$ is regular even at criticality, [*i.e.*]{} even if $\hat{\bf c}$ has a diverging eigenvalue.
Our work suggests that the BM problem can be solved efficiently even when data exhibit strong correlations. The contribution to $\boldsymbol\chi^{-1}$ due to a cluster $\Gamma$, $-\frac{\partial^2 \Delta S_\Gamma}{\partial {\bf p}\partial{\bf p}}$, is highly sparse since ${\Delta S}_\Gamma$ depends on a few frequencies only. The success of our algorithm relies on the property that $\boldsymbol\chi^{-1}$ can be accurately approximated by such an expansion (while $\boldsymbol\chi$ cannot). We now list four examples for which this property holds. In the 1D-Ising model, ${\boldsymbol\chi}_{ij}^{-1}$ is of finite-range when $J_{ij}$ couples nearest neighbours only, and decays exponentially with $|i-j|$ in the presence of longer-range interactions [@percus]. Next, consider the $O(m)$ model, where the binary spins $\sigma_i$ are replaced with $m$-dimensional spins $\boldsymbol \sigma _i$ of fixed norms, with interactions $J_{ij}$ and zero fields. The model is exactly solvable in the $m\to\infty$ limit, with the result $(\boldsymbol \chi^{-1})_{ij,kl}= J_{ik} J_{jl } +J_{il} J_{jk}$ (diagonal elements $J_{ii}$ enforce the constraints on $|\boldsymbol \sigma _i|$). If ${\bf J}$ is sparse, so is ${\boldsymbol\chi}^{-1}$, even if all correlations are strong. In liquid theory, the Ornstein-Zernike direct correlation function, a quantity closely related to ${\boldsymbol \chi}^{-1}$, is widely believed to be short-range [@hansen]; this property is used in closure schemes, [*e.g.*]{} Percus-Yevick, to obtain the equation of state. Even at the critical point of a ferromagnet [@swendsen] the response of the field $h_i$ to changes in the magnetizations $m_j$ of spins at distance larger than $R$, $\int _{r>R} dr\, |{\boldsymbol \chi}^{-1}(r)|\sim R^{-(3-\eta)}$, quickly decays with $R$ [@hansen]. Intuitively, the $O(N^2)$ correlations contain a highly redundant information about the $O(N)$ non-zero couplings which have generated them. 
This redundancy is at the origin of the ’locality’ of ${\boldsymbol \chi}^{-1}$ and of the cancellation property of the cluster-entropies.\
We thank D. Chatenay, D. Huse, J. Lebowitz, S. Leibler, A. Montanari and V. Sessak for very useful discussions.
[9999]{}
E.T. Jaynes, [*Proc. IEEE*]{} [**70**]{}, 939 (1982).
E. Schneidman, M.J. Berry II, R. Segev, W. Bialek, [*Nature*]{} [**440**]{}, 1007 (2006); G. Tkacik, E. Schneidman, M.J. Berry II, W. Bialek, [*arXiv:q-Bio.NC*]{}, 0611072 (2006).
O. Marre, S. El Boustani, Y. Frégnac, A. Destexhe, [*Phys. Rev. Lett.*]{} [**102**]{}, 138101 (2009); Y. Roudi, J.Tyrcha, J. Hertz, [*Phys. Rev. E*]{} [**79**]{}, 051915 (2009).
S. Cocco, S. Leibler, R. Monasson, [*Proc. Nat. Acad. Sci.*]{} [**106**]{}, 14058 (2009).
D.H. Ackley, G.E. Hinton, T.J. Sejnowski, [*Cognitive Science*]{} [**9**]{}, 147 (1985).
M. Opper, D. Saad (eds), [*Advanced Mean Field Methods: Theory and Practice*]{}, MIT Press (2001).
A. Pelizzola, [*J. Phys. A*]{} [**38**]{}, R 309 (2005).
M. Mézard, T. Mora, [*J. Physiol. Paris*]{} [**103**]{}, 107 (2009); E. Marinari, V. Van Kerrebroeck, [*J. Stat. Mech.*]{} P02008 (2010).
P. Ravikumar, M.J. Wainwright, J. Lafferty, [*Annals of Statistics*]{} [**38**]{}, 1287 (2010).
J. Bento, A. Montanari, [*NIPS*]{} [**22**]{} (2009).
R.H. Swendsen, [*Phys. Rev. Lett.*]{} [**52**]{}, 1165 (1984).
See supplemental material for a brief description of the algorithm and the regularization.
T.M. Cover, J.A. Thomas, [*Elements of Information Theory*]{}, Wiley (2006).
D. Zobin, [*Phys. Rev. B*]{} [**18**]{}, 2387 (1978).
L. Viana, A.J. Bray, [*J. Phys. C*]{} [**18**]{}, 3037 (1985).
A. Peyrache [*et al.*]{}, [*Nature Neurosci.*]{} [**12**]{}, 919 (2009).
V. Sessak, R. Monasson, [*J. Phys. A*]{} [**42**]{}, 055001 (2009).
A. d’Aspremont, O. Banerjee, L. El Ghaoui, [*SIAM J. on Matrix Analysis and its Applications*]{} [**30**]{}, 56 (2008).
C. Borzi, G. Ord, J.K. Percus, [*J. Stat. Phys.*]{} [**46**]{}, 51 (1987).
M. Fisher, [*J. Math. Phys.*]{} [**5**]{}, 944 (1964).
[^1]: This (partial) cancellation property ensures that $S$ is extensive in $N$.
[^2]: These can be calculated using Monte Carlo simulations, or a cluster expansion (this time, for the direct problem) with a threshold; details will be given elsewhere.
---
abstract: |
English:\
In this paper, we consider the back and forth nudging algorithm that has been introduced for data assimilation purposes. It consists of iteratively and alternately solving forward and backward in time the model equation, with a feedback term to the observations. We consider the case of 1-dimensional transport equations, either viscous or inviscid, linear or not (Burgers’ equation). Our aim is to prove some theoretical results on the convergence, and the convergence rate, of this algorithm. We show that for non-viscous equations (both linear transport and Burgers), the convergence of the algorithm holds under observability conditions. Convergence can also be proven for viscous linear transport equations under some strong hypotheses, but not for the viscous Burgers’ equation. Moreover, the convergence rate is always exponential in time. We also notice that the forward and backward system of equations is well posed when no nudging term is considered.\
French:\
Ce travail étudie l’algorithme du nudging direct et rétrograde, qui a été introduit en assimilation de données. Cet algorithme consiste à résoudre itérativement l’équation du modèle, agrémentée d’un terme de rappel aux observations, dans le sens direct puis dans le sens rétrograde. Dans ce travail nous nous intéressons aux équations de transport en dimension 1, avec ou sans viscosité, linéaires ou non (Burgers). Notre objectif est d’étudier la convergence éventuelle, et la vitesse de convergence le cas échéant, de cet algorithme. Nous prouvons que, pour les équations non visqueuses (linéaire ou Burgers), la convergence a lieu sous des hypothèses d’observabilité. La convergence peut aussi être démontrée pour des équations de transport linéaires visqueuses sous des hypothèses fortes, mais pas pour l’équation de Burgers visqueuse. En outre, lorsque la convergence a lieu, la vitesse de convergence est toujours exponentielle en temps. Nous remarquons aussi que le système d’équations directe et rétrograde est toujours bien posé lorsqu’aucun terme de rappel n’est présent.
author:
- 'Didier Auroux[^1] [^2]'
- 'Maëlle Nodet [^3] [^4]'
bibliography:
- 'BFN-burgers-thms.bib'
title: 'The Back and Forth Nudging algorithm for data assimilation problems: theoretical results on transport equations'
---
Introduction and main results {#sec:intro}
=============================
Data assimilation is the set of techniques aiming to combine in an optimal way the mathematical information provided by the model equations and the physical information given by observations, in order to retrieve the state of a system. Several types of methods have been widely studied in the past decades. We can cite here interpolation, variational and stochastic methods. The first ones interpolate the measurements from the points of observation towards the grid points, the interpolation being weighted by the statistics of the observations [@Kalnay]. Variational methods are based on the optimal control theory, and data assimilation is set as being a problem of constrained optimization. The goal is to minimize a cost function measuring the difference between the observations and the corresponding quantities provided by a model integration. The initial condition of the system can then be seen as a control vector [@LeDimet]. Finally, the basic idea of stochastic methods is to consider the fields as the realization of a stochastic process and carry out Kalman filtering methods [@Kalman; @Evensen]. We can also mention one of the very first data assimilation schemes: the nudging method. Also known as Newtonian relaxation or dynamic initialization, it consists of adding a feedback term to the observations directly in the model equations [@Anthes].
All these methods require extensive work, from either the implementation or the computational point of view. For instance, variational methods require the linearization of all operators and also the implementation of the adjoint model. They also need efficient optimization schemes, as the minimization is performed on spaces of huge dimension. On the other hand, stochastic methods are somewhat easier to implement, but they require the knowledge, storage and manipulation of huge matrices.
The Back and Forth Nudging (BFN) algorithm has recently been introduced as a simple and efficient method for solving data assimilation problems [@AurouxBlum1]. In most geophysical applications, data assimilation consists of estimating a trajectory, solution of a partial differential equation (PDE), from the knowledge of observations. These observations are usually sparse in time and space, and incorrect in the sense that they are not the restriction of a solution of the PDE model. One step of the BFN algorithm consists of solving first the model equation, in which a feedback term toward the observations is added, and then the same equation backwards in time, again with a feedback term toward the observations. Such forward and backward integrations provide a new value of the solution at the initial time $t=0$, and the aim of the BFN is to improve the quality of the initial condition.
The idea of the back and forth nudging is to use the difference between the observations and the model trajectory as a feedback control of the equations, both in the forward and backward integrations. This makes the numerical scheme extremely easy to implement, in comparison with both variational and stochastic methods, as we usually only consider diagonal (or even scalar) gain matrices. The back and forth nudging scheme can also be seen as an intermediate scheme between variational and stochastic methods, as the standard nudging technique has both variational (minimization of a compromise between the observations and the energy of the system) and stochastic (sub-optimal Kalman filter) interpretations [@AurouxBlum2].
As a first approximation, we consider in this paper that the observations are correct (i.e. there is no observation error), and hence the observations satisfy the model equation. We consider various observation domains: first we assume that the observations $u_{obs}(t,x)$ are available for any point $x$ and time $t$; second we assume that they are available for $t\in[t_{1},t_{2}]$ and for all $x$; and third we consider that they are available for all $t$ over a given space domain. This is done through the time and space dependency of the feedback (or nudging) gain matrix $K(t,x)$, which is equal to $0$ when the observations are not available. Many numerical experiments in almost realistic situations suggest that this algorithm works well, and that the identified solution gets closer to the observations [@AurouxBlum2]. The goal of this paper is to prove some theoretical results and convergence properties in the particular case of transport equations, either viscous or inviscid, either linear or non-linear (Burgers' equation).
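Before stating the results, the forward-backward sweep can be illustrated on a toy discretization. The following sketch (Python, with purely illustrative parameter values; the first-order upwind scheme is our own choice and not the discretization used in [@AurouxBlum2]) applies one BFN sweep to the 1D periodic inviscid transport equation, with exact observations available on half of the domain:

```python
import numpy as np

# One BFN sweep for u_t + a u_x = 0 on the periodic domain [0, 1), with a
# first-order upwind scheme, exact (noise-free) observations, and a gain
# supported on half of the domain. All parameter values are illustrative.
def bfn_sweep(u0, uobs0, a=1.0, K=10.0, T=1.0, obs_mask=None):
    nx = u0.size
    dx = 1.0 / nx
    dt = 0.5 * dx / abs(a)                      # CFL condition
    nt = int(round(T / dt))
    x = np.arange(nx) * dx
    if obs_mask is None:
        obs_mask = np.ones(nx)

    def uobs(t):                                # exact transported observations
        return np.interp((x - a * t) % 1.0, x, uobs0, period=1.0)

    # forward pass: u_t + a u_x = -K(x) (u - uobs)
    u = u0.copy()
    for n in range(nt):
        adv = a * (u - np.roll(u, 1)) / dx      # upwind for a > 0
        u = u - dt * adv - dt * K * obs_mask * (u - uobs(n * dt))

    # backward pass: in tau = T - t, u_tau - a u_x = -K'(x) (u - uobs)
    for n in range(nt):
        adv = -a * (np.roll(u, -1) - u) / dx    # upwind for speed -a
        u = u - dt * adv - dt * K * obs_mask * (u - uobs(T - n * dt))
    return u                                    # updated estimate of u(0)

nx = 200
x = np.arange(nx) / nx
uobs0 = np.sin(2 * np.pi * x)                   # observation trajectory at t = 0
u_first = np.zeros(nx)                          # poor first guess
u_new = bfn_sweep(u_first, uobs0, obs_mask=(x < 0.5).astype(float))
err_before = np.max(np.abs(u_first - uobs0))
err_after = np.max(np.abs(u_new - uobs0))
```

Starting from a zero first guess, a single forward-backward sweep already reduces the initial-time error by a large factor, consistently with the exponential rates proven below.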
In section \[sec:lin-visc\], we consider one step of the BFN algorithm applied to a linear viscous transport equation: $$\label{eq:1}
\begin{array}{rl}
(F) & \baa{rcl}
{\partial_t}u -\nu {\partial_{xx}}u + a(x) {\partial_x}u &=& -K (u-u_{obs}) \\
u|_{x=0}=u|_{x=1}&=&0\\
u|_{t=0} &=& u_{0}
{\end{array} \right.}\medskip\\
(B) &
\baa{rcl}
{\partial_t}{\widetilde{u}} -\nu {\partial_{xx}}{\widetilde{u}} + a(x) {\partial_x}{\widetilde{u}} &=& K^\prime ({\widetilde{u}}-u_{obs})\\
{\widetilde{u}}|_{x=0}={\widetilde{u}}|_{x=1}&=&0\\
{\widetilde{u}}|_{t=T} &=& u(T)
{\end{array} \right.}\end{array}$$ where the following notations hold for all further cases:
- the time period considered here is $t\in[0,T]$;
- the first equation $(F)$ is called the forward equation, the second one $(B)$ is called the backward one;
- $K$ and $K^\prime$ are positive and may depend on $t$ and $x$, but for simplicity, we will always assume that there exists a constant $\kappa \in {\mathbb{R}}_{+}^*$ such that $K^\prime(t,x) = \kappa K(t,x)$;
- $a(x)\in W^{1,\infty}(\Omega)$, $\Omega$ being the considered space domain, either the interval $[0,1]$ or the torus $[0,1]$;
- $\nu$ is a constant viscosity coefficient;
- $u_{obs}$ is a solution of the forward equation with initial condition $u_{obs}^0$: $$\label{eq:1bis}
\baa{rcl}
{\partial_t}u_{obs} -\nu {\partial_{xx}}u_{obs} + a(x) {\partial_x}u_{obs} &=& 0 \\
u_{obs}|_{x=0}=u_{obs}|_{x=1}&=&0\\
u_{obs}|_{t=0} &=& u_{obs}^0
{\end{array} \right.}$$
The following result holds true:
\[thm:lin-visc\] We consider the one-step BFN (\[eq:1\]) with observations $u_{obs}$ satisfying (\[eq:1bis\]). We denote $$\label{eq:2}
\begin{array}{rcl}
w(t) &=& u(t) - u_{obs}(t) \\
{\widetilde{w}}(t) &=& {\widetilde{u}}(t)-u_{obs}(t)
\end{array}$$ Then we have:
1. If $K(t,x) = K$, then we have, for all $t\in[0,T]$: $$\label{eq:3}
{\widetilde{w}}(t) = e^{(-K-K^\prime)(T-t)} w(t)$$
2. If $K(t,x)=K(x)$, with ${\textrm{Support }}(K) \subset [a,b]$ where $a<b$ and $a\neq 0$ or $b\neq 1$, then equation (\[eq:1\]) is ill-posed: there does not exist a solution $(u,{\widetilde{u}})$, in general.
3. If $K(t,x)=K {\mathbbm{1}}_{[t_{1},t_{2}]}(t)$ with $0\leq t_{1} < t_{2}\leq T$, then we have $$\label{eq:4}
{\widetilde{w}}(0) = e^{(-K-K^\prime)(t_{2}-t_{1})} w(0)$$
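Case 1 can be checked explicitly on a single Fourier mode. The sketch below replaces the Dirichlet problem of the theorem by a periodic problem with a constant velocity $a$ (an illustrative simplification; all numerical values are arbitrary), and recovers the ratio $e^{(-K-K^\prime)(T-t)}$ of equation (\[eq:3\]):

```python
import numpy as np

# Single-mode check of case 1, on the torus with constant velocity a
# (an illustrative simplification of the Dirichlet problem). For the
# mode exp(i k x), the forward and backward error equations are scalar ODEs.
K, Kp, nu, a, T = 2.0, 3.0, 0.1, 1.0, 1.5      # illustrative values
k = 2 * np.pi * 3                              # wavenumber of the mode

# w_t - nu w_xx + a w_x + K w = 0  =>  w(t) = w0 exp(lam_f * t):
lam_f = -K - nu * k**2 - 1j * a * k
# w~_t - nu w~_xx + a w~_x - K' w~ = 0  =>  growth rate lam_b:
lam_b = Kp - nu * k**2 - 1j * a * k

w0, t = 1.0 + 0.5j, 0.7
w_t = w0 * np.exp(lam_f * t)                   # forward error at time t
w_T = w0 * np.exp(lam_f * T)                   # forward error at time T
wt_t = w_T * np.exp(lam_b * (t - T))           # backward error at time t

ratio = wt_t / w_t
predicted = np.exp(-(K + Kp) * (T - t))        # decay rate claimed in case 1
```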
In section \[sec:bg-visc\], we consider one step of the BFN algorithm applied to the viscous Burgers’ equation: $$\label{eq:26}
\begin{array}{rl}
(F) & \baa{rcl}
{\partial_t}u -\nu {\partial_{xx}}u + u {\partial_x}u &=& -K (u-u_{obs}) \\
u|_{x=0}=u|_{x=1}&=&0\\
u|_{t=0} &=& u_{0}
{\end{array} \right.}\medskip\\
(B) &
\baa{rcl}
{\partial_t}{\widetilde{u}} -\nu {\partial_{xx}}{\widetilde{u}} + {\widetilde{u}} {\partial_x}{\widetilde{u}} &=& K^\prime ({\widetilde{u}}-u_{obs})\\
{\widetilde{u}}|_{x=0}={\widetilde{u}}|_{x=1}&=&0\\
{\widetilde{u}}|_{t=T} &=& u(T)
{\end{array} \right.}\end{array}$$ with the same notations as before.\
The observations $u_{obs}$ satisfy the forward Burgers’ equation: $$\label{eq:26bis}
\baa{rcl}
{\partial_t}u_{obs} -\nu {\partial_{xx}}u_{obs} + u_{obs} {\partial_x}u_{obs} &=& 0 \\
u_{obs}|_{x=0}=u_{obs}|_{x=1}&=&0\\
u_{obs}|_{t=0} &=& u_{obs}^0
{\end{array} \right.}$$ We have the following result if $K\ne 0$:
\[thm:burg-visc\] The BFN iteration (\[eq:26\]) for the viscous Burgers’ equation, with observations $u_{obs}$ satisfying (\[eq:26bis\]), is ill-posed, even when $K(t,x)$ is constant (except for $K(t,x)\equiv 0$): there does not exist, in general, a solution $(u,{\widetilde{u}})$.
In the particular case when $K=K'=0$, the backward problem is ill-posed in the sense of Hadamard, but it has a unique solution if the final condition ${\widetilde{u}}|_{t=T}$ is set to the value at time $T$ of a solution of the forward equation. Moreover, in this particular case, the backward solution is exactly equal to the forward one: ${\widetilde{u}}(t) = u(t)$ for all $t\in[0,T]$. The main result is the following:
\[thrm:K0\] If $K=K'\equiv 0$, then problem (\[eq:26\]) is well-posed in the sense of Hadamard, and there exists a unique solution $(u,{\widetilde{u}})$. Moreover $u={\widetilde{u}}$.
Section \[sec:nonvisc\] considers the extension of theorem \[thm:lin-visc\] to the inviscid case, for both linear transport and Burgers’ equations.\
We first consider the linear case. The BFN equations are: $$\label{eq:1nvl}
\begin{array}{rl}
(F) & \baa{rcl}
{\partial_t}u + a(x) {\partial_x}u &=& -K (u-u_{obs}) \\
u|_{x=0}&=&u|_{x=1}\\
{\partial_x}u|_{x=0}&=&{\partial_x}u|_{x=1}\\
u|_{t=0} &=& u_{0}
{\end{array} \right.}\medskip\\
(B) &
\baa{rcl}
{\partial_t}{\widetilde{u}} + a(x) {\partial_x}{\widetilde{u}} &=& K^\prime ({\widetilde{u}}-u_{obs})\\
{\widetilde{u}}|_{x=0}&=&{\widetilde{u}}|_{x=1}\\
{\partial_x}{\widetilde{u}}|_{x=0}&=&{\partial_x}{\widetilde{u}}|_{x=1}\\
{\widetilde{u}}|_{t=T} &=& u(T)
{\end{array} \right.}\end{array}$$ where $a(x)$ can be constant or not.
\[thm:lin-nonvisc\] We consider the non viscous one-step BFN (\[eq:1nvl\]), with observations $u_{obs}$ satisfying (\[eq:1nvl\]-F) with $K=0$. We denote $$\label{eq:2nv}
\begin{array}{rcl}
w(t) &=& u(t) - u_{obs}(t) \\
{\widetilde{w}}(t) &=& {\widetilde{u}}(t)-u_{obs}(t)
\end{array}$$ We denote by $$\label{mneq:2}
(s,\psi(s,x))$$ the characteristic curve of equation (\[eq:1nvl\]-F) with $K=0$, with foot $x$ at time $s=0$, i.e. such that $$\label{mneq:3}
(s,\psi(s,x))|_{s=0} = (0,x)$$ We assume that the final time $T$ is such that the characteristics are well defined and do not intersect over $[0,T]$.\
Then we have:
1. If $K(t,x) = K$, then we have, for all $t\in[0,T]$: $$\label{eq:3nvlbis}
{\widetilde{w}}(t) = w(t) e^{(-K-K^\prime)(T-t)}$$
2. If $K(t,x)=K {\mathbbm{1}}_{[t_{1},t_{2}]}(t)$ with $0\leq t_{1} < t_{2}\leq T$, then we have $$\label{eq:4nvl}
{\widetilde{w}}(0) = w(0) e^{(-K-K^\prime)(t_{2}-t_{1})}$$
3. If $K(t,x) = K(x)$, then we have, for all $t\in[0,T]$: $$\label{eq:3nvl}
{\widetilde{w}}(t,\psi(t,x)) = w(t,\psi(t,x)) \, \exp \left(-\int_{t}^T K(\psi(s,x))+K^\prime(\psi(s,x))\, ds \right)$$
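The third case can be checked by integrating the error equations along a characteristic. The sketch below (illustrative values) takes $a(x)\equiv 1$ on the torus, so that $\psi(s,x)=(x+s) \bmod 1$, with the illustrative gain $K=K^\prime=4\cdot{\mathbbm{1}}_{[0,0.5]}$, and compares an explicit-Euler integration with the closed formula (\[eq:3nvl\]):

```python
import numpy as np

# Along the characteristic psi(s, x) = (x + s) mod 1 (a(x) = 1 on the torus),
# the forward error solves dw/ds = -K(psi) w and the backward error solves
# dw~/ds = K'(psi) w~ with w~(T) = w(T). Explicit Euler vs. the closed formula.
# The gain K = K' = 4 on [0, 0.5) and all other values are illustrative.
def Kfun(y):
    return 4.0 * ((y % 1.0) < 0.5)

T, x0, t = 1.3, 0.2, 0.4
ns = 50000
ds = T / ns
kvals = Kfun((x0 + np.arange(ns + 1) * ds) % 1.0)

w = np.empty(ns + 1)                 # forward Euler, w(0) = 1
w[0] = 1.0
for n in range(ns):
    w[n + 1] = w[n] * (1.0 - ds * kvals[n])

wt = np.empty(ns + 1)                # backward sweep, w~(T) = w(T)
wt[ns] = w[ns]
for n in range(ns, 0, -1):
    wt[n - 1] = wt[n] * (1.0 - ds * kvals[n])

i = int(round(t / ds))
integral = 2.0 * np.sum(kvals[i:ns]) * ds      # int_t^T (K + K')(psi) ds
predicted = w[i] * np.exp(-integral)           # closed characteristic formula
```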
We finally consider the non viscous Burgers’ equation, still with periodic boundary conditions, and for a time $T$ such that there is no shock in the interval $[0,T]$: $$\label{eq:1nvb}
\begin{array}{rl}
(F) & \baa{rcl}
{\partial_t}u + u {\partial_x}u &=& -K (u-u_{obs}) \\
u|_{x=0}&=&u|_{x=1}\\
{\partial_x}u|_{x=0}&=&{\partial_x}u|_{x=1}\\
u|_{t=0} &=& u_{0}
{\end{array} \right.}\medskip\\
(B) &
\baa{rcl}
{\partial_t}{\widetilde{u}} + {\widetilde{u}} {\partial_x}{\widetilde{u}} &=& K^\prime ({\widetilde{u}}-u_{obs})\\
{\widetilde{u}}|_{x=0}&=&{\widetilde{u}}|_{x=1}\\
{\partial_x}{\widetilde{u}}|_{x=0}&=&{\partial_x}{\widetilde{u}}|_{x=1}\\
{\widetilde{u}}|_{t=T} &=& u(T)
{\end{array} \right.}\end{array}$$
\[thm:bg-nonvisc\] We consider the non viscous one-step BFN (\[eq:1nvb\]), with observations $u_{obs}$ satisfying (\[eq:1nvb\]-F) with $K=0$. We denote $$\label{eq:2nvb}
\begin{array}{rcl}
w(t) &=& u(t) - u_{obs}(t) \\
{\widetilde{w}}(t) &=& {\widetilde{u}}(t)-u_{obs}(t)
\end{array}$$ We assume that $u_{obs}\in W^{1,\infty}([0,T]\times\Omega)$, i.e. there exists $M>0$ such that $$\label{eq:2bisnvb}
|{\partial_x}u_{obs}(t,x)|\le M, \quad \forall t\in [0,T], \forall x\in \Omega$$ Then we have:
1. If $K(t,x) = K$, then we have, for all $t\in[0,T]$: $$\label{eq:3nvb}
\|{\widetilde{w}}(t)\| \leq e^{(-K-K^\prime+M)(T-t)} \|w(t)\|$$
2. If $K(t,x)=K {\mathbbm{1}}_{[t_{1},t_{2}]}(t)$ with $0\leq t_{1} < t_{2}\leq T$, then we have $$\label{eq:4nvb}
\|{\widetilde{w}}(0)\| \leq e^{(-K-K^\prime)(t_{2}-t_{1})+MT} \|w(0)\|$$
\[prpstn:bg-nonvisc\] We consider one forward (resp. backward) BFN step of the non viscous Burgers’ equation (\[eq:1nvb\]-F) (resp. (\[eq:1nvb\]-B)). With the notations of theorem \[thm:bg-nonvisc\], if $K(t,x)=K(x)$, then we have $$\label{eq:5nvb}
w(T,\psi(T,x)) = w(0,x) \exp \left( -{\displaystyle \int}_0^T K(\psi(\sigma,x)) d\sigma-{\displaystyle \int}_0^T \partial_x u_{obs}(\sigma,\psi(\sigma,x))d\sigma \right)
$$
\[rmk1\] For the special case $K(t,x) =K(x) = K {\mathbbm{1}}_{[a,b]}(x)$ where $K$ is a constant and $[a,b]$ is a non-empty sub-interval of $[0,1]$, we have $$\label{eq:5nvbis}
w(T,\psi(T,x)) = w(0,x) \exp \left( -K \chi(x)-{\displaystyle \int}_0^T \partial_x u_{obs}(\sigma,\psi(\sigma,x))d\sigma \right)$$ where $$\label{eq:6nvb}
\chi(x) = {\displaystyle \int}_0^T \mathbbm{1}_{[a,b]}(\psi(\sigma,x)) d\sigma$$ is the time during which the characteristic curve $\psi(\sigma,x)$ with foot $x$ of equation (\[eq:1nvb\]-F) with $K=0$ lies in the support of $K$. The system is then *observable* if and only if the function $\chi$ has a non-zero lower bound, i.e. $m := \displaystyle \min_{x} \chi(x) > 0$, the observability being defined by (see [@Russell78]): $$\exists C, \forall u \textrm{ solution of (\ref{eq:1nvb}-F) with } K=0,\quad \|u(T,.)\|^2 \leq C \int_{0}^T \|K(.) u(s,.)\|^2 \, ds$$ In this case, proposition \[prpstn:bg-nonvisc\] proves the global exponential decrease of the error, provided $K$ is larger than $\displaystyle \frac{MT}{m}$, where $M$ is defined by equation (\[eq:2bisnvb\]).
From remark \[rmk1\], we can easily deduce that if for each iteration, both in the forward and backward integrations, the observability condition is satisfied, then the algorithm converges. Note that this is not a necessary condition, as even if $\chi(x)=0$, the last exponential of equation (\[eq:5nvbis\]) is bounded.
Note also that in real geophysical applications (either meteorology or oceanography), there is usually no viscosity. In this case, assuming the observability condition, the BFN algorithm is well posed, and theorem \[thm:bg-nonvisc\] and proposition \[prpstn:bg-nonvisc\] show that the solution tends to the observation trajectory everywhere, and not only on the support of $K$. From a numerical point of view, we can observe that even with discrete and sparse observations in space, the numerical solution is corrected everywhere [@AurouxBlum2]. We also observed that for a sufficiently small viscosity coefficient, the behavior of the algorithm remains unchanged.
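In the linear constant-speed case, the observability quantity can be computed directly: for $a\equiv 1$ and a gain supported on half of the torus, the time $\chi(x)$ spent by each characteristic in the support of $K$ has a positive minimum if and only if $T>0.5$. A short sketch (illustrative values):

```python
import numpy as np

# chi(x): time spent by the characteristic (x + s) mod 1 in the support
# [0, 0.5) of K, for a transport speed a = 1. Observability requires
# m = min_x chi(x) > 0. All values are illustrative.
def chi(x, T, ns=40000):
    s = np.linspace(0.0, T, ns)
    inside = ((x + s) % 1.0) < 0.5
    return inside.mean() * T                   # quadrature of the indicator

xs = np.linspace(0.0, 1.0, 401)
m_short = min(chi(x, T=0.4) for x in xs)       # T < 0.5: m = 0, not observable
m_long = min(chi(x, T=0.8) for x in xs)        # T > 0.5: m = T - 0.5 = 0.3
```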
![Decrease rate of the error after one iteration of BFN (see equation \[eq:ww\]) as a function of $x$, for various times $T$; top: linear transport equation; bottom: inviscid Burgers’ equation.[]{data-label="fig:1"}](lineaire2 "fig:"){width="12cm"} ![Decrease rate of the error after one iteration of BFN (see equation \[eq:ww\]) as a function of $x$, for various times $T$; top: linear transport equation; bottom: inviscid Burgers’ equation.[]{data-label="fig:1"}](nonlineaire "fig:"){width="12cm"}
Figure \[fig:1\] illustrates the results given in theorem \[thm:lin-nonvisc\] in case 3 (top) and proposition \[prpstn:bg-nonvisc\] and remark \[rmk1\] (bottom). These numerical results correspond to a simple case: $u_{obs}\equiv 0$, $u_0(x) = \alpha \sin(2\pi x)$, $K = K' = {\mathbbm{1}}_{[0;0.5]}(x)$. Various final times $T$ are considered, from $0.05$ to $1$, and both figures show the following expression $$\label{eq:ww}
-\log \left( \frac{{\widetilde{w}}(0,x)}{w(0,x)} \right)$$ as a function of $x\in [0;1]$. Figure \[fig:1\]-top illustrates equation (\[eq:3nvl\]). The best possible decrease rate is then $\max(K+K')\times T=2T$. In the linear case, the transport velocity is $a(x)\equiv 1$. As half of the domain is observed, the observability condition is satisfied if and only if $T> 0.5$, and this is confirmed by the figure. Concerning Burgers’ equation, figure \[fig:1\]-bottom illustrates equation (\[eq:5nvb\]). After one iteration of BFN, the best possible decrease rate is also $2T$. We can see that in this case, due to the nonlinearities of the model, the solution is less corrected on $[0;0.1]$ but more on $[0.5;0.6]$. From this figure, we can see that the observability condition is satisfied for $T$ larger than approximately $1$.
Finally, some conclusions are given in section \[sec:concl\].
Linear transport equation with a viscous term {#sec:lin-visc}
=============================================
In this section we prove theorem \[thm:lin-visc\].
Case 1: $K$ constant
--------------------
The differences $w$ and ${\widetilde{w}}$ satisfy the following equations: $$\label{eq:5}
\baa{rcl}
{\partial_t}w -\nu {\partial_{xx}}w + a(x) {\partial_x}w +Kw &=& 0 \\
w|_{x=0}=w|_{x=1}&=&0\\
w|_{t=0} &=& w_{0} \medskip\\
{\partial_t}{\widetilde{w}} -\nu {\partial_{xx}}{\widetilde{w}} + a(x) {\partial_x}{\widetilde{w}} -K^\prime {\widetilde{w}}&=& 0\\
{\widetilde{w}}|_{x=0}={\widetilde{w}}|_{x=1}&=&0\\
{\widetilde{w}}|_{t=T} &=& w(T)
{\end{array} \right.}$$ We denote by $S_{+}$ and $S_{-}$ the operators associated to these equations, seen as forward equations on $[t_{0},t]$ with initial conditions given in $t_{0}$: $$\label{eq:6}
S_{+}(t_{0},t)(w(t_{0})) = w(t), \quad S_{-}(t_{0},t)({\widetilde{w}}(t_{0})) = {\widetilde{w}}(t)$$ The BFN algorithm has a solution if and only if we have $$\label{eq:7}
w(T) \in {\textrm{Im\,}}(S_{-}(0,T))$$ We rewrite equation (\[eq:5\]) associated to $w$: $$\label{eq:8}
\baa{rcl}
{\partial_t}w -\nu {\partial_{xx}}w + a(x) {\partial_x}w -K^\prime w &=& (-K-K^\prime) w \\
w|_{x=0}=w|_{x=1}&=&0\\
w|_{t=0} &=& w_{0}
{\end{array} \right.}$$ so that we have, thanks to Duhamel’s formula: $$\label{eq:9}
w(t) = S_{-}(0,t)(w_{0}) + \int_{0}^t S_{-}(s,t)((-K-K^\prime)w(s))\, ds$$ If we assume that the expected result is true, i.e. $w(T) \in {\textrm{Im\,}}(S_{-}(0,T))$, then we can assume that it is also true for all $t$, i.e. we can assume that: $$\label{eq:10}
\forall t, \exists \varphi(t), w(t) = S_{-}(0,t)\varphi(t)$$ In that case, we substitute (\[eq:10\]) into (\[eq:9\]) and we get: $$\label{eq:11}
w(t) = S_{-}(0,t)(w_{0}) + \int_{0}^t S_{-}(s,t)((-K-K^\prime)S_{-}(0,s)\varphi(s))\, ds$$ As the equation is linear, the scalar coefficient $(K+K^\prime)$ commutes with $S_{-}$ and we get: $$\label{eq:12}
\begin{array}{rcl}
w(t) &=& S_{-}(0,t)(w_{0}) + (-K-K^\prime){\displaystyle \int}_{0}^t S_{-}(s,t)(S_{-}(0,s)\varphi(s))\, ds\\
&=& S_{-}(0,t)(w_{0}) + (-K-K^\prime)S_{-}(0,t){\displaystyle \int}_{0}^t \varphi(s)\, ds\\
S_{-}(0,t)\varphi(t) &=& S_{-}(0,t)\left[(w_{0}) + (-K-K^\prime){\displaystyle \int}_{0}^t \varphi(s)\, ds\right]
\end{array}$$ So that we have $$\label{eq:13}
\varphi(t) = w_{0}+(-K-K^\prime)\int_{0}^t \varphi(s)\, ds$$ i.e., $\varphi$ satisfies $$\label{eq:14}
\varphi^\prime(t) = (-K-K^\prime) \varphi, \quad \varphi(0)=w_{0}$$ and finally $$\label{eq:15}
\varphi(t) = w_{0} e^{(-K-K^\prime)t}$$ so that we get for $w(T)$: $$\label{eq:16}
w(T) = S_{-}(0,T)(w_{0}e^{(-K-K^\prime)T})$$ Conversely, setting $$\label{eq:17}
{\widetilde{w}}(0) = w_{0} e^{(-K-K^\prime)T}$$ leads to ${\widetilde{w}}$ satisfying ${\widetilde{w}}(T) = w(T)$, so that $(w,{\widetilde{w}})$ is the solution of the one-step BFN (\[eq:5\]).\
Moreover, we have, for all $t\in [0,T]$: $$\label{eq:18}
\begin{array}{rcl}
{\widetilde{w}}(t) &=& S_{-}(0,t)({\widetilde{w}}(0)) \\
&=& S_{-}(0,t)(w_{0} e^{(-K-K^\prime)T}) \\
&=& e^{(-K-K^\prime)(T-t)} S_{-}(0,t)(w_{0} e^{(-K-K^\prime)t}) \\
&=& e^{(-K-K^\prime)(T-t)} S_{-}(0,t)(\varphi(t)) \\
&=& e^{(-K-K^\prime)(T-t)} w(t)
\end{array}$$
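The key step (\[eq:13\])-(\[eq:15\]) can be checked by quadrature: with $\varphi(t) = w_{0}\,e^{(-K-K^\prime)t}$, the integral relation (\[eq:13\]) is satisfied. A short sketch with illustrative values:

```python
import numpy as np

# Quadrature check: phi(t) = w0 * exp(-(K + K') t) satisfies the integral
# relation phi(t) = w0 - (K + K') * int_0^t phi(s) ds. Illustrative values.
K, Kp, w0, t = 2.0, 3.0, 0.7, 0.9
s = np.linspace(0.0, t, 100001)
ds = s[1] - s[0]
phi = w0 * np.exp(-(K + Kp) * s)
trapezoid = ds * (np.sum(phi) - 0.5 * (phi[0] + phi[-1]))   # int_0^t phi
lhs = phi[-1]
rhs = w0 - (K + Kp) * trapezoid
```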
Case 2: $K(x)$
--------------
We assume that ${\textrm{Support }}(K) \subset [a,b]$ where $a<b$ and $a\neq 0$ or $b\neq 1$, i.e. the support of $K$ is not $[0,1]$. We can follow the same reasoning as previously up to equation (\[eq:11\]): $$\label{eq:19}
w(t) = S_{-}(0,t)\varphi(t) = S_{-}(0,t)(w_{0}) + \int_{0}^t S_{-}(s,t)\left[(-K(x)-K^\prime(x))S_{-}(0,s)\varphi(s)\right]\, ds$$ Let us assume, by contradiction, that $-K(x)-K^\prime(x)$ commutes with $S_{-}$. Then we get: $$\label{eq:20}
S_{-}(0,t)(\varphi(t) - w_{0}) = (-K(x)-K^\prime(x)) S_{-}(0,t)\int_{0}^t \varphi(s)\, ds$$ But we know that $S_{-}$ has the unique continuation property, that is:
\[prop:unicity\] If $S_{-}(0,t)(X) = 0$ on a non-empty subset of $[0,1]$, then $S_{-}(0,t)(X) = 0$ on $[0,1]$.
This result and (\[eq:20\]) give: $$\label{eq:21}
w(t) = S_{-}(0,t)(\varphi(t)) = S_{-}(0,t)( w_{0}) = S_{+}(0,t)(w_{0})$$ As this holds for every $w_{0}$, we have $S_{-}=S_{+}$ and finally $K=K^\prime=0$, which is a contradiction. Therefore, $K+K^\prime$ does not commute with $S_{-}$. Thus, in general, we cannot find any function $\psi$ such that: $$\label{eq:22}
\int_{0}^t S_{-}(s,t)\left[(-K(x)-K^\prime(x))S_{-}(0,s)\varphi(s)\right]\, ds = S_{-}(0,t) \psi$$
Case 3: $K(t)$
--------------
We assume that $K(t,x) = K(t) = K {\mathbbm{1}}_{[t_{1},t_{2}]}(t)$ with $0\leq t_{1}<t_{2}\leq T$. We can follow the same reasoning as for $K$ constant, up to the Duhamel formula (\[eq:11\]): $$\label{eq:23}
w(t) = S_{-}(0,t)\varphi(t) = S_{-}(0,t)(w_{0}) + \int_{0}^t S_{-}(s,t)\left[(-K-K^\prime) {\mathbbm{1}}_{[t_{1},t_{2}]}(s) S_{-}(0,s)\varphi(s)\right]\, ds$$ As $K+K^\prime$ is independent of $x$, it commutes with $S_{-}$, and we have for $\varphi$: $$\label{eq:24}
S_{-}(0,t)\varphi(t) = S_{-}(0,t)(w_{0}) +
\baa{ll}
0 & \textrm{if } t \leq t_{1}\smallskip\\
(-K-K^\prime) S_{-}(0,t) {\displaystyle \int}_{t_{1}}^t \varphi(s)\, ds & \textrm{if } t_{1} < t < t_{2}\smallskip\\
(-K-K^\prime) S_{-}(0,t) {\displaystyle \int}_{t_{1}}^{t_{2}} \varphi(s)\, ds & \textrm{if } t \geq t_{2}
{\end{array} \right.}$$ So that the corresponding $\varphi$ is given by: $$\label{eq:25}
\varphi(t) =
\baa{ll}
w_{0} & \textrm{if } t \leq t_{1}\smallskip\\
w_{0} e^{(-K-K^\prime)(t-t_{1})} & \textrm{if } t_{1} < t < t_{2}\smallskip\\
w_{0} e^{(-K-K^\prime)(t_{2}-t_{1})} & \textrm{if } t \geq t_{2}
{\end{array} \right.}$$ And thus the result follows.
Burgers’ equation with a viscous term {#sec:bg-visc}
=====================================
Proof of theorem \[thm:burg-visc\]
----------------------------------
Without loss of generality we assume that the observations are identically zero: $u_{obs}(t,x) = 0$ for all $(t,x)$. Let us first introduce some notations.
Let us denote by $w$ (resp. ${\widetilde{w}}$) the difference between $u$ (resp. ${\widetilde{u}}$) and the observations, as in (\[eq:2\]); they satisfy the following equations: $$\label{eq:27}
\begin{array}{rl}
(F) &
\baa{rcl}
{\partial_t}w -\nu {\partial_{xx}}w + w {\partial_x}w + K w &=& 0\\
w|_{x=0}=w|_{x=1}&=&0\\
w|_{t=0} &=& w_{0}
{\end{array} \right.}\medskip\\
(B) &
\baa{rcl}
{\partial_t}{\widetilde{w}} -\nu {\partial_{xx}}{\widetilde{w}} + {\widetilde{w}} {\partial_x}{\widetilde{w}} - K^\prime{\widetilde{w}} &=& 0\\
{\widetilde{w}}|_{x=0}={\widetilde{w}}|_{x=1}&=&0\\
{\widetilde{w}}|_{t=T} &=& w(T)
{\end{array} \right.}\end{array}$$ Let us denote also by $S_{+}$ and $S_{-}$ the non-linear operators associated to the forward equations with $K$ or $K^\prime$: $$\label{eq:28}
S_{+}(t_{0},t)(w(t_{0})) = w(t), \quad S_{-}(t_{0},t)({\widetilde{w}}(t_{0})) = {\widetilde{w}}(t), \quad \forall t\geq t_{0}$$ We will also use the linear operators $U_{+}$ and $U_{-}$ associated to the following linear equations: $$\label{eq:29}
\baa{rcl}
{\partial_t}\phi -\nu {\partial_{xx}}\phi + K \phi &=& 0\\
\phi |_{x=0}=\phi |_{x=1}=0, \qquad
\phi |_{t=0} &=& \phi_{0}
{\end{array} \right.}\quad \Longleftrightarrow \quad U_{+}(0,t)(\phi _{0}) = \phi (t)$$ $$\label{eq:29bis}
\baa{rcl}
{\partial_t}\phi -\nu {\partial_{xx}}\phi -K^\prime \phi &=& 0\\
\phi |_{x=0}=\phi |_{x=1}=0, \qquad
\phi |_{t=0} &=& \phi_{0}
{\end{array} \right.}\quad \Longleftrightarrow \quad U_{-}(0,t)(\phi _{0}) = \phi (t)$$
To prove theorem \[thm:burg-visc\] we will prove that $w$ is not in the image of $S_{-}$, in general. To do so we will use perturbation theory. We can easily show that $S_{+}$ is smooth with respect to the data $w_{0}$. So if we suppose that $w_{0}$ is small: $$\label{eq:30}
w_{0} = \varepsilon \varphi_{0}$$ then we have that $w(t)$, solution of the forward equation (\[eq:27\],$F$) is also small and can be developed in series of $\varepsilon$ $$\label{eq:31}
w = \varepsilon \sum_{n\geq 0} \varepsilon^n w^n$$ Similarly, we develop ${\widetilde{w}}$ in series of $\varepsilon$ $$\label{eq:31bis}
{\widetilde{w}} = \varepsilon \sum_{n\geq 0} \varepsilon^n {\widetilde{w}}^n$$ As previously, $w$ satisfies: $$\label{eq:32}
\baa{rcl}
{\partial_t}w - \nu {\partial_{xx}}w +K w &=& - w {\partial_x}w\\
w|_{x=0}=w|_{x=1}&=&0\\
w|_{t=0} &=& w_{0}
{\end{array} \right.}$$ so that if we develop in series of $\varepsilon$ we get, for $w^0$: $$\label{eq:33}
\baa{rcl}
{\partial_t}w^0 - \nu {\partial_{xx}}w^0 +K w^0 &=& 0 \\
w^0|_{x=0}=w^0|_{x=1}&=&0\\
w^0|_{t=0} &=& \varphi_{0}
{\end{array} \right.}$$ For $w^1$ we have: $$\label{eq:34}
\baa{rcl}
{\partial_t}w^1 - \nu {\partial_{xx}}w^1 +K w^1 &=& -w^0 {\partial_x}w^0\\
w^1|_{x=0}=w^1|_{x=1}&=&0\\
w^1|_{t=0} &=& 0
{\end{array} \right.}$$ Similarly we have for ${\widetilde{w}}^0$ and ${\widetilde{w}}^1$: $$\label{eq:33bis}
\baa{rcl}
{\partial_t}{\widetilde{w}}^0 - \nu {\partial_{xx}}{\widetilde{w}}^0 -K^\prime {\widetilde{w}}^0 &=& 0 \\
{\widetilde{w}}^0|_{x=0}={\widetilde{w}}^0|_{x=1}&=&0\\
{\widetilde{w}}^0|_{t=T} &=& w^0(T)
{\end{array} \right.}$$ $$\label{eq:34bis}
\baa{rcl}
{\partial_t}{\widetilde{w}}^1 - \nu {\partial_{xx}}{\widetilde{w}}^1 -K^\prime {\widetilde{w}}^1 &=& -{\widetilde{w}}^0 {\partial_x}{\widetilde{w}}^0\\
{\widetilde{w}}^1|_{x=0}={\widetilde{w}}^1|_{x=1}&=&0\\
{\widetilde{w}}^1|_{t=T} &=& w^1(T)
{\end{array} \right.}$$ We can compute $w^0$ and $w^1$ thanks to $U_{+}$: $$\label{eq:35}
{\begin{array}}{rcl}
w^0(t) &=& U_{+}(0,t)(\varphi_{0})\\
w^1(t) &=& -{\displaystyle \int}_{0}^t U_{+}(s,t)[w^0(s) {\partial_x}w^0(s)]\, ds
{\end{array}}$$ If we assume that ${\widetilde{w}}^0$ is well defined, with $$\label{eq:35bis}
{\widetilde{w}}^0(t)=U_{-}(0,t)(\psi_{0})$$ then the condition ${\widetilde{w}}(T)=w(T)$ leads to $$\label{eq:36}
{\begin{array}}{rcrcl}
&& U_{-}(0,T)(\psi_{0}) &=& U_{+}(0,T)(\varphi_{0})\\
&\Rightarrow& \psi_{0} &=& U_{-}(0,T)^{-1}U_{+}(0,T)(\varphi_{0})\\
&\Rightarrow& \psi_{0} &=& {\textrm{e}}^{-(K+K^\prime)T} \varphi_{0}
{\end{array}}$$ Then we have for ${\widetilde{w}}^0$: $$\label{eq:37}
{\begin{array}}{rcl}
{\widetilde{w}}^0(t) &=& U_{-}(0,t)(\psi_{0})\\
&=& U_{-}(0,t) {\textrm{e}}^{-(K+K^\prime)T} \varphi_{0}
{\end{array}}$$ For ${\widetilde{w}}^1$ the final condition ${\widetilde{w}}^1(T)=w^1(T)$ gives, thanks to (\[eq:35\]): $$\label{eq:38}
{\begin{array}}{rcl}
{\widetilde{w}}^1(T) &=& w^1(T)\\
&=& -{\displaystyle \int}_{0}^T U_{+}(s,T)[w^0(s) {\partial_x}w^0(s)] \, ds
{\end{array}}$$ On the other hand, if we assume that ${\widetilde{w}}^1$ is well defined, with ${\widetilde{w}}^1(0)=\psi_{T}$, then equation (\[eq:34bis\]) and the Duhamel formula give $$\label{eq:39}
{\widetilde{w}}^1(T) = U_{-}(0,T)[\psi_{T}]-\int_{0}^T U_{-}(s,T)[{\widetilde{w}}^0(s) {\partial_x}{\widetilde{w}}^0(s)] \, ds$$ Then, equating (\[eq:39\]) and (\[eq:38\]) we should have $$\label{eq:40}
U_{-}(0,T)[\psi_{T}]-\int_{0}^T U_{-}(s,T)[{\widetilde{w}}^0(s) {\partial_x}{\widetilde{w}}^0(s)] \, ds \ \ = \ \ -\int_{0}^T U_{+}(s,T)[w^0(s) {\partial_x}w^0(s)] \, ds$$ Therefore $$\label{eq:41}
2 U_{-}(0,T)[\psi_{T}]\ \ = \ \ \int_{0}^T U_{-}(s,T)[{\partial_x}({\widetilde{w}}^0(s)^2) ] \, ds\ -\int_{0}^T U_{+}(s,T)[{\partial_x}(w^0(s) ^2)] \, ds$$ If we assume that $\psi_{T} = \displaystyle \frac12 {\partial_x}g_{T}$, then we obtain, up to a constant $$\label{eq:42}
U_{-}(0,T)[g_{T}]\quad = \quad\int_{0}^T U_{-}(s,T)[{\widetilde{w}}^0(s)^2] \, ds -\int_{0}^T U_{+}(s,T)[w^0(s) ^2] \, ds$$ We now use (\[eq:35\]), (\[eq:35bis\]) and (\[eq:36\]): $$\label{eq:43}
{\begin{array}}{rcl}
U_{-}(0,T)[g_{T}]\quad &=& \quad{\displaystyle \int}_{0}^T U_{-}(s,T)[U_{-}(0,s)({\textrm{e}}^{-(K+K^\prime)T}\varphi_{0})]^2 \, ds \\
&& -{\displaystyle \int}_{0}^T U_{+}(s,T)[U_{-}(0,s)(\varphi_{0})]^2 \, ds\\
&=& ({\textrm{e}}^{-2(K+K^\prime)T}-1) {\displaystyle \int}_{0}^T U_{+}(s,T)[U_{-}(0,s)(\varphi_{0})]^2 \, ds
{\end{array}}$$ If $K>0$ and $K^\prime>0$, this last equation is in general impossible: such a $g_{T}$ does not, in general, exist. Indeed, let us make an explicit computation using Fourier series: $$\label{eq:44}
\varphi_{0} = \sum_{n\geq 1}a_{n} {\textrm{e}}^{inx}, \quad g_{T} = \sum_{n\geq 1}b_{n} {\textrm{e}}^{inx}$$ We recall that we have $$\label{eq:45}
{\begin{array}}{rcl}
U_{+}(s,t)\left[{\displaystyle \sum}_{n\geq 1} c_{n}{\textrm{e}}^{inx}\right] &=& {\displaystyle \sum}_{n\geq 1} c_{n}{\textrm{e}}^{inx} {\textrm{e}}^{(-K-\nu n^2)(t-s)}\\
U_{-}(s,t)\left[{\displaystyle \sum}_{n\geq 1} c_{n}{\textrm{e}}^{inx}\right] &=& {\displaystyle \sum}_{n\geq 1} c_{n}{\textrm{e}}^{inx} {\textrm{e}}^{(K^\prime-\nu n^2)(t-s)}\\
{\end{array}}$$ Then we can compute the right hand side of equation (\[eq:43\]): $$\label{eq:46}
{\begin{array}}{rcl}
&& ({\textrm{e}}^{-2(K+K^\prime)T}-1) {\displaystyle \int}_{0}^T U_{+}(s,T)[U_{-}(0,s)(\varphi_{0})]^2 \, ds\\
&=& ({\textrm{e}}^{-2(K+K^\prime)T}-1) {\displaystyle \int}_{0}^T U_{+}(s,T) \left[{\displaystyle \sum}_{n} a_{n} {\textrm{e}}^{K^\prime s} {\textrm{e}}^{inx} {\textrm{e}}^{-s\nu n^2}\right]^2 \, ds\\
&=& ({\textrm{e}}^{-2(K+K^\prime)T}-1) {\displaystyle \int}_{0}^T U_{+}(s,T) \left[{\displaystyle \sum}_{n} {\textrm{e}}^{2sK^\prime } {\textrm{e}}^{inx} \sum_{p+q=n}a_{p} a_{q} {\textrm{e}}^{-s\nu (p^2+q^2)}\right] \, ds\\
&=& ({\textrm{e}}^{-2(K+K^\prime)T}-1) {\displaystyle \int}_{0}^T \left[{\displaystyle \sum}_{n} {\textrm{e}}^{-K(T-s)} {\textrm{e}}^{2sK^\prime } {\textrm{e}}^{-\nu (T-s) n^2}{\textrm{e}}^{inx} \sum_{p+q=n}a_{p} a_{q} {\textrm{e}}^{-s\nu (p^2+q^2)}\right] \, ds\\
&=& ({\textrm{e}}^{-2(K+K^\prime)T}-1) {\displaystyle \int}_{0}^T \left[{\displaystyle \sum}_{n} {\displaystyle \sum}_{p+q=n} a_{p} a_{q} {\textrm{e}}^{-KT -\nu T n^2 + inx} {\textrm{e}}^{2sK^\prime +sK+\nu s n^2-s\nu (p^2+q^2)}\right] \, ds\\
&=& \displaystyle ({\textrm{e}}^{-2(K+K^\prime)T}-1) \left[{\displaystyle \sum}_{n} {\displaystyle \sum}_{p+q=n} a_{p} a_{q} {\textrm{e}}^{-KT -\nu T n^2 + inx} \frac{{\textrm{e}}^{2TK^\prime +TK+2\nu pq T } - 1}{2K^\prime+K+2 \nu pq }\right] \\
{\end{array}}$$ For the left hand side of (\[eq:43\]) we have: $$\label{eq:47}
{\begin{array}}{rcl}
U_{-}(0,T)[g_{T}] &=& {\displaystyle \sum}_{n} b_{n}{\textrm{e}}^{inx} {\textrm{e}}^{K^\prime T -\nu n^2 T}
{\end{array}}$$ So that we get, for all $n$: $$\label{eq:48}
{\begin{array}}{rcl}
b_{n} &=& \displaystyle {\textrm{e}}^{-(K+K^\prime) T}({\textrm{e}}^{-2(K+K^\prime)T}-1) \left[ {\displaystyle \sum}_{p+q=n} a_{p} a_{q} \frac{{\textrm{e}}^{2TK^\prime +TK+2\nu pq T } - 1}{2K^\prime+K+2\nu pq}\right]
{\end{array}}$$ This defines a distribution if and only if $(b_{n})$ has polynomial growth, which holds if and only if $(\underline{b_{n}})$ has polynomial growth, where $$\label{eq:49}
{\begin{array}}{rcl}
\underline{b_{n}} &=& \displaystyle ({\textrm{e}}^{-2(K+K^\prime)T}-1) \left[ {\displaystyle \sum}_{p+q=n} a_{p} a_{q} \frac{{\textrm{e}}^{2TK^\prime +TK+2\nu pq T }}{2K^\prime+K+2\nu pq}\right]
{\end{array}}$$ which is clearly not the case for every sequence $(a_{n})$ with polynomial growth, unless $K=K^\prime=0$.
Particular case: $K=K'=0$
-------------------------
We consider the particular case where $K=K'=0$, i.e. there is no nudging term either in the forward or backward equations. In this case, theorem \[thrm:K0\] holds true.
Of course, the backward equation itself is ill-posed: even if there is existence and uniqueness of the solution (e.g. if the final condition ${\widetilde{u}}(T)$ comes from a resolution of the forward equation over the same time period), the solution does not depend continuously on the data.
The proof is straightforward by using the following Cole-Hopf transformations [@Cole; @Hopf]: $$\label{eq:colehopf}
\begin{array}{rclcrcl}
u &=& \displaystyle -2\nu \frac{{\partial_x}v}{v}, & \quad & v(t,x) = v(t,0) e^{-\frac{1}{2\nu}{\displaystyle \int}_0^x u(t,s)\ ds}\\[0.3cm]
{\widetilde{u}} &=& \displaystyle -2\nu \frac{{\partial_x}{\widetilde{v}}}{{\widetilde{v}}}, & \quad & {\widetilde{v}}(t,x) = {\widetilde{v}}(t,0) e^{-\frac{1}{2\nu}{\displaystyle \int}_0^x {\widetilde{u}}(t,s)\ ds}
\end{array}$$ in the forward and backward equations respectively. These transformations allow us to consider the same forward and backward problem, but on the heat equation. The Fourier transform gives the existence and uniqueness of a solution to the forward and backward heat equations, and the equality between the forward solution $v$ and the backward solution ${\widetilde{v}}$. Equations (\[eq:colehopf\]) extend the result to the viscous Burgers’ equation.
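The argument can be illustrated numerically in a periodic, truncated-Fourier setting (an illustrative simplification of the problem above): the Cole-Hopf variable solves the heat equation mode by mode, the backward solve exactly inverts the forward one, and $u=-2\nu\,{\partial_x}v/v$ is recovered unchanged:

```python
import numpy as np

# Cole-Hopf illustration of the K = K' = 0 case in a periodic,
# truncated-Fourier setting (an illustrative simplification). The heat
# equation for v is solved exactly mode by mode, the backward solve
# inverts the forward one, and u = -2 nu v_x / v is recovered unchanged.
nu, T, nx = 0.05, 0.05, 128                     # illustrative values
x = np.arange(nx) / nx
k = 2 * np.pi * np.fft.fftfreq(nx, d=1.0 / nx)  # angular wavenumbers

u0 = np.sin(2 * np.pi * x) + 0.3 * np.sin(4 * np.pi * x)

# Cole-Hopf: v0 = exp(-primitive(u0) / (2 nu)), primitive computed spectrally
U0 = np.fft.fft(u0)
kk = np.where(k == 0.0, 1.0, k)                 # avoid dividing by k = 0
prim = np.real(np.fft.ifft(np.where(k == 0.0, 0.0, U0 / (1j * kk))))
v0 = np.exp(-prim / (2 * nu))

V0 = np.fft.fft(v0)
decay = np.exp(-nu * k**2 * T)
VT = V0 * decay                                 # forward heat solve, 0 -> T
Vb = VT / decay                                 # exact backward solve, T -> 0
vb = np.real(np.fft.ifft(Vb))

# invert Cole-Hopf: the recovered initial condition equals u0
ub = np.real(-2 * nu * np.fft.ifft(1j * k * np.fft.fft(vb)) / vb)
```

The backward solve is exact here only because the Fourier expansion is finite; with perturbed data the factors $e^{+\nu k^{2}T}$ amplify noise, which is precisely the Hadamard ill-posedness mentioned above.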
Non viscous transport equations {#sec:nonvisc}
===============================
Linear case: proof of theorem \[thm:lin-nonvisc\]
-------------------------------------------------
In this section we prove theorem \[thm:lin-nonvisc\].
The first two points of the theorem are proven exactly as in theorem \[thm:lin-visc\], with vanishing viscosity.
Thus we only prove the third point. To do so, we recall that the curves $(s,\psi(s,x))$ are the characteristics of the direct equation (\[eq:1nvl\]-F) with $K=0$, such that $(s,\psi(s,x))|_{s=0} = (0,x)$ (see [@Courant; @Evans] for characteristics theory).
For the forward equation (\[eq:1nvl\]-F), this change of variable gives $$\label{mneq:4}
{\partial_s}w(s,\psi(s,x)) = -K(\psi(s,x)) w(s,\psi(s,x))$$ so that $$\label{mneq:5}
w(s,\psi(s,x)) = w(0,x) \, \exp \left(-\int_{0}^s K(\psi(\sigma,x))\, d\sigma \right)$$ and in particular for $w(T)$ we have $$\label{mneq:10}
w(T,\psi(T,x)) = w(0,x) \, \exp \left(-{\displaystyle \int}_{0}^T K(\psi(\sigma,x))\, d\sigma \right)$$ Similarly, for ${\widetilde{w}}$ we have $$\label{mneq:8}
{\partial_s}{\widetilde{w}}(s,\psi(s,x)) = K^\prime(\psi(s,x)) {\widetilde{w}}(s,\psi(s,x))$$ so that $$\label{mneq:9}
{\begin{array}}{rcl}
{\widetilde{w}}(s,\psi(s,x)) &=& {\widetilde{w}}(T,\psi(T,x)) \, \exp \left(-{\displaystyle \int}_{s}^T K^\prime(\psi(\sigma,x))\, d\sigma \right)\\
&=& w(T,\psi(T,x)) \, \exp \left(-{\displaystyle \int}_{s}^T K^\prime(\psi(\sigma,x))\, d\sigma \right)
{\end{array}}$$ Using (\[mneq:10\]) and (\[mneq:5\]) we get $$\label{mneq:11}
{\begin{array}}{cl}
&{\widetilde{w}}(s,\psi(s,x))\\
=& w(0,x) \, \exp \left(-{\displaystyle \int}_{0}^T K(\psi(\sigma,x))\, d\sigma \right)\exp \left(-{\displaystyle \int}_{s}^T K^\prime(\psi(\sigma,x))\, d\sigma \right)\\
=& w(s,\psi(s,x)) \, \exp \left({\displaystyle \int}_{0}^s K(\psi(\sigma,x))\, d\sigma \right)\exp \left(-{\displaystyle \int}_{0}^T K(\psi(\sigma,x))\, d\sigma \right) \exp \left(-{\displaystyle \int}_{s}^T K^\prime(\psi(\sigma,x))\, d\sigma \right)\\
=& w(s,\psi(s,x)) \, \exp \left(-{\displaystyle \int}_{s}^T K(\psi(\sigma,x))+K^\prime(\psi(\sigma,x))\, d\sigma \right)
{\end{array}}$$
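The damping formula (\[mneq:5\]) can be checked numerically (an illustrative sketch, not part of the proof): assuming for simplicity a constant advection speed $a$, so that $\psi(s,x)=x+as$, the ODE (\[mneq:4\]) integrated along the characteristic must agree with the closed-form exponential.

```python
import numpy as np

# Illustrative check of formula (mneq:5) along a characteristic (assumed
# setup, not from the paper): constant advection speed a, so the
# characteristic through x is psi(s, x) = x + a*s, with a smooth
# space-dependent coefficient K(x).
a = 0.7
K = lambda x: 1.0 + 0.5 * np.sin(x)
psi = lambda s, x: x + a * s
f = lambda s, w: -K(psi(s, x0)) * w     # right-hand side of (mneq:4)

x0, S, n = 0.2, 2.0, 2000
ds = S / n

# RK4 integration of dW/ds = -K(psi(s, x0)) * W, starting from W(0) = 1.
W = 1.0
for i in range(n):
    s = i * ds
    k1 = f(s, W)
    k2 = f(s + ds / 2, W + ds * k1 / 2)
    k3 = f(s + ds / 2, W + ds * k2 / 2)
    k4 = f(s + ds, W + ds * k3)
    W += ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Closed form: exp(-int_0^S K(psi(sigma, x0)) dsigma), by the trapezoid rule.
sig = np.linspace(0.0, S, n + 1)
vals = K(psi(sig, x0))
closed = np.exp(-ds * (vals.sum() - 0.5 * (vals[0] + vals[-1])))
print(abs(W - closed))  # agreement to quadrature accuracy
```

The two values agree to the accuracy of the trapezoid quadrature, as expected from the explicit solution along characteristics.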
Non linear case: proof of theorem \[thm:bg-nonvisc\] and proposition \[prpstn:bg-nonvisc\]
------------------------------------------------------------------------------------------
From equation (\[eq:1nvb\]), we deduce that the forward error $w$ satisfies the following equation: $${\partial_t}w + w{\partial_x}w + u_{obs} {\partial_x}w + w {\partial_x}u_{obs} = -K w$$ By multiplying by $w$ and integrating over $\Omega$, we obtain $$\frac{1}{2}\, {\partial_t}\left( {\displaystyle \int}_\Omega w^2 \right) + {\displaystyle \int}_\Omega w^2 {\partial_x}w + {\displaystyle \int}_\Omega ( u_{obs} w {\partial_x}w + w^2 {\partial_x}u_{obs} ) = - {\displaystyle \int}_\Omega K w^2$$ Integration by parts then gives $${\partial_t}(\| w(t)\|^2) = {\displaystyle \int}_\Omega (-2K-{\partial_x}u_{obs}) w^2$$ We set $M = \| {\partial_x}u_{obs} \|_\infty$, and as $K$ does not depend on $x$, $${\partial_t}(\| w(t)\|^2) \le (-2K+M) \| w(t)\|^2$$ We have a similar result for the backward error: $${\partial_t}(\| {\widetilde{w}}(t)\|^2) \le (-2K'+M) \| {\widetilde{w}}(t)\|^2$$ We first consider the first point of theorem \[thm:bg-nonvisc\], i.e. $K(t,x)=K$. Grönwall's lemma between times $t$ and $T$ gives $$\begin{aligned}
\|w(T)\|^2 &\le& e^{(-2K+M)(T-t)} \|w(t)\|^2 \\
\|{\widetilde{w}}(t)\|^2 &\le& e^{(-2K'+M)(T-t)} \|{\widetilde{w}}(T)\|^2\end{aligned}$$ from which equation (\[eq:3nvb\]) is easily deduced.
In the second case, i.e. $K(t,x) = K {\mathbbm{1}}_{[t_{1},t_{2}]}(t)$ and by successively applying Grönwall’s lemma between times $0$ and $t_1$, $t_1$ and $t_2$, and $t_2$ and $T$, one obtains equation (\[eq:4nvb\]).
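The exponential decay predicted by these Grönwall estimates is easy to observe numerically. The sketch below (an illustrative linearized analogue, with assumed parameters: $u_{obs}$ constant, so $M=0$ and the nonlinear term in the error equation drops out) integrates $w_t + c\,w_x = -Kw$ on a periodic grid with a first-order upwind scheme and compares the norm decay with $e^{-Kt}$:

```python
import numpy as np

# Illustrative simulation (assumed setup): error equation w_t + c*w_x = -K*w
# on a periodic grid, discretized with first-order upwind. Gronwall predicts
# ||w(t)|| <= exp(-K*t) * ||w(0)|| here, since u_obs is constant (M = 0).
c, Kn = 1.0, 2.0
nx, L = 200, 2.0 * np.pi
dx = L / nx
dt = 0.4 * dx / c                      # CFL-stable step
x = np.arange(nx) * dx
w = np.sin(x) + 0.3 * np.cos(3 * x)    # initial error
n0 = np.linalg.norm(w) * dx**0.5

T = 1.0
steps = int(T / dt)
for _ in range(steps):
    w = w - dt * c * (w - np.roll(w, 1)) / dx - dt * Kn * w

t = steps * dt
ratio = np.linalg.norm(w) * dx**0.5 / n0
print(ratio, np.exp(-Kn * t))  # decay at (or slightly below) the Gronwall rate
```

The observed ratio sits below the Grönwall bound, the small extra damping coming from the numerical diffusion of the upwind scheme.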
Finally, in the case $K(t,x) = K(x)$, by considering a similar approach as in section \[sec:nonvisc\].1, i.e. using the characteristics of the direct equation (\[eq:1nvb\]-F) (resp. B), it is straightforward to prove that $$\label{eq:prp1}
w(s,\psi(s,x)) = w(0,x) e^{-{\displaystyle \int}_0^s K(\psi(\sigma,x)) d\sigma} e^{-{\displaystyle \int}_0^s \partial_x u_{obs}(\sigma,\psi(\sigma,x)) d\sigma}$$ and then, $$\label{eq:prp2}
w(T,\psi(T,x)) = w(0,x) e^{-{\displaystyle \int}_0^T K(\psi(\sigma,x)) d\sigma} e^{-{\displaystyle \int}_0^T \partial_x u_{obs}(\sigma,\psi(\sigma,x)) d\sigma}$$ from which equation (\[eq:5nvb\]) is easily deduced.
Conclusion {#sec:concl}
==========
Several conclusions can be drawn from all these results. First of all, in many situations, the coupled forward-backward problem is well posed, and the nudging terms allow the solution to be corrected (towards the observation trajectory) everywhere and with an exponential convergence. From a numerical point of view, these results have been observed in several geophysical situations, and many numerical experiments have confirmed the global convergence of the BFN algorithm [@AurouxBlum2].
The second remark is that the worst situation, i.e. the one for which the BFN problem has no solution, is the viscous Burgers' equation. But in real geophysical applications there is usually no theoretical viscosity in the equations, and one should instead consider the inviscid equation, for which some convergence results are given. From the numerical point of view, these phenomena are easily confirmed, as is the exponential decrease of the error $w$. But we also noticed that if the observations are not too sparse, the algorithm works well even with a quite large viscosity.
Finally, these results extend the theory of linear observers in automatic control [@Luenberger]: instead of considering an infinite time interval (only one forward equation, but with $T\to +\infty$), one can consider an infinite number of BFN iterations on a finite time interval. This is of great interest in almost all real applications, for which it is not possible to consider a very large time period.
Acknowledgement {#acknowledgement .unnumbered}
---------------
The authors are thankful to Prof. G. Lebeau (University of Nice Sophia-Antipolis) for his fruitful ideas and comments. This work has been partially supported by ANR JCJC07 and INSU-CNRS LEFE projects.
[^1]: Institut de Mathématiques de Toulouse, Université Paul Sabatier Toulouse 3, 31062 Toulouse cedex 9, France; [auroux@math.univ-toulouse.fr]{}
[^2]: INRIA, Grenoble, France
[^3]: Université de Grenoble, Laboratoire Jean Kuntzmann, UMR 5224, Grenoble, France; [maelle.nodet@inria.fr]{}
[^4]: INRIA, Grenoble, France
---
abstract: 'This work establishes the algebraic structure of the Kohn-Sham equations to be solved in a density formulation of electron and phonon dynamics, including the superconducting order parameter. A Bogoliubov transform is required to diagonalize both the fermionic and bosonic Kohn-Sham Hamiltonians since they both represent a non-interacting quantum field theory. The Bogoliubov transform for phonons is non-Hermitian in the general case, and the corresponding time-evolution is non-unitary. Several sufficient conditions for ensuring that the bosonic eigenvalues are real are provided and a practical method for solving the system is described. Finally, we produce a set of approximate mean-field potentials which are functionals of the electronic and phononic density matrices and depend on the electron-phonon vertex.'
author:
- 'Chung-Yu Wang'
- 'T. Müller'
- 'S. Sharma'
- 'E. K. U. Gross'
- 'J. K. Dewhurst'
title: 'Coupled Kohn-Sham equations for electrons and phonons'
---
In this work we determine the time-dependent Kohn-Sham matrix equations for combined systems of electrons and phonons. Ultimately, the potentials which enter the equations are considered to be functionals of the density matrices produced from the time-evolving Kohn-Sham state. One particular aim of this work is to include lattice degrees of freedom in simulations of intense laser pulses acting on solids. This is necessary for the recovery of the magnetic moment or the superconducting order parameter, which are typically destroyed by the laser pulse.
Densities of the electron-nuclear system
========================================
Consider the electron-nuclear Schrödinger equation in atomic units: $$\begin{aligned}
\label{en_se}
\hat{H}=-\frac{1}{2}\sum_{i}\nabla_i^2
-\sum_{I}\frac{1}{2M_I}\nabla_I^2
+\sum_{i>j}\frac{1}{|{\bf r}_i-{\bf r}_j|}
+\sum_{i,I}\frac{Z_I}{|{\bf r}_i-{\bf R}_I|}
+\sum_{I>J}
\frac{Z_I Z_J}{|{\bf R}_I-{\bf R}_J|}\end{aligned}$$ for $i,j=1\ldots N_{\rm e}$ electrons and $I,J=1\ldots N_{\rm n}$ nuclei, where $M_I$ is the nuclear mass and $Z_I$ is the nuclear charge, assumed negative. The wave function $\Psi(\dub{\bf r},\dub{s},\dub{\bf R},\dub{S},t)$, where $\dub{s}$ and $\dub{S}$ are electron and nuclear spin coordinates, is determined in a finite (but large) box with periodic boundary conditions.
Conventional densities obtained from this wave function are spatially constant and therefore not useful as variational quantities and a different approach to density functional theory (DFT) is required. The electron-nuclear wave function can be factored exactly[@Abedi2010] as: $$\begin{aligned}
\label{wf_fact}
\Psi(\dub{\bf r},\dub{s},\dub{\bf R},\dub{S},t)=
\Phi_{\dub{\bf R},\dub{S}}(\dub{\bf r},\dub{s},t)
\chi(\dub{\bf R},\dub{S},t),\end{aligned}$$ where $\sum_{\dub{s}}\int d\dub{\bf r}\,|\Phi_{\dub{\bf R},\dub{S}}(\dub{\bf r},\dub{s},t)|^2=1$ for all $\dub{\bf R}$, $\dub{S}$ and $t$.
Let $V_{\rm BO}(\dub{\bf R})$ be the Born-Oppenheimer (BO) potential energy surface (PES)[^1] and suppose this has a unique minimum at $\dub{\bf R}^0$.
Electronic densities
--------------------
A purely electronic wave function is obtained by evaluating $\Phi_{\dub{\bf R}^0\dub{S}}(\dub{\bf r},\dub{s},t)$. From this, a variety of familiar electronic densities may be obtained, for example $$\begin{aligned}
\label{rho_0}
\rho_{\dub{\bf R}^0}({\bf r},t)\equiv
\sum_{\dub{S}}\int d^3r_2\ldots d^3r_{N_{\rm e}}
\left|\Phi_{\dub{\bf R}^0,\dub{S}}(\dub{\bf r},\dub{s},t)\right|^2,\end{aligned}$$ with similar definitions for the magnetization ${\bf m}({\bf r})$, current density ${\bf j}({\bf r})$, superconducting order parameter, $\chi({\bf r},{\bf r}')$ and so on. Such a density is plotted in Fig. \[hydrogen\] for the hydrogen atom using various masses. Note that this density is not a constant and also varies with the nuclear mass. The densities for $M=\infty$ and the physical mass of a proton, $M\simeq 1836$, are indistinguishable. However, the density is considerably different when the nuclear and electronic masses are the same, $M=1$. In the same figure is a plot of the density evaluated at a particular point against $1/M$. The density decreases monotonically with reciprocal mass and has a non-zero derivative at $1/M=0$.
A Kohn-Sham Hamiltonian defined to reproduce the density in (\[rho\_0\]) as its ground state can be written as $$\begin{aligned}
\label{KS_fm_r}
\hat{H}_{\rm KS}=-\frac{1}{2}\nabla^2+V_{\dub{\bf R}^0}({\bf r})
+V_{\rm H}({\bf r})+V_{\rm xc}({\bf r})
+V_{\rm fmc}({\bf r},t),\end{aligned}$$ where $V_{\dub{\bf R}^0}({\bf r})$ is the external potential determined from the nuclei fixed at $\dub{\bf R}^0$; $V_{\rm H}$ and $V_{\rm xc}$ are the usual Hartree and exchange-correlation potential; and $V_{\rm fmc}$ is a correction term to account for the finite mass of the nuclei. Note that this potential vanishes in the infinite mass limit, i.e. $\lim_{M\rightarrow\infty}V_{\rm fmc}({\bf r},t)=0$, and the regular Kohn-Sham equations for a fixed external potential are recovered. The finite mass correction potential is plotted in Fig. \[hydrogen\] for hydrogen with an artificially light $M=2$. Not surprisingly, the potential is mainly repulsive. Mass correction potentials corresponding to other densities can also be defined such as a magnetic field ${\bf B}_{\rm fmc}({\bf r},t)$ or a pairing potential $\Delta_{\rm fmc}({\bf r},{\bf r}',t)$. In the latter case, the finite mass correction constitutes the entire potential for phonon-coupled superconductors.
![On the left is a plot of the electronic charge density times $r^2$, as defined in (\[rho\_0\]), versus $r$ for various nuclear masses. In the middle is the charge density evaluated at $r=0$ and $r=1$ plotted as a function of $1/M$. On the right is a plot of the finite mass correction potential, evaluated for $M=2$, plotted alongside the nuclear potential $-1/r$.[]{data-label="hydrogen"}](hydrogen.pdf){width="95.00000%"}
Phonon densities
----------------
We now consider the expansion of the BO PES around $\dub{\bf R}^0$ and assume that the leading order, apart from a constant, is quadratic: $$\begin{aligned}
V_{\rm BO}(\dub{\bf R})=V_{\rm BO}(\dub{\bf R}^0)
+\frac{1}{2}\sum_{I\alpha,J\beta}u_{I\alpha}
K_{I\alpha,J\beta}u_{J\beta}+\cdots\end{aligned}$$ where $K_{I\alpha,J\beta}\equiv
\left.\partial^2V_{\rm BO}/\partial R_{I\alpha}
\partial R_{J\beta}\right|_{\dub{\bf R}^0}$, $\dub{\bf u}\equiv \dub{\bf R}-\dub{\bf R}^0$ and $\alpha$, $\beta$ represent Cartesian directions. The associated classical modes, called phonons, are determined by solving the eigenvalue equation $$\begin{aligned}
\label{evphn}
K{\bf e}_n=\nu_n^2 M{\bf e}_n\end{aligned}$$ for $\nu_n$ and ${\bf e}_n$, where $M_{I\alpha,J\beta}\equiv M_I\delta_{IJ}\delta_{\alpha\beta}$ is the diagonal matrix of nuclear masses. Let $\hat{p}_{I\alpha}\equiv-i\partial_{I\alpha}$ be the momentum operator which acts on a particular nuclear coordinate, then $[\hat{u}_{I\alpha},\hat{p}_{J\beta}]=i\delta_{IJ}\delta_{\alpha\beta}$. We can also define $$\begin{aligned}
\hat{\mathcal{U}}\equiv \mathcal{S}\hat{\bf u} \qquad
\hat{\mathcal{P}}\equiv \mathcal{T}\hat{\bf p},\end{aligned}$$ where $\mathcal{S}=2^{-\frac{1}{2}}\nu^{\frac{1}{2}}{\bf e}^t$, $\mathcal{T}=2^{-\frac{1}{2}}\nu^{-\frac{1}{2}}{\bf e}^t M^{-1}$ and $\nu$ is the diagonal matrix of eigenvalues, then $[\hat{\mathcal{U}},\hat{\mathcal{P}}]=\frac{i}{2}I$ and $\hat{H}^{\rm b}=\hat{\mathcal{P}}^t\nu\hat{\mathcal{P}}
+\hat{\mathcal{U}}^t\nu\hat{\mathcal{U}}$. Writing $$\begin{aligned}
\hat{d}=\hat{\mathcal{U}}+i\hat{\mathcal{P}} \qquad
\hat{d}^{\dag}=\hat{\mathcal{U}}^t-i\hat{\mathcal{P}}^t,\end{aligned}$$ the Hamiltonian is cast in diagonal form $$\begin{aligned}
\hat{H}^{\rm b}=\sum_i\nu_i\left(\hat{d}_i^{\dag}\hat{d}_i+\frac{1}{2}\right).\end{aligned}$$
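The classical eigenvalue problem (\[evphn\]) is straightforward to solve numerically by symmetrizing with $M^{-1/2}$. The sketch below (illustrative parameters: a small 1-D chain with fixed ends and unequal masses, not a system from the paper) recovers the frequencies $\nu_n$ and verifies $K{\bf e}_n=\nu_n^2 M{\bf e}_n$:

```python
import numpy as np

# Classical phonon modes of a small 1-D chain (illustrative parameters):
# solve K e = nu^2 M e by symmetrizing with M^(-1/2).
n = 4
masses = np.array([1.0, 2.0, 1.5, 1.0])
M = np.diag(masses)

# Nearest-neighbour spring matrix with fixed ends (positive definite).
k = 1.0
K = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)

Minv_half = np.diag(masses**-0.5)
nu2, q = np.linalg.eigh(Minv_half @ K @ Minv_half)
nu = np.sqrt(nu2)                 # phonon frequencies nu_n
e = Minv_half @ q                 # mass-weighted eigenvectors e_n

# Each column satisfies the generalized eigenvalue problem K e = nu^2 M e.
resid = K @ e - M @ e * nu2
print(nu, np.max(np.abs(resid)))  # residual at machine precision
```

The eigenvectors come out normalized as ${\bf e}^t M {\bf e}=I$, which is the convention implicit in the transformation matrices $\mathcal{S}$ and $\mathcal{T}$ above.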
We will equate the [*exact*]{} expectation values of nuclear positions, momenta and bilinear combinations thereof with those of a fictitious, non-interacting bosonic system. Thus if the expectation values $\langle\hat{d}_i^{\dag}\rangle$ and $\langle\hat{d}_i\rangle$ are known, then expectation values of the displacement and momentum operators can be reconstructed from $\langle\hat{\bf u}\rangle=\frac{1}{2}\mathcal{S}^{-1}
(\langle\hat{d}^{\dag}\rangle^t+\langle\hat{d}\rangle)$ and $\langle\hat{\bf p}\rangle=\frac{i}{2}\mathcal{T}^{-1}
(\langle\hat{d}^{\dag}\rangle^t-\langle\hat{d}\rangle)$. Bilinear expectation values $\langle\hat{d}_i^{\dag}\hat{d}_j^{\dag}\rangle$, $\langle\hat{d}_i\hat{d}_j\rangle$ and $\langle\hat{d}_i^{\dag}\hat{d}_j\rangle$ can be used to evaluate corresponding products of momentum and position. For instance $$\begin{aligned}
\langle\hat{\bf u}\otimes\hat{\bf p}\rangle
=\frac{i}{4}\mathcal{S}^{-1}\left\langle
(\hat{d}^{\dag})^t\hat{d}^{\dag}
-(\hat{d}^{\dag})^t(\hat{d})^t
-\hat{d}\hat{d}^{\dag}
+\hat{d}(\hat{d})^t\right\rangle(\mathcal{T}^{-1})^t.\end{aligned}$$ Note that in the unperturbed harmonic oscillator ground state, all these expectation values are zero. A further point is that the Hermiticity of the second-quantized bosonic system described below renders some of these expectation values inaccessible, one of which is the nuclear current density. By removing the Hermitian constraint this restriction is lifted.
Algebraic form of the electron and phonon Kohn-Sham equations
=============================================================
In this section, the details of the Kohn-Sham Hamiltonian, such as that in (\[KS\_fm\_r\]), are removed and we focus on the algebraic structure instead. This is done by considering only the matrix elements of the electron and phonon Hamiltonians. In the following section all matrices are taken to be finite in size.
Kohn-Sham Hamiltonian for electrons
-----------------------------------
The most general fermionic Kohn-Sham Hamiltonian of interest here has the form $$\begin{aligned}
\label{Hfm_ks}
\hat{H}_s^{\rm f}=
\sum_{i,j=1}^{n_{\rm f}}A_{ij}\hat{a}_i^{\dag}\hat{a}_j
+B_{ij}\hat{a}_i^{\dag}\hat{a}_j^{\dag}
-B_{ij}^*\hat{a}_i\hat{a}_j,\end{aligned}$$ where $A$ is a Hermitian matrix representing (\[KS\_fm\_r\]); $B$ is antisymmetric and corresponds to the matrix elements of the superconducting pairing potential $\Delta({\bf r},{\bf r}')$. The sum runs to the number of fermionic basis vectors $n_{\rm f}$. The matrix $A$ includes a chemical potential term $A_{ij}\rightarrow A_{ij}+\mu\delta_{ij}$ which is used to fix the total electronic number to $N_{\rm e}$. The Hermitian eigenvalue problem $$\begin{aligned}
\label{hm_bog_fm}
\Bigg(\begin{matrix}
A & B \\
B^{\dag} & -A^*
\end{matrix}\Bigg)
\Bigg(\begin{matrix}
\vec{U}_j \\
\vec{V}_j
\end{matrix}\Bigg)
=\varepsilon_j
\Bigg(\begin{matrix}
\vec{U}_j \\
\vec{V}_j
\end{matrix}\Bigg)\end{aligned}$$ yields $2n_{\rm f}$ solutions. However, if $\varepsilon_j$ and $(\vec{U}_j,\vec{V}_j)$ are an eigenpair, then so are $-\varepsilon_j$ and $(\vec{V}_j^*,\vec{U}_j^*)$. We now select $n_{\rm f}$ eigenpairs, each corresponding to either a positive or a negative eigenvalue, but with its conjugate partner excluded from the set. This choice will not affect the eventual Kohn-Sham ground state. Let $U$ and $V$ be the $n_{\rm f}\times n_{\rm f}$ matrices with these solutions arranged column-wise. Orthogonality of the vectors is then expressed as $$\begin{aligned}
\Bigg(\begin{matrix}
\,U\, & \,V^*\, \\
\,V\, & \,U^*\,
\end{matrix}\Bigg)^{\dag}
\Bigg(\begin{matrix}
\,U\, & \,V^*\, \\
\,V\, & \,U^*\,
\end{matrix}\Bigg)
=I,\end{aligned}$$ which implies $U^{\dag}U+V^{\dag}V=I$ and $U^{\dag}V^*+V^{\dag}U^*=0$. Completeness further implies $UU^{\dag}+V^*V^t=I$ and $UV^{\dag}+V^*U^t=0$. The Hamiltonian (\[Hfm\_ks\]) can now be diagonalized with the aid of $U$ and $V$ via a Bogoliubov transformation: $$\begin{aligned}
\label{bog_tfm}
\begin{split}
\hat{\alpha}_j^{\dag}&=\sum_{i=1}^{n_{\rm f}} U_{ij}\hat{a}_i^{\dag}+V_{ij}\hat{a}_i \\
\hat{\alpha}_j&=\sum_{i=1}^{n_{\rm f}} U_{ij}^*\hat{a}_i+V_{ij}^*\hat{a}_i^{\dag},
\end{split}\end{aligned}$$ in other words $$\begin{aligned}
\hat{H}_s=\sum_{i=1}^{n_{\rm f}}\varepsilon_i\hat{\alpha}_i^{\dag}\hat{\alpha}_i+W_0,\end{aligned}$$ where $W_0=-{\rm tr}(V\varepsilon V^{\dag})$. The fermionic algebra is also preserved for $\hat{\alpha}$: $$\begin{aligned}
\label{alpha_acr}
\bigl\{\hat{\alpha}_i,\hat{\alpha}_j^{\dag}\bigr\}=\delta_{ij} \qquad
\bigl\{\hat{\alpha}_i,\hat{\alpha}_j\bigr\}=0 \qquad
\bigl\{\hat{\alpha}_i^{\dag},\hat{\alpha}_j^{\dag}\bigr\}=0.\end{aligned}$$
### Non-interacting ground state
Given $A$ and $B$, the matrices $U$, $V$ and $\varepsilon$ are fixed by the Kohn-Sham-Bogoliubov equations (\[hm\_bog\_fm\]). What remains is to construct from these the eigenstates of (\[Hfm\_ks\]) in the Fock space. To do so, one first needs to find a normalized vacuum state which is annihilated by all the $\hat{\alpha}_j$. It is given by (denoted $|\bar{0}\rangle$ so as to distinguish it from the normal vacuum state $|0\rangle$): $$\begin{aligned}
|\bar{0}\rangle\equiv\prod_{j=1}^{n_{\rm f}}\hat{U}_j\prod_{k=1}^{n_{\rm f}}
\hat{a}_k^{\dag}|0\rangle+\prod_{j=1}^{n_{\rm f}}\hat{V}_j^{\dag}|0\rangle,\end{aligned}$$ where $\hat{U}_j\equiv\sum_i U_{ij}^*\hat{a}_i$ and $\hat{V}_j^{\dag}\equiv\sum_i V_{ij}^*\hat{a}_i^{\dag}$. It is readily verified that $\hat{\alpha}_j|\bar{0}\rangle=0$ for all $j$; the vacuum has the correct normalisation $\langle\bar{0}|\bar{0}\rangle=1$; and the vacuum energy $\langle\bar{0}|H_s|\bar{0}\rangle=W_0$. The non-interacting many-body ground state can be constructed in analogy with the usual fermionic situation. Let $M$ be the number of $\varepsilon_j<0$, then the ground state $$\begin{aligned}
\label{gs_fm}
|\Phi_0\rangle=\prod_{j=1}^M\hat{\alpha}_j^{\dag}|\bar{0}\rangle,\end{aligned}$$ so that $$\begin{aligned}
\hat{H}_s|\Phi_0\rangle=E_0^s|\Phi_0\rangle,\end{aligned}$$ where $E_0^s=\sum_{j=1}^M\varepsilon_j+W_0$.
Normal and anomalous densities
------------------------------
To determine the densities, both normal and anomalous, one first has to find the expectation values of pairs of $\hat{a}$ and $\hat{a}^{\dag}$. These in turn are linear combinations of expectation values of pairs of $\hat{\alpha}$ and $\hat{\alpha}^{\dag}$. Using the anti-commutation relations (\[alpha\_acr\]) and remembering that $\hat{\alpha}|\bar{0}\rangle=0$, we get $$\begin{aligned}
\label{alpha_mat_1}
\langle\Phi_0|\hat{\alpha}_i^{\dag}\hat{\alpha}_j|\Phi_0\rangle=
\begin{cases}
\delta_{ij} & i,j\le M \\
0 & i,j>M
\end{cases} \qquad
\langle\Phi_0|\hat{\alpha}_i\hat{\alpha}_j^{\dag}|\Phi_0\rangle=
\begin{cases}
0 & i,j\le M \\
\delta_{ij} & i,j>M
\end{cases}\end{aligned}$$ and $$\begin{aligned}
\label{alpha_mat_2}
\langle\Phi_0|\hat{\alpha}_i^{\dag}\hat{\alpha}_j^{\dag}|\Phi_0\rangle=0
\qquad
\langle\Phi_0|\hat{\alpha}_i\hat{\alpha}_j|\Phi_0\rangle=0.\end{aligned}$$ Equations (\[bog\_tfm\]), (\[alpha\_mat\_1\]) and (\[alpha\_mat\_2\]) give the normal and anomalous density matrices: $$\begin{aligned}
\label{dm_fm}
\langle\Phi_0|\hat{a}_i^{\dag}\hat{a}_j|\Phi_0\rangle=
\sum_{k=1}^M U_{ik}^*U_{jk}+\sum_{k=M+1}^{n_{\rm f}}V_{ik}V_{jk}^*\end{aligned}$$ and $$\begin{aligned}
\label{dma_fm}
\langle\Phi_0|\hat{a}_i^{\dag}\hat{a}_j^{\dag}|\Phi_0\rangle=
\sum_{k=1}^M U_{ik}^*V_{jk}+\sum_{k=M+1}^{n_{\rm f}}V_{ik}U_{jk}^*.\end{aligned}$$
### Time evolution
What remains is to determine how the Kohn-Sham state evolves with time in the time-dependent density functional theory (TDDFT) version of the method. The form of the ground state equations dictates that of the time-dependent equations. Thus if we assume that the matrices $A$ and $B$ are now functions of time, then the time-dependent generalization of the orbital equation (\[hm\_bog\_fm\]) is $$\begin{aligned}
\label{hmt_bog_fm}
i\frac{\partial}{\partial t}
\Bigg(\begin{matrix}
\vec{U}_j \\
\vec{V}_j
\end{matrix}\Bigg)
=
\Bigg(\begin{matrix}
A(t) & B(t) \\
B^{\dag}(t) & -A^*(t)
\end{matrix}\Bigg)
\Bigg(\begin{matrix}
\vec{U}_j \\
\vec{V}_j
\end{matrix}\Bigg)\end{aligned}$$ with the Kohn-Sham state given by $|\Phi(t)\rangle=\prod_{i=1}^M\hat{\alpha}_i^{\dag}(t)|\bar{0}\rangle$. It is easy to show that this state satisfies $$\begin{aligned}
i\frac{\partial |\Phi(t)\rangle}{\partial t}
=\left(\sum_{ij}A_{ij}(t)\hat{a}_i^{\dag}\hat{a}_j
+B_{ij}(t)\hat{a}_i^{\dag}\hat{a}_j^{\dag}
-B_{ij}^*(t)\hat{a}_i\hat{a}_j\right)|\Phi(t)\rangle\end{aligned}$$ with $|\Phi(t=0)\rangle=|\Phi_0\rangle$. Note that the number of ‘occupied orbitals’ $M$ remains constant with time. Here we have assumed that the system has evolved from its ground state.
Kohn-Sham Hamiltonian for phonons
---------------------------------
The most general bosonic Kohn-Sham Hamiltonian of interest here has the form $$\begin{aligned}
\label{Hbs_ks}
\hat{H}_s^{\rm b}=\sum_{ij}D_{ij}\hat{d}_i^{\dag}\hat{d}_j
+\tfrac{1}{2}E_{ij}\hat{d}_i^{\dag}\hat{d}_j^{\dag}
+\tfrac{1}{2}E_{ij}^*\hat{d}_i\hat{d}_j
+\sum_i F_i\hat{d}_i^{\dag}+F_i^*\hat{d}_i,\end{aligned}$$ where $D$ is Hermitian and contains the kinetic energy operator; $E$ is a complex symmetric matrix and $F$ is a complex vector. Note that $\hat{H}_{\rm KS}^{\rm b}$ contains the anomalous terms $\hat{d}_i^{\dag}\hat{d}_j^{\dag}$ and $\hat{d}_i\hat{d}_j$. In analogy with the fermionic case, this Hamiltonian can be diagonalized $$\begin{aligned}
\label{Hbs_bog}
\hat{H}_s^{\rm b}=\sum_{i=1}^{n_{\rm b}}
\omega_i\hat{\gamma}_i^{\dag}\hat{\gamma}_i+\Omega_0\end{aligned}$$ with the Bogoliubov-type transformation $$\begin{gathered}
\label{bog_bs}
\begin{split}
\hat{\gamma}_j=\sum_{i=1}^{n_{\rm b}} W_{ij}^*\hat{d}_i+X_{ij}^*\hat{d}_i^{\dag}+y_j^* \\
\hat{\gamma}_j^{\dag}=\sum_{i=1}^{n_{\rm b}} W_{ij}\hat{d}_i^{\dag}+X_{ij}\hat{d}_i+y_j,
\end{split}\end{gathered}$$ where $W$ and $X$ are complex matrices and $y$ is a complex vector. The index $j$ runs from $1$ to twice the number of bosonic modes. Requiring that $\hat{\gamma}$ and $\hat{\gamma}^{\dag}$ obey bosonic algebra (the complex numbers $y_j$ obviously commute with themselves and the operators, maintaining the algebra) yields $$\begin{aligned}
W^{\dag}W-X^{\dag}X=I \label{wx_cond1} \\
W^tX-X^tW=0. \label{wx_cond2}\end{aligned}$$ After some manipulation, we arrive at the Kohn-Sham-Bogoliubov equations for phonons: $$\begin{aligned}
\label{hm_bog_bs}
\Bigg(\begin{matrix}
D & -E \\
E^* & -D^*
\end{matrix}\Bigg)
\Bigg(\begin{matrix}
\vec{W}_j \\
\vec{X}_j
\end{matrix}\Bigg)
=\omega_j
\Bigg(\begin{matrix}
\vec{W}_j \\
\vec{X}_j
\end{matrix}\Bigg).\end{aligned}$$ The above equation can not be reduced to a symmetric eigenvalue problem because the conditions (\[wx\_cond1\]) and (\[wx\_cond2\]) correspond to the indefinite inner product $\eta={\rm diag}(1,\ldots,1,-1,\ldots,-1)$. Such matrix Hamiltonians can still possess real eigenvalues [@Sudarshan1961; @Mostafazadeh2002].
### Real case
We now consider the special case where the matrices $D$ and $E$ are real symmetric and the vector $F$ is also real. The bosonic Hamiltonian can be written as $$\begin{aligned}
\label{Hbs_ks_r}
\hat{H}_s^{\rm b}=\sum_{ij}D_{ij}\hat{d}_i^{\dag}\hat{d}_j
+\tfrac{1}{2}E_{ij}\left(\hat{d}_i^{\dag}\hat{d}_j^{\dag}
+\hat{d}_i\hat{d}_j\right)
+\sum_i F_i\left(\hat{d}_i^{\dag}+\hat{d}_i\right).\end{aligned}$$ We now prove that under certain conditions, the matrix equation (\[hm\_bog\_bs\]) always possesses $n_{\rm b}$ solutions which satisfy (\[wx\_cond1\]) and (\[wx\_cond2\]). This requires the observation that if the vector $v\equiv(w,x)$ with eigenvalue $\omega$ is a solution to (\[hm\_bog\_bs\]), then so is $\bar{v}\equiv(x,w)$ with eigenvalue $-\omega$.
Let $$\begin{aligned}
H=
\Bigg(\begin{matrix}
D & -E \\
E & -D
\end{matrix}\Bigg),\end{aligned}$$ where $D$ and $E$ are real symmetric $n_{\rm b}\times n_{\rm b}$ matrices. Suppose $H$ has only real, non-degenerate eigenvalues and every eigenvector $v$ satisfies $v^t\eta v\ne 0$. Then
1. The eigenvectors of $H$ may be chosen real.
2. The eigenvalue equation (\[hm\_bog\_bs\]) has exactly $n_{\rm b}$ solutions which satisfy the conditions (\[wx\_cond1\]) and (\[wx\_cond2\]).
The proof that the eigenvectors may be chosen real is straight-forward, so we now prove the second statement. Let $v_1$ and $v_2$ be two real eigenvectors of $H$ with corresponding real eigenvalues $\omega_1$ and $\omega_2$. Now $Hv_1=\omega_1v_1\Rightarrow \eta Hv_1=\omega_1\eta v_1$ and because $\eta H$ is symmetric we have $v_1^t\eta H=\omega_1v_1^t\eta$ and thus $v_1^t\eta Hv_2=\omega_1v_1^t\eta v_2$. We also have that $Hv_2=\omega_2v_2$ and so $v_1^t\eta Hv_2=\omega_2v_1^t\eta v_2$. Subtracting and using the fact that $\omega_1\ne\omega_2$ yields $v_1^t\eta v_2=0$. This is equivalent to the off-diagonal part of condition (\[wx\_cond1\]). Consider an eigenvector $v=(w,x)$ of $H$. Now $v^t\eta v\ne 0$, thus if $v^t\eta v<0$ then choose the other eigenvector $\bar{v}$ for which $\bar{v}^t\eta \bar{v}>0$. Such an eigenvector can be rescaled arbitrarily to ensure $v^t\eta v=1$. This corresponds to the diagonal part of (\[wx\_cond1\]) but is valid for only half of the total number of eigenvectors since rescaling cannot change the sign of $v^t\eta v$. These remaining vectors are discarded. Condition (\[wx\_cond2\]) is trivially satisfied for the diagonal. For any two vectors $v_i$ and $v_j$ suppose $v_j\ne\bar{v}_i$ then $\bar{v}_j=v_k$ for some other $k$. The off-diagonal part of condition (\[wx\_cond1\]) is satisfied for all vectors, thus $v_i^t\eta v_k=v_i^t\eta \bar{v}_j=0$. If $v_j=\bar{v}_i$ then one of these vectors will have been discarded.
The theorem is easily extended to the case where $H$ has degenerate eigenvalues. There is no guarantee that the eigenvalues of $H$ are real since the matrix is not Hermitian. We therefore need additional restrictions on the matrices $D$ and $E$ to ensure this; the following conditions are sufficient but not necessary. We use the notation $P\succ 0$ to mean that the symmetric matrix $P$ is positive definite, and that $P\succ Q$ implies $P-Q\succ 0$.
\[th\_LH\] Let $D\succ 0$, and suppose that $E$ is a symmetric matrix. If any of the following are true then $H$ has real eigenvalues:
1. $D\succ E D^{-1}E$.\[pos1\]
2. The largest eigenvalue of $(ED^{-1})^2$ is less than $1$.\[pos2\]
3. $z^{\dag}Dz>|z^{\dag}Ez|$ for all $z\in\mathbb{C}^{n_{\rm b}}$.\[pos3\]
4. $E\succ 0$ and $D\succ E$.\[pos4\]
5. $E\succ 0$ and $D^p\succ E^p$, where $p\ge 1$.\[pos5\]
6. $D^2\succ E^2$.\[pos6\]
Furthermore, if all eigenvalues are non-zero then all eigenvectors satisfy $v^t\eta v\ne 0$.
Let $\omega$ and $v$ be an eigenvalue and eigenvector of $H$. The matrix $$\begin{aligned}
\eta H=
\Bigg(\begin{matrix}
D & -E \\
-E & D
\end{matrix}\Bigg)\end{aligned}$$ is symmetric, therefore both sides of $v^{\dag}\eta H v=\omega v^{\dag}\eta v$ are real. The only requirement for $\omega$ to be real is that $v^{\dag}\eta H v$ be non-zero, which is ensured so long as $\eta H\succ 0$. This follows from either of the conditions \[pos1\] or \[pos2\] (see, for example, Ref. [@Horn1990]). Condition \[pos3\] follows from Theorem 2.1 in Ref [@Fitzgerald1977] and \[pos4\] follows immediately. The Löwner-Heinz theorem [@Zhan2002] reduces condition \[pos5\] to \[pos4\]. Finally, suppose $D^2\succ E^2$ where $E$ may not be positive definite. $E$ is symmetric therefore $E^2\succ 0$ which means that there exists a symmetric matrix $e\succ 0$ such that $e^2=E^2$. The Löwner-Heinz theorem implies that $D\succ e$, therefore $z^{\dag}Dz > z^{\dag}ez$ for all complex vectors $z\in\mathbb{C}^{n_{\rm b}}$. $E$ and $e$ can be simultaneously diagonalized and for each eigenvalue $\lambda$ of $E$ there is a corresponding positive eigenvalue $|\lambda|$ of $e$. In this eigenvector basis, it is easy to see that $z^{\dag}ez\ge|z^{\dag}Ez|$ for all $z$ which in turn gives condition \[pos3\], thereby proving \[pos6\]. In fact, all of the above conditions imply [@Fitzgerald1977] that $\eta H\succ 0$. Thus if all eigenvalues $\omega\ne 0$ then $v^t\eta v\ne 0$.
\[cor\_psd\] Let $D_0\succ 0$ and $E\succeq 0$ (positive semi-definite) then $D=D_0+E$ yields real eigenvalues for $H$.
Let $D$ be an arbitrary real symmetric matrix and let $f$ be a real function such that $|f(x)|<|x|$ for all $x\in\mathbb{R}$, then by setting $E=f(D)$ (in the usual ‘function of matrices’ sense [@Rinehart1955]) $H$ has real eigenvalues and every eigenvector $v$ satisfies $v^t\eta v\ne 0$.
We first note that $$\begin{aligned}
H^2=
\Bigg(\begin{matrix}
D^2-E^2 & [E,D] \\
[E,D] & D^2-E^2
\end{matrix}\Bigg).\end{aligned}$$ It is obvious for any $E=f(D)$ that $[E,D]=0$ and $D^2\succ E^2$. Therefore all the eigenvalues of $H^2$ are real and positive. We conclude that the eigenvalues of $H$ are real and non-zero, thus $v^t\eta v\ne 0$ follows from Theorem \[th\_LH\].
Let $D$ be a real symmetric matrix which has no zero eigenvalues and which commutes with all the matrices in a group representation $S=\{S_i\}$. Further suppose that any degenerate eigenvalues of $D$ correspond only to irreducible representations of $S$ (i.e. there are no accidental degeneracies). If $E$ is a real symmetric matrix which also commutes with all the matrices in $S$ then there exists a $\xi>0$ such that if $E\rightarrow \xi E$ then $H(\xi)$ has real eigenvalues.
From the properties of the determinant applied to blocked matrices, the eigenvalues of $H^2$ are also the eigenvalues of $Q\coloneqq D^2-E^2+[E,D]$. Since $[D,S_i]=[E,S_i]=0$ for all $i$ then $D^2$, $E^2$, $[E,D]$ and thus $Q(\xi)$ also commute with $S_i$. Schur’s lemma applies equally well to non-Hermitian matrices therefore the degeneracies of $Q(\xi)$ are not lost as $\xi$ increases. We also note that the roots of a polynomial depend continuously on its coefficients and hence the eigenvalues of $Q(\xi)$ depend continuously on $\xi$. From the conjugate root theorem, if $Q(\xi)$ has a complex eigenvalue then it must also have its complex conjugate as an eigenvalue. For sufficiently small $\xi>0$ the eigenvalues of $D^2$ cannot become complex because this would require lifting of a degeneracy. Also because of continuity and because $D^2$ has strictly positive eigenvalues, a sufficiently small $\xi>0$ will keep them positive. Hence the eigenvalues of $H(\xi)$ are real.
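These sufficient conditions are easy to probe numerically. The sketch below (illustrative random matrices, not from the paper) builds a pair $(D,E)$ satisfying condition \[pos6\] of Theorem \[th\_LH\], confirming a real spectrum, and then deliberately violates the condition with $D=I$, $E=2I$, for which the eigenvalues are $\pm i\sqrt{3}$:

```python
import numpy as np

# Numerical probe of Theorem [th_LH] (illustrative): when D^2 - E^2 is
# positive definite the spectrum of H is real; when the condition fails
# badly, complex eigenvalues appear.
def spectrum(D, E):
    H = np.block([[D, -E], [E, -D]])
    return np.linalg.eigvals(H)

rng = np.random.default_rng(3)
n = 5
E = rng.normal(size=(n, n))
E = (E + E.T) / 2
c = 1.1 * np.linalg.norm(E, 2)           # ||E||_2 < c, so (cI)^2 - E^2 > 0
D = c * np.eye(n)

w_good = spectrum(D, E)
print(np.max(np.abs(w_good.imag)))       # zero up to round-off: real spectrum

# Violate the condition: D = I, E = 2I gives eigenvalues +/- i*sqrt(3).
w_bad = spectrum(np.eye(2), 2.0 * np.eye(2))
print(np.max(np.abs(w_bad.imag)))        # sqrt(3): complex spectrum
```

The first case is covered by condition \[pos6\] since $D^2-E^2=c^2I-E^2\succ 0$; the second case shows the conditions are not vacuous.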
Once these equations are solved, the vector $y$ is determined from $$\begin{aligned}
\label{hm_bog_bs2}
y=\omega^{-1}\left(W^t-X^t\right)F,\end{aligned}$$ where $\omega={\rm diag}(\omega_1,\ldots,\omega_{n_{\rm b}})$. The constant term in (\[Hbs\_bog\]) is given by $$\begin{aligned}
\Omega_0=-{\rm tr}\left(X\omega X^{\dag}\right)-y^{\dag}\omega y.\end{aligned}$$
### Existence and nature of the vacuum state
We now show that the state which is annihilated by all the $\hat{\gamma}_i$ exists. Let $$\begin{aligned}
\hat{w}_j\coloneqq\sum_{i=1}^{n_{\rm b}} W_{ij}^*\hat{d}_i \qquad
\hat{x}_j^{\dag}\coloneqq\sum_{i=1}^{n_{\rm b}} X_{ij}^*\hat{d}_i^{\dag}\end{aligned}$$ then $$\begin{aligned}
\left[\hat{w}_j,\hat{x}_j^{\dag}\right]=\sum_{i=1}^{n_{\rm b}} W_{ij}^*X_{ij}^*
\eqqcolon \tau_j.\end{aligned}$$ Now consider the eigenvalue equation $$\begin{aligned}
\label{eig_bs}
\left(\hat{w}_j+\hat{x}_j^{\dag}\right)|\bar{0}_j\rangle=-y_j^*|\bar{0}_j\rangle.\end{aligned}$$ Using the ansatz $$\begin{aligned}
\label{coh_bs}
|\bar{0}_j\rangle=\sum_{n=0}^{\infty}
\frac{\kappa_n^j}{n!}(\hat{x}_j^{\dag})^n|0\rangle,\end{aligned}$$ we obtain a recurrence relation $$\begin{aligned}
\kappa_n^j=\left[-y_j^*\kappa_{n-1}^j-(n-1)\kappa_{n-2}^j\right]/\tau_j\end{aligned}$$ with $y_j^*\kappa_0^j=-\kappa_1^j\tau_j$ and $\kappa_0^j$ chosen so that $\langle\bar{0}_j|\bar{0}_j\rangle=1$. Note that if $\kappa_n^j=1$ for all $n$ then (\[coh\_bs\]) is a coherent state. The vacuum state $$\begin{aligned}
\label{gs_bs}
|\bar{0}\rangle=\zeta\hat{S}\bigotimes_{j=1}^{n_{\rm b}}|\bar{0}_j\rangle,\end{aligned}$$ where $\zeta$ is a normalization constant and $\hat{S}$ is the symmetrizing operator, is annihilated by all $\hat{\gamma}_j$ and, because $\omega_j>0$ for all $j$, is also the bosonic Kohn-Sham ground state, which is the lowest energy Fock space eigenstate of (\[Hbs\_ks\]), as required.
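The recurrence for $\kappa_n^j$ can be checked in a truncated Fock space for a single mode. The sketch below (illustrative real numbers, not taken from the text; normalizability requires $|X/W|<1$ here) builds $|\bar{0}_j\rangle$ from the recurrence and verifies the eigenvalue equation (\[eig\_bs\]) component by component, using numpy.

```python
import numpy as np

nmax = 60                    # Fock-space truncation (illustrative)
W, X, y = 1.3, 0.4, 0.25     # single-mode Bogoliubov data, all real, |X/W| < 1
tau = W * X                  # tau_j = sum_i W_ij^* X_ij^* reduces to W*X for one mode

# kappa recurrence from the text, seeded by y^* kappa_0 = -kappa_1 tau
kappa = np.zeros(nmax)
kappa[0] = 1.0
kappa[1] = -y * kappa[0] / tau
for n in range(2, nmax):
    kappa[n] = (-y * kappa[n - 1] - (n - 1) * kappa[n - 2]) / tau

# Fock amplitudes of |0_bar> = sum_n kappa_n/n! (x^dag)^n |0>, using (a^dag)^n|0> = sqrt(n!)|n>
fact = np.cumprod(np.concatenate([[1.0], np.arange(1.0, nmax)]))
amp = kappa * X ** np.arange(nmax) / np.sqrt(fact)
amp /= np.linalg.norm(amp)   # fixes kappa_0 via <0_bar|0_bar> = 1

# residual of the eigenvalue equation (w + x^dag)|0_bar> = -y^*|0_bar>,
# whose m-th Fock component reads W sqrt(m+1) amp[m+1] + X sqrt(m) amp[m-1] = -y amp[m]
m = np.arange(nmax - 1)
lhs = W * np.sqrt(m + 1.0) * amp[m + 1]
lhs[1:] += X * np.sqrt(m[1:]) * amp[m[1:] - 1]
residual = np.max(np.abs(lhs + y * amp[:-1]))
```

The truncation converges rapidly here because the amplitudes decay like a squeezed-state tail.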
### Phononic observables and time evolution
To make the theory useful, observables which are products of the original $c_i$ and $c_i^{\dag}$ operators have to be computed. After some straightforward algebra one finds that linear operators may be evaluated using $$\begin{aligned}
\label{bs_obs1}
Y_i\coloneqq\langle\bar{0}|\hat{d}_i|\bar{0}\rangle=
\langle\bar{0}|\hat{d}_i^{\dag}|\bar{0}\rangle^*=
\sum_{j=1}^{n_{\rm b}} X_{ij}^*y_j-W_{ij}y_j^*.\end{aligned}$$ Observables which are quadratic are more complicated: $$\begin{aligned}
\label{bs_obs2}
\begin{split}
\langle\bar{0}|\hat{d}_i^{\dag}\hat{d}_j|\bar{0}\rangle=
Y_i^*Y_j+\left(XX^{\dag}\right)_{ij} \qquad
\langle\bar{0}|\hat{d}_i\hat{d}_j^{\dag}|\bar{0}\rangle=
Y_iY_j^*+\left(WW^{\dag}\right)_{ij} \\
\langle\bar{0}|\hat{d}_i^{\dag}\hat{d}_j^{\dag}|\bar{0}\rangle=
Y_i^*Y_j^*-\left(XW^{\dag}\right)_{ij} \qquad
\langle\bar{0}|\hat{d}_i\hat{d}_j|\bar{0}\rangle=
Y_iY_j-\left(WX^{\dag}\right)_{ij}.
\end{split}\end{aligned}$$ The extension to the time-dependent case follows the same procedure as that for fermions, namely that the matrices $D$ and $E$ and the vector $F$ in (\[Hbs\_ks\]) become time-dependent and, consequently, so do $\hat{\gamma}_i^{\dag}$ and $|\bar{0}\rangle$ after solving the equation of motion $$\begin{aligned}
\label{hmt_bog_bs1}
i\frac{\partial}{\partial t}
\Bigg(\begin{matrix}
\vec{W}_j \\
\vec{X}_j
\end{matrix}\Bigg)=
\Bigg(\begin{matrix}
D(t) & -E(t) \\
E^*(t) & -D^*(t)
\end{matrix}\Bigg)
\Bigg(\begin{matrix}
\vec{W}_j \\
\vec{X}_j
\end{matrix}\Bigg).\end{aligned}$$ This time evolution is not unitary but pseudo-unitary [@Mostafazadeh2002b]: in general it does not preserve ordinary vector norms, but it does preserve the indefinite inner product. The vector $y$ can be determined analogously from $$\begin{aligned}
\label{hmt_bog_bs2}
i\frac{\partial y}{\partial t}=\left(W^t(t)-X^t(t)\right)F(t).\end{aligned}$$ Evolving (\[hmt\_bog\_bs1\]) and (\[hmt\_bog\_bs2\]) in time is equivalent to doing the same for the second-quantized Hamiltonian and the Fock space state vector: $$\begin{aligned}
i\frac{\partial |\Psi(t)\rangle}{\partial t}
=\left(\sum_{ij}D_{ij}(t)\hat{d}_i^{\dag}\hat{d}_j
+\tfrac{1}{2}E_{ij}(t)\hat{d}_i^{\dag}\hat{d}_j^{\dag}
+\tfrac{1}{2}E_{ij}^*(t)\hat{d}_i\hat{d}_j
+\sum_i F_i(t)\hat{d}_i^{\dag}+F_i^*(t)\hat{d}_i\right)|\Psi(t)\rangle.\end{aligned}$$
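The pseudo-unitarity of this propagation can be demonstrated compactly. The sketch below (illustrative real symmetric $D$ and $E$, with $E$ kept small so the spectrum stays real; numpy assumed) checks that $M^{\dag}\eta=\eta M$ for the generator of (\[hmt\_bog\_bs1\]), and that the resulting propagator conserves the indefinite inner product while failing to be unitary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def sym(a):
    return 0.5 * (a + a.T)

# illustrative real symmetric blocks (E kept small so the spectrum stays real)
D = sym(rng.standard_normal((n, n)))
E = 0.1 * sym(rng.standard_normal((n, n)))

# generator of (hmt_bog_bs1) for real D, E: M = [[D, -E], [E, -D]]
M = np.block([[D, -E], [E, -D]])
eta = np.diag(np.concatenate([np.ones(n), -np.ones(n)]))

# eta-pseudo-Hermiticity M^dag eta = eta M holds by construction
ph_err = np.linalg.norm(M.conj().T @ eta - eta @ M)

# propagator U = exp(-i M t) via eigendecomposition
t = 1.0
vals, vecs = np.linalg.eig(M)
U = vecs @ np.diag(np.exp(-1j * vals * t)) @ np.linalg.inv(vecs)

# U preserves the indefinite inner product but is not unitary
pseudo_err = np.linalg.norm(U.conj().T @ eta @ U - eta)
unit_err = np.linalg.norm(U.conj().T @ U - np.eye(2 * n))
```

The nonzero `unit_err` is the point of the indefinite metric: ordinary norms are not conserved, only $v^{\dag}\eta v$.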
### Numerical aspects
In order to determine the phonon ground state or perform time evolution with (\[hmt\_bog\_bs1\]) for real systems, we require a numerical algorithm for finding the eigenvalues and eigenvectors of (\[hm\_bog\_bs\]). This is not a symmetric or Hermitian eigenvalue problem, and while a general non-symmetric solver could be employed, a simple modification of Jacobi’s method diagonalizes the matrix efficiently.
Let $G(i,j,\theta)$ be a Givens rotation matrix, i.e. for $i<j$, $G_{kk}=1$ for $k\ne i,j$, $G_{kk}=\cos\theta$ for $k=i,j$, $G_{ji}=-G_{ij}=\sin\theta$, and all other entries zero. Further define the hyperbolic Givens rotation, $G^{\rm h}(i,j,\theta)$, which is the same except that $G_{kk}=\cosh\theta$ for $k=i,j$ and $G_{ji}=G_{ij}=\sinh\theta$. The Givens and hyperbolic Givens rotations can be combined to diagonalize the matrix in (\[hm\_bog\_bs\]). For $i<j$ with $1<j\le 2n_{\rm b}$ we define a combined Givens rotation, $G^{\rm c}(i,j,\theta)$, as $G^{\rm c}=G^{\rm h}$ for $i\le n_{\rm b}$ and $j>n_{\rm b}$; and $G^{\rm c}(i,j,\theta)=G(i,j,\theta)G(i+n_{\rm b},j+n_{\rm b},\theta)$ for $i,j\le n_{\rm b}$.
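What makes these rotations admissible is that each variant is $\eta$-orthogonal, $G^t\eta G=\eta$, so applying them does not spoil the pseudo-unitary structure. A quick numpy check (all sizes and angles illustrative):

```python
import numpy as np

def givens(n, i, j, theta):
    """Ordinary Givens rotation G(i, j, theta)."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[j, i], G[i, j] = s, -s
    return G

def hgivens(n, i, j, theta):
    """Hyperbolic Givens rotation G^h(i, j, theta)."""
    G = np.eye(n)
    c, s = np.cosh(theta), np.sinh(theta)
    G[i, i] = G[j, j] = c
    G[j, i] = G[i, j] = s
    return G

nb = 3
N = 2 * nb
eta = np.diag([1.0] * nb + [-1.0] * nb)

# hyperbolic rotation when the pair straddles the two blocks (i <= nb < j) ...
Gh = hgivens(N, 0, nb + 1, 0.7)
# ... an ordinary rotation within one block ...
G1 = givens(N, 0, 2, 0.3)
# ... and the paired rotation G(i,j,theta) G(i+nb, j+nb, theta) for i,j <= nb
Gc = givens(N, 0, 1, 0.3) @ givens(N, nb, nb + 1, 0.3)
```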
A pair of real, symmetric matrices $A$, $B$ is called a positive definite pair if there exists a real $\mu$ such that $A-\mu B$ is positive definite.
Let $\eta H$ and $\eta$ be a positive definite pair. Then applying the combined Givens rotations to $H$ with row-cyclic strategy results in convergence to a diagonal matrix.
See Veselić [@veselic93] for a proof.
### Solids
Solid state calculations normally use periodic boundary conditions and Bloch orbitals. Phonon displacements are of the form $$\begin{aligned}
\label{uphn}
\mathcal{U}_{n{\bf q}}({\bf R})=N_q^{-1/2}
2^{-\frac{1}{2}}\nu_{n{\bf q}}^{\frac{1}{2}}{\bf e}_{n{\bf q}}e^{i{\bf q}\cdot{\bf R}},\end{aligned}$$ where ${\bf q}$ is a wave vector in the first Brillouin zone, $n$ labels a phonon branch, ${\bf R}$ is a primitive lattice vector and ${\bf e}_{n{\bf q}}$ is determined along with $\nu_{n{\bf q}}$ by solving (\[evphn\]) for each ${\bf q}$-vector individually. These displacements are thus complex-valued but by noting that $\nu_{n-{\bf q}}=\nu_{n{\bf q}}$ and ${\bf e}_{n-{\bf q}}={\bf e}_{n{\bf q}}^*$ we can form their real-valued counterparts $$\begin{aligned}
\mathcal{U}_{n{\bf q}}^{(+)}({\bf R})=
\frac{1}{\sqrt{2}}\left(\mathcal{U}_{n{\bf q}}({\bf R})
+\mathcal{U}_{n-{\bf q}}({\bf R})\right) \qquad
\mathcal{U}_{n{\bf q}}^{(-)}({\bf R})=
\frac{-i}{\sqrt{2}}\left(\mathcal{U}_{n{\bf q}}({\bf R})
-\mathcal{U}_{n-{\bf q}}({\bf R})\right).\end{aligned}$$ These are the displacements to which $\hat{d}_i$ and $\hat{d}_i^{\dag}$ refer and will thus keep the phonon Hamiltonian in (\[Hbs\_ks\_r\]) real. An approximate electron-phonon vertex is obtained as a by-product of a phonon calculation: $$\begin{aligned}
\Gamma_{i{\bf k}+{\bf q},j{\bf k},n{\bf q}}
=\frac{1}{2}\langle\varphi_{j{\bf k}+{\bf q}}|
\partial\hat{V_s}/\partial\mathcal{U}_{n{\bf q}}|\varphi_{i{\bf k}}\rangle\end{aligned}$$ where $\hat{V_s}$ is the Kohn-Sham potential and the derivative is with respect to the magnitude of the displacement in (\[uphn\]). This is not Hermitian in the indices $i$ and $j$ because the potential derivative corresponds to a complex displacement. The vertex associated with $\mathcal{U}_{n{\bf q}}^{(\pm)}$ has the form $$\begin{aligned}
\label{hvertex}
\bordermatrix{
& {\bf k}-{\bf q} & {\bf k} & {\bf k}+{\bf q} \cr
{\bf k}-{\bf q} & 0 & \Gamma_{n-{\bf q}} & 0 \cr
{\bf k} & \Gamma_{n-{\bf q}}^{\dag} & 0 & \Gamma_{n{\bf q}} \cr
{\bf k}+{\bf q} & 0 & \Gamma_{n{\bf q}}^{\dag} & 0}\end{aligned}$$ which is a Hermitian matrix for all ${\bf q}$ and $n$.
One final point regarding solids is the requirement of keeping the electronic densities lattice periodic. This implies that the potentials $A$ and $B$ should only couple the Bloch vector ${\bf k}$ with itself.
Mean-field functionals
======================
The final (and possibly most difficult) step in this theory is the determination of the potentials represented by the matrices $A$, $B$, $D$, $E$ and the vector $F$. In principle, these are chosen to reproduce the exact conditional density $\rho_{\dub{\bf R}^0}({\bf r},t)$ in (\[rho\_0\]) as well as the phononic expectation values $\langle\hat{d}_i\rangle$, $\langle\hat{d}_i^{\dag}\hat{d}_j\rangle$, etc., which themselves reproduce exact nuclear positions, momenta and so on. In practice, these potentials need to be approximated and here we will employ a simple mean-field approach by considering the lowest order diagrams which enter the self-energy. These are plotted in Fig. \[fig\_fd\] and involve the normal and anomalous Kohn-Sham electronic Green’s functions $i\mathcal{G}_{ij}(t,t')
=\langle\Phi_0|T[\hat{a}_i(t)\hat{a}_j^{\dag}(t')]|\Phi_0\rangle$ and $i\mathcal{F}_{ij}(t,t')
=\langle\Phi_0|T[\hat{a}_i^{\dag}(t)\hat{a}_j^{\dag}(t')]|\Phi_0\rangle$, etc., as well as the phonon propagators $i\mathcal{C}_i(t)=\langle\bar{0}|\hat{d}_i^{\dag}(t)|\bar{0}\rangle$, etc. and $i\mathcal{D}_{ij}(t,t')
=\langle\bar{0}|T[\hat{d}_i(t)\hat{d}_j^{\dag}(t')]|\bar{0}\rangle$, etc. These quantities are evaluated around their respective Kohn-Sham ground states, (\[gs\_fm\]) and (\[gs\_bs\]). The quantity $A_0$ is given by the matrix elements of the single particle Hamiltonian in (\[KS\_fm\_r\]) without $V_{\rm fmc}$, and $D_0=\nu$.
![The lowest order contributions to the self-energy from the vertex $\Gamma=\raisebox{-2pt}{\mbox{\includegraphics[height=11pt]{Gamma.pdf}}}$, the normal and anomalous Green’s functions $\mathcal{G}={\mbox{\includegraphics[height=6pt]{G.pdf}}}$ and $\mathcal{F}={\mbox{\includegraphics[height=6pt]{F.pdf}}}$, and the phonon propagators $\mathcal{C}={\mbox{\includegraphics[height=7pt]{C.pdf}}}$ and $\mathcal{D}={\mbox{\includegraphics[height=7pt]{D.pdf}}}$. These are evaluated in the static limit as mean-field potentials for $A$, $B$, $D$, $E$ and $F$.[]{data-label="fig_fd"}](diagrams.pdf){width="\textwidth"}
Explicit expressions for the potentials are found by substituting instantaneous densities or density matrices of the electrons and phonons for the retarded correlation functions in the diagrams. For example, the electronic state would be affected by the phonon system via the expectation values of the phonon operators, yielding a contribution to $A$: $$\begin{aligned}
A_{ij}^2(t)=\sum_k\Gamma_{ijk}\left(\langle\hat{d}_k^{\dag}\rangle_t
+\langle\hat{d}_k\rangle_t\right),\end{aligned}$$ where the expectation values are evaluated with (\[bs\_obs1\]) and $\Gamma_{ijk}$ is shorthand for the vertex in (\[hvertex\]). At first glance, the matrix $A^3$ appears to be an improper part of the self-energy which is already accounted for by $A^2$. Such a term is nonetheless relevant for solids, since $A^2$ can only ever couple ${\bf k}$ with itself. The Green’s function line in $A^3$, however, can carry momentum ${\bf q}\ne 0$ and yet have the potential preserve lattice periodicity.
The mean-field potential that gives rise to superconductivity is a little more complicated: $$\begin{aligned}
B_{ij}^2(t)=-\sum_{klmn}\Gamma_{ikl}\Gamma_{mjn}
\left(\langle\hat{a}_m^{\dag}\hat{a}_k^{\dag}\rangle_t
+\langle\hat{a}_m\hat{a}_k\rangle_t\right)
\left(\langle\hat{d}_l^{\dag}\hat{d}_n^{\dag}\rangle_t
+\langle\hat{d}_l^{\dag}\hat{d}_n\rangle_t
+\langle\hat{d}_l\hat{d}_n^{\dag}\rangle_t
+\langle\hat{d}_l\hat{d}_n\rangle_t\right),\end{aligned}$$ where the density matrices are determined from (\[dma\_fm\]) and (\[bs\_obs2\]). The potential represented by $F$ would be $$\begin{aligned}
F_k^1(t)=\sum_{ij}\Gamma_{ijk}\gamma_{ij}(t)\end{aligned}$$ where $\gamma_{ij}(t)=\langle\hat{a}_i^{\dag}\hat{a}_j\rangle_t$ is the electronic one-reduced density matrix calculated using (\[dm\_fm\]). The matrix $E^1$ is evaluated as: $$\begin{aligned}
\label{mat_e1}
E_{ij}^1(t)=\sum_{klmn}\Gamma_{kli}\Gamma_{mnj}
\gamma_{kn}(t)\gamma_{ml}(t).\end{aligned}$$ This matrix should be positive semi-definite in order to satisfy Corollary \[cor\_psd\] and guarantee real eigenvalues for the bosonic Hamiltonian in (\[Hbs\_ks\_r\]).
The matrix $E^1$ is positive semi-definite.
We first note that $\Gamma_{kli}=\Gamma_{lki}^*$ for all $i$, i.e. $\Gamma$ is Hermitian in the electronic indices. Since $\Gamma_{kli}\Gamma_{mnj}\gamma_{kn}\gamma_{ml}$ and $\Gamma_{lki}\Gamma_{nmj}\gamma_{lm}\gamma_{nk}
=\Gamma_{kli}^*\Gamma_{mnj}^*\gamma_{ml}^*\gamma_{kn}^*$ both appear in the sum in (\[mat\_e1\]), $E^1$ must be real and symmetric. Let $v$ be a real vector of the same dimension as $E^1$; then $R_{kl}\equiv\sum_i v_i \Gamma_{kli}$ is also Hermitian. The quantity $s\equiv v^t E^1 v$ can be written as $s={\rm tr}(R^{\dag}\gamma R^{\dag}\gamma)$. Let $U$ be the unitary transformation that diagonalizes $\gamma$ and define $\tilde{\gamma}$ by ${\rm diag}(\tilde{\gamma})\equiv U^{\dag}\gamma U$ and $\tilde{R}\equiv U^{\dag}RU$, so that $s={\rm tr}(\tilde{R}^{\dag}\tilde{\gamma}\tilde{R}^{\dag}\tilde{\gamma})$ remains unchanged. One of the $N$-representable properties[@coleman63] of $\gamma$ is that its eigenvalues satisfy $0\le\tilde{\gamma}_i\le 1$. Then $s=\sum_{kl}|\tilde{R}_{kl}|^2\tilde{\gamma}_k\tilde{\gamma}_l\ge 0$. Since $v$ was chosen arbitrarily we conclude that $E^1$ is positive semi-definite.
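The proof can be corroborated numerically: with a random vertex that is Hermitian in its electronic indices and a random $N$-representable $\gamma$, the matrix built from (\[mat\_e1\]) comes out real, symmetric and positive semi-definite. A sketch (dimensions and data purely illustrative, numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
ne, nb = 4, 3   # electronic / phonon dimensions (illustrative)

# random vertex, made Hermitian in the electronic indices: Gamma_{kli} = Gamma_{lki}^*
G = rng.standard_normal((ne, ne, nb)) + 1j * rng.standard_normal((ne, ne, nb))
G = 0.5 * (G + np.conj(G.transpose(1, 0, 2)))

# N-representable one-matrix gamma: Hermitian with eigenvalues in [0, 1]
Q = np.linalg.qr(rng.standard_normal((ne, ne)) + 1j * rng.standard_normal((ne, ne)))[0]
occ = rng.uniform(0.0, 1.0, ne)
gamma = Q @ np.diag(occ) @ Q.conj().T

# E^1_{ij} = sum_{klmn} Gamma_{kli} Gamma_{mnj} gamma_{kn} gamma_{ml}, eq. (mat_e1)
E1 = np.einsum('kli,mnj,kn,ml->ij', G, G, gamma, gamma)
eigs = np.linalg.eigvalsh(0.5 * (E1.real + E1.real.T))
```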
Summary
=======
We have defined Kohn-Sham equations for fermions and bosons which are designed to reproduce conditional electronic densities as well as expectation values of the phonon creation and annihilation operators. Sufficient conditions which guarantee real eigenvalues for the bosonic system were found. In practice, the potential matrix elements $A$, $B$, $D$, $E$ and $F$ can be approximated using mean-field potentials inspired by a diagrammatic expansion of the self-energy. The electron and phonon density matrices are determined either self-consistently in a ground state calculation or via simultaneous propagation in the time-dependent case. Any solution obtained in this way is thus non-perturbative. These equations can be implemented in both finite-system and solid-state codes using quantities determined from linear-response phonon calculations.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank James Annett for pointing out the similarity of our bosonic analysis to that in Ref. [@Colpa1978]. We acknowledge DFG for funding through SPP-QUTIF and SFB-TRR227.
[10]{}
Ali Abedi, Neepa T. Maitra, and E. K. U. Gross. Phys. Rev. Lett., 105:123002, 2010.
E. C. G. Sudarshan. Phys. Rev., 123:2183, 1961.
A. Mostafazadeh. J. Math. Phys., 43(1):205, 2002.
R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1990.
C. H. Fitzgerald and R. A. Horn. , 15:419, 1977.
X. Zhan. Matrix Inequalities. Springer-Verlag, Berlin, 2002.
R. F. Rinehart. Am. Math. Monthly, 62:395, 1955.
A. Mostafazadeh. , 2002.
K. Veselić. Numer. Math., 64:241, 1993.
A. J. Coleman. Rev. Mod. Phys., 35:668, 1963.
J. H. P. Colpa. Physica A, 93:327, 1978.
[^1]: The BO PES is defined to be the ground state electronic eigenvalue obtained from (\[en\_se\]) where the nuclear kinetic operator is removed and the dependence on $\dub{\bf R}$ is parametric.
---
abstract: 'Developing personal robots that can perform a diverse range of manipulation tasks in unstructured environments necessitates solving several challenges for robotic grasping systems. We take a step towards this broader goal by presenting the first RL-based system, to our knowledge, for a mobile manipulator that can (a) achieve targeted grasping generalizing to unseen target objects, (b) learn complex grasping strategies for cluttered scenes with occluded objects, and (c) perform active vision through its movable wrist camera to better locate objects. The system is informed of the desired target object in the form of a single, arbitrary-pose RGB image of that object, enabling the system to generalize to unseen objects without retraining. To achieve such a system, we combine several advances in deep reinforcement learning and present a large-scale distributed training system using synchronous SGD that seamlessly scales to multi-node, multi-GPU infrastructure to make rapid prototyping easier. We train and evaluate our system in a simulated environment, identify key components for improving performance, analyze its behaviors, and transfer to a real-world setup.'
author:
- |
Yasuhiro Fujita, Kota Uenishi, Avinash Ummadisingu, Prabhat Nagarajan,\
Shimpei Masuda, and Mario Ynocente Castro [^1]
bibliography:
- 'IEEEabrv.bib'
- 'citations.bib'
title: '**Distributed Reinforcement Learning of Targeted Grasping with Active Vision for Mobile Manipulators** '
---
Introduction {#sec:intro}
============
Background {#sec:background}
==========
Problem Description and Formulation {#sec:formulation}
===================================
System {#sec:system}
======
Experiments {#sec:experiments}
===========
Related Work {#sec:relatedwork}
============
Conclusion {#sec:conclusion}
==========
We have presented a distributed deep reinforcement learning system for targeted grasping with active vision that can grasp unseen objects in dense clutter. We have shown that some of our proposed extensions and the large-scale distributed training are key to learning efficiently on this challenging task.
While we have also shown a proof-of-concept demonstration of real-world transfer, simulation-to-real (sim2real) transfer of a vision-based manipulation system is a difficult challenge in itself. We expect our system can benefit from recent advances in sim2real transfer to improve its performance in the real-world setup.
Acknowledgment {#acknowledgment .unnumbered}
==============
We thank Koichi Ikeda, Kunihiro Iwamoto, and Takashi Yamamoto from Toyota Motor Corporation for their support of the HSR robots, and Kentaro Imajo for his help in designing and implementing the distributed training software.
[^1]: All authors are with Preferred Networks, Inc., Tokyo, Japan. [@preferred.jp]{}
---
author:
- 'Youichi Yanase[^1], Masahito Mochizuki and Masao Ogata'
title: ' Multi-orbital Analysis on the Superconductivity in ${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ '
---
Introduction
============
Since the discovery of High-[$T_{\rm c}$ ]{}superconductivity [@rf:bednortz] and heavy fermion superconductors [@rf:steglich], the mechanism of superconductivity induced by electron correlation has been one of the central issues in condensed matter physics. In this study, the recently discovered superconductor [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{} is analyzed in detail.
Immediately after the discovery of superconductivity in the water-intercalated cobalt oxides ${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$, [@rf:takada] both experimental [@rf:ong; @rf:sugiyama; @rf:chou; @rf:yoshimura; @rf:kobayashi; @rf:zheng; @rf:ishida; @rf:higemoto; @rf:motohashi; @rf:li; @rf:miyosi; @rf:uemura; @rf:kanigel; @rf:hdyang; @rf:lorenz; @rf:oeschler; @rf:sakurai] and theoretical [@rf:Atanaka; @rf:koshibae; @rf:baskaran; @rf:shastry; @rf:lee; @rf:ogata; @rf:ikeda; @rf:Ytanaka; @rf:honerkamp; @rf:kuroki; @rf:nisikawa; @rf:motrunich] studies have been performed extensively. While some controversial results exist, much experimental evidence for non-$s$-wave superconductivity [@rf:sakurai] has been reported by NMR [@rf:yoshimura; @rf:kobayashi; @rf:zheng; @rf:ishida] and specific heat measurements. [@rf:hdyang; @rf:lorenz; @rf:oeschler] The characteristic behaviors of strongly correlated electron systems have been observed in the non-water-intercalated compounds. [@rf:chou; @rf:li; @rf:miyosi; @rf:ando] The existence of the magnetic phase [@rf:motohashi; @rf:ong; @rf:sugiyama] in ${\rm Na_{x}Co_{}O_{2}}$ with $x \sim 0.75$ also indicates the importance of electron correlation. These compounds have a layered structure like the cuprates [@rf:bednortz] and ruthenates [@rf:maeno], and the two-dimensionality is enhanced by the water intercalation. These pieces of circumstantial evidence indicate that [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}is an unconventional superconductor induced by electron correlation.
Theoretical interest is also stimulated by the symmetry of the crystal structure. In contrast to the square lattice in cuprates and ruthenates, the layer is constructed from a triangular lattice of Co ions. A novel symmetry of Cooper pairing is therefore possible in principle. The $d$-wave superconductivity in cuprate superconductors and the $p$-wave superconductivity in ruthenates are well established. In addition to them, spin triplet $f$-wave superconductivity and spin singlet $i$-wave superconductivity are possible from the analysis of pairing symmetry (see Table I).
The effect of frustration, which is characteristic of spin systems on the triangular lattice, has also attracted much attention. The RVB theory has been applied to the triangular lattice [@rf:baskaran; @rf:shastry; @rf:lee; @rf:ogata] and basically concludes spin singlet $d$-wave superconductivity. The $d_{\rm x^{2}-y^{2}} \pm$ i$d_{\rm xy}$-wave symmetry is then expected below [$T_{\rm c}$ ]{}owing to the six-fold symmetry of the triangular lattice. However, time-reversal symmetry breaking has not been observed so far. [@rf:higemoto] Some authors have pointed out the frustration of charge ordering at the electron filling $n=4/3$, [@rf:lee] and $f$-wave superconductivity due to charge fluctuations has been discussed. [@rf:Ytanaka; @rf:motrunich]
Another interesting property of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}is the orbital degeneracy. The conduction band of this material mainly consists of the three $t_{\rm 2g}$-orbitals of the Co ions, which hybridize with the O2p-orbitals. Thus far, most theoretical studies on the superconductivity have been performed on the basis of single-orbital models. These investigations have successfully achieved a microscopic understanding of the cuprate, organic and ruthenate superconductors. [@rf:yanasereview] However, we consider that a theoretical analysis including the orbital degeneracy is highly desirable in order to understand a variety of superconductors including [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}and heavy fermion compounds. The superconductivity in $d$-electron systems provides a favorable subject for theoretical development along this line, because a simpler electronic structure is expected compared to heavy fermion superconductors. Although Sr$_2$RuO$_4$ has been a valuable compound in this sense, the orbital degree of freedom is not important for the basic mechanism of its superconductivity. [@rf:nomura; @rf:yanaseRuSO] In this study, we show that the orbital degeneracy plays an essential role in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}, in contrast to the ruthenate superconductor. We conclude that [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}is a typical multi-orbital superconductor in this sense.
We adopt a perturbative method for the unconventional superconductivity, [@rf:yanasereview] which is a systematic approach to the electron correlation. Note that the spin fluctuation theory, [@rf:moriyaAD] which is widely used for superconductivity, is microscopically formulated in this method. This approach is expected to be reliable from the weak to intermediate coupling region. Before the discovery of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}, this method had been applied to the single-orbital triangular lattice model, and the $d$-wave, [@rf:vojta] $f$-wave [@rf:kuroki2001] and $p$-wave superconductivity [@rf:nisikawa2002] were obtained. Some authors have applied this calculation to [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}, and reported spin singlet $d$- or $i$-wave superconductivity, [@rf:honerkamp] spin triplet $f$-wave superconductivity [@rf:kuroki] and a near degeneracy between $d$- and $f$-wave superconductivity. [@rf:nisikawa] We consider that this puzzling problem should be resolved by a multi-orbital analysis involving the microscopic aspects of the electronic structure.
In this paper, we analyze a multi-orbital Hubbard model constructed from the three Co $t_{\rm 2g}$-orbitals. This model appropriately reproduces the electronic structure obtained in the LDA calculation. [@rf:singh; @rf:pickett] The wave function of the quasi-particles, which is neglected in single-orbital Hubbard models, is appropriately taken into account in this multi-orbital model. We show that the momentum dependence of this wave function plays an essential role in the mechanism of superconductivity. We determine the most stable superconducting state with use of the perturbation theory. According to the results of the second order perturbation (SOP), third order perturbation (TOP) and renormalized third order perturbation (RTOP) theories, it is concluded that the spin triplet $p$-wave or $f$-wave superconductivity is stable in a wide region of the parameter space. The pairing interaction is closely related to the ferromagnetic character of the spin susceptibility, although the pairing interaction is not simply described by the spin susceptibility as in the single-orbital model. [@rf:yanasereview] While the momentum dependence of the spin susceptibility is usually not remarkable in frustrated systems, the ferromagnetic character clearly appears in the present case owing to the orbital degree of freedom.
From a comparison with single-orbital Hubbard models, the important roles of the orbital degeneracy are illuminated in §4.1. Alternatively, we propose a reduced two-orbital model including the $e_{\rm g}$-doublet in §4.2. It is shown that the results for the superconductivity are appropriately reproduced in this simplified model. On the basis of the two-orbital model, we investigate the roles of the vertex correction terms in §5. We show that the vertex correction term, which significantly enhances the spin triplet pairing in Sr$_{2}$RuO$_{4}$, [@rf:nomura] is not important in the case of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. Thus, the superconducting instability is basically described within the SOP. Therefore, we first explain in detail the results of the SOP in §3, and discuss the reduced models in §4 and the role of vertex corrections in §5.
Multi-orbital model
===================
First, we construct a multi-orbital model for [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. [@rf:motiduki] We consider a two-dimensional model which represents the Co ions on the triangular lattice. Note that the superconductivity occurs when the two-dimensionality is enhanced by the water-intercalation. We also note that the conduction band mainly consists of Co [$t_{\rm 2g}$-orbitals]{}. [@rf:singh; @rf:pickett] Co ion is enclosed by an octahedron of oxygens and nearest neighbor Co ions share the edge of the octahedron. We describe the dispersion relation by using a tight-binding model and adopt a multi-orbital Hubbard Hamiltonian written as, $$\begin{aligned}
&& H_{3} = H_{0}+H_{{\rm I}},
\\
&& H_{0} = \sum_{i,j,s} \sum_{a,b} t_{a,b,i,j}
c_{i,a,s}^{\dag} c_{j,b,s},
\\
&& H_{{\rm I}} =
U \sum_{i} \sum_{a} n_{i,a,\uparrow} n_{i,a,\downarrow}
+ U' \sum_{i} \sum_{a>b} n_{i,a} n_{i,b}
\nonumber \\
&& \hspace{10mm}
- {J_{{\rm H}}}\sum_{i} \sum_{a>b} (2 {\mbox{\boldmath$S$}}_{i,a} {\mbox{\boldmath$S$}}_{i,b} + \frac{1}{2} n_{i,a} n_{i,b})
\nonumber \\
&& \hspace{10mm}
+ J \sum_{i} \sum_{a \neq b}
c_{i,a,\downarrow}^{\dag}
c_{i,a,\uparrow}^{\dag}
c_{i,b,\uparrow}
c_{i,b,\downarrow}.
\label{eq:multi-orbital-model}\end{aligned}$$ The first term $H_{0}$ is a tight-binding Hamiltonian where $t_{a,b,i,j}$ are hopping matrix elements. Here, the indices $i$ and $j$ denote the sites in real space and the indices $a$ and $b$ denote the orbitals. We assign the $d_{\rm xy}$-, $d_{\rm yz}$- and $d_{\rm xz}$-orbitals to $a=1$, $a=2$ and $a=3$, respectively. We choose the lattice constant as the unit of length and denote the unit vectors, which form the basis of the triangular lattice, as ${\mbox{\boldmath$a$}}_1=(\sqrt3/2,-1/2)$ and ${\mbox{\boldmath$a$}}_2=(0,1)$. The largest matrix elements are the inter-orbital hoppings through the O2p-orbitals, namely $t_{1,2,i,j}$ for $j=i \pm ({\mbox{\boldmath$a$}}_1+{\mbox{\boldmath$a$}}_2)$, $t_{2,3,i,j}$ for $j=i \pm {\mbox{\boldmath$a$}}_1$ and $t_{1,3,i,j}$ for $j=i \pm {\mbox{\boldmath$a$}}_2$. If we keep only these largest matrix elements, the system can be regarded as a superposition of kagome lattices. [@rf:koshibae] However, the long range hopping through the O2p-orbitals and the direct hopping between Co ions are necessary to reproduce the Fermi surface obtained in the LDA calculation.
We take account of the matrix elements within third-nearest-neighbor sites according to the symmetry of orbitals and lattice. They are described by nine parameters from $t_1$ to $t_9$. The non-interacting Hamiltonian is described in the matrix representation, $$\begin{aligned}
\label{eq:three-band-model-kinetic}
&& H_0 = \sum_{{\mbox{\boldmath$k$}},s} c_{{\mbox{\boldmath$k$}},s}^{\dag} \hat{H}({\mbox{\boldmath$k$}}) c_{{\mbox{\boldmath$k$}},s},
\\
&& \hat{H}({\mbox{\boldmath$k$}}) =
\left(
\begin{array}{ccc}
\varepsilon_{11}({\mbox{\boldmath$k$}}) & \varepsilon_{12}({\mbox{\boldmath$k$}}) & \varepsilon_{13}({\mbox{\boldmath$k$}})\\
\varepsilon_{21}({\mbox{\boldmath$k$}}) & \varepsilon_{22}({\mbox{\boldmath$k$}}) & \varepsilon_{23}({\mbox{\boldmath$k$}})\\
\varepsilon_{31}({\mbox{\boldmath$k$}}) & \varepsilon_{32}({\mbox{\boldmath$k$}}) & \varepsilon_{33}({\mbox{\boldmath$k$}})\\
\end{array}
\right), \end{aligned}$$ where $c_{{\mbox{\boldmath$k$}},s}^{\dag}=
(c_{{\mbox{\boldmath$k$}},1,s}^{\dag},c_{{\mbox{\boldmath$k$}},2,s}^{\dag},c_{{\mbox{\boldmath$k$}},3,s}^{\dag})$ is a vector representation of the Fourier transformed creation operators with spin $s$. The matrix elements are obtained as, $$\begin{aligned}
\label{eq:e11}
&& \hspace{-10mm}
\varepsilon_{11}({\mbox{\boldmath$k$}}) = 2 t_1 \cos k_1 + 2 t_2 (\cos k_2 +\cos k_3)
\nonumber \\
&& \hspace{-5mm}
+2 t_4 (\cos(k_1 - k_3)+\cos(k_1-k_2)) + 2 t_5 \cos 2 k_1,
\\
&& \hspace{-10mm}
\varepsilon_{22}({\mbox{\boldmath$k$}}) = 2 t_1 \cos k_2 + 2 t_2 (\cos k_1 +\cos k_3)
\nonumber \\
&& \hspace{-5mm}
+2 t_4 (\cos(k_1 - k_2)+\cos(k_2 - k_3)) + 2 t_5 \cos 2 k_2,
\\
&& \hspace{-10mm}
\varepsilon_{33}({\mbox{\boldmath$k$}}) = 2 t_1 \cos k_3 + 2 t_2 (\cos k_1 +\cos k_2)
\nonumber \\
&& \hspace{-5mm}
+2 t_4 (\cos(k_1 - k_3)+\cos(k_2 - k_3)) + 2 t_5 \cos 2 k_3,
\\
&& \hspace{-10mm}
\varepsilon_{12}({\mbox{\boldmath$k$}}) = 2 t_3 \cos k_3 + 2 t_6 \cos 2 k_3
+2 t_7 \cos(k_1 - k_3)
\nonumber \\
&& \hspace{0mm}
+ 2 t_8 \cos(k_2 - k_3) + t_9 \cos(k_1-k_2)
- e_{\rm c}/3,
\\
&& \hspace{-10mm}
\varepsilon_{13}({\mbox{\boldmath$k$}}) = 2 t_3 \cos k_2 + 2 t_6 \cos 2 k_2
+2 t_7 \cos(k_2 - k_3)
\nonumber \\
&& \hspace{0mm}
+ 2 t_8 \cos(k_1 - k_2) + t_9 \cos(k_1 - k_3)
- e_{\rm c}/3,
\\
&& \hspace{-10mm}
\varepsilon_{23}({\mbox{\boldmath$k$}}) = 2 t_3 \cos k_1 + 2 t_6 \cos 2 k_1
+2 t_7 \cos(k_1 - k_2)
\nonumber \\
\label{eq:e23}
&& \hspace{0mm}
+ 2 t_8 \cos(k_1 - k_3) + t_9 \cos(k_2 - k_3)
- e_{\rm c}/3, \end{aligned}$$ where $k_1=\sqrt{3}/2 k_{\rm x} - 1/2 k_{\rm y}$, $k_2=k_{\rm y}$ and $k_3=-k_1-k_2$. The parameter $e_{\rm c}$ represents the crystal field splitting of the [$t_{\rm 2g}$-orbitals ]{}arising from the distortion of the octahedron. A typical dispersion relation and Fermi surface are shown in Fig. 1. There is a hole pocket enclosing the $\Gamma$-point and six hole pockets near the K-points, which are consistent with LDA calculations. [@rf:singh; @rf:pickett] We choose the unit of energy as $t_{3}=1$ throughout this paper.
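To make the structure of $\hat{H}({\bf k})$ concrete, the matrix elements (\[eq:e11\])–(\[eq:e23\]) can be assembled and diagonalized numerically. The sketch below uses the hopping parameters quoted in the caption of Fig. \[fig:fermisurface\]; the crystal-field splitting is not quoted there, so $e_{\rm c}=0$ is an assumption, and numpy is used for the diagonalization.

```python
import numpy as np

# hopping parameters from the caption of Fig. 1 (t3 = 1 is the unit of energy);
# the crystal-field splitting e_c is not quoted there, so e_c = 0 is an assumption
t1, t2, t3, t4, t5, t6, t7, t8, t9 = 0.08, 0.16, 1.0, 0.24, -0.16, -0.04, 0.16, 0.16, -0.2
ec = 0.0

def H_k(kx, ky):
    """3x3 t2g tight-binding matrix of eqs. (e11)-(e23)."""
    k1 = np.sqrt(3.0) / 2.0 * kx - 0.5 * ky
    k2 = ky
    k3 = -k1 - k2
    c = np.cos
    e11 = 2*t1*c(k1) + 2*t2*(c(k2)+c(k3)) + 2*t4*(c(k1-k3)+c(k1-k2)) + 2*t5*c(2*k1)
    e22 = 2*t1*c(k2) + 2*t2*(c(k1)+c(k3)) + 2*t4*(c(k1-k2)+c(k2-k3)) + 2*t5*c(2*k2)
    e33 = 2*t1*c(k3) + 2*t2*(c(k1)+c(k2)) + 2*t4*(c(k1-k3)+c(k2-k3)) + 2*t5*c(2*k3)
    e12 = 2*t3*c(k3) + 2*t6*c(2*k3) + 2*t7*c(k1-k3) + 2*t8*c(k2-k3) + t9*c(k1-k2) - ec/3
    e13 = 2*t3*c(k2) + 2*t6*c(2*k2) + 2*t7*c(k2-k3) + 2*t8*c(k1-k2) + t9*c(k1-k3) - ec/3
    e23 = 2*t3*c(k1) + 2*t6*c(2*k1) + 2*t7*c(k1-k2) + 2*t8*c(k1-k3) + t9*c(k2-k3) - ec/3
    return np.array([[e11, e12, e13], [e12, e22, e23], [e13, e23, e33]])

bands_at_gamma = np.linalg.eigvalsh(H_k(0.0, 0.0))
```

At the $\Gamma$-point all diagonal elements coincide, as do all off-diagonal ones, so the spectrum splits into a doubly degenerate level and a singlet, consistent with the $e_{\rm g}$/$a_{\rm 1g}$ decomposition introduced next.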
Although $e_{\rm c}$ seems to be small, it is useful to introduce a non-degenerate $a_{\rm 1g}$-orbital and doubly-degenerate $e_{\rm g}$-orbitals. They are defined from the three [$t_{\rm 2g}$-orbitals ]{}as $$\begin{aligned}
\label{eq:e1}
|e_{\rm g}, 1> = \frac{1}{\sqrt{2}}(|{\rm xz}>-|{\rm yz}>),
\\
\label{eq:e2}
|e_{\rm g}, 2> = \frac{1}{\sqrt{6}}(2|{\rm xy}>-|{\rm xz}>-|{\rm yz}>),
\\
\label{eq:a1g}
|a_{\rm 1g}> = \frac{1}{\sqrt{3}}(|{\rm xy}>+|{\rm xz}>+|{\rm yz}>).\end{aligned}$$ The wave function of the $a_{\rm 1g}$-orbital spreads along the [*c*]{}-axis, while those of the $e_{\rm g}$-orbitals spread within the two-dimensional plane. We will show later that this representation is appropriate for understanding the mechanism of superconductivity (§4.2).
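As a quick consistency check, the coefficients in eqs. (\[eq:e1\])-(\[eq:a1g\]) form a real orthogonal (unitary) transformation of the three [$t_{\rm 2g}$-orbitals]{}; a minimal sketch verifying this:

```python
import numpy as np

# Rows: coefficients of |eg,1>, |eg,2>, |a1g> in the (|xy>, |xz>, |yz>)
# basis, copied from the equations above.
U = np.array([
    [0.0,            1/np.sqrt(2), -1/np.sqrt(2)],   # |eg,1>
    [2/np.sqrt(6),  -1/np.sqrt(6), -1/np.sqrt(6)],   # |eg,2>
    [1/np.sqrt(3),   1/np.sqrt(3),  1/np.sqrt(3)],   # |a1g>
])
orthonormal = np.allclose(U @ U.T, np.eye(3))
```

Orthonormality guarantees that the interaction term $H_{\rm I}$, which is invariant under local unitary transformations, has the same form in this basis.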
![(a) Fermi surfaces and (b) dispersion relation obtained from the tight-binding Hamiltonian. The dashed line in (a) shows the first Brillouin zone. The parameters are chosen to be $(t_1,t_2,t_3,t_4,t_5,t_6,t_7,t_8,t_9)=
(0.08,0.16,1,0.24,-0.16,-0.04,0.16,0.16,-0.2)$. []{data-label="fig:fermisurface"}](Fig1tate.eps){width="7cm"}
The hole pocket around the $\Gamma$-point in Fig. \[fig:fermisurface\](a) mainly consists of the $a_{\rm 1g}$-orbital, and the six hole pockets near the K-points mainly consist of the $e_{\rm g}$-orbitals. Thus, we denote these Fermi surfaces as the $a_{\rm 1g}$-Fermi surface and the $e_{\rm g}$-Fermi surface, respectively. This nature of the Fermi surface is consistent with LDA calculations. [@rf:singh; @rf:pickett] Note that recent ARPES measurements [@rf:hasan; @rf:yang] for non-superconducting Na$_x$CoO$_{2}$ observed the $a_{\rm 1g}$-Fermi surface, but the $e_{\rm g}$-Fermi surface has not been found. The Fermi surface of water-intercalated Na$_x$CoO$_{2}$ is not clear at present. Moreover, the valence of the Co ion in superconducting materials is also under debate. [@rf:karppinen] Therefore, we investigate a wide region of the parameter space and study the possible pairing instabilities. It is one of the goals of this paper to study the relation between the electronic state and superconductivity. It will be shown that the superconductivity is hard to stabilize when the $e_{\rm g}$-Fermi surface vanishes.
The second term $H_{{\rm I}}$ describes the short range Coulomb interactions, which include the intra-orbital repulsion $U$, inter-orbital repulsion $U'$, Hund’s rule coupling ${J_{{\rm H}}}$ and pair hopping term $J$. The relations $U=U'+{J_{{\rm H}}}+J$ and ${J_{{\rm H}}}=J$ are satisfied in a simple estimation. Under these conditions, the interaction term $H_{{\rm I}}$ is invariant under the local unitary transformation between orbitals, which will be used later. If these relations are violated, the symmetry of the triangular lattice is artificially broken. Therefore, we impose these relations throughout this paper. Although possible roles of the long range Coulomb interaction have been investigated, [@rf:baskaran; @rf:Ytanaka; @rf:motrunich] we concentrate on the short range interaction in this paper.
Note that previous studies based on a perturbative method for cuprates, organics and ruthenates have succeeded in identifying the dominant scattering process leading to the superconductivity. [@rf:yanasereview] This theory is complementary to the fluctuation theory represented by the random phase approximation (RPA) or the fluctuation exchange approximation (FLEX). Generally speaking, the fluctuation theory is appropriate in the vicinity of magnetic or other instabilities, because the critical enhancement of the fluctuation is taken into account. On the other hand, the perturbation theory is more appropriate when the critical enhancement of any particular fluctuation is absent, because all terms of the same order are taken into account without any prejudice. We perform the second order perturbation as well as the third order perturbation in this paper. The results of the FLEX study, which are qualitatively consistent with ours, will be published elsewhere. [@rf:motiduki]
Second Order Perturbation
=========================
Details of calculation and classification of pairing symmetry
-------------------------------------------------------------
In this section, we investigate the superconducting instability by using the [$\acute{{\rm E}}$liashberg ]{}equation within the second order perturbation (SOP). The basic procedure has been explained in the literature [@rf:yanasereview] and the extension to the multi-orbital model is straightforward. The [$\acute{{\rm E}}$liashberg ]{}equation is described by the Green function and the effective interaction. The latter is represented by an irreducible four point vertex in the particle-particle channel (Fig. 2(a)). The second order terms in the effective interaction are diagrammatically represented in Figs. 2(b-e). In the case of the single-orbital Hubbard model, this term is simply expressed as $V(k,k')=U^{2}\chi_{0}(k-k')$ for spin singlet pairing and $V(k,k')=-U^{2}\chi_{0}(k-k')$ for spin triplet pairing, with the bare spin susceptibility $\chi_{0}(k-k')$. In the multi-orbital model, however, the four point vertex carries orbital indices, $V_{abcd}(k,k')$ (see Fig. 2(a)), and is calculated from the possible combinations of Coulomb interactions and Green functions.
![ (a) Diagrammatic representation of the effective interaction leading to the superconductivity. (b-e) The second order terms with respect to the Coulomb interactions (dashed lines). The solid line denotes the Green function having the indices of spin and orbital. []{data-label="fig:diagram"}](Fig2.eps){width="8cm"}
In order to make the following discussions clear, we introduce a unitary transformation $\hat{U}({\mbox{\boldmath$k$}})=(u_{ij}({\mbox{\boldmath$k$}}))$ which diagonalizes $\hat{H}({\mbox{\boldmath$k$}})$, namely $$\begin{aligned}
\label{eq:unitary}
\hat{U}^{\dag}({\mbox{\boldmath$k$}}) \hat{H}({\mbox{\boldmath$k$}}) \hat{U}({\mbox{\boldmath$k$}})
=
\left(
\begin{array}{ccc}
E_1({\mbox{\boldmath$k$}}) & 0 & 0\\
0 & E_2({\mbox{\boldmath$k$}}) & 0\\
0 & 0 & E_3({\mbox{\boldmath$k$}})\\
\end{array}
\right).\end{aligned}$$ Here, we choose $E_1({\mbox{\boldmath$k$}}) \leq E_2({\mbox{\boldmath$k$}}) \leq E_3({\mbox{\boldmath$k$}})$. Using these matrix elements, the Green function in the orbital basis, $\hat{G}(k) =
({\rm i}\omega_{n} \hat{1} - \hat{H}({\mbox{\boldmath$k$}}))^{-1}$, is written as $$\begin{aligned}
\label{eq:Green-function}
G_{ij}(k)=\sum_{\alpha=1}^{3} u_{i\alpha}({\mbox{\boldmath$k$}}) u_{j\alpha}({\mbox{\boldmath$k$}}) G_{\alpha}(k),\end{aligned}$$ where $G_{\alpha}(k)=\frac{1}{{\rm i}\omega_{n}-
E_{\alpha}(\mbox{{\scriptsize \boldmath$k$}})}$.
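Eq. (\[eq:Green-function\]) is simply the spectral decomposition of the matrix inverse. A minimal numerical check, with a random real-symmetric matrix standing in for $\hat{H}({\mbox{\boldmath$k$}})$ (so not the actual model Hamiltonian), confirms that the two forms agree:

```python
import numpy as np

# Spectral form of the Green function vs. direct matrix inversion.
# A random symmetric 3x3 matrix plays the role of H(k) here.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3))
H = (H + H.T) / 2
w_n = 0.7                                     # a Matsubara frequency i*w_n

E, U = np.linalg.eigh(H)                      # columns of U are u_{i,alpha}
G_direct = np.linalg.inv(1j * w_n * np.eye(3) - H)
G_spectral = sum(np.outer(U[:, a], U[:, a]) / (1j * w_n - E[a])
                 for a in range(3))
```

The agreement holds for any Hermitian $\hat{H}({\mbox{\boldmath$k$}})$ and any frequency off the real axis.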
In the following, we denote the energy band described by the dispersion relation $E_3({\mbox{\boldmath$k$}})$ as the $\gamma$-band. As shown in Fig. 1, the $\gamma$-band crosses the Fermi level, while the others lie below it. Therefore, the superconducting transition is induced by Cooper pairing in the $\gamma$-band. In this case, the [$\acute{{\rm E}}$liashberg ]{}equation is written in terms of an effective interaction within the $\gamma$-band, $$\begin{aligned}
\lambda_{\rm e} \Delta(k) = - \sum_{k'} V(k,k') |G_{3}(k')|^{2} \Delta(k'),
\label{eq:eliashberg-equation}\end{aligned}$$ with $$\begin{aligned}
\label{eq:effective-interaction}
&& \hspace{-8mm}
V(k,k')=\sum_{abcd} u_{a3}({\mbox{\boldmath$k$}}) u_{b3}(-{\mbox{\boldmath$k$}}) V_{abcd}(k,k')
u_{c3}({\mbox{\boldmath$k$}}') u_{d3}(-{\mbox{\boldmath$k$}}').
\nonumber
\\\end{aligned}$$ The [$\acute{{\rm E}}$liashberg ]{}equation (eq. (\[eq:eliashberg-equation\])) is regarded as an eigenvalue equation, and $\lambda_{\rm e}$ represents the maximum eigenvalue. The superconducting transition temperature is determined by the criterion $\lambda_{\rm e}=1$.
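The linear-algebra structure of this eigenvalue problem can be illustrated with a toy discretization on a single momentum ring. The separable attraction $V$ and the weight $|G_3|^2$ below are made up for illustration; they are not the interaction computed in this paper.

```python
import numpy as np

# Toy discretization of the gap equation lambda*Delta(k) =
#   -sum_k' V(k,k') |G_3(k')|^2 Delta(k') on N ring momenta.
N = 64
k = np.linspace(-np.pi, np.pi, N, endpoint=False)
V = -0.1 * np.cos(k[:, None] - k[None, :])    # hypothetical triplet attraction
G2 = 1.0 / (0.01 + 0.2 * np.sin(k) ** 2)      # stand-in for |G_3(k')|^2

M = -V * G2[None, :] / N                      # kernel acting on Delta(k')
lam = np.linalg.eig(M)[0].real.max()          # lambda_e of this toy kernel
```

Lowering the temperature sharpens $|G_3|^2$ and grows the maximum eigenvalue; $T_{\rm c}$ is reached when it hits $1$, exactly as in the criterion above.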
Here, we have ignored the normal self-energy, which is important for a quantitative estimation of [$T_{\rm c}$]{}. However, the qualitative nature of the superconductivity, such as the pairing symmetry and the pairing mechanism, is not affected in many cases, including cuprates, ruthenates and organics. [@rf:yanasereview] This is highly expected in the case of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}, unless the electronic structure is significantly affected by the normal self-energy. We will show that the volume of the [$e_{\rm g}$-Fermi surface ]{}, denoted as $n_{\rm e}$ below, is an important parameter for the pairing symmetry. Therefore, it is possible that the pairing symmetry is affected by the normal self-energy through the modification of $n_{\rm e}$. It is, however, expected that the following results remain valid even in this case by regarding the $n_{\rm e}$ modified by the normal self-energy as the relevant parameter.
  ------- ------------ -----------------------------------------------------------------
  A$_1$   $s$-wave     $1$
  E$_2$   $d$-wave     $\sin \frac{\sqrt{3}}{2}k_{\rm x} \sin \frac{1}{2}k_{\rm y}$
  A$_2$   $i$-wave     $\sin \frac{3\sqrt{3}}{2}k_{\rm x} \sin \frac{1}{2}k_{\rm y} + \sin \frac{\sqrt{3}}{2}k_{\rm x} \sin \frac{5}{2}k_{\rm y} - \sin \sqrt{3} k_{\rm x} \sin 2 k_{\rm y}$
  E$_1$   $p$-wave     $\sin \frac{\sqrt{3}}{2}k_{\rm x} \cos \frac{1}{2}k_{\rm y}$
  B$_1$   $f_1$-wave   $\sin \frac{1}{2}k_{\rm y} (\cos \frac{\sqrt{3}}{2}k_{\rm x} - \cos \frac{1}{2}k_{\rm y})$
  B$_2$   $f_2$-wave   $\sin \frac{\sqrt{3}}{2}k_{\rm x} (\cos \frac{\sqrt{3}}{2}k_{\rm x} - \cos \frac{3}{2}k_{\rm y})$
  ------- ------------ -----------------------------------------------------------------
: Classification of the pairing symmetry in the triangular lattice. The first column shows the irreducible representations of D$_6$ group. The second column shows the notation adopted in this paper. The $s$-wave, $p$-wave, [*etc*]{} are the counterparts of the isotropic system. The third column shows the typical wave function of Cooper pairs.
Before showing the results, it is necessary to classify the pairing symmetry. The symmetry of Cooper pairs is classified into $s$-, $p$-, $d$-wave [*etc.*]{} in the case of an isotropic system like $^3$He. For metals, the Cooper pairing is classified into a finite number of species according to the symmetry of the crystal. [@rf:sureview] We show the classification for the triangular lattice in Table I. We denote “$s$-wave”, “$d$-wave” [*etc.*]{} in analogy with the isotropic case. While the $s$-, $d$- and $i$-wave are spin singlet pairings, the $p$-, $f_1$- and $f_2$-wave are spin triplet pairings. Note that a two-fold degeneracy remains in the $p$- and $d$-wave states, namely $p_{\rm x}$- and $p_{\rm y}$-wave, and $d_{\rm xy}$- and $d_{\rm x^2-y^2}$-wave, respectively. Time-reversal-symmetry-breaking is expected below [$T_{\rm c}$ ]{} in the $d$-wave state, as discussed in the RVB theory. [@rf:baskaran; @rf:shastry; @rf:lee; @rf:ogata] On the contrary, time-reversal-symmetry is not necessarily broken in the $p$-wave case because there is an internal degree of freedom representing the direction of $S=1$, as discussed for Sr$_2$RuO$_4$. [@rf:yanaseRuSO]
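The representation labels in Table I can be checked numerically: under a $60^{\circ}$ rotation of ${\mbox{\boldmath$k$}}$, a B$_1$ basis function ($f_1$-wave) changes sign, while an A$_2$ one ($i$-wave) is invariant. A small sketch using the wave functions of Table I (the test momentum is arbitrary):

```python
import numpy as np

s3 = np.sqrt(3.0)

def f1(kx, ky):        # B1 basis function from Table I
    return np.sin(ky / 2) * (np.cos(s3 * kx / 2) - np.cos(ky / 2))

def iwave(kx, ky):     # A2 basis function from Table I
    return (np.sin(3 * s3 / 2 * kx) * np.sin(ky / 2)
            + np.sin(s3 / 2 * kx) * np.sin(5 * ky / 2)
            - np.sin(s3 * kx) * np.sin(2 * ky))

def rot60(kx, ky):     # 60-degree rotation in momentum space
    return 0.5 * kx - s3 / 2 * ky, s3 / 2 * kx + 0.5 * ky

kx, ky = 1.0, 0.5      # arbitrary test momentum
kxr, kyr = rot60(kx, ky)
```

The partners of the two-dimensional E$_1$ and E$_2$ representations mix under rotation, so an analogous single-function test does not apply to them.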
The eigenvalues of the [$\acute{{\rm E}}$liashberg ]{}equation, eq. (\[eq:eliashberg-equation\]), are classified according to the symmetry of the Cooper pairs. The pairing symmetry corresponding to the largest eigenvalue is stabilized below [$T_{\rm c}$]{}. Hereafter, we ignore the possibility of $s$-wave pairing because the strong on-site repulsion will destabilize even the extended $s$-wave pairing. When the symmetry of the crystal is lowered, some candidates in Table I fall into the same irreducible representation. For example, the $d_{\rm xy}$-wave and $s$-wave symmetries belong to the same representation for the anisotropic triangular lattice. [@rf:Ytanakaorganic; @rf:kurokiorganic] However, we can ignore this possibility in the isotropic triangular lattice.
Phase diagram of three-orbital model
------------------------------------
In order to search for possible pairing symmetries in a phase diagram, we introduce two controlling parameters, $a$ and $n_{\rm e}$. Among the hopping matrix elements in eqs. (\[eq:e11\]-\[eq:e23\]), the largest one, namely $t_3$, is fixed to $1$, while the other matrix elements are chosen to be $$\begin{aligned}
\label{eq:minor-matrix}
&& \hspace{-10mm} (t_1,t_2,t_4,t_5,t_6,t_7,t_8,t_9)=
\nonumber \\
&& \hspace{-5mm} a (0.1,0.2,0.3,-0.2,-0.05,0.2,0.2,-0.25). \end{aligned}$$ We choose this parameter set so that the dispersion relation obtained in the LDA calculation [@rf:singh; @rf:pickett] is appropriately reproduced when $a \sim 1$. In the case of $a=0$, the system can be regarded as a superposition of kagome lattices, [@rf:koshibae] but we have to choose $a \geq 0.6$ in order to obtain a realistic Fermi surface. Thus, the parameter $a$ indicates a deviation from the kagome lattice. Although there are many ways to control the minor matrix elements, we have confirmed that the following results are qualitatively independent of this choice.
As another controlling parameter, we use the hole number $n_{\rm e}$ in the $e_{\rm g}$-Fermi surface, which can be altered by adjusting the crystal field splitting $e_{\rm c}$. When we decrease $e_{\rm c}$, the energy of the $e_{\rm g}$-orbitals is lowered and thus $n_{\rm e}$ decreases. We have confirmed that $n_{\rm e}$, rather than the total electron number $n$, is the essential parameter for the following results, which are almost independent of how $n_{\rm e}$ is altered. Note that the total electron number is fixed as $n=5.33$ throughout this paper.
![Phase diagram for (a)$U'={J_{{\rm H}}}=J=U/3$ and (b)$U'=U/2$ and ${J_{{\rm H}}}=J=U/4$. The horizontal and vertical axes are described in the text. The solid line is the phase boundary obtained by the interpolation. []{data-label="fig:phasediagram3D"}](Fig3tate.eps){width="7cm"}
We divide the first Brillouin zone into $128 \times 128$ lattice points and take 512 Matsubara frequencies. We have confirmed that the following results do not depend qualitatively on these numerical details. In the following, the temperature is fixed to $T=0.01$ unless explicitly stated otherwise. It will be shown in Fig. 5 that the stable pairing symmetry is almost independent of the temperature. We fix $U=5$ and change the value of ${J_{{\rm H}}}=J$. Under the reasonable conditions $U=U'+2{J_{{\rm H}}}$ and $U'-{J_{{\rm H}}}> 0$, ${J_{{\rm H}}}=U/3$ is the maximum value of the Hund’s rule coupling.
Figure 3 shows the most stable pairing symmetry in the phase diagram of $a$ and $n_{\rm e}$ for two values of the interaction strength. As shown in Fig. 3(a), the spin triplet $p$-wave superconductivity is stabilized in a wide region of the parameter space when ${J_{{\rm H}}}=U/3$. The $f_1$-wave superconductivity is also stabilized when the $e_{\rm g}$-Fermi surface is very small or very large. For the values of $n_{\rm e}$ expected from the LDA calculation, namely $n_{\rm e}=0.1 \sim 0.3$, we obtain the $p$-wave superconductivity independently of the value of $a$. When the value of the Hund’s rule coupling is decreased (Fig. 3(b)), the $f_1$-wave superconductivity becomes more stable. In both cases the spin triplet superconductivity is stable.
By definition, the $e_{\rm g}$-Fermi surface vanishes in the case of $n_{\rm e}=0$. In this case, it is difficult to determine the pairing state since the tendency toward superconductivity is very weak regardless of the pairing symmetry. On the other hand, the superconductivity is not significantly affected by the disappearance of the $a_{\rm 1g}$-Fermi surface, which occurs at $n_{\rm e}=0.67$.
![ $n_{\rm e}$-dependence of eigenvalues of [$\acute{{\rm E}}$liashberg ]{}equation. We choose $T=0.01$, $a=0.8$ and (a)$U'={J_{{\rm H}}}=J=U/3$ or (b)$U'=U/2$ and ${J_{{\rm H}}}=J=U/4$. []{data-label="fig:ne-dependence"}](Fig4tate.eps){width="7cm"}
In order to make the situation clearer, we show the eigenvalues of the [$\acute{{\rm E}}$liashberg ]{}equation for each pairing symmetry in Fig. 4. The $p$- and $f_1$-wave superconductivities have nearly degenerate eigenvalues in a wide parameter range. If we assume a weak crystal field splitting $e_{\rm c} \sim 0$, we obtain $n_{\rm e} \sim 0.3$, which is consistent with the LDA calculation. The eigenvalue of the $f_1$-wave symmetry shows a minimum around this value. As a result, the $p$-wave superconductivity is stable in this region. As the Hund’s rule coupling decreases, the eigenvalues of both the $p$- and $f_1$-wave symmetries increase, but that of the $f_1$-wave symmetry increases more rapidly (see also Fig. 7). Note that the eigenvalues of the $d$-wave, $i$-wave and $f_2$-wave states are very small compared to the $p$- and $f_1$-wave states. As shown later, the $d$-wave state is stabilized when the Hund’s rule coupling is very small (Figs. 7 and 8).
![ Temperature dependence of eigenvalues of [$\acute{{\rm E}}$liashberg ]{}equation. We choose $a=0.8$, $n_{\rm e}=0.238$, $U'=U/2$ and ${J_{{\rm H}}}=J=U/4$. []{data-label="fig:te-dependence"}](Fig5.eps){width="7cm"}
We see that $\lambda_{\rm e}$ is still less than $1$ at $T=0.01$ (Fig. 4). Therefore, the pairing instability occurs at a lower temperature. Fig. 5 shows the temperature dependence of $\lambda_{\rm e}$ at ${J_{{\rm H}}}=U/4$, $a=0.8$ and $n_{\rm e}=0.238$, where the maximum eigenvalue is $\lambda_{\rm e} \sim 0.7$ at $T=0.01$. We then obtain $\lambda_{\rm e}=1$ at [$T_{\rm c}$ ]{}$ = 0.0037$ for the $p$-wave symmetry. If we assume $t_{3}=200 $meV so that the total band width is $W=1.8$eV, [$T_{\rm c}$ ]{}$ = 0.0037$ corresponds to [$T_{\rm c}$ ]{}$ = 8$K, consistent with the experimental value. Furthermore, Fig. 5 clearly shows that the most stable pairing symmetry is almost independent of temperature. This means that the phase diagram obtained at $T=0.01$ is very accurate.
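The quoted conversion is simple unit arithmetic: $T_{\rm c}=0.0037\,t_3/k_{\rm B}$ with the assumed $t_3=200$ meV gives roughly 8.6 K, i.e. the $\sim 8$ K quoted in the text.

```python
# Unit conversion behind the quoted Tc: dimensionless Tc = 0.0037 in units
# of t_3, with t_3 = 200 meV assumed in the text (band width W = 1.8 eV).
k_B_meV_per_K = 8.617333262e-2    # Boltzmann constant in meV/K
t3_meV = 200.0
Tc_K = 0.0037 * t3_meV / k_B_meV_per_K
```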
Another interesting result in Fig. 4 is that the maximum eigenvalue does not significantly depend on $n_{\rm e}$. Even if the size of the $e_{\rm g}$-Fermi surface is remarkably reduced, the superconducting instability is not suppressed unless the $e_{\rm g}$-Fermi surface vanishes. This is mainly because the DOS of the [$e_{\rm g}$-Fermi surface ]{}depends little on the value of $n_{\rm e}$. This is one of the characteristics of a two-dimensional system in the low density region. Note that the number of holes in each hole pocket is as small as $n_{\rm e}/6 \sim 0.05$. An analogy with an isotropic system like $^3$He is then partly justified. This picture is important for the pairing mechanism, as we will explain in §3.3. The $n_{\rm e}$-dependence of [$T_{\rm c}$ ]{}can be measured by varying the Na-content of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. However, the experimental results seem to be controversial. [@rf:schaak; @rf:milne]
The eigenvalue rapidly decreases when the [$e_{\rm g}$-Fermi surface ]{}vanishes. This result indicates that the [$e_{\rm g}$-Fermi surface ]{}plays an essential role for the superconductivity. This implication will be clearly confirmed in §4.2. Although the eigenvalues are very small, the $d$-wave symmetry seems to be most stable at $n_{\rm e}=0$. In this case, the topology of the Fermi surface is equivalent to that of the simple triangular lattice including only the nearest neighbor hopping. In this sense, our result at $n_{\rm e}=0$ is qualitatively consistent with the RVB theory based on the $t$-$J$ model on the triangular lattice, which yields $d_{\rm x^{2}-y^{2}} \pm {\rm i} d_{\rm xy}$-wave superconductivity. [@rf:baskaran; @rf:shastry; @rf:lee; @rf:ogata] However, the parameters used are quite different. The $t$-$J$ model assumes $U/t > 8$, while $U/t=5$ in this paper. In the intermediate coupling region, the momentum dependence arising from the vertex correction is probably important when the SOP gives a very small $\lambda_{\rm e}$. [@rf:yanasereview] In the case of the simple triangular lattice, the lowest order vertex correction favors the $p$-wave state. [@rf:nisikawa2002] It should be stressed that the SOP gives a much larger value of $\lambda_{\rm e}$ when the [$e_{\rm g}$-Fermi surface ]{}exists, as shown in Fig. 4.
![ $a$-dependence of eigenvalues of [$\acute{{\rm E}}$liashberg ]{}equation. We choose ${J_{{\rm H}}}=U/3$ or ${J_{{\rm H}}}=U/4$. Here, we fix $e_{\rm c}=0$ instead of $n_{\rm e}$. Therefore, $n_{\rm e}$ varies slightly, from $n_{\rm e}=0.33$ at $a=0.6$ to $n_{\rm e}=0.31$ at $a=1$. []{data-label="fig:a-dependence"}](Fig6.eps){width="7cm"}
Fig. 6 shows the $a$-dependence of the eigenvalues. The eigenvalue increases monotonically with decreasing $a$, basically owing to the increase of the DOS. In the case of $a=0.5$, an almost flat band is realized around the [$e_{\rm g}$-Fermi surface]{}. Therefore, a steep increase of the eigenvalue, leading to a remarkable enhancement of [$T_{\rm c}$]{}, occurs toward $a=0.5$. We note that the most important parameters for the appearance of the flat band are the next nearest neighbor hoppings. Although the nearest and third nearest neighbor hoppings also vary when the parameter $a$ is changed, these parameters play only quantitative roles. From Figs. 3-6, we see that the variable $a$ is important for the value of [$T_{\rm c}$]{}, while the variable $n_{\rm e}$ plays the essential role in determining the pairing symmetry.
![ ${J_{{\rm H}}}$-dependence of eigenvalues of [$\acute{{\rm E}}$liashberg ]{}equation. The parameters are chosen to be $a=0.8$ and $n_{\rm e}=0.31$. []{data-label="fig:j-dependence"}](Fig7.eps){width="7cm"}
Before closing this subsection, let us discuss the possibility of $d$-wave superconductivity in the case of a small Hund’s rule coupling. Fig. 7 shows the ${J_{{\rm H}}}$-dependence of the eigenvalues for each pairing symmetry. All eigenvalues increase as the Hund’s rule coupling decreases. Among them, the eigenvalue of the $d$-wave symmetry increases most rapidly, and the $d$-wave superconductivity is stabilized for ${J_{{\rm H}}}< U/12$. The phase diagram in the ${J_{{\rm H}}}$-$n_{\rm e}$ plane is shown in Fig. 8. We see that the $d$-wave superconductivity is more stable when $n_{\rm e}$ is small.
![Phase diagram in the ${J_{{\rm H}}}$-$n_{\rm e}$ plane at $a=0.8$. The solid line is the phase boundary obtained by the interpolation. []{data-label="fig:j-phase"}](Fig8.eps){width="8cm"}
This stability of the $d$-wave pairing is basically due to the large value of $U'$, which is comparable to $U$. The inter-orbital repulsion $U'$ couples to the charge and orbital excitations, which contribute to the effective interactions equally in the singlet and triplet channels. Therefore, the difference between singlet and triplet superconductivity is reduced when $U'$ is large. In other words, the Hund’s rule coupling favors the spin triplet superconductivity, although the value of [$T_{\rm c}$ ]{}is reduced. However, we expect that the $d$-wave superconductivity is less stable once higher order terms are included, because they enhance the spin excitations much more than the orbital and charge excitations. In other words, the role of $U'$ will be reduced in the higher order theory. This is confirmed by the FLEX calculation. [@rf:motiduki]
Basic mechanism of superconductivity
------------------------------------
In order to clarify the basic mechanism of superconductivity, we study the momentum dependence of the effective interaction $V(k,k')$ in the spin triplet channel. Figure 9 shows the ${\mbox{\boldmath$k$}}'$-dependence of $V(k,k')$ with ${\mbox{\boldmath$k$}}$ fixed at the momentum shown by an arrow, at which the order parameter in the $p$-wave symmetry takes its maximum value. It is apparent that there is a strong attractive interaction between momenta within the same hole pocket. This is the reason why the spin triplet superconductivity is favored. We can show that in the case of ${J_{{\rm H}}}=U/3$, the effective interaction in the singlet channel has the opposite sign to that in the triplet channel. This strong repulsive interaction remarkably suppresses the spin singlet superconductivity.
![ Contour plot of the effective interaction $V(k,k')$. The initial momentum ${\mbox{\boldmath$k$}}$ is shown in the figure. The horizontal and vertical axis show $k_{\rm x}'$ and $k_{\rm y}'$, respectively. Matsubara frequency is fixed to the lowest value $\omega_{\rm n}=\omega'_{\rm n}=\pi T$. The Fermi surface is simultaneously described by the thin solid line. The parameters are chosen to be $n_{\rm e}=0.31$, $a=0.8$, $U'=U/2$ and ${J_{{\rm H}}}=J=U/4$. []{data-label="fig:effectiveinteraction"}](Fig9.eps){width="8cm"}
![ (a) Momentum dependence of the static spin susceptibility at $a=0.8$. (b) Schematic figure for the classification of hole pockets. []{data-label="fig:kaitotal"}](Fig10tate.eps){width="7cm"}
The microscopic origin of this momentum dependence can be understood as follows. First, we point out the ferromagnetic character of the spin fluctuation. Fig. 10(a) shows the spin susceptibility estimated from the Kubo formula within the bubble diagram. It is clear that the spin susceptibility has a trapezoidal peak around ${\mbox{\boldmath$q$}}=0$. Note that the ferromagnetic spin fluctuation has been expected from the LDA calculation [@rf:singh] and observed by the NMR measurement [@rf:ishida]. Owing to the ferromagnetic character of the spin susceptibility, the attractive interaction within the same hole pocket is very strong and favors the spin triplet superconductivity.
The ferromagnetic spin fluctuation basically comes from the [$e_{\rm g}$-Fermi surface]{}. Each hole pocket gives rise to a ferromagnetic spin fluctuation as in the two-dimensional electron gas, whose susceptibility has the trapezoidal structure. Indeed, as shown in Fig. 10(a), when we increase the size of the hole pockets by changing $n_{\rm e}$, the width of the trapezoidal peak around the $\Gamma$ point increases.
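The trapezoidal structure invoked here can be checked against the textbook 2D electron gas, whose static Lindhard susceptibility is flat for $q \le 2k_{\rm F}$ and suppressed beyond. A rough brute-force sketch (the grid, cutoff, temperature and chemical potential are illustrative choices, not model parameters):

```python
import numpy as np

# Static Lindhard bubble chi0(q) of a 2D electron gas via a momentum sum.
# eps(k) = k^2/2 and mu = 0.5 give k_F = 1, so 2 k_F = 2.
T, mu = 0.01, 0.5
k = np.linspace(-3.0, 3.0, 400)
KX, KY = np.meshgrid(k, k)

def fermi(e):
    # clip avoids overflow warnings far from the Fermi level
    return 1.0 / (np.exp(np.clip((e - mu) / T, -60, 60)) + 1.0)

def chi0(qx):
    e1 = 0.5 * (KX**2 + KY**2)
    e2 = 0.5 * ((KX + qx)**2 + KY**2)
    de = e2 - e1
    safe = np.abs(de) > 1e-12
    # de -> 0 limit of (f(e1)-f(e2))/de is -f'(e1)
    integrand = np.where(safe,
                         (fermi(e1) - fermi(e2)) / np.where(safe, de, 1.0),
                         fermi(e1) * (1.0 - fermi(e1)) / T)
    return integrand.mean()

chi_a, chi_b = chi0(0.4), chi0(1.2)   # both q < 2 k_F: flat region
chi_c = chi0(3.0)                      # q > 2 k_F: suppressed
```

Each small $e_{\rm g}$ hole pocket plays the role of such a gas, which is why the width of the flat peak tracks the pocket size.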
Next, we illuminate the essential roles of the orbital degree of freedom. First, we point out that the ferromagnetic spin fluctuation is indeed induced by the orbital degree of freedom. In the multi-orbital model, the spin susceptibility is determined by the dispersion relation and the structure factor arising from the orbital degree of freedom. If we neglect the momentum dependence of the structure factor, as was done in previous studies, [@rf:johannes; @rf:nisikawa] we obtain two peaks of the spin susceptibility which are quite different from ours: one is located around the M point and the other is slightly removed from the $\Gamma$ point. However, we obtain the trapezoidal peak centered at the $\Gamma$ point by appropriately taking account of the structure factor. Thus, the frustration inherent in the triangular lattice is removed by the orbital degree of freedom, which gives rise to the ferromagnetic spin fluctuation.
Second, we point out that the roles of the orbital degree of freedom can be understood by considering the momentum dependence of the wave function, which is expressed by the unitary matrix $\hat{U}({\mbox{\boldmath$k$}})$ in eq. (\[eq:unitary\]). This wave function indicates the orbital character of the quasi-particles (see also §4). The structure factor of spin discussed above is also obtained from this wave function. Furthermore, the effective interaction $V(k,k')$ has another distinct property arising from this momentum dependence. As mentioned before, the [$e_{\rm g}$-Fermi surface ]{}mainly consists of the $e_{\rm g}$-doublet whose wave functions are given in eqs. (\[eq:e1\]) and (\[eq:e2\]). Furthermore, we find that the six hole pockets are divided into three pairs, as shown in Fig. 10(b). For example, more than $90$% of the weight of the wave function on the Fermi surface “A” originates from the orbital $|e_{\rm g}, 1>$, while the other two pairs are dominated by respective linear combinations of $|e_{\rm g}, 1>$ and $|e_{\rm g}, 2>$. It is generally expected that the electron correlation between the same orbitals is stronger than that between different orbitals. Indeed, the effective interaction between different pairs “A”, “B” and “C” is significantly smaller than that within the same pair, as shown in Fig. 9. This is the reason why the $p$- and $f_1$-wave superconductivities are stabilized with nearly degenerate eigenvalues, as shown in Fig. 4. Whether the $p$- or $f_1$-wave state is more stable depends on the coupling between different pairs of hole pockets, which is generally small, as explained above. Note that if we apply the phenomenological theory of ferromagnetic spin-fluctuation-induced superconductivity to [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}, the $f_1$-wave superconductivity is much more stable than the $p$-wave superconductivity.
The single band model leading to the ferromagnetic spin fluctuation [@rf:kuroki] also favors the $f_1$-wave symmetry. However, the $p$-wave superconductivity can be stabilized in the present case owing to the orbital degeneracy.
It should be noticed that the origin of the trapezoidal peak of the spin susceptibility around the $\Gamma$ point is clearly understood from this momentum dependence of the wave function. Although the wave functions of different pairs of hole pockets are not orthogonal, the matrix elements between them in calculating $\chi(q)$ are small. Therefore, in the zeroth order approximation, the pairs of hole pockets are regarded as decoupled from each other. Then, each hole pocket independently induces the trapezoidal peak of $\chi(q)$, as in the two-dimensional electron gas model.
Another factor stabilizing the superconductivity is the disconnectivity of the [$e_{\rm g}$-Fermi surface]{}, as discussed before the discovery of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. [@rf:kuroki2001] Even for an anisotropic superconductivity such as the $p$-wave or $f_1$-wave symmetry, the order parameter can take the same sign within each hole pocket, which stabilizes the superconductivity induced by the ferromagnetic spin fluctuation. Note that the difficulty of ferromagnetic spin-fluctuation-induced superconductivity (superfluidity) has been discussed for $^3$He. [@rf:2dparamagnon] This difficulty is removed by the topological aspect of the Fermi surface in the case of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}.
Momentum dependence of superconducting gap
------------------------------------------
Next, we show the momentum dependence of the order parameter $\Delta({\mbox{\boldmath$k$}},{\rm i}\pi T)$ in Fig. 11. Although $\lambda_{\rm e}$ does not reach $1$ at $T=0.01$ (Fig. 5), it is generally expected that the amplitude $|\Delta({\mbox{\boldmath$k$}},{\rm i}\pi T)|$ reflects the momentum dependence of the superconducting gap below [$T_{\rm c}$ ]{}and determines the low energy excitations. We note that even if the superconducting instability is dominated by the [$e_{\rm g}$-Fermi surface]{}, the [$a_{\rm 1g}$-Fermi surface ]{}also contributes to the low energy excitations observed by the NMR $1/T_1T$, the specific heat and the magnetic field penetration depth.
Fig. 11(a) shows the order parameter in the $p$-wave symmetry. We choose the Hund’s rule coupling as ${J_{{\rm H}}}=U/3$, where the $p$-wave superconductivity is stabilized. Among the two degenerate $p_{\rm x}$- and $p_{\rm y}$-states, only the $p_{\rm y}$-state is shown. Because of the disconnectivity of the [$e_{\rm g}$-Fermi surface]{}, the order parameter is node-less on the [$e_{\rm g}$-Fermi surface]{}, while it has nodes on the [$a_{\rm 1g}$-Fermi surface]{}. Since the $p_x \hat{x} \pm p_y \hat{y}$, $p_x \hat{y} \pm p_y \hat{x}$ or $(p_x \pm {\rm i} p_y) \hat{z}$ states are expected below [$T_{\rm c}$]{}, the superconducting gap becomes $\sqrt{\Delta_{\rm x}(k)^{2}+\Delta_{\rm y}(k)^{2}}$, where $\Delta_{\rm x}(k)$ and $\Delta_{\rm y}(k)$ are the order parameters of the $p_{\rm x}$- and $p_{\rm y}$-states, respectively. In this case, the superconducting gap does not vanish even on the [$a_{\rm 1g}$-Fermi surface]{}. However, we find a remarkable anisotropy of the superconducting gap on the [$a_{\rm 1g}$-Fermi surface]{}, which can explain the power-law behaviors of the NMR $1/T_{1}T$ and so on, as in the case of Sr$_2$RuO$_4$. [@rf:nomuragap] We note, however, that this is an accidental result.
Fig. 11(b) shows the order parameter in the $f_1$-wave symmetry. We choose the Hund’s rule coupling as ${J_{{\rm H}}}=U/6$, where the $f_1$-wave superconductivity is most stable. We can see a clear six-fold alternation of the sign of the order parameter. Also in this case, the [$e_{\rm g}$-Fermi surface ]{}is node-less and the [$a_{\rm 1g}$-Fermi surface ]{}has line nodes. As we showed before for the magnetic penetration depth, [@rf:uemura] the combination of a fully gapped [$e_{\rm g}$-Fermi surface ]{}and line nodes on the [$a_{\rm 1g}$-Fermi surface ]{}gives a temperature dependence intermediate between $s$-wave and anisotropic superconductivity.
![Momentum dependence of order parameter (a) in the $p_{\rm y}$-wave symmetry, (b) in the $f_1$-wave symmetry and (c) in the $d_{\rm xy}$-wave symmetry. The parameters are chosen to be $a=0.8$ and $n_{\rm e}=0.31$. []{data-label="fig:wavefunction"}](Fig11tate.eps){width="7cm"}
In Fig. 11(c) we show the order parameter in the $d_{\rm xy}$-wave state, which is stabilized when ${J_{{\rm H}}}$ is very small, ${J_{{\rm H}}}=U/12$. The $d_{\rm xy} \pm {\rm i}d_{\rm x^{2}-y^{2}}$ state is expected below [$T_{\rm c}$]{}, and both the [$a_{\rm 1g}$-Fermi surface ]{}and the [$e_{\rm g}$-Fermi surface ]{}are nodeless in this case. Exponential behaviors are then expected in many quantities unless some accidental situation occurs, as in the $p$-wave state. Our calculation does not support such an accidental situation in the $d$-wave symmetry.
It should be noticed that in all of the cases shown above, the amplitude of the order parameter is large on the [$e_{\rm g}$-Fermi surface]{}, while it is small on the [$a_{\rm 1g}$-Fermi surface]{}. This result is expected from the fact that the [$e_{\rm g}$-Fermi surface ]{}is responsible for the pairing instability, as discussed in §3.3. This point will be illuminated more clearly in the next section.
Reduced Models
==============
We have analyzed the possibility of unconventional superconductivity in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}on the basis of the three-orbital model. Because calculations for this model require considerable computational time, a simplified model appropriate for studying the superconductivity is highly desirable for future theoretical developments. In this section, we search for such a model by comparison with the three-orbital model. We show that the two-orbital model is satisfactory for this purpose, while the single-orbital model is not. These trials also clarify the essential origin of the results in §3.
Failure of single-orbital Hubbard model
---------------------------------------
Thus far, we have stressed some essential roles of the orbital degeneracy. They are illuminated by the failure of the single-orbital model. Some authors have already studied single-orbital Hubbard models reproducing the LDA Fermi surface. [@rf:nisikawa; @rf:kuroki] In this paper, we construct a single-orbital Hubbard model by keeping only the $\gamma$-band, [*i.e.,*]{} the highest-energy eigenstates obtained in eq. (\[eq:unitary\]). The Hamiltonian is expressed as $$\begin{aligned}
H_{1} =\sum_{{\mbox{\boldmath$k$}},s} E_{3}({\mbox{\boldmath$k$}}) c_{{\mbox{\boldmath$k$}},s}^{\dag} c_{{\mbox{\boldmath$k$}},s} +
U \sum_{i} n_{i,\uparrow} n_{i,\downarrow}.
\label{eq:single-orbital-model}\end{aligned}$$ As shown in Fig. 1, the typical Fermi surface is reproduced in this model. Indeed, this is the minimal model describing the electron correlation in this material. However, as shown below, this model is inappropriate for the study of superconductivity because the results are qualitatively different from those in the multi-orbital model.
We clarify the term “single-orbital Hubbard model” in order to avoid any confusion. In this paper, “single-orbital Hubbard model” denotes the single-band model including only the [*momentum-independent*]{} interaction, as in eq. (\[eq:single-orbital-model\]). As is shown later, we can construct a single-band model in which the roles of the orbital degeneracy are appropriately represented in the momentum dependence of the interaction term. Thus, we distinguish the “single-orbital Hubbard model” from the “single-band model”.
![Phase diagram of the single-orbital Hubbard model. The qualitatively different results from Fig. 3 indicate the failure of this model. []{data-label="fig:singlebandmodel"}](Fig12.eps){width="7cm"}
In Fig. 12, we show the phase diagram obtained by the SOP applied to the single-orbital Hubbard model (eq. (\[eq:single-orbital-model\])). In a wide region of the parameter space, the $d$-wave and $i$-wave superconductivities are stabilized instead of the $p$-wave and $f_1$-wave states. The $f_1$-wave superconductivity competes with the $d$-wave one but is stabilized only in a narrow region. The $p$-wave superconductivity is not stabilized in the whole parameter range.
This difference arises from the disregard of the momentum dependence of the wave function, which is represented by $\hat{U}({\mbox{\boldmath$k$}})$. If we neglect the momentum dependence of $\hat{U}({\mbox{\boldmath$k$}})$ in eq. (\[eq:unitary\]), the three-orbital model is reduced to the single-orbital Hubbard model in eq. (\[eq:single-orbital-model\]). The difference in the stable pairing state is apparent if we check the spin susceptibility $\chi(q)$. In the single-orbital Hubbard model, $\chi(q)$ is similar to that obtained in Ref. 31 and we do not clearly see the ferromagnetic tendency (see also the discussion in §3.3). As a result, the momentum dependence of the effective interaction is qualitatively different from that in the three-orbital model.
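The RPA form underlying this comparison, $\chi(q)=\chi_{0}(q)/(1-U\chi_{0}(q))$, can be sketched with a toy band. The one-dimensional tight-binding dispersion and all parameters below are assumptions for illustration and do not reproduce the actual $\chi(q)$ of either model:

```python
import numpy as np

# RPA enhancement chi(q) = chi0(q) / (1 - U*chi0(q)); the 1D
# tight-binding band and all parameters are toy assumptions.
t, U, T, mu = 1.0, 1.0, 0.05, -1.0
k = np.linspace(-np.pi, np.pi, 400, endpoint=False)
eps = -2.0 * t * np.cos(k) - mu
f = 1.0 / (np.exp(eps / T) + 1.0)            # Fermi function

def chi0(q):
    """Static Lindhard function -<(f_{k+q} - f_k)/(e_{k+q} - e_k)>_k."""
    eq = -2.0 * t * np.cos(k + q) - mu
    fq = 1.0 / (np.exp(eq / T) + 1.0)
    de = eq - eps
    safe = np.where(np.abs(de) > 1e-8, de, 1.0)
    # de -> 0 limit: (f_{k+q} - f_k)/de -> df/de = -f(1-f)/T
    ratio = np.where(np.abs(de) > 1e-8, (fq - f) / safe, -f * (1.0 - f) / T)
    return -ratio.mean()

qs = np.linspace(0.01, np.pi, 120)
chi0_q = np.array([chi0(q) for q in qs])
chi_rpa = chi0_q / (1.0 - U * chi0_q)        # enhanced, no instability here
```

In this toy band the enhanced susceptibility peaks near $q=2k_{\rm F}$ rather than $q=0$, illustrating how a $\chi(q)$ without ferromagnetic tendency disfavors the spin triplet channel.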
This difference is partly improved by neglecting the $a_{\rm 1g}$-orbital, as in Ref. 30. Then, we obtain the nearly ferromagnetic spin fluctuation and spin triplet superconductivity. However, the coupling between different pairs of hole pocket Fermi surfaces (see Fig. 10(b)) is overestimated, and therefore, the $f_1$-wave state is stabilized much more than the $p$-wave state. This is not consistent with the results in §3. We wish to stress again that the characteristic orbital nature of each hole pocket Fermi surface induces the near degeneracy between the $p$-wave and $f_1$-wave states. This characteristic nature cannot be taken into account in the single-orbital Hubbard model.
Effective two-orbital model
---------------------------
The results in the previous subsection show that the single-orbital Hubbard model is qualitatively inappropriate for studying the superconductivity. The important factor to be taken into account is the orbital character of the quasi-particles on each Fermi surface, which is described by the momentum dependence of the unitary matrix $\hat{U}({\mbox{\boldmath$k$}})$ in eq. (\[eq:unitary\]). Considering these points, we propose a simplification of the three-orbital model in this subsection. The reduced model is an effective two-orbital model representing the $e_{\rm g}$-doublet. The simplification is performed in the following two steps.
\(i) The $a_{\rm 1g}$-orbital is simply ignored.
\(ii) The lower band below the Fermi level is ignored.\
The first step is justified because we find that the superconducting instability is dominated by the six hole pocket Fermi surfaces, which mainly consist of the $e_{\rm g}$-orbitals. The second is generally justified because the superconductivity is led by the quasi-particles around the Fermi surface.
In order to perform the first step, we transform the basis of local orbitals. This is carried out by using the unitary transformation as, $$\begin{aligned}
\label{eq:unitary-local}
&& \hspace{-20mm}
(d_{{\mbox{\boldmath$k$}},1,s}^{\dag},d_{{\mbox{\boldmath$k$}},2,s}^{\dag},d_{{\mbox{\boldmath$k$}},3,s}^{\dag})=
(c_{{\mbox{\boldmath$k$}},1,s}^{\dag},c_{{\mbox{\boldmath$k$}},2,s}^{\dag},c_{{\mbox{\boldmath$k$}},3,s}^{\dag}) \hat{U}_{\rm l},
\\
\hspace{-30mm}
&& \hat{U}_{\rm l} =
\left(
\begin{array}{ccc}
\frac{1}{\sqrt{3}} & 0 & \frac{2}{\sqrt{6}} \\
\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} \\
\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} \\
\end{array}
\right). \end{aligned}$$ The interaction term $H_{\rm I}$ in the Hamiltonian $H_{3}$ is invariant under this unitary transformation owing to the relations $U=U'+2 {J_{{\rm H}}}$ and ${J_{{\rm H}}}=J$. The non-interacting term is transformed as $$\begin{aligned}
\label{eq:transformed-non-interactiong-part}
&& \hspace{-10mm}
H_0 = \sum_{{\mbox{\boldmath$k$}},s} d_{{\mbox{\boldmath$k$}},s}^{\dag} \hat{H}'({\mbox{\boldmath$k$}}) d_{{\mbox{\boldmath$k$}},s},
\\
&& \hspace{-10mm}
\hat{H}'({\mbox{\boldmath$k$}}) = \hat{U}_{\rm l}^{\dag} \hat{H}({\mbox{\boldmath$k$}}) \hat{U}_{\rm l}.\end{aligned}$$ The first step is performed by dropping the creation (annihilation) operator $d_{{\mbox{\boldmath$k$}},1,s}^{\dag}$ ($d_{{\mbox{\boldmath$k$}},1,s}$) which corresponds to the $a_{\rm 1g}$-orbital. As a result, the three-orbital model is reduced to the following two-orbital model. $$\begin{aligned}
&& \hspace{-10mm}
H_{2} = \sum_{{\mbox{\boldmath$k$}},s} a_{{\mbox{\boldmath$k$}},s}^{\dag} \hat{h}({\mbox{\boldmath$k$}}) a_{{\mbox{\boldmath$k$}},s}
+ U \sum_{i} \sum_{a=1}^{2} n_{i,a,\uparrow} n_{i,a,\downarrow}
\nonumber \\
&& \hspace{-5mm}
+ U' \sum_{i} \sum_{a>b} n_{i,a} n_{i,b}
- {J_{{\rm H}}}\sum_{i} \sum_{a>b} (2 {\mbox{\boldmath$S$}}_{i,a} {\mbox{\boldmath$S$}}_{i,b} + \frac{1}{2} n_{i,a} n_{i,b})
\nonumber \\
&& \hspace{-5mm}
+ J \sum_{i} \sum_{a \neq b}
a_{i,a,\downarrow}^{\dag}
a_{i,a,\uparrow}^{\dag}
a_{i,b,\uparrow}
a_{i,b,\downarrow}.
\label{eq:two-orbital-model}\end{aligned}$$ Here, we have introduced the $2 \times 2$ matrix $\hat{h}({\mbox{\boldmath$k$}})_{i,j}=\hat{H}'({\mbox{\boldmath$k$}})_{i+1,j+1}$ and the two-component vector $a_{{\mbox{\boldmath$k$}},s}^{\dag}=(d_{{\mbox{\boldmath$k$}},2,s}^{\dag},d_{{\mbox{\boldmath$k$}},3,s}^{\dag})$. Then, the Green function is described by a $2 \times 2$ matrix, $\hat{G}(k)=({\rm i}\omega_{n} \hat{1} - \hat{h}({\mbox{\boldmath$k$}}))^{-1}$, whose elements are expressed as $$\begin{aligned}
\label{eq:2by2-Green-function}
G_{ij}(k)=\sum_{\alpha=1}^{2} v_{i\alpha}({\mbox{\boldmath$k$}}) v_{j\alpha}({\mbox{\boldmath$k$}}) G_{\alpha}(k). \end{aligned}$$ Here, $v_{i\alpha}({\mbox{\boldmath$k$}})$ are components of the unitary matrix $\hat{V}^{\dag}({\mbox{\boldmath$k$}})$ which diagonalizes the matrix $\hat{h}({\mbox{\boldmath$k$}})$ $$\begin{aligned}
\label{eq:unitary2by2}
\hat{V}^{\dag}({\mbox{\boldmath$k$}}) \hat{h}({\mbox{\boldmath$k$}}) \hat{V}({\mbox{\boldmath$k$}})
=
\left(
\begin{array}{cc}
e_1({\mbox{\boldmath$k$}}) & 0 \\
0 & e_2({\mbox{\boldmath$k$}}) \\
\end{array}
\right), \end{aligned}$$ with $e_1({\mbox{\boldmath$k$}})<e_2({\mbox{\boldmath$k$}})$. The diagonalized Green function is obtained as $G_{\alpha}(k)=\frac{1}{{\rm i}\omega_{n}-
e_{\alpha}(\mbox{{\scriptsize \boldmath$k$}})}$.
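These relations can be checked numerically: the matrix $\hat{U}_{\rm l}$ above is unitary, and the spectral form of $\hat{G}(k)$ reproduces the direct inverse. The $2\times2$ $\hat{h}({\mbox{\boldmath$k$}})$ used below is an assumed toy matrix, not the actual transformed Hamiltonian:

```python
import numpy as np

# Unitarity check of U_l as given in eq. (unitary-local).
U_l = np.array([
    [1/np.sqrt(3),  0,             2/np.sqrt(6)],
    [1/np.sqrt(3),  1/np.sqrt(2), -1/np.sqrt(6)],
    [1/np.sqrt(3), -1/np.sqrt(2), -1/np.sqrt(6)],
])
assert np.allclose(U_l.T @ U_l, np.eye(3))   # U_l is (real) unitary

# Spectral form of the 2x2 Green function, eq. (2by2-Green-function),
# for an assumed toy h(k); the actual h(k) of the text is not used here.
def h_of_k(kx, ky):
    off = 0.3 * np.sin(kx) * np.sin(ky)      # toy hybridization
    return np.array([[-np.cos(kx) + 0.2, off],
                     [off, -np.cos(ky) - 0.2]])

kx, ky, wn = 0.7, -1.1, 0.05 * np.pi         # sample momentum, Matsubara frequency
h = h_of_k(kx, ky)
e, V = np.linalg.eigh(h)                     # e[0] < e[1]; columns of V are v_{i,alpha}
G_alpha = 1.0 / (1j * wn - e)                # diagonalized Green function
G_spectral = (V * G_alpha) @ V.T             # G_ij = sum_alpha v_i,alpha v_j,alpha G_alpha
G_direct = np.linalg.inv(1j * wn * np.eye(2) - h)
assert np.allclose(G_spectral, G_direct)
```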
We show the dispersion relation $e_1({\mbox{\boldmath$k$}})$ and $e_2({\mbox{\boldmath$k$}})$ in Fig. 13. Apparently the band structure around the [$e_{\rm g}$-Fermi surface ]{}is unchanged by this simplification, while the [$a_{\rm 1g}$-Fermi surface ]{}vanishes.
![ Dispersion relation in the two-orbital model (solid lines). The parameters are chosen to be $a=0.8$ and $n_{\rm e}=0.36$. We have shown the dispersion relation of the $a_{\rm 1g}$-orbital which is obtained as $\hat{H}'({\mbox{\boldmath$k$}})_{11}-\mu$ (dashed line). []{data-label="fig:fermisurface2by2"}](Fig13.eps){width="7cm"}
The second step is performed by ignoring the lower energy band $e_1({\mbox{\boldmath$k$}})$. Then, the Green function is obtained as $G_{ij}(k)=v_{i2}({\mbox{\boldmath$k$}}) v_{j2}({\mbox{\boldmath$k$}}) G_{2}(k)$. Owing to this procedure, the calculation becomes equivalent to that for a single-band Hamiltonian with a momentum-dependent interaction, $$\begin{aligned}
&& \hspace{-10mm}
H_{L} =\sum_{{\mbox{\boldmath$k$}},s} e_{2}({\mbox{\boldmath$k$}}) c_{{\mbox{\boldmath$k$}},s}^{\dag} c_{{\mbox{\boldmath$k$}},s}
\nonumber \\
&& \hspace{-5mm}
+ \sum_{{\mbox{\boldmath$q$}},{\mbox{\boldmath$k$}}',{\mbox{\boldmath$k$}}} S({\mbox{\boldmath$q$}},{\mbox{\boldmath$k$}}',{\mbox{\boldmath$k$}})
c_{{\mbox{\boldmath$q$}}-{\mbox{\boldmath$k$}},\uparrow}^{\dag} c_{{\mbox{\boldmath$q$}}-{\mbox{\boldmath$k$}}',\downarrow}^{\dag}
c_{{\mbox{\boldmath$k$}}',\downarrow} c_{{\mbox{\boldmath$k$}},\uparrow}
\nonumber \\
&& \hspace{-5mm}
+ \sum_{{\mbox{\boldmath$q$}},{\mbox{\boldmath$k$}}',{\mbox{\boldmath$k$}},\sigma} S'({\mbox{\boldmath$q$}},{\mbox{\boldmath$k$}}',{\mbox{\boldmath$k$}})
c_{{\mbox{\boldmath$q$}}-{\mbox{\boldmath$k$}},\sigma}^{\dag} c_{{\mbox{\boldmath$q$}}-{\mbox{\boldmath$k$}}',\sigma}^{\dag}
c_{{\mbox{\boldmath$k$}}',\sigma} c_{{\mbox{\boldmath$k$}},\sigma}.
\label{eq:long-range-model}\end{aligned}$$ The momentum-dependent factors $S({\mbox{\boldmath$q$}},{\mbox{\boldmath$k$}}',{\mbox{\boldmath$k$}})$ and $S'({\mbox{\boldmath$q$}},{\mbox{\boldmath$k$}}',{\mbox{\boldmath$k$}})$ are expressed in terms of the Coulomb interactions $U$, $U'$, ${J_{{\rm H}}}$ and $J$ and the wave function $v_{i2}({\mbox{\boldmath$k$}})$. If we neglect the momentum dependence of the unitary matrix $\hat{V}^{\dag}({\mbox{\boldmath$k$}})$, the factor $S({\mbox{\boldmath$q$}},{\mbox{\boldmath$k$}}',{\mbox{\boldmath$k$}})$ becomes $U$ and $S'({\mbox{\boldmath$q$}},{\mbox{\boldmath$k$}}',{\mbox{\boldmath$k$}})=0$; the model is then exactly reduced to the single-orbital Hubbard model described by eq. (\[eq:single-orbital-model\]) with $e_{2}({\mbox{\boldmath$k$}})$ in place of $E_{3}({\mbox{\boldmath$k$}})$. We have discussed in §4.1 that this single-orbital Hubbard model is not appropriate. On the other hand, the Hamiltonian $H_{L}$ is appropriate because the roles of the orbital degeneracy are taken into account through the momentum dependence of the interaction.
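As a schematic illustration of how the orbital character enters the interaction, the band projection of the intra-orbital repulsion $U$ alone gives a factor of the form $S \sim U \sum_{a} v_{a2}({\mbox{\boldmath$q$}}-{\mbox{\boldmath$k$}}) v_{a2}({\mbox{\boldmath$q$}}-{\mbox{\boldmath$k$}}') v_{a2}({\mbox{\boldmath$k$}}') v_{a2}({\mbox{\boldmath$k$}})$. The sketch below uses an assumed toy $2\times2$ $\hat{h}({\mbox{\boldmath$k$}})$ and omits the $U'$, ${J_{{\rm H}}}$ and $J$ contributions contained in the full $S$ and $S'$:

```python
import numpy as np

# Schematic band projection of the intra-orbital repulsion U onto the
# upper band of a toy 2x2 h(k); illustrative only, not the paper's
# actual S and S' (which also involve U', J_H and J).
U = 2.0

def v2(kx, ky):
    """Upper-band eigenvector v_{a,2}(k) of a toy 2x2 Hermitian h(k)."""
    off = 0.3 * np.sin(kx) * np.sin(ky)
    h = np.array([[-np.cos(kx) + 0.2, off],
                  [off, -np.cos(ky) - 0.2]])
    e, V = np.linalg.eigh(h)
    return V[:, 1]

def S(q, kp, k):
    # S(q, k', k) ~ U * sum_a v_a2(q-k) v_a2(q-k') v_a2(k') v_a2(k)
    legs = [np.subtract(q, k), np.subtract(q, kp), kp, k]
    w = [v2(*p) for p in legs]
    return U * sum(w[0][a] * w[1][a] * w[2][a] * w[3][a] for a in range(2))

s1 = S((0.0, 0.0), (0.5, 0.2), (1.0, -0.3))
s2 = S((0.0, 0.0), (2.0, 1.0), (1.0, -0.3))
```

The projected vertex is bounded by $U$ but varies with the momenta, unlike the bare Hubbard interaction.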
![Eigenvalues of [$\acute{{\rm E}}$liashberg ]{}equation obtained in the effective two-orbital model. We choose the parameters $a=0.8$ and ${J_{{\rm H}}}=U/4$. []{data-label="fig:phasediagram2by2"}](Fig14.eps){width="7cm"}
We find that the results for the superconductivity are almost the same for the Hamiltonians $H_{2}$ and $H_{L}$. In Fig. 14 we show the $n_{\rm e}$-dependence of the eigenvalues of the [$\acute{{\rm E}}$liashberg ]{}equation for the simplified model $H_{L}$. The increase of the eigenvalues with $n_{\rm e}$ is steeper than that in Fig. 4, mainly owing to the increase of the DOS. However, the relative stability of the pairing symmetries closely resembles that in Fig. 4. For example, the $p$-wave superconductivity is stable around $n_{\rm e}=0.2$, while the $f_1$-wave superconductivity is realized for larger values of $n_{\rm e}$. The eigenvalue for the spin singlet $d$-wave superconductivity lies far below those for the spin triplet states. These results mean that the effective two-orbital model described by eq. (\[eq:two-orbital-model\]) or eq. (\[eq:long-range-model\]) appropriately reproduces the results of the three-orbital model. The fact that step (i) is appropriate clearly means that the superconductivity is basically led by the [$e_{\rm g}$-Fermi surface]{}; the [$a_{\rm 1g}$-Fermi surface ]{}plays only a secondary role.
Note that the eigenvalue of the [$\acute{{\rm E}}$liashberg ]{}equation decreases owing to step (i), mainly because the DOS on the [$e_{\rm g}$-Fermi surface ]{}decreases. We have confirmed that step (ii) slightly enhances the spin triplet superconductivity.
Effects of Vertex Corrections in a Two-Orbital Model
====================================================
In this section, we study the effects of vertex corrections. Although it is desirable to study these effects in the three-orbital model, numerical difficulties lead us to use the effective two-orbital model, whose validity has been demonstrated in §4.2. Generally speaking, the higher-order terms may play an important role in the superconducting instability, since most unconventional superconductors are considered to be in the intermediate-coupling region. For example, a vertex correction not included in the RPA plays an important role in stabilizing the spin triplet pairing in Sr$_2$RuO$_4$. [@rf:nomura] Therefore, it is an important issue to investigate the role of higher-order corrections in the present model.
![Diagrammatic representation of the third order terms in the effective interaction. (a-f) correspond to the spin singlet channel or spin triplet channel with d-vector $d \parallel {\hat z}$. (c’-f’) correspond to the spin triplet channel with d-vector $d \perp {\hat z}$. []{data-label="fig:3rddiagram"}](Fig15.eps){width="8cm"}
We apply the third order perturbation theory (TOP) and its renormalized version to the Hamiltonian $H_2$ (eq. (\[eq:two-orbital-model\])). We adopt this model instead of the more simplified model $H_{\rm L}$ (eq. (\[eq:long-range-model\])) because the computational time is hardly reduced by the second step (ii) in §4.2. The parameter is chosen to be ${J_{{\rm H}}}=U/3$, where the interaction between electrons with the same spin vanishes and thus the number of diagrams is much reduced. As discussed in §3.2, this region, rather than the region of small Hund’s rule coupling, is likely to be the relevant one.
Fig. 15 shows the diagrammatic representation of the third order terms in the effective interaction. Figs. 15(a) and (b) are classified as RPA terms and the others are vertex corrections. The present theory is invariant under spin rotations, since we do not take account of the spin-orbit interaction; therefore, the result for the spin triplet pairing does not depend on the direction of the $d$-vector. Note that the two RPA terms cancel each other in the case of spin triplet pairing with $d \parallel z$.
![ Eigenvalues of [$\acute{{\rm E}}$liashberg ]{}equation in the third order perturbation theory. The thick solid line shows the maximum eigenvalue in the second order perturbation theory, which is classified into the $p$-wave symmetry. We do not show the eigenvalue in the $d$-wave symmetry because the tendency to superconductivity is very weak. We fix the parameters $a=0.6$, $n_{\rm e}=0.35$ and ${J_{{\rm H}}}=U/3$. []{data-label="fig:naivetop"}](Fig16.eps){width="7cm"}
We numerically solve the [$\acute{{\rm E}}$liashberg ]{}equation within the TOP and show the eigenvalues in Fig. 16. We see that the $p$- and $f_2$-wave superconductivities are significantly stabilized for $U>4$, while the $f_1$-wave and spin singlet pairings are disfavored. However, as discussed below, we find that these results in the intermediate-coupling region are spurious. Within the third order terms in Fig. 15, the dominant contributions to the triplet channel come from the terms represented in Figs. 15(e’) and (f’), which include a particle-particle ladder. In contrast, the terms represented in Figs. 15(c’) and (d’) with a particle-hole ladder are negligible. As is well known from the Kanamori theory of metallic ferromagnetism, [@rf:kanamori] the particle-particle ladder diagrams generally induce a screening of the interaction as $U \rightarrow U(q)=U/(1+U\phi(q))$, where $\phi(q)$ is obtained from the particle-particle ladder diagram. If the $q$-dependence of $U(q)$ is not important, this scattering process is incorporated into a renormalized coupling constant $\bar{U}$. In the above TOP calculation, only the lowest order term in the Kanamori-type correction was taken into account. Therefore, it is reasonable to expect that the contributions from Figs. 15(e’) and (f’) are suppressed once we include the higher-order perturbation terms.
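The screening mechanism invoked here can be sketched numerically: the static particle-particle bubble $\phi(q)$ is positive, so the ladder sum reduces the effective repulsion. A one-dimensional tight-binding band and the parameters below are assumed purely for illustration:

```python
import numpy as np

# Kanamori screening sketch: U(q) = U / (1 + U*phi(q)) with phi(q) the
# static particle-particle bubble; 1D toy band, illustration only.
t, U, T, mu = 1.0, 4.0, 0.1, 0.3
k = np.linspace(-np.pi, np.pi, 400, endpoint=False)

def eps(p):
    return -2.0 * t * np.cos(p) - mu

def f(e):
    return 1.0 / (np.exp(e / T) + 1.0)

def phi(q):
    """Static pair bubble <(1 - f_k - f_{q-k}) / (e_k + e_{q-k})>_k."""
    e1, e2 = eps(k), eps(q - k)
    num = 1.0 - f(e1) - f(e2)
    den = e1 + e2
    safe = np.where(np.abs(den) > 1e-8, den, 1.0)
    # den -> 0 limit of num/den is f(e1)*(1 - f(e1))/T (removable singularity)
    terms = np.where(np.abs(den) > 1e-8, num / safe, f(e1) * (1.0 - f(e1)) / T)
    return terms.mean()

qs = np.linspace(0.0, np.pi, 50)
U_eff = np.array([U / (1.0 + U * phi(q)) for q in qs])
```

Since $\phi(q)>0$, $U(q)=U/(1+U\phi(q))$ is always smaller than the bare $U$, which is the suppression expected for the contributions of Figs. 15(e’) and (f’).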
![ (a) Renormalization of the particle-particle ladder diagram. In (e”) and (f”), the renormalized particle-particle ladder is used instead of bare ladder. In the RTOP, we take account of the terms (e”) and (f”) instead of (e’) and (f’) in Fig. 15. []{data-label="fig:t-matrix"}](Fig17.eps){width="6cm"}
In order to investigate this possibility, we perform a calculation with a renormalized TOP (RTOP), as shown in Fig. 17. The particle-particle ladders in Figs. 15(e’) and (f’) are replaced by the T-matrix shown in Fig. 17(a). As a result, the infinite order terms representing the screening effect are taken into account, as in the Kanamori theory. By using the diagrams in Figs. 2, 15(c’,d’) and 17(e”,f”), we estimate the effective interaction and solve the [$\acute{{\rm E}}$liashberg ]{}equation. The obtained eigenvalues are shown in Fig. 18. It is apparent that the results of the naive TOP are significantly altered by the renormalization and that the correction to the SOP is small. In particular, the $p$-wave superconductivity is slightly more stable than the $f_1$-wave superconductivity, and the near degeneracy between these states is also reproduced. The order parameter in each pairing symmetry is very similar to Fig. 11, although that in the naive TOP is remarkably different. The eigenvalues are slightly reduced from those in the SOP; however, the $U$-dependence is almost unchanged. These results are naturally interpreted if we consider that the vertex corrections basically work as a screening effect. Then, the second order perturbation theory is justified by regarding the interactions as the renormalized ones.
![ Eigenvalues of [$\acute{{\rm E}}$liashberg ]{}equation in the renormalized third order perturbation theory for $p$-wave (circles) and $f_1$-wave (diamonds) symmetry. Note that the eigenvalue in the $f_2$-wave symmetry is very small. The thick solid and dashed lines show the eigenvalues in the second order perturbation theory for the $p$-wave and $f_1$-wave symmetry, respectively. The parameters are the same as in Fig. 16. []{data-label="fig:rtop"}](Fig18.eps){width="7cm"}
Let us compare the present results with the cases of the high-[$T_{\rm c}$ ]{}cuprates and Sr$_2$RuO$_4$. For the high-[$T_{\rm c}$ ]{}cuprates, the $d$-wave superconductivity is basically induced by the RPA terms, and the vertex correction due to the particle-particle ladder diagrams effectively reduces the coupling constant. [@rf:yanasereview; @rf:bulut] Therefore, the situation is very similar to the present case, apart from the difference between singlet and triplet pairing. On the other hand, in the case of Sr$_2$RuO$_4$, the effective interaction derived from the RPA terms has a very weak momentum dependence, which does not work for the anisotropic pairing; instead, the $q$-dependence of the particle-particle ladder in the TOP favors the spin triplet superconductivity. [@rf:nomura] Then, the naive discussion of the screening effect cannot be applied. It has been confirmed that the qualitative results of the TOP applied to Sr$_2$RuO$_4$ are not altered even when the renormalization of the particle-particle ladder is taken into account. [@rf:nomuraforth] Thus, the basic mechanism of the possible spin triplet superconductivity in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}is qualitatively different from that in Sr$_2$RuO$_4$.
Discussions
===========
In this paper, we have investigated the multi-orbital model for [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}on the basis of the perturbation theory. The obtained results indicate a possibility of spin triplet superconductivity in this material, although the $d$-wave superconductivity is also stabilized in a part of the parameter space. There are two candidates for the spin triplet pairing: the $p$-wave and $f_1$-wave states, which are nearly degenerate.
Although spin triplet superconductivity is one of the most interesting issues in condensed matter physics, its microscopic theory remains at a developing stage, mainly because very few $d$-electron materials show spin triplet superconductivity. Although there are many candidates among the heavy fermion materials, the theoretical treatment is generally difficult for $f$-electron systems. Therefore, a discovery of a spin triplet superconductor in transition metal oxides would lead to an important development in the microscopic understanding.
Probably the most established spin triplet superconductor among $d$-electron systems is Sr$_2$RuO$_4$. [@rf:maeno] Therefore, we have provided detailed discussions comparing Sr$_2$RuO$_4$ and [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. According to the results in this paper, [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}provides a qualitatively different example from Sr$_2$RuO$_4$ in the following two respects.
First, the RPA terms give rise to the dominant scattering process leading to the spin triplet pairing. The spin excitation is clearly ferromagnetic and favorable for the spin triplet pairing. This is in sharp contrast to the case of Sr$_2$RuO$_4$, where the vertex corrections are essential for the $p$-wave pairing. In the case of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}, the vertex corrections induce only a screening effect, which is not important for the qualitative results. While ferromagnetic spin-fluctuation-induced spin triplet superconductivity has been discussed since early years, a corresponding superconductor has not yet been established. We expect that [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}will be the first example realizing this mechanism.
Second, the orbital degeneracy plays an essential role in the case of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. The conduction band in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}, like that in Sr$_2$RuO$_4$, is basically described by the three t$_{\rm 2g}$-orbitals. Although the single-orbital Hubbard model is an appropriate model for describing the pairing mechanism of Sr$_2$RuO$_4$, [@rf:nomura] such a simplification is qualitatively inappropriate for [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. The success of the single-orbital Hubbard model for Sr$_2$RuO$_4$ is due to the electronic structure, in which the $\gamma$-band is basically described by the local $d_{\rm xy}$-orbital. The failure for [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}is due to the fact that the [$e_{\rm g}$-Fermi surface ]{}cannot be described by any individual local orbital. In other words, the hybridization term in the unperturbed Hamiltonian is large in the case of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}, while it is negligible in Sr$_2$RuO$_4$ owing to the particular crystal symmetry. In this sense, [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}will be a more typical example of a multi-orbital superconductor, in which the momentum dependence of the wave function of the quasi-particles essentially affects the effective interaction leading to the Cooper pairing.
We have pointed out that the reduced two-orbital model is appropriate, in contrast to the failure of the single-orbital model. This is because the Fermi surfaces in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}can be classified according to the local orbitals. The superconductivity is then basically triggered by the [$e_{\rm g}$-Fermi surface]{}. Since the weight of the $a_{\rm 1g}$-orbital on the [$e_{\rm g}$-Fermi surface ]{}is less than 5%, this orbital is safely ignored. This situation is similar to the case of Sr$_2$RuO$_4$; however, the orbital degeneracy within the $e_{\rm g}$-doublet cannot be ignored in the case of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}.
From the above comparisons, we obtain the following empirical rules.
\(1) When the RPA terms are favorable for the anisotropic superconductivity, the non-RPA terms are not qualitatively important, and [*vice versa*]{}.
\(2) When a part of Fermi surface is described by a few local orbitals, the simplification of microscopic model is possible.
In particular, the second rule will be helpful for future developments in the microscopic understanding of multi-band superconductors. For example, several Fermi surfaces appear in heavy fermion materials. This fact, as well as the 14-fold degeneracy of the $f$-shell, makes the microscopic treatment difficult. However, it will be possible to obtain a simplified model by identifying the microscopic character of each Fermi surface.
Thus far, we have discussed the superconductivity in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$ ]{}induced by the electron-electron correlation and highlighted the possibility of spin triplet pairing. However, no clear experimental evidence for the symmetry of the superconductivity has been obtained so far. Instead, several experimental observations restrict the pairing state. For example, the absence of a (or a very small) coherence peak in NMR $1/T_{1}T$, [@rf:yoshimura; @rf:kobayashi; @rf:ishida; @rf:zheng] the power-law temperature dependence of $1/T_{1}T$ [@rf:ishida; @rf:zheng] and of the specific heat, [@rf:hdyang; @rf:lorenz; @rf:oeschler] the NMR Knight shift below [$T_{\rm c}$]{}, [@rf:yoshimura; @rf:kobayashi; @rf:ishidaprivate; @rf:zhengprivate] and the time-reversal symmetry observed in $\mu$SR [@rf:higemoto] should be cited, although some of them are controversial. As for the results in this paper, the spin triplet $p$- or $f_1$-wave superconductivity is consistent with the absence of a coherence peak and with the power-law behaviors below [$T_{\rm c}$]{}; in both cases, the (quasi-)line nodes appear on the [$a_{\rm 1g}$-Fermi surface]{}. In the case of the $p$-wave pairing, the time-reversal symmetry observed in $\mu$SR indicates a $d$-vector parallel to the plane, namely $\hat{d}=p_{\rm x}\hat{x} \pm p_{\rm y}\hat{y}$ or $\hat{d}=p_{\rm x}\hat{y} \pm p_{\rm y}\hat{x}$. This direction of the $d$-vector is consistent with the recent measurements of the NMR Knight shift under a parallel field [@rf:kobayashi; @rf:ishidaprivate; @rf:zhengprivate] as well as with the macroscopic $H_{\rm c2}$, [@rf:chou; @rf:sasaki] if we assume that the $d$-vector is strongly fixed against the applied magnetic field. We note that a qualitatively different result has been obtained for the NMR Knight shift, [@rf:yoshimura] which is consistent with this pairing state if the $d$-vector is only weakly fixed against the magnetic field.
Although we have shown that the $d$-vector in Sr$_2$RuO$_4$ is very weakly fixed against the magnetic field, [@rf:yanaseRuSO] this is partly owing to the particular electronic structure of Sr$_2$RuO$_4$. Therefore, we expect that the anisotropy of the $d$-vector is larger in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. The symmetry breaking interaction leading to the anisotropy arises from the second order term with respect to the spin-orbit interaction in Sr$_2$RuO$_4$, while it arises from the first order term in the case of [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. Therefore, it is possible that the $d$-vector is strongly fixed against the magnetic field in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}. A quantitative estimation of the anisotropy will be one of the interesting future issues.
On the other hand, the possibility of spin singlet superconductivity has not been ruled out so far. In that case, the absence of time-reversal symmetry breaking will be an issue to be resolved for the $d$-wave pairing, because the $d_{\rm x^{2}-y^{2}} \pm {\rm i} d_{\rm xy}$ state is expected so as to gain the condensation energy. The local distortion of the triangular lattice or the feedback effect will be a candidate for the resolution. It seems that the $i$-wave superconductivity [@rf:kurokiprivate] is consistent with the present experimental results except for the very weak impurity effects; [@rf:yokoi] however, a microscopic mechanism leading to pairing with $T_{\rm c}=5$ K will be difficult to obtain for such a high angular momentum state. In our study, we have not found a stable $i$-wave state. Although the observed impurity effect seems to support the $s$-wave pairing, which is robust against disorder, a very short quasi-particle lifetime or a significant anisotropy of the gap function has to be assumed to explain the absence of a coherence peak in $1/T_{1}T$. We consider that further vigorous investigations are highly desired for the identification of the pairing state in [${\rm Na_{x}Co_{}O_{2}} \cdot y{\rm H}_{2}{\rm O}$]{}.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors are grateful to Y. Ihara, K. Ishida, M. Kato, Y. Kitaoka, K. Kuroki, Y. Kobayashi, C. Michioka, M. Sato, Y. Tanaka, Y. J. Uemura and G-q. Zheng for fruitful discussions. Numerical computation in this work was partly carried out at the Yukawa Institute Computer Facility. The present work was partly supported by a Grant-In-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture, Japan.
---
author:
- 'B. W. Holwerda[^1], R. J. Allen, W. J. G. de Blok, A. Bouchard, R. A. González-Lópezlira, P. C. van der Kruit, and A. Leroy'
subtitle: Dust and Gas Surface Densities
title: 'The Opacity of Spiral Galaxy Disks IX;'
---
Introduction
============
The radio 21-cm emission of atomic hydrogen (H I) observed in the disks of spiral galaxies is a powerful tracer of the presence and dynamics of the interstellar medium (ISM), extending to well outside the typical scale of the stellar disk. Its origin is likely a mix of “primordial” [@Fall80] or recently accreted material [@Sancisi08], recycled matter [ejecta raining back onto the disk; e.g., @Oosterloo07], and skins of photo-dissociated material surrounding molecular clouds [@Allen04]. The other components of the ISM, ionised and molecular hydrogen, metals and dust, are all more difficult to trace, because their emission strengths depend on the local degree of excitation, which in turn is affected by particle densities and temperatures, photon densities, and stellar and AGN illumination.
Molecular hydrogen is usually traced with CO(J=1-0 or 2-1) line emission, from which we have derived our knowledge of the molecular clouds in nearby spirals [e.g., @Rosolowsky05a; @Leroy08]. However, it remains an open question how sensitive the CO brightness is to the local volume density and temperature of the ISM, and with what accuracy observations of CO surface brightness can be converted into H$_2$ column densities and ultimately into molecular cloud masses. This conversion is also likely to depend on metallicity and hence galactocentric radius [@Madden97; @Israel97; @Leroy07; @Pohlen10; @Leroy11; @Foyle12].
Nevertheless, a successful and extensive description of the atomic and molecular ISM in spirals and their relation to the star-formation rate is currently being developed, using a multi-wavelength approach to estimate the star-formation rate, and high-resolution H I and CO observations to characterize the ISM in individual galaxies [@Calzetti05; @Kennicutt07; @Thilker07a; @Bendo10b; @Foyle12], in detail in small samples of galaxies [@Cortese06a; @Boissier07; @Bigiel08; @Leroy08; @Schruba11], or in a generalized way over a population of galaxies [e.g., @Kennicutt98; @Buat02; @Bell03c; @Kannappan04; @West10a; @Catinella10; @Fabello11a]. Star formation occurs when the combined ISM exceeds a threshold surface density [although the exact threshold is still debated, see e.g., @Bigiel08; @Pflamm-Altenburg08]. The ratio between the molecular and neutral ISM is set by the hydrostatic pressure [@Bigiel08; @Leroy08; @Obreschkow09c]. Observational models of the role of photo-dissociation in the balance between atomic and molecular hydrogen have also made steady progress [@Allen97; @Allen04; @Smith00; @Heiner08a; @Heiner08b; @Heiner09; @Heiner10].
As an alternative to CO, one could use interstellar dust as a tracer of the molecular component in spiral galaxies, since it is linked mechanically to the molecular phase [@Allen86; @Weingartner01b], by mutual shielding from photo-dissociation, and by the formation of molecular hydrogen on the surfaces of dust grains [e.g., @Cazaux04b]. Interstellar dust can be traced by its emission or by its extinction of starlight.
Surface densities of dust in spirals have been obtained from spectral energy distribution models of multi-wavelength data [e.g., @Popescu00; @Popescu02; @Draine07; @Boselli10], from simple (modified) blackbody fits of far-infrared and sub-mm data [@Bendo08; @Bendo10b; @Gordon08; @Gordon10] or FUV/FIR ratios [@Boissier04; @Boissier05; @Boissier07; @Munoz-Mateos11]. The aim is to estimate the typical temperature, mass, composition and emissivity of the dust, and the implied gas-to-dust ratio [@Boissier04; @Boselli10; @Munoz-Mateos09a; @Munoz-Mateos09b; @Pohlen10; @Smith10b; @Roman-Duval10; @Magrini11b; @Galliano11; @Foyle12; @Galametz12].
The most recent [*Herschel*]{} results include a resolved temperature gradient in the disks of spirals [@Bendo10b; @Smith10b; @Engelbracht10; @Pohlen10; @Foyle12], linked to increased illumination of the grains, notably in the spiral arms [@Bendo10b] and the bulge [@Engelbracht10]. With sufficient spatial sampling, one can extract the ISM power spectrum, but with [*Herschel*]{} this is only possible for Local Group galaxies [@Combes12]. Based on [*Herschel*]{} data of the Virgo cluster, [@Smith10b], [@Cortese10] and [@Magrini11b] show the spatial coincidence and efficiency of stripping the dust together with the H I from the disks of spirals in a cluster environment.
In the comparison between the [*Herschel*]{} cold grain emission and the H I and CO observations, the mass-opacity coefficient of dust grains appears to be too low in M33 [the inner disk, @Braine10], and in M99 and M100 [@Eales10]. This is either because (1) its value is not well understood, (2) the conversion factor between CO and molecular hydrogen, X$_{\rm CO}$, is different in M99 and M100, or (3) the emissivity ($\beta$) is different at sub-mm wavelengths. [@Roman-Duval10] compare CO, H I, and dust in the Large Magellanic Cloud (LMC), and argue that the cause of the discrepancy cannot be a different emissivity, nor a different gas-to-dust ratio, but that CO clouds have H$_2$ envelopes, hence X$_{\rm CO}$ changes with different density environments [an explanation also favored by @Wolfire10]. Other recent results seem to back variations in X$_{\rm CO}$; [@Leroy11] find a link between X$_{\rm CO}$ and metallicity based on SED models of a few Local Group galaxies and the HERACLES CO survey. A solid result from the first [*Herschel*]{} observations is that the gas-to-dust ratio increases with galactocentric radius [@Pohlen10; @Smith10b], as do studies based on [*Spitzer*]{} data alone [@Munoz-Mateos09b; @Bendo10a]. [@Magrini11b] find a CO-to-H$_2$ conversion factor much lower than the Galactic one, based on the relation between the metallicity and gas-to-dust ratio radial profiles of several Virgo cluster spirals.
Even with the excellent wavelength coverage of [*Herschel*]{}, the SED fit results remain degenerate between dust mass, temperature and emissivity [see the reviews in @Calzetti01; @Draine03]. It remains especially difficult to distinguish a mass of very cold (poorly illuminated) dust from dust with much different emissivity characteristics (the emissivity efficiency depends on wavelength as $\lambda^{-\beta}$ in the sub-mm regime with $\beta \ne 2$, which may be typical for very large grains).
While large masses of extremely cold dust can be ruled out with increasing confidence, the level of illumination of the grains by the interstellar radiation field remains a fully free parameter in the SED models. The main uncertainty is the complex relative geometry between the dusty filamentary structures and the illuminating stars. Both the grain emissivity and the dust/star geometry can be expected to change significantly throughout the disk, i.e., with galactocentric radius or in a spiral arm.
Alternatively to models of dust emission, one can use the absorption of stellar light to trace dust densities. The advantages are higher spatial resolution of optical wavelengths and an independence of dust temperature. However, one needs a known background source of stellar light to measure the transparency of a spiral disk[^2]. Two observational techniques have been developed to measure the opacity of spirals and consequently their dust content. The first one uses occulting galaxy pairs [@Andredakis92; @Berlind97; @kw99a; @kw00a; @kw00b; @kw01a; @kw01b; @Elmegreen01; @Holwerda07c; @Holwerda09; @Keel11 Holwerda et al. [*submitted.*]{}], of which an increasing number are now known thanks to the Sloan Digital Sky Survey and the GalaxyZOO citizen science project [@Lintott08].
The second method uses the number of distant galaxies seen through the disk of a nearby face-on spiral, preferably in Hubble Space Telescope ([*HST*]{}) images. The latter technique is the focus of our “Opacity of Spiral Galaxies” series of papers [@Gonzalez98; @Gonzalez03; @Holwerda05a; @Holwerda05b; @Holwerda05c; @Holwerda05e; @Holwerda05d; @Holwerda07a].[^3] The benefit of using distant galaxies as the background light source is their ubiquity in HST images of nearby galaxies. Now that uniform H I maps are available from the THINGS project [The H I Nearby Galaxy Survey, @Walter08], as well as public [*Herschel*]{} data from the KINGFISH project [Key Insights on Nearby Galaxies: a Far-Infrared Survey with Herschel, @Skibba11; @Dale12; @Kennicutt11; @Galametz12], and CO(J=2-1) maps from the HERACLES survey [The HERA CO Line Extragalactic Survey, @heracles] for a sub-sample of the galaxies analysed in our “Opacity of Spiral Galaxies” project, we take the opportunity to compare our disk opacities to the H I and H$_2$ surface densities to see how they relate.
Our method of determining dust surface densities is certainly not without its own uncertainties (notably cosmic variance, see §\[s:sfm\]), but these are not the ones sub-mm emission suffers from (grain emissivity, level of stellar illumination, and the variance of these within the disk). Hence, our comparison between the disk opacity and the other tracers of the cold ISM serves as an independent check on the new [*Herschel*]{} results.
In section 2, we discuss the origin of our sample and data. Section 3 explains how we derive a disk opacity from the number of distant galaxies. In section 4, we discuss the number of distant galaxies as a function of the local H I column density, and in section 5 we compare the H I and H$_2$ column densities to the dust extinction, averaged over whole WFPC2 fields. Sections 6 and 7 contain our discussion and conclusions.
Galaxy Sample and Data
======================
Our present sample is the overlap between the [@Holwerda05b] sample and the THINGS [@Walter08] and HERACLES [@heracles] projects. The 10 disk galaxies in common are listed in Table \[t:info\]. We use the public THINGS data and early science release data from HERACLES.
Figure \[f:himap\] shows the HST/WFPC2 “footprints” overlaid on the VLA HI maps. In the case of NGC 3621 and NGC 5194, there are two HST/WFPC2 fields available for each galaxy.
VLA 21-cm Line Observations {#s:hi}
---------------------------
For this study we use the THINGS [The H I Nearby Galaxy Survey, @Walter08] robustly-weighted (RO) integrated H I total intensity maps (available from <http://www.mpia-hd.mpg.de/THINGS/>). The maps were obtained with the VLA, and converted to H I surface density using the prescription from [@Walter08], equations 1 and 5, and Table 3. Although the naturally-weighted maps are markedly more sensitive to the largest-scale distribution, the robust maps have the highest angular resolution.
The robust maps are better suited for a direct comparison with the number of background galaxies, as we are interested in the H I column density at the position of each background galaxy, and hence at scales smaller than the HST/WFPC2 field-of-view (3 CCDs of $1\farcm3 \times 1\farcm3$). Additionally, we use the WFPC2 footprint as an aperture on the H I maps (Figure \[f:himap\]).[^4] The H I column densities averaged over the WFPC2 footprints (an angular scale of $2\farcm3$) on the sample galaxies, expressed in units of ${\rm M_\odot/pc^2}$, are listed in Table \[t:info\]. These mean column densities include a correction factor (1.36) for the helium contribution to the atomic gas phase.
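The scaling from integrated 21-cm intensity to atomic-gas surface density can be sketched as follows. This is a minimal illustration using the standard optically thin relation $N_{\rm HI} = 1.823 \times 10^{18}\, I_{\rm 21cm}$ and generic physical constants, not the exact beam-dependent expressions of [@Walter08]:

```python
# Minimal sketch: integrated 21-cm intensity (K km/s) -> atomic-gas
# surface density (M_sun/pc^2), assuming the standard optically thin
# relation N_HI = 1.823e18 * I_21cm (atoms cm^-2). The THINGS maps
# require the beam-dependent prescription of Walter et al. (2008);
# this only reproduces the final physical scaling.
M_SUN_G = 1.989e33   # g per solar mass
PC_CM = 3.086e18     # cm per parsec
M_H_G = 1.674e-24    # g, mass of a hydrogen atom

def hi_surface_density(i_21cm, include_helium=True):
    """i_21cm: integrated 21-cm intensity in K km/s -> M_sun/pc^2."""
    n_hi = 1.823e18 * i_21cm                     # H atoms cm^-2
    sigma = n_hi * M_H_G * PC_CM**2 / M_SUN_G    # M_sun pc^-2 (H only)
    return 1.36 * sigma if include_helium else sigma
```

For $I_{\rm 21cm} = 100$ K km/s this gives $\rm \Sigma_{HI} \approx 1.5\ M_\odot/pc^2$, or $\approx 2.0$ with the helium factor of 1.36, matching the commonly quoted $0.020\ \rm M_\odot\,pc^{-2}$ per K km/s for helium-corrected atomic gas.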
![The THINGS robustly weighted integrated H I column density maps. The HST/WFPC2 footprint is overlaid (black outline). NGC 3621 and NGC 5194 have two HST pointings each. A 3 arcminute ruler is shown for scale comparison. Most of the WFPC2 fields in [@Holwerda05] were originally taken for the Cepheid Distance Scale Key Project [@KeyProject]; they were positioned on spiral arms in the outer, less crowded, parts of the disks to aid in the identification of Cepheid variables. NGC 3031 and NGC 3621 do not have CO observations.[]{data-label="f:himap"}](./holwerda_f1.pdf){width="\textwidth"}
\[t:info\]
----------- ------- ---------- ------------ ------------------- ------------ ---------- ------------------- ------------ ------------------------ ----------------- ----------- --------- ------ ------- ------- -------
Galaxy Dist. $R_{25}$ $R/R_{25}$ $\rm \Sigma_{HI}$ $\rm FWHM$ $L_{CO}$ $\rm \Sigma_{H2}$ $\rm FWHM$ $\rm A_{SFM}$ $\rm \Sigma_d$ $\rm FOV$ log(OH) + 12
() (CO)
(Mpc) (kpc) ($\rm M_\odot$ (kpc) (K ($\rm M_\odot$ (kpc) (mag.) ($\rm M_\odot $ (kpc) KK04 PT05
$\rm / pc^2$) km/s) $\rm / pc^2$) $\rm / pc^2$)
(1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) (14) (15) (16) (17)
NGC925 11.20 5.61 0.50 17.92 0.33 0.23 1.83 0.60 $ -0.4^{ 0.3}_{ 0.3}$ -0.44 4.22 8.20 8.70 -3.49 -3.50 -3.54
NGC2841 14.10 4.07 0.61 8.98 0.41 1.00 7.85 0.75 $ 0.8^{ 0.4}_{ 0.5}$ 0.88 5.32 8.40 9.20 -3.17 -3.27 -3.39
NGC3031 3.60 13.46 0.25 4.47 0.10 … … 0.19 $ 0.8^{ 0.6}_{ 0.5}$ 0.88 1.36 8.50 9.10 -4.05 -4.16 -4.22
NGC3198 13.80 4.26 0.62 18.15 0.40 0.31 2.43 0.74 $ 0.8^{ 0.3}_{ 0.3}$ 0.88 5.20 8.30 8.80 -3.60 -3.63 -3.69
NGC3351 10.00 3.71 0.51 3.52 0.29 1.04 8.19 0.53 $ 1.2^{ 0.5}_{ 0.6}$ 1.32 3.77 8.60 9.20 -3.05 -3.18 -3.33
NGC3621-1 6.64 6.15 0.37 18.80 0.19 … … 0.35 $ 2.2^{ 0.6}_{ 0.6}$ 2.42 2.50 8.30 8.90 -3.25 -3.34 -3.41
NGC3621-2 6.64 6.15 0.38 12.72 0.19 … … 0.35 $ 1.0^{ 0.3}_{ 0.4}$ 1.10 2.50 8.30 8.90 -3.25 -3.54 -3.63
NGC3627 10.05 4.56 0.52 9.15 0.29 3.93 30.98 0.54 $ 2.1^{ 0.7}_{ 0.7}$ 2.31 3.79 … … -2.67 -2.86 -3.01
NGC5194-1 8.40 5.61 0.62 8.97 0.24 3.38 26.66 0.45 $ -0.4^{ 0.4}_{ 0.4}$ -0.44 3.17 8.50 9.10 -2.88 -3.05 -3.21
NGC5194-2 8.41 5.61 0.63 10.11 0.24 4.47 35.26 0.45 $ 1.4^{ 0.6}_{ 0.6}$ 1.54 3.17 8.50 9.10 -2.84 -3.01 -3.14
NGC6946 11.48 5.74 0.75 9.44 0.33 2.12 16.69 0.61 $ 1.1^{ 0.5}_{ 0.6}$ 1.21 4.33 8.30 8.90 -3.28 -3.44 -3.60
NGC7331 14.72 5.24 0.76 21.53 0.43 0.26 2.06 0.78 $ 0.3^{ 0.3}_{ 0.3}$ 0.33 5.55 8.30 8.80 -3.57 -3.61 -3.69
----------- ------- ---------- ------------ ------------------- ------------ ---------- ------------------- ------------ ------------------------ ----------------- ----------- --------- ------ ------- ------- -------
CO(J = 2 $\rightarrow$ 1) Line Observations {#s:co}
-------------------------------------------
The HERACLES project [The HERA CO Line Extragalactic Survey, @heracles; @Walter09] is a project on the IRAM 30m telescope to map the molecular gas over the entire optical disks ($R_{25}$) of 40 nearby galaxies via the CO(J=2-1) emission line. The HERA instrument has comparable spatial (11") and velocity (2.6 km/s) resolutions to the THINGS survey, and good sensitivity ($3 \sigma \approx 3 \rm M_\odot / pc^2$) as well. The HERACLES sample overlaps by design with the THINGS and SINGS [[*Spitzer*]{} Infrared Nearby Galaxy Survey, @SINGS] samples and it also has 8 galaxies in common with our previous work (Table \[t:info\]).
To convert the CO(J=2-1) maps to molecular hydrogen surface density maps, we need the CO-to-H$_2$ conversion factor, X$_{\rm CO}$ (alternatively denoted $\rm \alpha_{CO}$). For the CO(J=1-0) line, this is commonly assumed to be 4.4 $\rm M_\odot\,pc^{-2}\,(K\,km\,s^{-1})^{-1}$. The ratio between the CO(J=2-1) and CO(J=1-0) lines is 0.7 according to the HERACLES observations. To convert the CO(J=2-1) map (in K km/s) into molecular surface density, $\rm X_{CO (2-1)} = 4.4/0.7 = 6.3\ M_\odot\,pc^{-2}\,(K\,km\,s^{-1})^{-1}$ [@Leroy08]. The mean values of the CO(J=2-1) surface brightness and the molecular hydrogen surface density are listed in Table \[t:info\].
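As a sketch, the conversion described above amounts to a single multiplicative factor, with the values as quoted in the text:

```python
# Sketch of the CO(2-1) -> H2 conversion described above:
# Sigma_H2 = X_CO(2-1) * I_CO(2-1), with X_CO(2-1) = 4.4 / 0.7.
X_CO_10 = 4.4   # M_sun pc^-2 (K km/s)^-1, Galactic CO(1-0) factor
R_21 = 0.7      # CO(2-1)/CO(1-0) line ratio (HERACLES)

def h2_surface_density(i_co21):
    """i_co21: CO(2-1) integrated intensity in K km/s -> M_sun/pc^2."""
    return (X_CO_10 / R_21) * i_co21
```

One K km/s of CO(2-1) emission thus corresponds to $\approx 6.3\ \rm M_\odot/pc^2$ of molecular gas, with all the caveats on X$_{\rm CO}$ discussed in the Introduction.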
HST/WFPC2 Images {#s:hst}
----------------
The background galaxy counts are based on HST/WFPC2 data, as presented in [@Holwerda05] and [@Holwerda05b]. The footprints of the 12 HST/WFPC2 fields on the integrated maps of 10 THINGS galaxies are shown in Figure \[f:himap\] and we only consider these areas of the disks. The HST fields are predominantly from the Distance Scale Key Project [@KeyProject], and are therefore usually aimed at spiral arms in the outer parts of the main disks, in order to facilitate the identification of Cepheids. The final drizzled WFPC2 images in $F814W$ and $F555W$, from [@Holwerda05], can be obtained at <http://archive.stsci.edu/prepds/sgal/> and the NASA Extragalactic Database.[^5]
Disk Opacity from the Number of Background Galaxies. {#s:sfm}
====================================================
The central premise of our method to measure disk opacity is that the reduction in the number of distant galaxies seen through a foreground spiral galaxy is a reasonable indication of the transparency of the disk. The number of distant galaxies that can be identified is a function of several factors: the actual number of galaxies behind the disk; the crowding by objects in the foreground disk, and the consequent confusion in the identification of the distant galaxies; and, finally, the absorption of the light from the background galaxies by the interstellar dust in the foreground disk. Since we are only interested in the last factor, the dust extinction, all the other factors need to be mitigated and accounted for. [*HST*]{} provides the superb resolution needed to identify many distant galaxies, even in the quite crowded fields of nearby spiral galaxies. But to fully calibrate for crowding and confusion, we developed the “Synthetic Field Method” (SFM), in essence a series of artificial galaxy counts under the same conditions as the science field [@Gonzalez98; @Holwerda05a].
If we identify $N$ galaxies in a field, we need to know two quantities to convert this number into a disk opacity measurement: (1) the number ($N_0$) of galaxies we would have identified in this field, without any dust extinction but under the same crowding and confusion conditions, and (2) the dependence ($C$) of the number of galaxies on any increase of dust extinction. The disk’s opacity in $F814W$ is then expressed as: $$\label{eq1}
A_I = -2.5~ C~ \log \left({N \over N_0}\right).$$
If the number of identified galaxies behaved exactly as photons, the parameter $C$ would be unity. We have found it to be close to 1.2 for a typical field, and $N_0$ to depend on the surface brightness and granularity of the foreground disk [@Gonzalez03; @Holwerda05e]. From our artificial distant galaxy counts in the WFPC2 fields, we can obtain both $N_0$ and $C$; the first from an artificial count of seeded, undimmed, distant galaxies, and the second from a series of artificial distant galaxy counts with progressive dimming of the seeded galaxies.
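Equation \[eq1\] and the role of $C$ can be illustrated with a short sketch; $C = 1.2$ is the typical value quoted above ($C = 1$ if counts behaved like photons):

```python
# Sketch of equation (1): disk opacity from the counts of identified
# (N) and expected (N_0) distant galaxies. c = 1.2 is the typical
# value quoted in the text; c = 1 if counts behaved like photons.
import math

def sfm_opacity(n_found, n_expected, c=1.2):
    """A_I in magnitudes from the Synthetic Field Method counts."""
    return -2.5 * c * math.log10(n_found / n_expected)
```

Finding half the expected number of galaxies ($N/N_0 = 0.5$) with $C = 1.2$ implies $A_I \approx 0.9$ mag, comparable to the typical values in Table \[t:info\].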
Since we cannot know the intrinsic number of distant galaxies behind the foreground disk, we treat the cosmic variance as a source of uncertainty in $N_0$ that can be estimated from the observed 2-point correlation function. This typically is of the same order as the Poisson error in the opacity measurement.[^6] Because the cosmic variance uncertainty is substantial, improvements in the identification of distant galaxies barely improve our errors [see also @mythesis].
To test the general SFM results, we have performed several checks against other techniques. The results are consistent with those obtained from occulting galaxy pairs [@Holwerda05b], both the results in [@kw00a; @kw00b] and the later opacities found in [@Holwerda07c]. The SFM results are also consistent with the amount of dust reddening observed for the Cepheids in these fields [the majority of which are from the Cepheid Distance Scale Key Project, @KeyProject], the dust surface densities inferred from the far-infrared SED [@Holwerda07a discussed below], and the sub-mm fluxes from KINGFISH observations (§\[s:herschel\], below). Even with HST, the number of identifiable galaxies in a given WFPC2 field is relatively small, a fact that results in large uncertainties if the field is further segmented for its analysis, e.g., sub-divided into arm and inter-arm regions. To combat the large uncertainties, we combined the numbers of background galaxies found in different fields, based on certain characteristics of the foreground disks, such as galactocentric radius, location in the arm or inter-arm regions [@Holwerda05b], surface brightness [@Holwerda05d], or NIR colour [@Holwerda07b]. Because no uniform H I and CO maps were available until now, we compared H I radial profiles to our radial opacity profile in [@Holwerda05c], but this is far from ideal. Now that the THINGS and HERACLES maps are available, we can compare the average opacity of an [*HST*]{} field to its mean H I and H$_2$ surface densities or, alternatively, rank the distant galaxies based on the foreground disk’s H I column density at their position.
Dust Surface Densities {#s:sigmad}
----------------------
To convert the above opacity of the spiral disk to a dust surface density, we assume a smooth surface density distribution of the dust (no clumps or fine structure). The dust surface density is then: $$\Sigma_{\rm d} = {1.086 A \over \kappa_{\rm abs}},$$ with $\kappa_{\rm abs}$ for Johnson $I$ from [@Draine03], Table 4: $4.73 \times 10^3$ cm$^2$ g$^{-1}$. The mean opacity ($A_{\rm SFM}$) and the implied mean dust surface densities are listed in Table \[t:info\]. The value of $\kappa_{\rm abs}$ changes with the type of grain (and hence with environment in the disk), and the [@Draine03] value is typical for large grains. Variance in $\kappa_{\rm abs}$ is not unusual, depending on the prevailing composition of the dust.
The screen approximation used to estimate the surface density is common, but in fact the dusty ISM is clumped and filamentary in nature, with a wide range of densities and temperatures. Typically, the distant galaxies are seen in gaps between the dusty clouds [@Holwerda07b]. The typical value of $A_I \sim 1$ (Figure \[f:sighi-A\]) corresponds to a surface covering factor of 60%, if the clouds were completely opaque. In reality, the disk opacity is a mix of covering factor and the mean extinction of the clouds [on average $\tau_{\rm cloud} = 0.4$ and cloud size 60 pc, @Holwerda07a]. We note that our mass estimates agree, to within a factor of two, with those from a fit of the [@Draine01b] model to the [*Spitzer*]{} fluxes [@Holwerda07a Figure \[f:draine\]]. [@Draine07] note that the addition of sub-mm information to such a fit may modify the dust mass estimate by a factor of 1.5 or less. Thus, while there is certainly a range of dust densities in each field, we are confident that the estimate from the above expression is a reasonable [*mean*]{} surface density.
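For illustration, the screen-approximation conversion above, with the [@Draine03] value of $\kappa_{\rm abs}$ and standard unit conversions, works out to about $1.1\ \rm M_\odot/pc^2$ per magnitude of $A_I$, consistent with the $A_{\rm SFM}$ and $\Sigma_d$ columns of Table \[t:info\]:

```python
# Sketch of the screen-approximation conversion: Sigma_d =
# 1.086 A / kappa_abs, with kappa_abs = 4.73e3 cm^2/g (Draine 2003,
# Johnson I), converted from g/cm^2 to M_sun/pc^2.
M_SUN_G = 1.989e33   # g per solar mass
PC_CM = 3.086e18     # cm per parsec
KAPPA_I = 4.73e3     # cm^2 g^-1, I-band mass-absorption coefficient

def dust_surface_density(a_i):
    """a_i: disk opacity in magnitudes -> Sigma_d in M_sun/pc^2."""
    sigma_cgs = 1.086 * a_i / KAPPA_I       # g cm^-2
    return sigma_cgs * PC_CM**2 / M_SUN_G
```

For example, $A_I = 0.8$ mag maps to $\Sigma_{\rm d} \approx 0.88\ \rm M_\odot/pc^2$, as for NGC 2841 in Table \[t:info\].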
Herschel-SPIRE Surface Brightness {#s:herschel}
---------------------------------
Sub-mm data for all our galaxies are available at the [*Herschel*]{} Science Archive[^7], the majority taken for the KINGFISH project[^8]. We therefore check the reliability of the SFM as a tracer of the dust surface density by directly comparing the surface brightness measured by the Spectral and Photometric Imaging REceiver [SPIRE, @Griffin10] onboard [*Herschel*]{}, to the opacity as measured by the SFM.
We used the WFPC2 field-of-view as the aperture to measure the fluxes at 250, 350, and 500 $\mu$m (listed in Table \[t:info\]), similar to our measurements of the surface density in the H I and CO data (§\[s:hi\] and \[s:co\]). These were not aperture corrected because of the unique shape of the aperture. Figure \[f:herschel\] shows the [*Herschel*]{} surface brightnesses versus the SFM opacities for all three wavebands (the 250 and 500 $\mu$m values are the end points of the horizontal bars). To convert the flux in a [*Herschel-SPIRE*]{} waveband into a dust surface density, one would need both a typical dust temperature, or a temperature distribution, and the dust’s emissivity. The horizontal bars indicate there is a range of mean temperatures in these disks.
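The temperature dependence mentioned above can be sketched with a modified blackbody, $S_\nu \propto \nu^\beta B_\nu(T)\,\Sigma_{\rm d}$: the 250/500 $\mu$m surface-brightness ratio constrains $T$ only for an assumed emissivity index. Here $\beta = 2$ is an assumed value, not one fitted in this paper:

```python
# Sketch of the temperature/flux degeneracy: for a modified blackbody
# S_nu ~ nu^beta * B_nu(T), the 250/500 micron surface-brightness
# ratio fixes T for an assumed emissivity index beta (beta = 2 here
# is an assumption). We invert the monotonic color-T relation by
# bisection.
import math

H_PLANCK, K_B, C_LIGHT = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, t):
    """Planck function B_nu(T), up to a constant that cancels in ratios."""
    return nu**3 / math.expm1(H_PLANCK * nu / (K_B * t))

def color_ratio(t, beta=2.0, lam1=250e-6, lam2=500e-6):
    """Modified-blackbody surface-brightness ratio S(lam1)/S(lam2)."""
    nu1, nu2 = C_LIGHT / lam1, C_LIGHT / lam2
    return (nu1 / nu2) ** beta * planck(nu1, t) / planck(nu2, t)

def temperature_from_ratio(ratio, beta=2.0, lo=5.0, hi=100.0):
    """Recover T (K) from an observed 250/500 micron ratio by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if color_ratio(mid, beta) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A wider horizontal bar in Figure \[f:herschel\] (a larger 250/500 $\mu$m ratio) then corresponds to a higher mean dust temperature, for fixed $\beta$.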
There is a linear relation between the [*Herschel-SPIRE*]{} surface brightnesses and the SFM opacities. The scatter is much less for this relation than between the SFM dust surface density values and those inferred from far-infrared SED models [@Holwerda07a Figure \[f:draine\]]. Hence, we conclude that the SFM opacities are a reasonable indicator for mean dust surface density.
As a qualitative check, we compare the dust surface densities derived for a subset of the KINGFISH sample by [@Galametz12], their Figure A1, to those derived above. Typical mid-disk values for the overlap (NGC 3351, NGC 3521 and NGC 3627), where the WFPC2 images are located, are $\sim0.3 M_\odot/pc^2$, which appear to be typical [i.e., similar to those in @Foyle12]. These values lie a factor of two below the ones implied by the SFM (Table \[t:info\]), regardless of the dust emissivity used in the [@Galametz12] fits, but the difference is greater for fits where the emissivity is a free parameter. We found similar dust surface densities from the SED model in [@Holwerda07a], based on the [*Spitzer*]{} fluxes alone (Figure \[f:draine\]). Because all these models are based on the [@Draine07] model, we made a second check using the [magphys]{} SED model. Those dust surface densities are up to a factor of ten below the SFM or Draine et al. estimates. These fits illustrate the importance of the choice of model compared to the inclusion of sub-mm data.
![The Herschel/SPIRE 350 $\mu$m mean surface brightnesses in the WFPC2 field-of-view. Horizontal bars mark the 250 (left) and 500 (right) $\mu$m fluxes in the same field. The width of the horizontal bar is indicative of the mean temperature of the dust in each disk (a wide bar points to higher mean temperature). Variance around the mean surface brightness in each band is substantial due to both Poisson noise and structure in the galaxy disk. The lowest surface brightness point is NGC 3031, the closest galaxy in our sample. This field is right on the edge of the ISM disk (Figure \[f:himap\]) and therefore suffers the most from uncertainties due to internal structure and aperture correction. []{data-label="f:herschel"}](./holwerda_f2.pdf){width="50.00000%"}
![The dust surface density inferred by the SED model from [@Draine07] based on Spitzer fluxes [presented earlier in @Holwerda07a] compared to those from the SFM. There is at most a factor two difference between these, consistent with the lack of sub-mm information in these initial fits. Dashed line is the line of equality. []{data-label="f:draine"}](./holwerda_f2a.pdf){width="50.00000%"}
H I Column Density and the Number of Distant Galaxies {#s:hicounts}
==================================================
To improve statistics, one of our tactics has been to stack the numbers of galaxies in our fields according to a local characteristic (surface brightness, galactocentric radius, etc.). Here we combine the numbers of background galaxies, both real and artificial, based on the H I column density at their respective positions. If there is a relation between disk opacity and the H I column density resolved in the THINGS RO maps, it should show as a preference of the real distant galaxies for a specific column density, for example for lower values of $\rm \Sigma_{HI}$. The artificial galaxies would not prefer any column density value in particular.
![[**Top:**]{} histogram of real (hatched) and artificial galaxies, $N$ and $N_0$ respectively, as a function of H I surface density, $\rm \Sigma_{HI}$. Because all the WFPC2 fields were chosen on spiral arms at the edge of the optical disks, the range of $\rm \Sigma_{HI}$ is limited. [**Bottom:**]{} inferred opacity ($A_I$) as a function of H I surface density. The dashed line is the relation from [@Bohlin78] for the Galactic total (H I+H$_2$) gas-to-dust ratio. []{data-label="f:na"}](./holwerda_f3.pdf){width="50.00000%"}
The top panel in Figure \[f:na\] shows the distribution histogram of real (hatched) and artificial (solid) galaxies observed, as a function of foreground galaxy HI column density. The bottom panel converts the ratio of real and artificial galaxies found at a given column density into an opacity, using equation \[eq1\] with $C$ equal to 1.2. The real distant galaxies identified in the HST images do not show a clear preference for a certain column density. Their distribution is very similar to that of the artificial distant background galaxies. As a result, the inferred opacity is constant with column density. In our opinion, this lack of a relation can either be: (1) real, pointing to a break-down in the spatial relation between HI and dust on scales of 6$^{\prime\prime}$ (corresponding to $\sim0.5$ kpc in our galaxies); or (2) an artifact of stacking results from different fields at various galactocentric radii in different foreground galaxies at diverse distances. We note, however, that the deviation from the [@Bohlin78] relation between column density and extinction (dashed line in the bottom panel) is strongest for the lowest column densities, where our statistics are the most robust. In our opinion, this suggests that one needs to compare to the total hydrogen column density, including the molecular component[^9].
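The count-to-opacity conversion in the bottom panel can be sketched numerically. The exact form of equation \[eq1\] is not reproduced in this section; the sketch below assumes the standard Synthetic Field Method relation $A = -2.5\,C\,\log_{10}(N/N_0)$ with the calibration factor $C = 1.2$ quoted in the text (the function name is ours):

```python
import math

def sfm_opacity(n_real, n_artificial, c=1.2):
    """Convert the number of real background galaxies (N) and the number
    recovered in artificial (synthetic) fields (N0) into a disk opacity
    in magnitudes, assuming A = -2.5 * C * log10(N / N0)."""
    return -2.5 * c * math.log10(n_real / n_artificial)

# Equal real and artificial counts imply a transparent disk (A_I = 0);
# finding fewer real than artificial galaxies implies positive opacity.
```

Finding *more* real than artificial galaxies in a bin yields a (formally unphysical) negative opacity, as happens for two of our fields.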
![image](./holwerda_f4a.pdf){width="32.00000%"} ![image](./holwerda_f4b.pdf){width="32.00000%"} ![image](./holwerda_f4c.pdf){width="32.00000%"}
Average Column Densities and Opacity per WFPC2 field {#s:wfpc2}
====================================================
Our second approach is to compare HI and H$_2$ column densities to disk opacity averaged over each WFPC2 field. Table \[t:info\] lists the average opacity value for each HST/WFPC2 field, and the HI and H$_2$ column densities averaged over the WFPC2 field-of-view (the footprints in Figure \[f:himap\]). The beams of the HI and H$_2$ observations are much smaller than the WFPC2 apertures and we expect any aperture correction to the surface densities to be small (Table \[t:info\])[^10]. Figure \[f:sighi-A\], left, plots the opacity versus HI surface density; there is no clear relation between the two when averaged over the size of a WFPC2 field. There are two negative values in our present sample \[and the entire [@Holwerda05] sample\], which are probably due to cosmic variance in the number of background galaxies (a background cluster). The opacity values and HI surface densities span a reasonable range for spiral galaxies. [@Cuillandre01] similarly find little relation between reddening and number of distant galaxies, on one side, and HI column density, on the other. Figure \[f:sighi-A\], middle panel, shows the relation between disk opacity and mean surface density of H$_2$, inferred from the CO observations. There are fewer useful points, as there are no CO data for three of our WFPC2 fields, and two of the WFPC2 fields show the aforementioned negative opacity. There could be a relation between the CO-inferred molecular surface density and opacity. Figure \[f:sighi-A\], right panel, shows the relation between disk opacity and mean surface density of total gas (HI+H$_2$). For those galaxies where no CO information was available, we use the mean HI surface density (open diamonds). For comparison, we show the canonical Galactic relation from Bohlin et al. Opacity appears mostly independent of total gas surface density, with the majority of our points lying above the Bohlin et al. relation. There is surprisingly little relation between the gas, total, molecular or atomic, and disk opacity.
In part this may be due to the different dust clumpiness in each disk, each of which is observed at a different distance. Alternatively, the metallicity, and implicitly the average galactocentric radius of each field, is the missing factor in the gas-dust relation in these fields.
![The ratio of dust surface density ($\rm \Sigma_{D}$) to either molecular ($\rm \Sigma_{H_2}$, top panel) or atomic ($\rm \Sigma_{HI}$, bottom panel) hydrogen surface density as a function of galactocentric radius, normalized to the 25 mag/arcsec$^2$ B-band isophotal radius [$R_{25}$ from @RC3]. Open circles are the negative disk opacities from NGC 925 and NGC 5194-1. The innermost three points from NGC 3031 and NGC 3621 do not have CO information.[]{data-label="f:sigsR"}](./holwerda_f56.pdf){width="50.00000%"}
One explanation for the lack of a relation in Figure \[f:sighi-A\] is that the measurements were taken at various galactocentric radii (and hence metallicities) in each disk. Figure \[f:sigsR\] plots the ratio between the dust surface density and the two phases of hydrogen, atomic and molecular, in $\rm M_\odot/pc^2$ (to facilitate direct comparison), as a function of radius, scaled to the 25 mag/arcsec$^2$ B-band isophotal radius ($R_{25}$) from [@RC3]. The relation with the atomic phase is consistent with a constant fraction of $\rm \Sigma_{D}/\Sigma_{HI} \sim 0.1$, with two exceptions at $R \sim 0.5 R_{25}$: NGC 3351 and NGC 3627. Both of these are small disks, of which the WFPC2 field covers a large fraction (see Figure \[f:himap\]), and both have prominent spiral arms. NGC 3627 is a member of the Leo Triplet and as such may also be a victim of atomic gas stripping or a tidally induced strong spiral pattern. The top panel in Figure \[f:sigsR\] shows the ratio between dust and molecular surface density, consistent with a constant fraction of $\rm \Sigma_{D}/\Sigma_{H2} \sim 0.75$, with one exception: NGC 3198, a very flocculent spiral with a much lower molecular surface density. There is little relation between the dust-to-atomic or dust-to-molecular ratio and radius. Exceptions seem to be associated either with strong spiral arm structure, in the case of HI, or with a very flocculent one, in the case of H$_2$.
![ The ratio between implied average dust surface density ($\rm \Sigma_d$), and the total hydrogen surface density ($\rm \Sigma_{HI+H_2}$) as a function of radius. The inner three points (open diamonds) do not have CO data. The dashed-dotted line is the ratio from Bohlin et al. (1978).[]{data-label="f:ratio"}](./holwerda_f7.pdf){width="50.00000%"}
By combining the surface densities of HI and H$_2$ into a single hydrogen surface density ($\rm \Sigma_{HI+H2}$), we can now directly compare the total dust-to-gas surface density ratio. In the cases where no CO observations are available (NGC 3031 and NGC 3621), we use the ratio with HI only. Figure \[f:ratio\] shows the dust-to-total-gas ratio as a function of radius. The anomalous ratios of NGC 3351, NGC 3627 and NGC 3198 in Figure \[f:sigsR\] now fall into line.
If we take the points without CO information (open diamond symbols) at face value (i.e., assume no molecular gas), Figure \[f:ratio\] suggests an exponential decline of the dust-to-total-gas ratio: ${\rm \Sigma_d / \Sigma_{HI+H2}} = 0.52 \times e^{- 4.0 {R / R_{25} }}$. A decline of the dust-to-total-gas ratio would be consistent with the relation with metallicity shown in [@Leroy11; @Sandstrom11], and with the trends with radius in the recent [*Spitzer*]{} [e.g., @Munoz-Mateos09b; @Bendo10a] and [*Herschel*]{} results [@Pohlen10; @Smith10b].
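As a quick numerical illustration of the quoted fit (a sketch using the paper's fitted numbers; the function name is ours), the exponential decline implies a ratio falling from 0.52 at the center to below 0.01 at $R_{25}$:

```python
import math

def dust_to_total_gas(r_over_r25, a=0.52, b=4.0):
    """Exponential fit quoted in the text:
    Sigma_d / Sigma_{HI+H2} = a * exp(-b * R / R25)."""
    return a * math.exp(-b * r_over_r25)

# The fit gives 0.52 at the center and ~0.0095 at R = R25,
# a decline of almost two orders of magnitude across the disk.
```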
However, if we exclude those points without CO information (open diamond symbols) and those with negative SFM measurements (open circles), Figure \[f:ratio\] is in agreement with a [*constant*]{} dust-to-gas ratio of $0.043 \pm 0.02$ (weighted mean). One can reasonably expect a much more substantial contribution by the molecular component in the inner disk, which would bring the three points without CO information into line with this constant fraction. This dust-to-total-gas fraction is approximately a factor of two above the typical value in the literature [$\sim$0.01-0.03, @Smith10b; @Leroy11] or the one from [@Bohlin78]. The fact that the ratio between dust and total gas surface density is nearly constant points to dust being present in both the diffuse HI disk and the denser molecular clouds.
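The constant ratio above is quoted as a weighted mean; one common convention, assumed here since the weighting scheme is not spelled out in this section, is the inverse-variance weighted mean of the per-field ratios:

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of measurements `values` with
    1-sigma uncertainties `sigmas`; returns (mean, uncertainty)."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, (1.0 / wsum) ** 0.5
```

Under this convention, fields with smaller uncertainties on their dust-to-gas ratio dominate the quoted mean.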
Discussion
==========
When compared to either phase of hydrogen in these disks, atomic or molecular, the dust surface density implied by the disk opacity mostly points to a constant ratio. Exceptions seem to point to a change in gas phase related to the strength of spiral arms in the WFPC2 field-of-view: a strong spiral density wave moves gas into the molecular phase, while a flocculent structure leaves it in the atomic one, a scenario consistent with the density wave origin of spiral structure. In our opinion this illustrates the need for a constraint on both gas phases in any comparison with dust surface density.
Our dust-to-total-gas ratio of 0.043 (Figure \[f:ratio\]) is higher than the values found, for example, in the Local Group spiral galaxies [the Milky Way, M31, and M33 in the case of @Leroy11], in a single Virgo spiral galaxy observed with [*Herschel*]{} [NGC 4501, @Smith10b], or the values found by [@Magrini11b]. These studies find the values closest to ours in the outskirts of the respective galaxy disks. There are several possible explanations for the high dust-to-gas ratio in our measurements: (1) we overestimated the dust surface density, (2) a substantial aperture correction of the CO and HI surface densities is needed, (3) for large portions of the disk, a different CO-to-H$_2$ conversion factor ($X_{\rm CO}$) is appropriate, or (4) a different absorption factor ($\kappa_{abs}$) is appropriate for a disk average. First, we are confident that our dust surface densities are unbiased and reasonably accurate because we checked them against several other observational techniques (Cepheid reddening, occulting galaxy results, [*Spitzer*]{} FIR SED fits). Our main assumption is that the dust is in a screen, which is a very rough approximation, especially when the probe used is the number of distant objects [i.e., the opacity is also a function of cloud cover @Holwerda07b]. However, our comparison between dust surface densities from an SED fit and the number counts of distant galaxies showed good agreement [@Holwerda07a] (Figure \[f:draine\]), to within a factor of two. We note that these SED fits were done without sub-mm information, but [@Draine07] point out that dust masses vary by a factor of less than 1.5 whether the SED fit of the large grains is done with or without sub-mm information (their Figure 12). The dust surface densities are therefore not likely to be overestimated by more than a factor of two. Our comparison with sub-mm fluxes (Figure \[f:herschel\], §\[s:herschel\]) seems to confirm this. The SFM estimate of the dust surface density may well be the upper limit of dust in these disks.\
Secondly, no aperture correction was applied to the CO and surface densities. Because the aperture we use to measure the CO and fluxes is the odd shape of the WFPC2 camera’s field-of-view (Figure \[f:himap\]), an aperture correction is not straightforward. Yet, we estimate that the aperture correction cannot change the reported average surface brightnesses sufficiently, as the resolution of the observations is substantially smaller than the WFPC2 aperture (Table \[t:info\]).\
Thirdly, when averaged over a large portion of the disk, which spans a range of density environments, the CO conversion factor ($X_{\rm CO}$) may well underestimate the total molecular hydrogen surface density, since in some molecular clouds the observed CO may come from the “skin” of the GMC, and there is no straightforward conversion from CO to H$_2$ volume density [@Wolfire10; @Glover11a; @Planck-Collaboration11a; @Shetty11; @Madden11; @Feldmann11a; @Feldmann12b; @Mac-Low12].\
A fourth option is that the dust absorption factor ($\kappa_{abs}$) is different when averaged over different environments and therefore dust grain properties [e.g., @Narayanan11c; @Narayanan12], although this is likely a secondary effect.
If [*all*]{} our dust surface densities are [*over*]{}-estimated by a factor of $\sim$2, or the aperture correction increased the gas surface densities substantially, one may not need to change the $X_{\rm CO}$ factor to bring our dust-to-gas ratio in line with recent results from [*Herschel*]{}. We suspect, however, that the explanation includes a different $X_{\rm CO}$, when averaged over a large section of the spiral disk and at different galactocentric radii, extending the range in values found in Local Group spiral galaxies by [@Leroy11].
![The logarithm of the ratio between the total hydrogen surface density ($\rm \Sigma_{HI+H_2}$) and the implied average dust surface density ($\rm \Sigma_d$) as a function of the metallicity ($\rm 12+log(O/H)$), estimated from Figure 7 in [@Moustakas10]. The circles and diamonds are for the calibration from [@KK04] and [@PT05] respectively. Open symbols are those points without CO information. There is no metallicity estimate for NGC 3627. We use the gas-to-dust ratio here in order to compare to the relation from [@Leroy11].[]{data-label="f:metal"}](./holwerda_f8.pdf){width="50.00000%"}
Comparison to Metallicities {#s:metal}
---------------------------
The present consensus is that the dust-to-total-gas ratio depends linearly on the metallicity [see for instance @Leroy11]. Fortunately, uniformly determined metallicity gradients for the SINGS[^11], and hence THINGS and HERACLES, galaxies are presented in [@Moustakas10]. Only NGC 3627 does not have metallicity information. Starting from their linear relation for the radial dependence of metallicity in each galaxy (their Figure 7), we can obtain an estimate of the metallicity for each of our WFPC2 fields. They present two different estimates of metallicity ($\rm log(O/H)$), with either the theoretical calibration from [@KK04] or the empirical one from [@PT05] (see Table \[t:info\]). [@Moustakas10] note that, until the calibration issues are resolved, one should either average the metallicity estimates from the two calibrations or use each separately. We use both calibrations separately, and quote the total-gas-to-dust ratio, to facilitate a direct comparison with Figure 6 in [@Leroy11]. We note that, since our WFPC2 fields were placed with crowding issues in mind, our coverage of galactocentric radii (and hence metallicities) is not very large.
Figure \[f:metal\] shows the logarithm of the total-gas-to-dust ratio as a function of metallicity, using either of the two calibrations. Our points lie lower than the linear relation from [@Leroy11] for the gas-to-dust ratio with metallicity, not unexpectedly as we already established that our dust-to-gas values are higher than those previously reported.
However, using the calibration from [@PT05], and discarding those points that lack CO information, there is a reasonable agreement with the relation from [@Leroy11].
Conclusions {#s:concl}
===========
To conclude our “Opacity of spiral disks" project, we have compared the opacity of spiral galaxies, and hence the dust surface density, to the surface densities of hydrogen, both atomic and molecular, the original goal of our project. We conclude from this comparison:
1. The disk opacity scales with the [*Herschel-SPIRE*]{} 250 $\mu$m surface brightness (Figure \[f:herschel\]), confirming our assertion that opacity scales with dust surface density to first order.
2. There is little relation between the HI column density and where a distant galaxy was identified in these fields (Figure \[f:na\]).
3. Averaged over a WFPC2 field, there is only a weak link between disk opacity (or dust surface density) and gas surface density, whether atomic, molecular or total (Figure \[f:sighi-A\]), pointing to a third factor: radius or metallicity.
4. The dust-to-HI and dust-to-H$_2$ relations with galactocentric radius are both relatively constant (Figure \[f:sigsR\]), but the exceptions point to the role of spiral structure in setting the dominant gas phase of the ISM.
5. The dust-to-[*total*]{}-gas ratio is close to constant for all our fields, $\rm \Sigma_d/\Sigma_{HI+H_2} = 0.043 \pm 0.024$ (Figure \[f:ratio\]). This higher value can, in our opinion, be attributed to a different conversion to dust surface density or to the CO-to-H$_2$ conversion factor ($X_{\rm CO}$) for such large sections of disks.
6. Compared to the relation between total-gas-to-dust and metallicity from [@Leroy11], our results are reasonably consistent, provided one uses the [@PT05] calibration of the metallicities of [@Moustakas10] (Figure \[f:metal\]).
Future use of the number of distant galaxies identified through a foreground spiral disk as a probe of dust is critically limited by cosmic variance [@Gonzalez03; @Holwerda05e], but its optimal application will be on a [*single*]{} large [*HST*]{} mosaic of a nearby face-on spiral (e.g., M81 or M101), which will most likely be the last contribution of this unique approach to the issue of the dust content of spiral disks.
Acknowledgements {#acknowledgements .unnumbered}
================
We acknowledge the THINGS collaboration for the publication of their HI surface density maps, based on their Very Large Array radio observations, and would like to thank the HERACLES collaboration for making their CO surface density maps available early. The authors would like to thank F. Walther and S-L. Blyth for useful discussions and feedback. We thank the anonymous referee for his or her excellent report and extraordinary effort.
We acknowledge support from HST Archive grants AR-10662 and AR-10663 and from the National Research Foundation of South Africa. The work of W.J.G. de Blok is based upon research supported by the South African Research Chairs Initiative of the Department of Science and Technology and the National Research Foundation. Antoine Bouchard acknowledges the financial support from the South African Square Kilometre Array Project. R. A. González-Lópezlira acknowledges support from DGAPA (UNAM) grant IN118110.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Based on observations made with the NASA/ESA Hubble Space Telescope, which is a collaboration between the Space Telescope Science Institute (STScI/ NASA), the Space Telescope European Coordinating Facility (ST-ECF/ ESA), and the Canadian Astronomy Data Centre (CADC/NRC/CSA). The Hubble data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584, and by other grants and contracts.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of NASA’s Astrophysics Data System.
, R. J., [Atherton]{}, P. D., & [Tilanus]{}, R. P. J. 1986, , 319, 296
, R. J., [Heaton]{}, H. I., & [Kaufman]{}, M. J. 2004, , 608, 314
, R. J., [Knapen]{}, J. H., [Bohlin]{}, R., & [Stecher]{}, T. P. 1997, , 487, 171
, Y. C. & [van der Kruit]{}, P. C. 1992, , 265, 396
, E. F., [Baugh]{}, C. M., [Cole]{}, S., [Frenk]{}, C. S., & [Lacey]{}, C. G. 2003, , 343, 367
, G. J., [Draine]{}, B. T., [Engelbracht]{}, C. W., [et al.]{} 2008, , 389, 629
, G. J., [Wilson]{}, C. D., [Pohlen]{}, M., [et al.]{} 2010, , 518, L65+
, G. J., [Wilson]{}, C. D., [Warren]{}, B. E., [et al.]{} 2010, , 402, 1409
, A. A., [Quillen]{}, A. C., [Pogge]{}, R. W., & [Sellgren]{}, K. 1997, , 114, 107
, F., [Leroy]{}, A., [Walter]{}, F., [et al.]{} 2008, , 136, 2846
, R. C., [Savage]{}, B. D., & [Drake]{}, J. F. 1978, , 224, 132
, S., [Boselli]{}, A., [Buat]{}, V., [Donas]{}, J., & [Milliard]{}, B. 2004, , 424, 465
, S., [Gil de Paz]{}, A., [Boselli]{}, A., [et al.]{} 2007, , 173, 524
, S., [Gil de Paz]{}, A., [Madore]{}, B. F., [et al.]{} 2005, , 619, L83
, A., [Eales]{}, S., [Cortese]{}, L., [et al.]{} 2010, , 122, 261
, J., [Gratier]{}, P., [Kramer]{}, C., [et al.]{} 2010, , 518, L69+
, V., [Boselli]{}, A., [Gavazzi]{}, G., & [Bonfanti]{}, C. 2002, , 383, 801
, D. 2001, , 113, 1449
, D., [Kennicutt]{}, R. C., [Bianchi]{}, L., [et al.]{} 2005, , 633, 871
, B., [Schiminovich]{}, D., [Kauffmann]{}, G., [et al.]{} 2010, , 403, 683
, S. & [Tielens]{}, A. G. G. M. 2004, , 604, 222
, F., [Boquien]{}, M., [Kramer]{}, C., [et al.]{} 2012, , 539, A67
, L., [Boselli]{}, A., [Buat]{}, V., [et al.]{} 2006, , 637, 242
, L., [Davies]{}, J. I., [Pohlen]{}, M., [et al.]{} 2010, , 518, L49+
, J., [Lequeux]{}, J., [Allen]{}, R. J., [Mellier]{}, Y., & [Bertin]{}, E. 2001, , 554, 190
, E., [Charlot]{}, S., & [Elbaz]{}, D. 2008, , 388, 1595
, D. A., [Aniano]{}, G., [Engelbracht]{}, C. W., [et al.]{} 2012, , 745, 95
, G., [de Vaucouleurs]{}, A., [Corwin]{}, H. G., [et al.]{} 1991, [Third Reference Catalogue of Bright Galaxies]{} (Volume 1-3, XII, 2069 pp. 7 figs.. Springer-Verlag Berlin Heidelberg New York)
, D. L., [Keel]{}, W. C., [Ryder]{}, S. D., & [White]{}, III, R. E. 1999, , 118, 1542
, D. L., [Keel]{}, W. C., & [White]{}, III, R. E. 2000, , 545, 171
, B. T. 2003, , 41, 241
, B. T., [Dale]{}, D. A., [Bendo]{}, G., [et al.]{} 2007, , 663, 866
, C. M., [Bica]{}, E., [Clari[á]{}]{}, J. J., [Piatti]{}, A. E., & [Ahumada]{}, A. V. 2001, , 371, 895
, S. A., [Smith]{}, M. W. L., [Wilson]{}, C. D., [et al.]{} 2010, , 518, L62+
, D. M., [Kaufman]{}, M., [Elmegreen]{}, B. G., [et al.]{} 2001, , 121, 182
, C. W., [Hunt]{}, L. K., [Skibba]{}, R. A., [et al.]{} 2010, , 518, L56+
, S., [Catinella]{}, B., [Giovanelli]{}, R., [et al.]{} 2011, , 411, 993
, S. M. & [Efstathiou]{}, G. 1980, , 193, 189
, R., [Gnedin]{}, N. Y., & [Kravtsov]{}, A. V. 2011, , 732, 115
, R., [Gnedin]{}, N. Y., & [Kravtsov]{}, A. V. 2011, ArXiv e-prints
, K., [Wilson]{}, C. D., [Mentuch]{}, E., [et al.]{} 2012, ArXiv e-prints
, W. L., [Madore]{}, B. F., [Gibson]{}, B. K., [et al.]{} 2001, , 553, 47
, M., [Kennicutt]{}, R. C., [Albrecht]{}, M., [et al.]{} 2012, ArXiv e-prints
, F., [Hony]{}, S., [Bernard]{}, J. ., [et al.]{} 2011, ArXiv e-prints
, S. C. O. & [Mac Low]{}, M. 2011, , 412, 337
, R. A., [Allen]{}, R. J., [Dirsch]{}, B., [et al.]{} 1998, , 506, 152
, R. A., [Loinard]{}, L., [Allen]{}, R. J., & [Muller]{}, S. 2003, , 125, 1182
, K. D., [Engelbracht]{}, C. W., [Rieke]{}, G. H., [et al.]{} 2008, , 682, 336
, K. D., [Galliano]{}, F., [Hony]{}, S., [et al.]{} 2010, , 518, L89+
, M. J., [Abergel]{}, A., [Abreu]{}, A., [et al.]{} 2010, , 518, L3+
, M. & [Hodge]{}, P. 1990, , 102, 849
, J. S., [Allen]{}, R. J., [Emonts]{}, B. H. C., & [van der Kruit]{}, P. C. 2008, , 673, 798
, J. S., [Allen]{}, R. J., & [van der Kruit]{}, P. C. 2009, in The Evolving ISM in the Milky Way and Nearby Galaxies
, J. S., [Allen]{}, R. J., & [van der Kruit]{}, P. C. 2010, , 719, 1244
, J. S., [Allen]{}, R. J., [Wong]{}, O. I., & [van der Kruit]{}, P. C. 2008, , 489, 533
, P. W. 1974, , 192, 21
, P. W. & [Snow]{}, T. P. 1975, , 80, 9
, B. W. 2005, PhD thesis, Proefschrift, Rijksuniversiteit Groningen, 2005
, B. W., [Draine]{}, B., [Gordon]{}, K. D., [et al.]{} 2007, , 134, 2226
, B. W., [González]{}, R. A., [Allen]{}, R. J., & [van der Kruit]{}, P. C. 2005, , 129, 1381
, B. W., [González]{}, R. A., [Allen]{}, R. J., & [van der Kruit]{}, P. C. 2005, , 129, 1396
, B. W., [González]{}, R. A., [Allen]{}, R. J., & [van der Kruit]{}, P. C. 2005, , 444, 101
, B. W., [González]{}, R. A., [Allen]{}, R. J., & [van der Kruit]{}, P. C. 2005, , 444, 319
, B. W., [González]{}, R. A., [van der Kruit]{}, P. C., & [Allen]{}, R. J. 2005, , 444, 109
, B. W., [Keel]{}, W. C., & [Bolton]{}, A. 2007, , 134, 2385
, B. W., [Keel]{}, W. C., [Williams]{}, B., [Dalcanton]{}, J. J., & [de Jong]{}, R. S. 2009, , 137, 3000
, B. W., [Meyer]{}, M., [Regan]{}, M., [et al.]{} 2007, , 134, 1655
, F. P. 1997, , 328, 471
, S. J. 2004, , 611, L89
, W. C., [Manning]{}, A. M., [Holwerda]{}, B. W., [et al.]{} 2012, [ *submitted*]{}, MNRAS
, W. C. & [White]{}, III, R. E. 2001, , 121, 1442
, W. C. & [White]{}, III, R. E. 2001, , 122, 1369
, R. C., [Armus]{}, L., [Bendo]{}, G., [et al.]{} 2003, , 115, 928
, R. C., [Calzetti]{}, D., [Aniano]{}, G., [et al.]{} 2011, , 123, 1347
, Jr., R. C. 1998, , 498, 541
, Jr., R. C., [Calzetti]{}, D., [Walter]{}, F., [et al.]{} 2007, , 671, 333
, H. A. & [Kewley]{}, L. J. 2004, , 617, 240
, A., [Bolatto]{}, A., [Stanimirovic]{}, S., [et al.]{} 2007, , 658, 1027
, A. K., [Bolatto]{}, A., [Gordon]{}, K., [et al.]{} 2011, ArXiv e-prints/1102.4618
, A. K., [Walter]{}, F., [Bigiel]{}, F., [et al.]{} 2009, , 137, 4670
, A. K., [Walter]{}, F., [Brinks]{}, E., [et al.]{} 2008, , 136, 2782
, A. & [Draine]{}, B. T. 2001, , 554, 778
, C. J., [Schawinski]{}, K., [Slosar]{}, A., [et al.]{} 2008, , 389, 1179
, M.-M. & [Glover]{}, S. C. O. 2012, , 746, 135
, H. T. 1975, , 170, 241
, S. C., [Galametz]{}, M., [Cormier]{}, D., [et al.]{} 2011, ArXiv e-prints
, S. C., [Poglitsch]{}, A., [Geis]{}, N., [Stacey]{}, G. J., & [Townes]{}, C. H. 1997, , 483, 200
, L., [Bianchi]{}, S., [Corbelli]{}, E., [et al.]{} 2011, ArXiv e-prints/1106.0618
, J., [Kennicutt]{}, Jr., R. C., [Tremonti]{}, C. A., [et al.]{} 2010, , 190, 233
, J. C., [Boissier]{}, S., [Gil de Paz]{}, A., [et al.]{} 2011, , 731, 10
, J. C., [Gil de Paz]{}, A., [Boissier]{}, S., [et al.]{} 2009, , 701, 1965
, J. C., [Gil de Paz]{}, A., [Zamorano]{}, J., [et al.]{} 2009, , 703, 1569
, D. 2011, ArXiv e-prints
, D., [Krumholz]{}, M., [Ostriker]{}, E. C., & [Hernquist]{}, L. 2011, , 418, 664
, D., [Krumholz]{}, M. R., [Ostriker]{}, E. C., & [Hernquist]{}, L. 2012, , 2537
, D., [Croton]{}, D., [DeLucia]{}, G., [Khochfar]{}, S., & [Rawlings]{}, S. 2009, , 698, 1467
, T., [Fraternali]{}, F., & [Sancisi]{}, R. 2007, , 134, 1019
, J. & [Kroupa]{}, P. 2008, , 455, 641
, L. S. & [Thuan]{}, T. X. 2005, , 631, 231
, [Ade]{}, P. A. R., [Aghanim]{}, N., [et al.]{} 2011, , 536, A19
, M., [Cortese]{}, L., [Smith]{}, M. W. L., [et al.]{} 2010, ArXiv e-prints
, C. C., [Misiriotis]{}, A., [Kylafis]{}, N. D., [Tuffs]{}, R. J., & [Fischera]{}, J. 2000, , 362, 138
, C. C. & [Tuffs]{}, R. J. 2002, Reviews of Modern Astronomy, 15, 239
, J., [Israel]{}, F. P., [Bolatto]{}, A., [et al.]{} 2010, , 518, L74+
, E. 2005, , 117, 1403
, R., [Fraternali]{}, F., [Oosterloo]{}, T., & [van der Hulst]{}, T. 2008, , 15, 189
, K. M., [Leroy]{}, A. K., [Walter]{}, F., [et al.]{} 2011, in Bulletin of the American Astronomical Society, Vol. 43, American Astronomical Society Meeting Abstracts \#217, \#202.07–+
, A., [Leroy]{}, A. K., [Walter]{}, F., [et al.]{} 2011, ArXiv e-prints
, H. 1951, Proceedings of the National Academy of Science, 37, 133
, R., [Glover]{}, S. C., [Dullemond]{}, C. P., [et al.]{} 2011, ArXiv e-prints
, R. A., [Engelbracht]{}, C. W., [Dale]{}, D., [et al.]{} 2011, ArXiv e-prints
, D. A., [Allen]{}, R. J., [Bohlin]{}, R. C., [Nicholson]{}, N., & [Stecher]{}, T. P. 2000, , 538, 608
, M. W. L., [Vlahakis]{}, C., [Baes]{}, M., [et al.]{} 2010, , 518, L51+
, D. A., [Boissier]{}, S., [Bianchi]{}, L., [et al.]{} 2007, , 173, 572
, F., [Brinks]{}, E., [de Blok]{}, W. J. G., [et al.]{} 2008, , 136, 2563
, F., [Leroy]{}, A., [Bigiel]{}, F., [et al.]{} 2009, in American Astronomical Society Meeting Abstracts, Vol. 214, American Astronomical Society Meeting Abstracts, \#419.08–+
, J. C. & [Draine]{}, B. T. 2001, , 553, 581
, A. J. 1961, , 122, 509
, A. A., [Garcia-Appadoo]{}, D. A., [Dalcanton]{}, J. J., [et al.]{} 2010, , 139, 315
, III, R. E., [Keel]{}, W. C., & [Conselice]{}, C. J. 2000, , 542, 761
, M. G., [Hollenbach]{}, D., & [McKee]{}, C. F. 2010, , 716, 1191
, D. 1994, , 108, 1619
MAGPHYS SED Model {#s:magphys}
=================
As an alternative check of the inferred dust masses, we ran the Multi-wavelength Analysis of Galaxy Physical Properties ([magphys]{}) package on the [*Spitzer*]{} and [*Herschel/SPIRE*]{} surface brightnesses. This is a self-contained, user-friendly model package to interpret observed spectral energy distributions of galaxies in terms of galaxy-wide physical parameters pertaining to the stars and the interstellar medium, following the approach described in [@da-Cunha08]. Figure \[f:magphys\] summarizes the result: dust surface densities derived from the [magphys]{} fit compared to those inferred from the number of distant galaxies. In [@Holwerda07a], we found that the [@Draine07] model inferred similar dust optical depths for these disks as the SFM, as well as similar (to within a factor of two) dust masses. The discrepancy with [magphys]{} illustrates, in our view, the importance of modeling sections of spiral disks in resolved observations with more physical models that include a range of stellar heating parameters [e.g. the models by @Draine07; @Galliano11].
![The dust surface densities from the [magphys]{} fit and inferred from the number of identified background galaxies (SFM) for each WFPC2 aperture. The dashed line denotes a factor ten ratio. [magphys]{} SED models do not take internal structure and differential stellar heating into account.[]{data-label="f:magphys"}](./holwerda_f2b.pdf){width="50.00000%"}
SED of each WFPC2 field with the [magphys]{} fit. {#sed-of-each-wfpc2-field-with-the-magphys-fit. .unnumbered}
=================================================
![image](./NGC925.pdf){width="32.00000%"} ![image](./NGC2841.pdf){width="32.00000%"} ![image](./NGC3031.pdf){width="32.00000%"}\
![image](./NGC3198.pdf){width="32.00000%"} ![image](./NGC3351.pdf){width="32.00000%"} ![image](./NGC3621_1.pdf){width="32.00000%"}\
![image](./NGC3621_2.pdf){width="32.00000%"} ![image](./NGC3627.pdf){width="32.00000%"} ![image](./NGC5194_1.pdf){width="32.00000%"}\
![image](./NGC5194_2.pdf){width="32.00000%"} ![image](./NGC6946.pdf){width="32.00000%"} ![image](./NGC7331.pdf){width="32.00000%"}
\[f:seds\]
[^1]: E-mail: benne.holwerda@esa.int
[^2]: We used the term “opacity" throughout our project and its publications for historical reasons.
[^3]: Other authors have used distant galaxy counts or colours to estimate extinction in the Magellanic Clouds [@Shapley51; @Wesselink61b; @Hodge74; @Hodge75; @McGillivray75; @Gurwell90; @Dutra01] and other galaxies [@Zaritsky94; @Cuillandre01].
[^4]: In this case it does not matter whether the maps are robustly weighted or naturally weighted.
[^5]: Similar quality products are now also available from the archives at STSCI, the High-Level Archive; [www.hla.stsci.edu](www.hla.stsci.edu).
[^6]: It depends to a degree on the depth of the data. Conservatively, for this kind of fields, the total error is about 3.5 times Poisson [@Gonzalez03].
[^7]: <http://herschel.esac.esa.int/Science_Archive.shtml>
[^8]: Key Insights on Nearby Galaxies: a Far-Infrared Survey with Herschel, PI. R. Kennicutt, [see also @Skibba11a; @Dale12; @Kennicutt11; @Galametz12]
[^9]: Our fields are usually centered on a spiral arm (to observe Cepheids) and this increases the contribution from the molecular phase.
[^10]: We chose not to correct the surface densities because of the odd shape of the aperture. Depending on how one treats the edges of the aperture, the average surface density varies by $\sim$10%.
[^11]: Spitzer Infrared Nearby Galaxy Survey [@SINGS].
---
abstract: 'In this paper, we consider the inverse optimal control problem for the discrete-time linear quadratic regulator over finite-time horizons. Given observations of the optimal trajectories, and of the optimal control inputs, to a linear time-invariant system, the goal is to infer the parameters that define the quadratic cost function. The well-posedness of the inverse optimal control problem is first justified. In the noiseless case, when these observations are exact, we analyze the identifiability of the problem and provide sufficient conditions for uniqueness of the solution. In the noisy case, when the observations are corrupted by additive zero-mean noise, we formulate the problem as an optimization problem and prove its statistical consistency. The performance of the proposed method is illustrated through numerical examples.'
address:
- 'Department of Mathematics, KTH Royal Institute of Technology, SE-100 44, Stockholm, Sweden'
- 'Department of Information Technology, Uppsala University, Uppsala, Sweden'
author:
- Han Zhang
- Jack Umenberger
- Xiaoming Hu
bibliography:
- 'ref.bib'
title: 'Inverse Quadratic Optimal Control for Discrete-Time Linear Systems'
---
Inverse optimal control, Linear Quadratic Regulator
Introduction
============
Problem Formulation and Well-Posedness {#sec:well-poseness}
======================================
The “forward" optimal LQ problem reads $$\begin{aligned}
&\min_{x_{1:N},u_{1:N-1}} J = x_N^TSx_N+{\sum}_{t=1}^{N-1} \left(u_t^TRu_t+x_t^T Qx_t\right)\label{eq:opt_ctrl_pro_fin_t}\\
&\mbox{s.t. }x_{t+1} = Ax_t+Bu_t,\quad x_1=\bar{x},\label{eq:lti}\end{aligned}$$ where $S,Q$ are $n$-dimensional positive semidefinite matrices, $R$ is an $m$-dimensional positive definite matrix, $x_t\in\mathbb{R}^n$ and $u_t\in\mathbb{R}^m$. The inverse optimal control problem aims to find $(S,Q,R)$ given $(A,B)$, the initial value $x_1=\bar{x}$ and (possibly noisy) observations of the optimal trajectory $x_{2:N}^*$ or control input $u_{1:N-1}^*$. For simplicity, in this paper, we consider the case of $R=I$ and $S=0$. In addition, it is assumed that $(A,B)$ is controllable and $B$ has full column rank. Moreover, we assume that $A$ is invertible. To see that this assumption is reasonable, consider a discrete-time system sampled from a continuous-time linear system $\dot{x}=\hat{A}x+\hat{B}u$, where the sampling period $\Delta t$ is small. For the discretized linear system, we have $A=e^{\hat{A}\Delta t}$, $B=\int_0^{\Delta t}e^{\hat{A}\tau}\hat{B}d\tau$. It is clear that $A=e^{\hat{A}\Delta t}$ is invertible.
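To make the “forward" problem concrete, the backward Riccati recursion and the rollout of the optimal trajectory can be sketched as follows. This is a minimal illustration in Python/NumPy under the standing assumptions $R=I$, $S=0$; the function and variable names are ours, not from any established toolbox.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, N, x1):
    """Backward Riccati recursion for the forward LQR problem
    (R = I, S = 0), followed by a rollout of the optimal trajectory."""
    n, m = B.shape
    P = np.zeros((n, n))                        # P_N = S = 0
    gains = []
    for _ in range(N - 1):                      # t = N-1, ..., 1
        K = -np.linalg.solve(B.T @ P @ B + np.eye(m), B.T @ P @ A)
        P = A.T @ P @ A + Q + A.T @ P @ B @ K   # DRE, rewritten using K_t
        gains.append(K)
    gains.reverse()                             # gains[t-1] is K_t
    xs, us = [np.asarray(x1, float)], []
    for K in gains:
        us.append(K @ xs[-1])
        xs.append(A @ xs[-1] + B @ us[-1])
    return xs, us                               # x_1..x_N and u_1..u_{N-1}
```

Note that $K_{N-1}=0$ because $P_N=S=0$, so the last input is always zero; this is consistent with only $u_{1:N-2}$ carrying information about $Q$ later on.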
Before considering how to solve the inverse optimal control problem, we would like to justify its well-posedness. The fundamental question that remains to be answered is: do there exist two different $Q$’s that generate the same closed-loop LQR system? If two different $Q$’s can generate the same closed-loop system matrices, the problem is obviously ill-posed. Before continuing, we present the following lemma.
\[lem:A\_cl\_invertible\] If $A\in\mathbb{R}^{n\times n}$ is invertible and $\mathscr{S}\in\mathbb{S}^n_+$, then the matrix $A-B(B^T\mathscr{S}B+I)^{-1}B^T\mathscr{S}A$ is invertible, for all $B\in\mathbb{R}^{n\times m}$.
Consider the matrix $$\begin{aligned}
H=
\begin{bmatrix}
I &B\\B^T\mathscr{S} &B^T\mathscr{S}B+I
\end{bmatrix}.\end{aligned}$$ Note that $H$ is invertible since $\det(H)=\det(I)\det(H\backslash I)=\det(B^T\mathscr{S}B+I-B^T\mathscr{S}B)=1$. Moreover, note that $\det(H)=1=\det(B^T\mathscr{S}B+I)\det\left(H\backslash (B^T\mathscr{S}B+I)\right)$ and $B^T\mathscr{S}B+I$ is invertible since $\mathscr{S}\in\mathbb{S}^n_+$. This implies that $\det\left(H\backslash (B^T\mathscr{S}B+I)\right)=\det\left(I-B(B^T\mathscr{S}B+I)^{-1}B^T\mathscr{S}\right)\neq 0$, which means that $H\backslash (B^T\mathscr{S}B+I)=I-B(B^T\mathscr{S}B+I)^{-1}B^T\mathscr{S}$ is invertible. Since $A-B(B^T\mathscr{S}B+I)^{-1}B^T\mathscr{S}A=\left(H\backslash (B^T\mathscr{S}B+I)\right)A$ and $A$ is invertible, the statement follows.
Now we are ready to justify the well-posedness of the inverse LQR optimal control problem.
\[thm:well\_poseness\] Given the closed-loop system matrices $A_{cl}(1:N-1)$ and $N\geq n+2$, the $Q$ that is used to generate the closed-loop system matrices is unique.
We know that $$\begin{aligned}
K_t=-(B^TP_{t+1}B+I)^{-1}B^TP_{t+1}A,
\label{eq:K_t}\end{aligned}$$ where $P_{2:N}\succeq 0$ is the solution to the discrete-time Riccati Equation (DRE) $$\begin{aligned}
&P_t=A^TP_{t+1}A+Q-\\
&\quad A^TP_{t+1}B(B^TP_{t+1}B+I)^{-1}B^TP_{t+1}A,t=1:N-1\\
&P_N=0.
\end{aligned}
\label{eq:discrete_riccati_eq}$$ Now assume both $Q,Q^\prime\succeq 0$ generate the closed-loop system matrices $A_{cl}(1:N-1)$. Then there are $P_{2:N}^\prime$ together with $Q^\prime$ that satisfy the DRE . Denote $Q^\prime=Q+\Delta Q$, $P_t^\prime=P_t+\Delta P_t,t=2:N$, where $\Delta Q,\Delta P_{2:N}\in\mathbb{S}^n$.
First, note that if the closed-loop systems are the same, then the control gain matrices must be the same. This is because $A+BK_t=A+BK_t^\prime\Leftrightarrow BK_t=BK_t^\prime\Rightarrow (B^TB)K_t=(B^TB)K_t^\prime$, and since $B$ has full column rank, $B^TB$ is invertible and hence $K_t=K_t^\prime$. It follows from that $$\begin{aligned}
&(B^TP_{t+1}B+I)K_t=-B^TP_{t+1}A\\
\Leftrightarrow&B^TP_{t+1}(A+BK_t)=-K_t,\quad t=1:N-1.\end{aligned}$$ Recall $A_{cl}(t)=A+BK_t=A-B(B^TP_{t+1}B+I)^{-1}B^TP_{t+1}A$ and $P_{t+1}\in\mathbb{S}^n_+$ for all $t=1:N-1$. Note that $A_{cl}(t)=A+BK_t$ is invertible for all $t=1:N-1$. To see that, consider the determinant of $A_{cl}(t)$: $$\begin{aligned}
&\det\left(A_{cl}(t)\right)=\det(A-B(B^TP_{t+1}B+I)^{-1}B^TP_{t+1}A)\\
&=\det(I-B(B^TP_{t+1}B+I)^{-1}B^TP_{t+1})\det(A)\end{aligned}$$ By Sylvester’s determinant theorem, it follows that $$\begin{aligned}
&\det\left(A_{cl}(t)\right)\\
&=\det(I-B^TP_{t+1}B(B^TP_{t+1}B+I)^{-1})\det(A)\\
&=\det\left((B^TP_{t+1}B+I)-B^TP_{t+1}B\right)\\
&\times\det\left[(B^TP_{t+1}B+I)^{-1}\right]\det(A)\\
&=\det(A)/\det(B^TP_{t+1}B+I)\neq 0,\quad t=1:N-1,\end{aligned}$$ since $A$ is invertible. Therefore $B^TP_{t+1}=-K_tA_{cl}^{-1}(t),t=1:N-1$. Since $Q^\prime$ generate the same $A_{cl}(1:N-1)$, it also holds that $B^TP_{t+1}^\prime=-K_tA_{cl}^{-1}(t),t=1:N-1$ and hence $$\begin{aligned}
B^T\Delta P_t=0,t=2:N.\label{eq:B^T_delta_P_t}\end{aligned}$$
Moreover, recall that $P_{2:N}^\prime$ and $Q^\prime$ satisfy the DRE and $Q^\prime=Q+\Delta Q$, $P_t^\prime=P_t+\Delta P_t$. The DRE for $Q^\prime$ and $P_{2:N}^\prime$ reads $$\begin{aligned}
&P_{t}+\Delta P_t=A^T(P_{t+1}+\Delta P_{t+1})A+(Q+\Delta Q)\\
&-A^T(P_{t+1}+\Delta P_{t+1})B\left[B^T(P_{t+1}+\Delta P_{t+1})B+I\right]^{-1}\\
&\times B^T(P_{t+1}+\Delta P_{t+1})A,\quad P_N+\Delta P_N=0.\end{aligned}$$ By , it follows from the above equation that $$\begin{aligned}
&A^T\Delta P_{t+1}A-\Delta P_t+\Delta Q=0,\quad t=2:N-1,\\
&\Delta P_N=0.
\end{aligned}
\label{eq:necessary_cond_for_uniqueness}$$ By examining the recursion , utilizing the fact that $A$ is invertible and , we know that $$\begin{aligned}
&\Delta P_{N-1}=\Delta Q, \quad B^T\Delta P_{N-1}=B^T\Delta Q=0,\label{eq:delta_Q=0(1)}\\
&\Delta P_{N-2}=A^T\Delta P_{N-1}A+\Delta Q, \nonumber\\
& B^T\Delta P_{N-2}=B^TA^T\Delta QA+B^T\Delta Q=B^TA^T\Delta QA=0,\nonumber\\
& \implies B^TA^T\Delta Q=0,\end{aligned}$$ $$\begin{aligned}
&\Delta P_{N-3}=A^T\Delta P_{N-2}A+\Delta Q\nonumber\\
&=(A^T)^2\Delta QA^2+A^T\Delta QA+\Delta Q,\nonumber\\
&B^T\Delta P_{N-3}=B^T(A^T)^2\Delta QA^2+\underbrace{B^TA^T\Delta QA+B^T\Delta Q}_{=0}\nonumber\\
&=B^T(A^T)^2\Delta Q A^2=0\implies B^T(A^T)^2\Delta Q=0\label{eq:delta_Q=0(2)}\\
&\qquad\qquad\qquad\qquad \vdots\nonumber\\
&\Delta P_2=A^T\Delta P_3A+\Delta Q\nonumber\\
&=(A^T)^{N-3}\Delta QA^{N-3}+\cdots A^T\Delta QA+\Delta Q,\nonumber\end{aligned}$$ $$\begin{aligned}
&B^T\Delta P_2=B^T(A^T)^{N-3}\Delta QA^{N-3}\nonumber\\
&+\underbrace{B^T\left((A^T)^{N-4}\Delta QA^{N-4}+\cdots A^T\Delta QA+\Delta Q\right)}_{=0}\nonumber\\
&=B^T(A^T)^{N-3}\Delta QA^{N-3}=0\implies B^T(A^T)^{N-3}\Delta Q=0.\label{eq:delta_Q=0(3)}\end{aligned}$$ Stacking - together, we get $$\begin{aligned}
\underbrace{\begin{bmatrix}
B^T\\B^TA^T\\\vdots\\B^T(A^T)^{N-3}
\end{bmatrix}}_{\tilde{\Gamma}}\Delta Q=0\end{aligned}$$ Since $(A,B)$ is controllable and $N\ge n+2$, $\tilde{\Gamma}$ has full column rank and hence $\Delta Q=0$. Thus the statement follows.
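The determinant identity $\det\left(A_{cl}(t)\right)=\det(A)/\det(B^TP_{t+1}B+I)$ derived above, and hence the invertibility of the closed-loop matrices, is easy to check numerically. The following sketch (Python/NumPy, with randomly drawn matrices of our own choosing) verifies it for a random invertible $A$ and a random positive semidefinite $P$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
A = rng.standard_normal((n, n))
while abs(np.linalg.det(A)) < 1e-3:      # ensure A is invertible
    A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((n, n))
P = C @ C.T                              # a positive semidefinite "P_{t+1}"
M = B.T @ P @ B + np.eye(m)
A_cl = A - B @ np.linalg.solve(M, B.T @ P @ A)
# det(A_cl) = det(A) / det(B^T P B + I), so A_cl inherits invertibility from A
assert np.isclose(np.linalg.det(A_cl),
                  np.linalg.det(A) / np.linalg.det(M))
```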
Inverse Optimal Control in the Noiseless Case {#sec:noiseless}
=============================================
After justifying the well-posedness of the inverse optimal control problem, in this section, we consider inverse optimal control for the LQR problem in the noiseless case. It is assumed that we have knowledge of $M$ sets of optimal trajectories $\lbrace x_{1:N}^{(i)*},u_{1:N-1}^{(i)*}\rbrace_{i=1}^M$, i.e., $u_t^{(i)*}=K_tx_t^{(i)*}$, where $K_t$ is the optimal feedback gain. We omit the superscript “star" in the remainder of this section to shorten the notation.
By PMP, if $u_{1:N-1}$ and $x_{1:N}$ are the optimal control and corresponding trajectory, then there exists adjoint variables $\lambda_{2:N}$ such that $$\begin{aligned}
&\lambda_t=A^T\lambda_{t+1}+Qx_t,\:t=2:N-1,\\
&\lambda_N=0,\\
&u_t=-B^T\lambda_{t+1},\:t=1:N-1.
\label{eq:PMP_original}
\end{aligned}$$ Note that in general, PMP only provides necessary optimality conditions for optimal control problems; nevertheless, since the optimal solution to the LQ optimal control problem is unique, the PMP conditions are also sufficient for optimality.
Note that in this case, knowing $u_{1:N-1}^{(1:M)}$ and $x_1^{(1:M)}=\bar{x}^{(1:M)}$ is equivalent to knowing $x_{1:N}^{(1:M)}$. This is because, given an optimal trajectory $x_{1:N}^{(i)}$, its corresponding optimal control $u_{1:N-1}^{(i)}$ can be determined by $u_t^{(i)}=(B^TB)^{-1}B^T(x_{t+1}^{(i)}-Ax_t^{(i)})$ since $B$ has full column rank by assumption. On the other hand, given the initial value $\bar{x}^{(1:M)}$ and $u_{1:N-1}^{(1:M)}$, we can use to compute the optimal trajectory $x_{1:N}^{(1:M)}$. Hence we do not distinguish between these two cases in the remainder of this section.
Based on , it is straightforward to solve the inverse optimal control problem, i.e., to obtain the matrix $Q$ by solving the following feasibility SDP problem $$\begin{aligned}
& \underset{\lambda_{2:N}^{(1:M)},Q\in\mathbb{S}^n_+}{\text{minimize}}
& & 0
& \text{subject to}
& &\eqref{eq:PMP_original},\:i=1:M,
\end{aligned}$$ with a slight abuse of notation: “subject to " actually means that every $x_t$, $\lambda_t$ and $u_t$ carries a superscript $(i)$. The objective function of the above problem can be any constant; without loss of generality, we let it be 0.
Though the problem is easy to solve in the noiseless case, we would like to take a closer look at the identifiability of $Q$. Namely, given a set of noiseless optimal trajectories $x_{1:N}^{(1:M)}$, is there a unique positive semidefinite matrix that corresponds to them? We now give two sufficient conditions on the given trajectories $x_{1:N}^{(1:M)}$ that can be used to determine the uniqueness of $Q$.
\[prop:AD\_full\_rank\] Define matrix $$\begin{aligned}
&\mathscr{A}(x)=
\begin{bmatrix}
x_2^{(1)T}\\ \vdots\\x_2^{(M)T}
\end{bmatrix}
\otimes
\begin{bmatrix}
B^T\\0\\ \vdots\\0
\end{bmatrix}+
\begin{bmatrix}
x_3^{(1)T}\\ \vdots\\x_3^{(M)T}
\end{bmatrix}
\otimes
\begin{bmatrix}
B^TA^T\\B^T\\ \vdots\\0
\end{bmatrix}\\
&+\cdots+
\begin{bmatrix}
x_{N-1}^{(1)T}\\ \vdots\\x_{N-1}^{(M)T}
\end{bmatrix}
\otimes
\begin{bmatrix}
B^T(A^T)^{N-3}\\B^T(A^T)^{N-4}\\\vdots\\B^T
\end{bmatrix}.
\end{aligned}
\label{eq:A_mat_eq}$$ If $M(N-2)m\geq n(n+1)/2$ and $\mathscr{A}(x)\mathscr{D}$ has full column rank, then the $Q\in\mathbb{S}^n_+$ that corresponds to the given optimal trajectories $x_{1:N}^{(1:M)}$ is unique, where $\mathscr{D}$ is the duplication matrix for $\mathbb{S}^n$.
By PMP , it follows that $$\begin{aligned}
&\begin{bmatrix}
I &-A^T \\
&I &\ddots\\
& &\ddots&-A^T\\
& & &I
\end{bmatrix}
\begin{bmatrix}
\lambda_2^{(i)}\\\vdots\\\lambda_N^{(i)}
\end{bmatrix}
=
\begin{bmatrix}
Qx_2^{(i)}\\\vdots\\Qx_{N-1}^{(i)}\\0
\end{bmatrix},\\
\Leftrightarrow
&\begin{bmatrix}
\lambda_2^{(i)}\\\vdots\\\lambda_N^{(i)}
\end{bmatrix}
=
\begin{bmatrix}
I &A^T &(A^T)^2 &\cdots &(A^T)^{N-2}\\
&I &A^T&\cdots &(A^T)^{N-3}\\
& &\ddots &\ddots &\vdots\\
& & &I&A^T\\
& & & &I
\end{bmatrix}
\begin{bmatrix}
Qx_2^{(i)}\\\vdots\\Qx_{N-1}^{(i)}\\0
\end{bmatrix},\\
\Rightarrow &-\operatorname{vec}(u_{1:N-1}^{(i)})
=(I\otimes B^T)\operatorname{vec}(\lambda_{2:N}^{(i)})
\\
&=
\begin{bmatrix}
B^TQx_2^{(i)}+B^TA^TQx_3^{(i)}+\cdots +B^T(A^T)^{N-3}Qx_{N-1}^{(i)}\\
B^TQx_3^{(i)} +\cdots +B^T(A^T)^{N-4}Qx_{N-1}^{(i)}\\
\vdots\\
B^TQx_{N-1}^{(i)}\\
0
\end{bmatrix}\end{aligned}$$ Using the property of vectorization and Kronecker product, we can rewrite the above equation as $$\begin{aligned}
-\operatorname{vec}(u_{1:N-1}^{(i)})&=(x_2^{(i)T}\otimes
\begin{bmatrix}
B^T\\0\\\vdots\\0
\end{bmatrix}
+x_3^{(i)T}\otimes
\begin{bmatrix}
B^TA^T\\B^T\\0\\\vdots\\0
\end{bmatrix}
+\cdots\\
&x_{N-1}^{(i)T}\otimes
\begin{bmatrix}
B^T(A^T)^{N-3}\\B^T(A^T)^{N-4}\\\vdots\\B^T\\0
\end{bmatrix})\operatorname{vec}(Q)\end{aligned}$$ Stacking all $u_{1:N-2}^{(i)}$ for $i=1:M$ and by $\operatorname{vec}(Q)=\mathscr{D}\operatorname{vech}(Q)$, we get $$\begin{aligned}
-\operatorname{vec}(u_{1:N-2}^{(1:M)})=\mathscr{A}(x)\mathscr{D}\operatorname{vech}(Q).
\label{eq:vech_Q_equation}\end{aligned}$$ Since $\mathscr{A}(x)\mathscr{D}$ has dimension $M(N-2)m\times n(n+1)/2$ and $M(N-2)m\geq n(n+1)/2$, $\operatorname{vech}(Q)$ must be unique provided that $\mathscr{A}(x)\mathscr{D}$ has full column rank.
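For a fixed observed state trajectory, the PMP adjoint recursion makes the stacked optimal inputs linear in $Q$, which suggests a direct numerical recovery: apply the recursion to a basis of $\mathbb{S}^n$ to build, column by column, a matrix playing the role of $\mathscr{A}(x)\mathscr{D}$, and solve a least-squares problem for the coefficients of $Q$. The following is a hedged sketch (Python/NumPy; names and structure are ours), valid in the noiseless setting of this section:

```python
import numpy as np
from itertools import combinations_with_replacement

def sym_basis(n):
    """A basis {E_k} of the symmetric matrices S^n."""
    basis = []
    for i, j in combinations_with_replacement(range(n), 2):
        E = np.zeros((n, n))
        E[i, j] = E[j, i] = 1.0
        basis.append(E)
    return basis

def controls_from_Q(A, B, Q, xs):
    """PMP adjoint recursion: lambda_N = 0, lambda_t = A^T lambda_{t+1} + Q x_t,
    u_t = -B^T lambda_{t+1}.  For a fixed trajectory xs = [x_1..x_N],
    the stacked inputs are linear in Q."""
    N = len(xs)
    lam = np.zeros(A.shape[0])
    us = [None] * (N - 1)
    for t in range(N - 2, -1, -1):
        us[t] = -B.T @ lam
        if t > 0:
            lam = A.T @ lam + Q @ xs[t]      # xs[t] is x_{t+1} (0-indexed)
    return np.concatenate(us)

def recover_Q(A, B, trajs, us_obs):
    """Least-squares recovery of Q from noiseless data, building the
    columns of a matrix that plays the role of A(x)D."""
    basis = sym_basis(A.shape[0])
    rows, rhs = [], []
    for xs, us in zip(trajs, us_obs):
        rows.append(np.column_stack(
            [controls_from_Q(A, B, E, xs) for E in basis]))
        rhs.append(np.concatenate(us))
    coef, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                               rcond=None)
    return sum(c * E for c, E in zip(coef, basis))
```

With exact data whose trajectories satisfy the identifiability conditions above, the least-squares solution recovers $Q$ exactly; with rank-deficient data it returns only a minimum-norm candidate.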
Even when $\mathscr{A}(x)\mathscr{D}$ does not have full column rank, we may still obtain a unique $Q$ from the available optimal trajectories $x_{1:N}^{(1:M)}$. We give the following sufficient condition:
\[prop:uniqueness\_dual\] Suppose $\mathscr{A}(x)\mathscr{D}$ does not have full column rank, $\operatorname{vech}(Q^\prime)$ is a solution to and $\operatorname{span}\{\operatorname{vech}(\Delta Q_k)\}=\operatorname{ker}(\mathscr{A}(x)\mathscr{D})$, where $\operatorname{vech}(\Delta Q_k)$ are linearly independent. In addition, suppose $\Phi^*\in\mathbb{S}^n_+$ is an optimal solution to $$\begin{aligned}
& \underset{\Phi\in\mathbb{S}^n_+}{\text{minimize}}
& & tr(Q^\prime\Phi)\\
& \text{subject to}
& &tr(\Delta Q_k\Phi)=0,k=1:\dim\left(\operatorname{ker}\left(\mathscr{A}(x)\mathscr{D}\right)\right),
\end{aligned}
\label{eq:vech_Q_dual}$$ where $rank(\Phi^*)=r$, $\Phi^*=G\:diag(\sigma_1,\cdots\sigma_r,0,\cdots,0)G^T$ and $GG^T=I$. Define $$\begin{aligned}
\mathscr{N}_\Phi=\left\{G
\begin{bmatrix}
0&0\\0 &W
\end{bmatrix}G^T\:|\:W\in\mathbb{S}^{n-r}
\right\}.\end{aligned}$$ If $\mathscr{N}_\Phi\cap \operatorname{span}\{\Delta Q_k\}=\{0\}$, then the $Q\in\mathbb{S}^n_+$ that corresponds to the given optimal trajectories $x_{1:N}^{(1:M)}$ is unique.
Denote $\eta=\dim(\operatorname{ker}(\mathscr{A}(x)\mathscr{D}))$. Since $\mathscr{A}(x)\mathscr{D}$ does not have full column rank, $\operatorname{vech}(Q^\prime)$ is a solution to and $\operatorname{vech}(\Delta Q_k)$ are linearly independent and spans $\operatorname{ker}(\mathscr{A}(x)\mathscr{D})$, it holds that $$\begin{aligned}
\mathscr{A}(x)\mathscr{D}\left(\operatorname{vech}(Q^\prime)+\sum_{k=1}^{\eta}\alpha_k\operatorname{vech}(\Delta Q_k)\right)=0,\forall \alpha_k.\end{aligned}$$ What remains to show is that there exists a unique $\{\alpha_k\}_{k=1}^\eta$, such that $Q=Q^\prime+\sum_{k=1}^\eta\alpha_k\Delta Q_k\in\mathbb{S}^n_+$. Consider the following SDP $$\begin{aligned}
& \underset{\{\alpha_k\}}{\text{minimize}}
& & 0\\
& \text{subject to}
& &Q^\prime+\sum_{k=1}^\eta\alpha_k\Delta Q_k\succeq 0,
\end{aligned}
\label{eq:vech_Q_feasible}$$ whose dual problem is . If $\Phi^*$ is an optimal solution to and $\mathscr{N}_\Phi\cap \operatorname{span}\{\Delta Q_k\}=\{0\}$, then the optimal solution is non-degenerate, and hence the primal problem has a unique solution [@alizadeh1997complementarity].
If the “real" $Q$ is strictly positive definite, then “the matrix $\mathscr{A}(x)\mathscr{D}$ has full column rank" also becomes a necessary condition for the identifiability of $Q$. If $\mathscr{A}(x)$ does not have full rank, then there always exists some $\Delta Q\in \operatorname{ker}(\mathscr{A}(x)\mathscr{D})$ and small enough $\varepsilon$ such that $\mathscr{A}(x)\mathscr{D}(\operatorname{vech}(Q)+\varepsilon \operatorname{vech}(\Delta Q))=-\operatorname{vec}(u_{1:N-2}^{(1:M)})$ and $\operatorname{vech}(Q)+\varepsilon \operatorname{vech}(\Delta Q)\in\mathbb{S}^n_+$ since $Q$ is an interior point in $\mathbb{S}^n_+$.
Here is an example that illustrates Proposition \[prop:uniqueness\_dual\]. Suppose $M=1$, $N=15$, the system matrices, the initial value and the “real" $Q$ matrix (we denote it as $\bar{Q}$) are as follows $$\begin{aligned}
&A=
\begin{bmatrix}
-0.1922 &-0.2490 &1.2347\\
-0.2741 &-1.0642 &-0.2296\\
1.5301 &1.6035 &-1.5062
\end{bmatrix},
B=
\begin{bmatrix}
-0.4446\\-0.1559\\0.2761
\end{bmatrix},\\
&\bar{Q}=
\begin{bmatrix}
0.0068 &-0.0116 &-0.0102\\
-0.0116 &0.0197 &0.0174\\
-0.0102 &0.0174 &0.0154
\end{bmatrix},
\bar{x}=
\begin{bmatrix}
-25.0136 \\ -18.9592 \\ -14.8221
\end{bmatrix}.\end{aligned}$$ In this case, $dim(\operatorname{ker}(\mathscr{A}(x)\mathscr{D}))=1$ and $$\begin{aligned}
&\Delta Q=
\begin{bmatrix}
0.0723 &-0.6085 &-0.1447\\
-0.6085 &-0.0422 &-0.6661\\
-0.1447 &-0.6661 &-0.3976
\end{bmatrix},\\
&\Phi^*=
\begin{bmatrix}
7.5572 &1.6696 &3.1474\\
1.6696 &4.4056 &-3.8723\\
3.1474 &-3.8723 &6.4792
\end{bmatrix},\end{aligned}$$ $rank(\Phi^*)=2$. If we solve the following problem $$\begin{aligned}
& \underset{\beta,W\in\mathbb{R}}{\text{minimize}}
& & 0\\
& \text{subject to}
& &G\begin{bmatrix}
0&0\\0 &W
\end{bmatrix}G^T=\beta \Delta Q,\end{aligned}$$ we find that the only feasible solution is $\beta=W=0$. Solving the inverse optimal control problem then yields the unique solution $Q^*=\bar{Q}$.
Note that $\mathscr{A}(x)\mathscr{D}$ depends on the data. Though it has been stated in Proposition \[prop:AD\_full\_rank\] that we would have a unique $Q$ corresponding to the given optimal trajectories $x_{1:N}^{(1:M)}$ if $\mathscr{A}(x)\mathscr{D}$ has full column rank, we would like to say a bit more about the data set $x_{1:N}^{(1:M)}$; more precisely, about which conditions on the data set guarantee that $\mathscr{A}(x)\mathscr{D}$ has full column rank. Since $\mathscr{D}$ has full column rank, $\mathscr{A}(x)\mathscr{D}$ has full column rank whenever $\mathscr{A}(x)$ does. In the following, we focus on which data sets $x_{1:N}^{(1:M)}$ make $\mathscr{A}(x)$ have full column rank. Before giving a sufficient condition, we present the following lemma:
\[lem:struct\_mat\_linear\_indep\] If the vectors $$\begin{aligned}
\begin{bmatrix}
a^{(1)}_1\\a^{(1)}_2\\\vdots\\a^{(1)}_n
\end{bmatrix},
\begin{bmatrix}
a^{(2)}_1\\a^{(2)}_2\\\vdots\\a^{(2)}_n
\end{bmatrix},
\cdots,
\begin{bmatrix}
a^{(n)}_1\\a^{(n)}_2\\\vdots\\a^{(n)}_n
\end{bmatrix}\end{aligned}$$ are linearly independent, then matrix $$\begin{aligned}
\bar{\chi}=
\begin{bmatrix}
\bar{\chi}_1^{(1)} &\bar{\chi}_2^{(1)} &\cdots &\bar{\chi}_n^{(1)}\\
\bar{\chi}_1^{(2)} &\bar{\chi}_2^{(2)} &\cdots &\bar{\chi}_n^{(2)}\\
\vdots&\vdots& &\vdots\\
\bar{\chi}_1^{(n)} &\bar{\chi}_2^{(n)} &\cdots &\bar{\chi}_n^{(n)}
\end{bmatrix}\end{aligned}$$ is nonsingular $\forall \xi_{j,l}^{(i)}$, where $$\begin{aligned}
\bar{\chi}_j^{(i)}=
\begin{bmatrix}
a_j^{(i)} &\xi_{j,1}^{(i)} &\cdots &\xi_{j,n-1}^{(i)}\\
&a_j^{(i)} &\ddots &\vdots\\
& &\ddots &\xi_{j,1}^{(i)}\\
& & &a_j^{(i)}
\end{bmatrix}.\end{aligned}$$
Suppose there exist constants $\eta_{1:n}^{(1:n)}$ such that $\sum_{i=1}^n\sum_{j=1}^n\eta_j^{(i)}\left[\bar{\chi}\right]_{(i-1)n+j}=0$. Recalling the structure of $\bar{\chi}$, it must hold for the first row of every row block $\left[\bar{\chi}_1^{(i)} \bar{\chi}_2^{(i)}\cdots\bar{\chi}_n^{(i)}\right]$ in $\bar{\chi}$ that $\sum_{i=1}^n\eta_1^{(i)}\left[\bar{\chi}_1^{(i)} \bar{\chi}_2^{(i)}\cdots\bar{\chi}_n^{(i)}\right]_1=0$, which reads precisely as $$\begin{aligned}
&a_1^{(1)}\eta_1^{(1)}+a_1^{(2)}\eta_1^{(2)}+\cdots a_1^{(n)}\eta_1^{(n)}=0\\
&\vdots\\
&a_n^{(1)}\eta_1^{(1)}+a_n^{(2)}\eta_1^{(2)}+\cdots a_n^{(n)}\eta_1^{(n)}=0\\
\Leftrightarrow
&\begin{bmatrix}
a_1^{(1)} &a_1^{(2)} &\cdots &a_1^{(n)}\\
a_2^{(1)} &a_2^{(2)} &\cdots &a_2^{(n)}\\
\vdots & & &\vdots\\
a_n^{(1)} &a_n^{(2)} &\cdots &a_n^{(n)}
\end{bmatrix}
\begin{bmatrix}
\eta_1^{(1)}\\\eta_1^{(2)}\\\vdots\\\eta_1^{(n)}
\end{bmatrix}=0.\end{aligned}$$ Since by assumption $\left[a^{(1)T}_1,a^{(1)T}_2,\cdots,a^{(1)T}_n\right]^T$, $\cdots$, $\left[a^{(n)T}_1,a^{(n)T}_2,\cdots,a^{(n)T}_n\right]^T$ are linearly independent, the above linear system has only the zero solution, i.e., $\eta_1^{(i)}=0,\:\forall i$. This implies that $\sum_{i=2}^n\sum_{j=1}^n\eta_j^{(i)}\left[\bar{\chi}\right]_{(i-1)n+j}=0$. Similar to the argument above, we can iteratively show $\eta_j^{(i)}=0,\:\forall i,j$, and hence all of the rows in $\bar{\chi}$ are linearly independent. Therefore, $\bar{\chi}$ is nonsingular.
\[thm:end\_pt\_indep\] Suppose $N\ge n+2$ and $M\geq n$. If there exist $n$ linearly independent vectors $x_{N-1}^{(1:n)}$ among all $M$ sets of data, then there exists a unique $Q$ that corresponds to the given optimal trajectories $x_{1:N}^{(1:M)}$.
Recall the definition of $\mathscr{A}(x)$ in , it can be rewritten as: $$\begin{aligned}
\mathscr{A}(x)=\sum_{t=2}^{N-1}X_t\otimes(\mathcal{S}_t \Gamma)=\underbrace{\left[\sum_{t=2}^{N-1}X_t\otimes \mathcal{S}_t\right]}_\chi(I_n\otimes\Gamma),\end{aligned}$$ where $$\begin{aligned}
&\Gamma=
\begin{bmatrix}
B^T(A^T)^{N-3}\\B^T(A^T)^{N-4}\\\vdots\\B^T
\end{bmatrix},
\mathcal{S}_2=
\begin{bmatrix}
0_m &\cdots &0_m &I_m\\
0_m &\cdots & &0_m\\
\vdots &\vdots& &\vdots\\
0_m &\cdots & &0_m
\end{bmatrix}\end{aligned}$$ $$\begin{aligned}
&\mathcal{S}_3=
\begin{bmatrix}
0_m &\cdots &I_m &0_m\\
0_m &\cdots &0_m &I_m\\
\vdots & &\vdots &\vdots\\
0_m &\cdots &0_m &0_m
\end{bmatrix},\cdots,
\mathcal{S}_{N-2}=
\begin{bmatrix}
0_m &I_m &0_m &\cdots &0_m\\
0_m &0_m &I_m &\cdots &0_m\\
\vdots &\vdots& &\ddots &I_m\\
0_m &0_m &\cdots & &0_m
\end{bmatrix},\end{aligned}$$ $$\begin{aligned}
&\mathcal{S}_{N-1}=I_{m(N-2)},\: X_t=
\begin{bmatrix}
x_t^{(1)T}\\ \vdots\\x_t^{(M)T}
\end{bmatrix}.\end{aligned}$$
Note that, due to the structure of the $\mathcal{S}_t$’s, $\chi$ has the following form $$\begin{aligned}
\chi=
\begin{bmatrix}
\chi_1^{(1)} &\cdots &\chi_n^{(1)}\\
\vdots & &\vdots\\
\chi_1^{(M)} &\cdots &\chi_n^{(M)}
\end{bmatrix},\end{aligned}$$ where $$\begin{aligned}
\chi_j^{(i)}=
\begin{bmatrix}
x_{N-1,j}^{(i)}I_m &x_{N-2,j}^{(i)}I_m &\cdots &x_{1,j}^{(i)}I_m\\
&x_{N-1,j}^{(i)}I_m &\ddots &\vdots\\
& &\ddots &x_{N-2,j}^{(i)}I_m\\
& & &x_{N-1,j}^{(i)}I_m
\end{bmatrix}.\end{aligned}$$ Note that the first $n$ row blocks in $\chi$ have exactly the same structure as $\bar{\chi}$ in Lemma \[lem:struct\_mat\_linear\_indep\]. Since $x_{N-1}^{(1:n)}$ are linearly independent, that is, $\left[x_{N-1,1}^{(1)},\cdots,x_{N-1,n}^{(1)}\right]^T$, $\cdots$, $\left[x_{N-1,1}^{(n)},\cdots,x_{N-1,n}^{(n)}\right]^T$ are linearly independent, we can apply Lemma \[lem:struct\_mat\_linear\_indep\] and conclude that the matrix formed by the first $n$ row blocks of $\chi$ is nonsingular. Thus $\chi$ has full column rank.
On the other hand, since $(A,B)$ is controllable, $\Gamma$ has full column rank and $rank(\Gamma)=n$. By the property of Kronecker product, it holds that $rank(I_n\otimes\Gamma)=rank(I_n)\times rank(\Gamma)=n^2$. Therefore, due to the fact that $\chi$ has full column rank, $rank(\mathscr{A}(x))=rank\left(\chi(I_n\otimes\Gamma)\right)=rank(I_n\otimes\Gamma)=n^2$, i.e., $\mathscr{A}(x)$ has full column rank. Hence the solution $\operatorname{vech}(Q)$ to the equation is unique.
Theorem \[thm:end\_pt\_indep\] indicates that if, among the $M$ trajectories, there exist $n$ trajectories whose second-to-last states $x_{N-1}^{(1)},\cdots,x_{N-1}^{(n)}$ are linearly independent, then $Q$ is identifiable. The theorem provides a convenient way of checking the identifiability of $Q$.
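The condition of Theorem \[thm:end\_pt\_indep\] is simple to check in practice: collect the second-to-last state of each observed trajectory and test their rank. A small sketch (Python/NumPy; the helper name is ours):

```python
import numpy as np

def q_identifiable(trajs, n):
    """Check the sufficient condition of the theorem: among the observed
    trajectories, n of the second-to-last states x_{N-1}^{(i)} must be
    linearly independent.  trajs is a list of lists of state vectors."""
    X = np.column_stack([np.asarray(xs[-2]) for xs in trajs])
    return int(np.linalg.matrix_rank(X)) >= n
```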
Inverse Optimal Control in the Noisy Case {#sec:noisy}
=========================================
Now we turn our attention to the noisy case. Inspired by [@aswani2015inverse], we first pose the inverse optimal control problem in the noisy case. Suppose the probability space $(\Omega,\mathcal{F},\mathbb{P})$ carries independent random vectors $\bar{x}\in\mathbb{R}^n$, $\{v_t\in\mathbb{R}^n\}_{t=2}^N$ and $\{w_t\in\mathbb{R}^m\}_{t=1}^{N-1}$ distributed according to some unknown distributions. The following assumptions are made in the remainder of the paper:
\[ass:zero\_mean\_fin\_var\] $\mathbb{E}(\|\bar{x}\|^2)<+\infty$, $\mathbb{E}(v_t)=\mathbb{E}(w_{t-1})=0$ and $\mathbb{E}(\|v_t\|^2)<+\infty$, $\mathbb{E}(\|w_{t-1}\|^2)<+\infty,t=2:N$.
\[ass:neighbourhood\_not\_measure\_zero\] $\forall \eta\in\mathbb{R}^n$, $\exists r(\eta)\in\mathbb{R}\backslash\{0\}$ such that $\mathbb{P}\left(\bar{x}\in\mathscr{B}_\varepsilon(r\eta)\right)>0,\forall \varepsilon>0$, where $\mathscr{B}_\varepsilon(r\eta)$ is the open $\varepsilon$-ball centered at $r\eta$.
Equipped with the stochastic set-up above and given that the initial value $x_1$ is actually a realization of the random vector $\bar{x}$, i.e., $x_1=\bar{x}(\omega)$, the LQR problem can actually be seen as $$\min \{J(u_{1:N-1}(\omega),x_{2:N}(\omega);Q;\bar{x}(\omega))|\eqref{eq:lti},\mbox{given }\omega\in\Omega\},
\label{eq:aswani_formulation}$$ Note that the optimal control input and trajectory $\{u_t^*\},\{x_t^*\}$ are now random vectors implicitly determined by the random variable $\bar{x}$ and the parameter $Q$. With the formulation of the “forward problem" , we now can pose the formulation of the inverse optimal control problem.
Suppose $\{u_t^*\}$ and $\{x_t^*\}$ are corrupted by some zero-mean noise, namely, $y_t=x_t^*+v_t,t=2:N$, $\mu_t=u_t^*+w_t,t=1:N-1$. To abbreviate the notation, we denote $Y=(y_2^T,\cdots,y_N^T)^T$, $\Upsilon=(\mu_1^T,\cdots,\mu_{N-1}^T)^T$, $\xi_{x}=(\bar{x}^T,Y^T)^T$ and $\xi_u=(\bar{x}^T,\Upsilon^T)^T$. In addition, we assume that the “real" $Q$ belongs to a compact set $\bar{\mathbb{S}}^n_+(\varphi)=\{Q|Q\in\mathbb{S}^n_+,\|Q\|^2_F\leq \varphi\}$. We aim to find the $Q\in\bar{\mathbb{S}}^n_+(\varphi)$ that corresponds to the optimal trajectory $\{x_t^*\}$ and control input $\{u_t^*\}$ by using the initial value $\bar{x}$ and the noisy observations $\xi_x$ or $\xi_u$.
Given $Q$ and an initial value $\bar{x}$, the solution to is unique. We define the risk functions $\mathscr{R}^x:\bar{\mathbb{S}}^n_+(\varphi)\mapsto\mathbb{R}$ and $\mathscr{R}^u:\bar{\mathbb{S}}^n_+(\varphi)\mapsto\mathbb{R}$ as $$\mathscr{R}^x(Q)=\mathbb{E}_{\xi_x}\left[f_x(Q;\xi_x)\right],
\label{eq:risk_function}$$ $$\mathscr{R}^u(Q)=\mathbb{E}_{\xi_u}\left[f_u(Q;\xi_u)\right],
\label{eq:risk_function_u}$$ where $f_x:\bar{\mathbb{S}}^n_+(\varphi)\times \mathbb{R}^{Nn}\mapsto\mathbb{R}$ and $f_u:\bar{\mathbb{S}}^n_+(\varphi)\times \mathbb{R}^{n+m(N-1)}\mapsto\mathbb{R}$ $$\begin{aligned}
f_x(Q;\xi_x)=\sum_{t=2}^{N}\|y_t-x_t^*(Q;\bar{x})\|^2,\label{eq:risk_function1}\\
f_u(Q;\xi_u)=\sum_{t=1}^{N-1}\|\mu_t-u_t^*(Q;\bar{x})\|^2,\label{eq:risk_function1_u}\end{aligned}$$ and $x_{2:N}^*(Q;\bar{x})$ and $u_{1:N-1}^*(Q;\bar{x})$ are the optimal solution to . In order to solve the inverse optimal control problem, we would like to minimize the risk functions, namely, $$\min_{Q\in\bar{\mathbb{S}}^n_+(\varphi)}\mathscr{R}^x(Q)
\label{eq:risk_minimization}$$ or $$\min_{Q\in\bar{\mathbb{S}}^n_+(\varphi)}\mathscr{R}^u(Q),
\label{eq:risk_minimization_u}$$ depending on which observations are available. Nevertheless, since the distributions of $\bar{x}$, $v_t$ and $w_t$ are unknown, the distributions of $\xi_x$ and $\xi_u$ are also unknown, so we cannot solve and directly. In principle, however, they can be approximated by $$\begin{aligned}
\mathscr{R}_M^x(Q)=\frac{1}{M}\sum_{i=1}^M f_x(Q;\xi_x^{(i)}),\label{eq:risk_function_approx}\\
\mathscr{R}_M^u(Q)=\frac{1}{M}\sum_{i=1}^M f_u(Q;\xi_u^{(i)}),\label{eq:risk_function_approx_u}\end{aligned}$$
where $\xi_x^{(i)}$ and $\xi_u^{(i)}$ are i.i.d. random samples. We will show the statistical consistency for the approximation later.
Recall that for discrete-time LQR problems over a finite-time horizon, the PMP provides necessary and sufficient conditions for optimality; hence we can express $u_{1:N-1}^*$, $x_{2:N}^*$ using and the approximated risk-minimizing problem reads $$\begin{aligned}
\underset{Q\in\bar{\mathbb{S}}^n_+(\varphi),x_{2:N}^{(i)},\lambda_{2:N}^{(i)}}{\text{min}\qquad}
& \mathscr{R}_M^x(Q)=\frac{1}{M}\sum_{i=1}^M f_x(Q;\xi^{(i)})\\
\text{s.t.\qquad}
& x_{t+1}^{(i)}=Ax_t^{(i)}-BB^T\lambda_{t+1}^{(i)},\quad t=2:N-1,\\
& \lambda_t^{(i)}=A^T\lambda_{t+1}^{(i)}+Qx_t^{(i)},\:t=2:N-1,\\
& x_2^{(i)}=A\bar{x}^{(i)}-BB^T\lambda_2^{(i)},\\
&\lambda_N^{(i)}=0,\quad i=1:M,
\end{aligned}
\label{eq:risk_minimizing_pro_approx}$$ We omit the “star" in the notation to avoid confusion with the optimizer of . The risk-minimization problem for $\mathscr{R}_M^u(Q)$ is omitted here for the sake of brevity. The optimizer $\left(Q_M^*(\omega),x_{2:N}^{(i)*}(\omega),\lambda_{2:N}^{(i)*}(\omega)\right)$ is defined in the sense that it optimizes (or ) for every $\omega\in\Omega$.
\[thm:statistical\_consistency\] Suppose $Q_M^*\in\bar{\mathbb{S}}^n_+(\varphi)$, $N\geq n+2$, $\{x_{2:N}^{(i)*}\}$ and $\{\lambda_{2:N}^{(i)*}\}$ solves , then $Q_M^*\overset{p}{\rightarrow}\bar{Q}$ as $M\rightarrow \infty$, where $\bar{Q}$ is the true value used in the “forward" problem .
Before moving on, we would like to take a close look at and the system dynamics . Denote $z_t=\left(x_t^T,\lambda_t^T\right)^T,t=2:N$, then the first two constraints can be written as the following implicit dynamics $$\underbrace{\begin{bmatrix}
I &BB^T\\0 &A^T
\end{bmatrix}}_{E}
z_{t+1}=
\underbrace{\begin{bmatrix}
A &0\\-Q &I
\end{bmatrix}}_F z_t,\quad t=2:N-1.$$ Hence we can write together with as the following compact form $$\underbrace{\begin{bmatrix}
\tilde{E} & & &\tilde{F}\\
-F &E\\
&\ddots &\ddots\\
& &-F& E
\end{bmatrix}}_{\mathscr{F}(Q)}
\underbrace{\begin{bmatrix}
z_2\\\vdots\\z_N
\end{bmatrix}}_{Z}
=
\underbrace{\begin{bmatrix}
A\bar{x}^{(i)}\\0\\\vdots\\0
\end{bmatrix}}_{b(\bar{x})},
\label{eq:PMP_matrix_equation}$$ where $$\begin{aligned}
\tilde{E}=
\begin{bmatrix}
I &BB^T\\0 &0
\end{bmatrix},
\tilde{F}=
\begin{bmatrix}
0 &0\\0 &I
\end{bmatrix}.\end{aligned}$$ We claim that $\mathscr{F}(Q)$ is invertible for all $Q\in\mathbb{S}^n_+$. Though this fact can be proven by “brute force", i.e., by considering its determinant using Laplace expansion, perhaps the easiest way to see it is that, for an arbitrary $Q\in\mathbb{S}^n_+$, is a necessary and sufficient condition for optimality in the corresponding “forward" LQR problem. Since the “forward" LQR problem has a unique solution, it must hold that $\mathscr{F}(Q)$ is invertible for all $Q\in\mathbb{S}^n_+$. Thus, it follows that $Z=\mathscr{F}(Q)^{-1}b(\bar{x})=\mathscr{F}(Q)^{-1}\tilde{A}\bar{x}$, where $\tilde{A}=[A^T,0,\cdots,0]^T$. Hence $f_x(Q;\xi_x)$ can be rewritten as $$\begin{aligned}
f_x(Q;\xi_x)=\|Y-G_xZ\|^2=\|Y-G_x\mathscr{F}(Q)^{-1}\tilde{A}\bar{x}\|^2,\end{aligned}$$ where $G_x=I_{N-1}\otimes \left[I_n, 0_n\right]$.
It is clear that $f_x(Q;\xi_x)$ is continuous with respect to $\xi_x$, hence it is a measurable function of $\xi_x$ at each $Q$. Further, $\mathscr{F}(Q)$ is continuous and hence $\mathscr{F}(Q)^{-1}$ is continuous. Then $f_x(Q;\xi_x)$ is also continuous with respect to $Q$.
On the other hand, since $\mathscr{F}(Q)^{-1}$ is continuous and $Q$ lives in a compact set, then $\|\mathscr{F}(Q)^{-1}\|_F$ is bounded, i.e., $\|\mathscr{F}(Q)^{-1}\|_F\leq \bar{\varphi}$ for some finite positive $\bar{\varphi}$. It follows that $\mathbb{E}(\|Z^*\|^2)=\mathbb{E}(\|\mathscr{F}(\bar{Q})^{-1}\tilde{A}\bar{x}\|^2)\leq \|\mathscr{F}(\bar{Q})^{-1}\|_F^2\|\tilde{A}\|_F^2\mathbb{E}(\|\bar{x}\|^2)<+\infty$, where $Z^*$ corresponds to the “true" $\bar{Q}$.
Recall that $y_t=x_t^*+v_t$ and this implies that $Y=G_xZ^*+\zeta$, where $\zeta=[v_2^T\cdots,v_N^T]^T$. By Assumption \[ass:zero\_mean\_fin\_var\], $\mathbb{E}(\|v_t\|^2)<\infty$, which implies $\mathbb{E}(\|\zeta\|^2)<+\infty$. Therefore, $\mathbb{E}(\|Y\|^2)=\mathbb{E}(\|G_xZ^*+\zeta\|^2)\leq 2\left(\mathbb{E}(\|G_xZ^*\|^2)+\mathbb{E}(\|\zeta\|^2)\right)\leq 2\big(\|G_x\|_F^2\mathbb{E}(\|Z^*\|^2)$ $+\mathbb{E}(\|\zeta\|^2)\big)$ $<+\infty$. Hence it holds that $$\begin{aligned}
f_x(Q,\xi_x)&=\|Y-G_x\mathscr{F}(Q)^{-1}\tilde{A}\bar{x}\|^2\\
&\leq 2\left(\|Y\|^2+\|G_x\mathscr{F}(Q)^{-1}\tilde{A}\bar{x}\|^2\right)\\
&\leq 2\left(\|Y\|^2+\|G_x\|_F^2\|\mathscr{F}(Q)^{-1}\|_F^2\|\tilde{A}\|_F^2\|\bar{x}\|^2\right)\\
&\leq 2\left(\|Y\|^2+\bar{\varphi}^2\|G_x\|_F^2\|\tilde{A}\|_F^2\|\bar{x}\|^2\right):=d(\xi_x),\end{aligned}$$ and it is clear that $\mathbb{E}(d(\xi_x))<+\infty$ since $\mathbb{E}(\|Y\|^2)<+\infty$ and $\mathbb{E}(\|\bar{x}\|^2)<+\infty$. By the analysis above, we conclude that the uniform law of large numbers [@jennrich1969asymptotic] applies, namely, $$\sup_{Q\in\bar{\mathbb{S}}^n_+(\varphi)}\|\frac{1}{M}\sum_{i=1}^M f_x(Q,\xi_x^{(i)})-\mathbb{E}_{\xi_x}\left(f_x(Q;\xi_x)\right)\|\overset{p}{\rightarrow} 0.
\label{eq:uniform_law_of_large_numbers}$$ Besides , if we are able to show $\bar{Q}$ is the unique optimizer to , then $Q_M^*\overset{p}{\rightarrow}\bar{Q}$ follows directly from Theorem 5.7 in [@van2000asymptotic].
Note that by assumption, $\bar{x}$, $\{v_t\}$ are independent, hence $x_t^*(Q;\bar{x})$ are independent of the noises $\{v_t\}$. Since $y_t=x^*_t(\bar{Q},\bar{x})+v_t$, $\mathbb{E}(v_t)=0,t=2:N$, the risk function can be simplified as $\mathscr{R}^x(Q)=L(Q)+\sum_{t=2}^N\mathbb{E}(\|v_t\|^2)$, where $L(Q)=\mathbb{E}\left(\sum_{t=2}^N\|x_t^*(\bar{Q},\bar{x})-x_t^*(Q,\bar{x})\|^2\right)$. It is clear that $Q=\bar{Q}$ minimizes the risk function $\mathscr{R}^x(Q)$. What remains to show is the uniqueness. By Theorem \[thm:well\_poseness\], if $Q\neq\bar{Q}$, then $A_{cl}(1,Q)\neq A_{cl}(1,\bar{Q})$. Hence there exists $\eta\in\mathbb{R}^n$, such that $A_{cl}(1,Q)\eta\neq A_{cl}(1,\bar{Q})\eta$. On the other hand, by Assumption \[ass:neighbourhood\_not\_measure\_zero\], $\exists r(\eta)\neq 0$, such that $\mathbb{P}(\bar{x}\in \mathscr{B}_\varepsilon(r\eta))>0,\forall \varepsilon$. Since $r\neq 0$, $A_{cl}(1,Q)(r\eta)\neq A_{cl}(1,\bar{Q})(r\eta)$. Further, since $A_{cl}(1,Q)\eta$ is continuous with respect to $\eta,\forall Q\in\mathbb{S}^n_+$, this implies $\exists \varepsilon_1$, such that $A_{cl}(1,Q)\bar{x}\neq A_{cl}(1,\bar{Q})\bar{x},\forall \bar{x}\in\mathscr{B}_{\varepsilon_1}(r\eta)$ and $\mathbb{P}\left(\bar{x}\in\mathscr{B}_{\varepsilon_1}(r\eta)\right)>0$. Thus $L(Q)\geq \int_{\mathscr{B}_{\varepsilon_1}(r\eta)}\|\left(A_{cl}(1,Q)-A_{cl}(1,\bar{Q})\right)\bar{x}(\omega)\|^2\mathbb{P}(d\omega)>0$. Hence $\bar{Q}$ is the unique minimizer and the statement follows.
Suppose $Q_M^*\in\bar{\mathbb{S}}^n_+(\varphi)$, $\{x_{2:N}^{(i)*}\}$ and $\{\lambda_{2:N}^{(i)*}\}$ solves the problem of minimizing $\mathscr{R}^u(Q)$, then $Q_M^*\overset{p}{\rightarrow}\bar{Q}$ as $M\rightarrow \infty$, where $\bar{Q}$ is the true value used in the “forward" problem .
Similar to the proof of Theorem \[thm:statistical\_consistency\], we can rewrite $f_u(Q;\xi_u)$ as $$\begin{aligned}
f_u(Q;\xi_u)=\|\Upsilon-G_uZ\|^2=\|\Upsilon-G_u\mathscr{F}(Q)^{-1}\tilde{A}\bar{x}\|^2.\end{aligned}$$ The first part of the proof, which involves the uniform law of large numbers, proceeds analogously to the proof of Theorem \[thm:statistical\_consistency\]. What remains to show is that $\bar{Q}$ is the unique minimizer.
Analogously, since $\bar{x}$ and $\{w_t\}$ are independent by assumption, $u_t^*(Q;\bar{x})$ are independent of the noises $w_t$. Since $\mu_t=u_t^*(\bar{Q};\bar{x})+w_t$, $\mathbb{E}(w_t)=0,t=1:N-1$, the risk function can be written as $\mathscr{R}^u(Q)=L^\prime(Q)+\sum_{t=1}^{N-1}\mathbb{E}(\|w_t\|^2)$, where $L^\prime(Q)=\mathbb{E}\left(\sum_{t=1}^{N-1}\|u_t^*(\bar{Q};\bar{x})-u_t^*(Q;\bar{x})\|^2\right)$. By Theorem \[thm:well\_poseness\], if $Q\neq \bar{Q}$, then $A_{cl}(1,Q)\neq A_{cl}(1,\bar{Q})$. Since $B$ has full column rank and recall that $A_{cl}(1,Q)=A+BK_1(Q)$, it follows that $K_1(Q)\neq K_1(\bar{Q})$. Similar to the proof of Theorem \[thm:statistical\_consistency\] and using the fact that $u_1(Q,\bar{x})=K_1(Q)\bar{x}$, we can show that $L^\prime(Q)>0$. Therefore, $\bar{Q}$ is the unique minimizer to the risk-minimizing problem and the statement follows.
Now that we have shown that the solutions $Q_M^*$ to the risk-minimizing problems are statistically consistent, we consider how to solve the problems. When actually solving them, $Y^{(i)}$, $\Upsilon^{(i)}$ and $\bar{x}$ are substituted with the actual measurements of the samples. From the analysis above, we know that the risk-minimizing problem can be rewritten in the compact form $\min_{Q\in\bar{\mathbb{S}}^n_+(\varphi)} \frac{1}{M}\sum_{i=1}^M\|Y^{(i)}-G_x\mathscr{F}(Q)^{-1}\tilde{A}\bar{x}\|^2$ (the risk-minimizing problem for control-input observations follows analogously). To solve the problems, we introduce the following convex matrix function $\hat{f}_\varepsilon:\mathbb{S}^n\mapsto\mathbb{R}$, $\hat{f}_\varepsilon(Q)=\varepsilon\ln \mathrm{tr}(e^{Q/\varepsilon})=\varepsilon \ln[\sum_{i=1}^ne^{\sigma_i(Q)/\varepsilon}]$ [@nesterov2007smoothing], where $\sigma_i(Q)$ is the $i$-th largest eigenvalue of $Q$. It holds that $\sigma_1(Q)\leq \hat{f}_\varepsilon(Q)\leq \sigma_1(Q)+\varepsilon\ln n$. Hence when $\varepsilon$ is small, the function $\hat{f}_\varepsilon(Q)$ approximates the largest eigenvalue of $Q$ well. On the other hand, the gradient of $\hat{f}_\varepsilon(Q)$ reads $\nabla_Q\hat{f}_\varepsilon(Q)=[\sum_{i=1}^ne^{\sigma_i(Q)/\varepsilon}]^{-1}[\sum_{i=1}^ne^{\sigma_i(Q)/\varepsilon}\nu_i\nu_i^T],$ where $(\sigma_i(Q),\nu_i)$ are eigen-pairs of $Q$ with $\|\nu_i\|=1,\forall i$. Note that for $\varepsilon$ small enough, the gradient numerically depends only on the eigenvectors that correspond to the largest eigenvalues [@nesterov2007smoothing], which makes it easy to compute. With this setup, we approximate the positive semidefinite constraint $Q\in\mathbb{S}^n_+$ by $\hat{f}_\varepsilon(-Q)\leq 0$, so that the optimization problems can be solved with standard nonlinear optimization solvers.
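As a quick numerical illustration of the smoothing above (a sketch for intuition, not the solver used in the paper), the following snippet evaluates $\hat{f}_\varepsilon$ and its gradient via an eigendecomposition and checks the sandwich bound $\sigma_1(Q)\leq \hat{f}_\varepsilon(Q)\leq \sigma_1(Q)+\varepsilon\ln n$:

```python
import numpy as np

def f_hat(Q, eps):
    """eps * ln tr(exp(Q / eps)), computed stably via log-sum-exp."""
    sig = np.linalg.eigvalsh(Q)
    s1 = sig.max()  # shift by sigma_1 to avoid overflow in exp
    return s1 + eps * np.log(np.sum(np.exp((sig - s1) / eps)))

def grad_f_hat(Q, eps):
    """Gradient: softmax-weighted sum of eigenvector outer products."""
    sig, V = np.linalg.eigh(Q)
    w = np.exp((sig - sig.max()) / eps)
    w /= w.sum()
    return (V * w) @ V.T  # equals V diag(w) V^T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = (A + A.T) / 2          # a symmetric test matrix, n = 4
eps = 1e-2
s1 = np.linalg.eigvalsh(Q).max()
val = f_hat(Q, eps)        # lies between s1 and s1 + eps*ln(4)
```

The gradient has unit trace and, for small $\varepsilon$, concentrates on the leading eigenvector, matching the observation above.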
Numerical Examples {#sec:examples}
==================
To illustrate the performance of the estimation statistically, we consider a series of discrete-time systems sampled from continuous systems $\dot{x}=\hat{A}x+\hat{B}u$ with the sampling period $\Delta t=0.1$, where $$\begin{aligned}
\hat{A}=
\begin{bmatrix}
0 &1\\a_1 &a_2
\end{bmatrix},
\hat{B}=
\begin{bmatrix}
0\\1
\end{bmatrix};\end{aligned}$$ and $a_1,a_2$ are sampled from uniform distributions on $[-3,3]$. Systems are generated in this form to ensure their controllability. We take the time horizon $N=50$. The “real" $\bar{Q}$ is generated as $\bar{Q}=Q_1Q_1^T$, where each element of $Q_1$ is sampled from the uniform distribution on $[-1,1]$. We set the feasible compact set for $Q$ as $\bar{\mathbb{S}}^n_+(5)$ (we discard any randomly generated $\bar{Q}$ that does not belong to $\bar{\mathbb{S}}^n_+(5)$). Each element of the initial conditions $\bar{x}^{(1:M)}$ is sampled from a uniform distribution supported on $[-5,5]$. We generate 200 different sets of $(\hat{A},\hat{B},\bar{Q})$, and for each fixed $(\hat{A},\hat{B},\bar{Q})$, 200 trajectories are generated, i.e., $M=200$. White Gaussian noise at 15 dB and 20 dB SNR is added to $x_{2:N}^{(1:M)}$ and $u_{1:N-1}^{(1:M)}$, respectively, to obtain $y_{2:N}^{(1:M)}$ and $\mu_{1:N-1}^{(1:M)}$. The MATLAB function `fmincon` is used to solve the risk-minimizing problem, with $Q=I$ as the initial iterate in all cases.
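For concreteness, the sampling procedure just described can be sketched as follows. This is an illustrative reimplementation, not the authors' MATLAB code: zero-order-hold discretization is an assumption (the paper only states that the systems are sampled with period $\Delta t=0.1$), and we take the bound defining $\bar{\mathbb{S}}^n_+(5)$ to be on the largest eigenvalue.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def sample_system(rng, dt=0.1):
    """Draw a1, a2 ~ U[-3, 3] and discretize xdot = A_hat x + B_hat u (ZOH)."""
    a1, a2 = rng.uniform(-3, 3, size=2)
    A_hat = np.array([[0.0, 1.0], [a1, a2]])
    B_hat = np.array([[0.0], [1.0]])
    M = np.zeros((3, 3))
    M[:2, :2], M[:2, 2:] = A_hat, B_hat   # augmented-matrix ZOH trick
    Md = expm_taylor(M * dt)
    return Md[:2, :2], Md[:2, 2:]         # discrete-time (A, B)

def sample_Q(rng, n=2, bound=5.0):
    """Q_bar = Q1 Q1^T with Q1 entries ~ U[-1, 1]; reject if outside the set."""
    while True:
        Q1 = rng.uniform(-1, 1, size=(n, n))
        Q = Q1 @ Q1.T
        if np.linalg.eigvalsh(Q).max() <= bound:
            return Q

rng = np.random.default_rng(0)
A, B = sample_system(rng)
Q_bar = sample_Q(rng)
x_bar = rng.uniform(-5, 5, size=2)        # one initial condition
```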
As illustrated in Fig. \[fig:statistical\_consistency1\] and Fig. \[fig:statistical\_consistency2\], the relative error $\|Q_{est}-\bar{Q}\|_F/\|\bar{Q}\|_F$ roughly decreases as $M$ increases.
![The relative errors of minimizing $\mathscr{R}^x_M(Q)$.[]{data-label="fig:statistical_consistency1"}](observation_x_20_200_rel-eps-converted-to.pdf){width="50.00000%"}
![The relative errors of minimizing $\mathscr{R}^u_M(Q)$.[]{data-label="fig:statistical_consistency2"}](observation_u_20_200_rel-eps-converted-to.pdf){width="50.00000%"}
The result is also compared with the “residual minimization" method proposed in [@keshavarz2011imputing]. In [@keshavarz2011imputing], it is assumed that the observations of the solutions to the “forward" problems are completely available, namely, in this scenario, both $y_{1:N}^{(1:M)}$ and $\mu_{1:N-1}^{(1:M)}$ are available. To make the comparison fair, observations of both the optimal trajectories and the control inputs are used in this numerical example. This does not change the statistical consistency of the method. The result is shown in Fig. \[fig:statistical\_error1\].
![Our method vs. residual minimization[]{data-label="fig:statistical_error1"}](consistency_15_20.pdf){width="40.00000%"}
We denote the estimate of $Q$ obtained by our method as $Q_{est}$ and the estimate obtained by “residual minimization" [@keshavarz2011imputing] as $Q_{RM}$. In Fig. \[fig:statistical\_error1\], the blue line marks $\|Q_{est}-\bar{Q}\|_F=\|Q_{RM}-\bar{Q}\|_F$. As can be seen from Fig. \[fig:statistical\_error1\], our method statistically outperforms the residual-minimization method.
Conclusion {#sec:conclusion}
==========
In this paper, we analyse the inverse optimal control problem for discrete-time LQR in finite-time horizons. We consider both the noiseless case (in which observations of the optimal trajectories are exact) and the noisy case (in which such observations are corrupted by additive noise). The well-posedness of the problem is first justified. In the noiseless case, we discuss identifiability of the problem and provide sufficient conditions for the uniqueness of the solution. In the noisy case, we formulate the search for $Q$ as an optimization problem and prove that this formulation is statistically consistent. Numerical examples show that our method performs better than the method proposed in [@keshavarz2011imputing].
---
abstract: 'Quantum protocols can be made more efficient with high-dimensional entangled states. Photons carrying orbital angular momentum can be used to create high-dimensional entangled states. In this paper we experimentally demonstrate entanglement of the orbital angular momentum between the Stokes and anti-Stokes photons generated in a hot atomic ensemble using spontaneous four-wave mixing. This experiment also suggests the existence of entanglement in spatial degrees of freedom between the hot atomic ensemble and the Stokes photon.'
author:
- 'Qun-Feng Chen'
- 'Bao-Sen Shi'
- 'Yong-Sheng Zhang'
- 'Guang-Can Guo'
title: Entanglement of the orbital angular momentum states of the photons generated in a hot atomic ensemble
---
Entanglement is one of the most fascinating phenomena of quantum mechanics and is used as a resource in the field of quantum information[@RevModPhys.74.347]. High-dimensional two-particle entangled states can be used to realize some quantum information protocols more efficiently [@PhysRevA.64.012306; @PhysRevLett.85.3313]. Photons carrying orbital angular momenta (OAM) are used to create high-dimensional entangled states, since OAM can be used to define an infinite-dimensional Hilbert space [@calvo:013805]. The first experiment on the entanglement of OAM states generated via spontaneous parametric down-conversion in a nonlinear crystal was demonstrated in 2001[@Mair:N:2001:313]; since then, several protocols based on OAM states of photons have been realized experimentally[@Vaziri:PRL:2002:240401; @Vaziri:PRL:2003:227902; @Langford:PRL:2004:053601]. The transfer of OAM between classical light and cold atoms[@PhysRevLett.83.4967; @PhysRevLett.90.133001; @Barreiro:OL:2004:1515] and hot atoms[@Jiang:PRA:2006:043811] has also been reported in the past years. Recently, the entanglement of OAM states of the photons generated in a cold atomic system using the Duan-Lukin-Cirac-Zoller (DLCZ) scheme[@Duan:N:2001:413] was clarified by Inoue *et al.*[@inoue:053809]. So far, there has been no experimental study of the entanglement of the OAM states of photons generated in a hot atomic system. In this paper we demonstrate the entanglement of OAM states of the photons generated in a hot atomic ensemble using spontaneous four-wave-mixing (SFWM)[@balic:183601; @Chen:unpublished]. Our experiment is different from the experiment done by Inoue *et al.*: in our experiment, SFWM is used to generate a photon pair, in contrast with the experiment of Ref. [@inoue:053809], in which a method based on the DLCZ scheme is used. Furthermore, our experiment is based on a hot atomic ensemble, which is easier to realize than schemes based on cold atomic systems.
In our experiment, we clearly demonstrate the entanglement of the OAM between the Stokes and anti-Stokes photons generated via SFWM in a hot atomic ensemble; the concurrence obtained in this experiment is about 0.81. This experiment also suggests the existence of entanglement in spatial degrees of freedom between the hot atomic ensemble and the Stokes photon.
The schematic setup used in this experiment is shown in Fig. \[fig:1\]. The energy levels and the frequencies of the lasers used are shown in Fig. \[fig:1\](a). A strong coupling laser, which is resonant with the $|b\rangle\to|c\rangle$ transition, drives the populations of the atoms into level $|a\rangle$. A weak pump laser, resonant with the $|a\rangle\to|d\rangle$ transition, is applied to the system. The $|d\rangle\to|b\rangle$ transition will be induced by the pump laser and the Stokes (S) photons will be generated. When a Stokes photon is emitted, the atomic ensemble collapses into the state $\frac{1}{\sqrt{N}}
\sum_j|a_1,a_2,\ldots,b_j,\ldots,a_N\rangle$. The strong coupling laser repumps the atomic ensemble back to the state $|a_1,a_2,\ldots a_N\rangle$, and an anti-Stokes (AS) photon is generated. In this process, the energy, momentum and OAM of the photons will be conserved[@Scully:PRL:2006:010501; @Jiang:PRA:2006:043811],i. e., $$\begin{aligned}
\omega_{\rm S}+\omega_{\rm AS}&=&\omega_{\rm P}+\omega_{\rm
C},\nonumber \\
\vec k_{\rm S}+\vec k_{\rm AS}&=&\vec k_{\rm P}+\vec k_{\rm C},\nonumber\\
L_{\rm S}+L_{\rm AS}&=&L_{\rm P}+L_{\rm C}\,,
\label{cons}\end{aligned}$$ where the $\omega_i$, $\vec k_i$ and $L_i$ represent the frequency, wave vector and OAM of the corresponding photons respectively. According to Eq. (\[cons\]), when the pump and coupling lasers carry zero OAM, the Stokes and anti-Stokes photons will be in the entangled state of $$|\Psi\rangle= C\sum_{i=-\infty}^{+\infty}\alpha_i|i\rangle_{\rm
S}|-i\rangle_{\rm AS}\,,
\label{state}$$ where $C$ is the normalization coefficient, $\alpha_i$ are the relative amplitudes of the OAM states. In this work we only investigate the entanglement concerned with $i=0$ and $1$, thus the experimental expected entangled state can be written as: $$|\Psi\rangle = C(|0\rangle_{\rm S}|0\rangle_{\rm
AS} + \alpha_1 |1\rangle_{\rm S}|-1\rangle_{\rm AS})\,.
\label{}$$ Although we only discuss the two dimensional case, it is natural to presume that our discussion can be extended into high-dimensional cases over a wide range of OAM[@inoue:053809].
A beam carrying well-defined OAM is in a Laguerre-Gaussian (LG) mode[@PhysRevA.45.8185]; it can be described as an LG$_{pl}$ mode, where $p+1$ is the number of radial nodes and $l$ is the number of $2\pi$ phase variations along a closed path around the beam center. Here we only consider the case $p=0$. The LG$_{0l}$ mode carries an OAM of $l\hbar$ per photon and has a doughnut-shaped intensity distribution: $$E_{0l}(r,\varphi) = E_{00}(r) \frac{1}{\sqrt{|l|!}}
\left(\frac{r\sqrt{2}}{w}\right)^{|l|} e^{-il\varphi},
\label{}$$ where $$E_{00}(r)= \sqrt{\frac{2}{\pi}}\frac{1}{w}\exp\left(-\frac{r^2}{w^2}\right)$$ is the intensity distribution of a Gaussian mode beam which carries zero OAM (LG$_{00}$) and $w$ is the beam waist. In most cases, computer-generated holograms (CGH) are used to create the LG modes of various orders[@Arlt:JOM:1998:1231]. The superposition of the LG$_{00}$ mode and the LG$_{01}$ mode can be achieved by shifting the dislocation of the hologram out of the beam center a certain amount[@Mair:N:2001:313; @Vaziri:JOO:2002:S47].
In this paper, a CGH combined with a single-mode fiber is used for mode discrimination. The $\pm 1$ order diffraction of the CGH increases the OAM of the input beam by $\pm 1\hbar$ per photon when the dislocation of the hologram overlaps the beam center. The first-order diffraction of the CGH is coupled into the single-mode fiber. The single-mode fiber collects only the Gaussian mode; therefore the combination of the CGH and the single-mode fiber can be used to select the LG$_{0\mp 1}$ mode, the LG$_{00}$ mode, or a superposition of them, depending on which of the $\pm 1$ diffraction orders of the hologram is coupled and on the displacement of the hologram. It should be noted that there are also higher-order LG modes in the first-order diffraction, but they are very small compared with the LG$_{0\pm1}$ mode[@Vaziri:JOO:2002:S47] and their influence is ignored in this paper.
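As a sanity check of the mode functions above, the following sketch evaluates $E_{00}$ and $E_{01}$ on a polar grid and verifies numerically that both are unit-normalized and mutually orthogonal, which is what lets the single-mode fiber act as a projector onto a single mode (an illustration, with an arbitrary waist $w=1$):

```python
import numpy as np
from math import factorial

def lg_field(r, phi, l, w=1.0):
    """LG_{0l} amplitude as defined in the text (p = 0)."""
    e00 = np.sqrt(2 / np.pi) / w * np.exp(-r**2 / w**2)
    return e00 * (r * np.sqrt(2) / w)**abs(l) * np.exp(-1j * l * phi) \
           / np.sqrt(factorial(abs(l)))

# polar grid; the area element is dA = r dr dphi
r = np.linspace(1e-6, 6.0, 1200)
phi = np.linspace(0, 2 * np.pi, 720, endpoint=False)
R, PHI = np.meshgrid(r, phi)
dA = R * (r[1] - r[0]) * (phi[1] - phi[0])

E0, E1 = lg_field(R, PHI, 0), lg_field(R, PHI, 1)
norm0 = np.sum(np.abs(E0)**2 * dA)          # ~ 1
norm1 = np.sum(np.abs(E1)**2 * dA)          # ~ 1
overlap = np.sum(np.conj(E0) * E1 * dA)     # ~ 0: distinct OAM modes are orthogonal
```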
![(Color online) (a) Energy levels and frequencies of the lasers used in this experiment. (b) Schematic setup of our experiment. A strong coupling and a weak pump laser, which are resonant with $|5S_{1/2},F=2\rangle \to
|5P_{1/2},F=2\rangle$ and $|5S_{1/2},F=1\rangle \to
|5P_{3/2},F=2\rangle$ transitions of $^{87}$Rb respectively, are in counter propagating. Pairs of correlated Stokes and anti-Stokes photons are generated in phase-matched directions. H1 and H2 are computer-generated holograms; SMF1 and SMF2 are single-mode fibers, which are connected to single photon counting modules(SPCM); F1 and F2 are filters.[]{data-label="fig:1"}](fig1){width="8.3cm"}
The schematic experimental setup is shown in Fig. \[fig:1\](b). A natural rubidium cell with a length of 5 cm is used as the working medium. The temperature of the cell is kept at about 50$^\circ$C, corresponding to an atomic density of about $1\times10^{11}/{\rm cm}^3$. The coupling laser, which is vertically linearly polarized, is resonant with the $|5S_{1/2},F=2\rangle \to |5P_{1/2},F=2\rangle$ transition of $^{87}$Rb. The power of the coupling laser is about 7 mW. The pump laser, which is counter-propagating with the coupling laser and horizontally polarized, is resonant with the $|5S_{1/2},F=1\rangle \to |5P_{3/2},F=2\rangle$ transition of ${}^{87}$Rb. The power of the pump is about $60\,\mu$W. The $1/e^2$ diameters of these two lasers are about 2 mm. The vertically polarized Stokes photons emitted at an angle of about 4$^\circ$ to the lasers are diffracted by a CGH (H1), and the $-1$ order diffraction of H1 is coupled into a single-mode fiber (SMF1) after being filtered by F1. The diffraction of H1 decreases the OAM of the input photons by $1\hbar$ when the displacement of H1 is 0. The displacement of a CGH is defined as the distance between the dislocation of the CGH and the beam center. The horizontally polarized anti-Stokes photons in the phase-matched direction are diffracted by the other CGH (H2). The $+1$ order diffraction is coupled into SMF2 after being filtered by F2, which increases the OAM of the collected anti-Stokes photons by $1\hbar$ at 0 displacement. The diffraction efficiency of the CGHs used in this experiment is about $40\%$. Each of the filters F1 and F2 consists of an optically pumped paraffin-coated $^{87}$Rb cell and a ruled diffraction grating. The optically pumped rubidium cell is used to filter out the scattering of the co-propagating laser, and the ruled diffraction grating is used to separate the photons at the D1 and D2 transitions.
The collected photons are detected by photon-counting modules (Perkin-Elmer SPCM-AQR-15). The time-resolved coincidence statistics of the Stokes and anti-Stokes photons are accumulated by a time digitizer (FAST ComTec P7888-1E) with 2 ns bin width and 160 bins in total. In this experiment the Stokes photons are used as the START of the P7888-1E and the anti-Stokes photons, after a certain delay, are used as the STOP of the P7888-1E.
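The start-stop histogramming performed by the time digitizer can be sketched as follows. This is a toy simulation with made-up rates, purely to illustrate the 2 ns binning and the estimation of the flat background from large delays used later in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
bin_ns, n_bins = 2, 160                      # digitizer settings from the text
# toy model: correlated stops with a few-ns delay, plus uniform accidentals
correlated = 4 + rng.exponential(8.0, size=20000)         # delays in ns
accidental = rng.uniform(0, bin_ns * n_bins, size=60000)  # flat background
delays = np.concatenate([correlated, accidental])
hist, edges = np.histogram(delays, bins=n_bins, range=(0, bin_ns * n_bins))
bg = hist[edges[:-1] >= 50].mean()   # background level from tau > 50 ns
g_peak = hist.max() / bg             # normalized peak correlation
```

With real timestamps one would histogram the STOP minus START differences the same way; the background estimated at large $\tau$ is what normalizes the data points.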
The time-resolved coincidence counts of the Stokes and anti-Stokes photons when the displacements of both CGHs are far larger than the beam waists are shown in Fig. \[fig:2\]. When the displacement of a CGH is far larger than the beam waist, the CGH hardly affects the mode of the photons; therefore Fig. \[fig:2\] shows the coincidence between the Stokes and anti-Stokes photons in the LG$_{00}$ mode. The maximum coincidence counts are obtained at a relative delay of 12 ns between the Stokes and anti-Stokes photons, which gives a correlation function of $g_{\rm S,AS}(12\textrm{ ns})=1.57\pm0.04$.
The counting rates of the Stokes and anti-Stokes photons are $1.4\times10^4/$s and $4.0\times10^4/$s respectively. The larger counting rate of the anti-Stokes photons is caused by atoms quickly moving out of and into the coupling beam, which results in a large effective decay rate between the ground states. Atoms in the state $\left|b\right>$ moving into the coupling laser produce uncorrelated anti-Stokes photons. Even when the pump beam is absent, the counting rate of the anti-Stokes photons is larger than 20000/s. These uncorrelated counts cause the large background in the coincidence between the Stokes and anti-Stokes photons, as shown in Fig. \[fig:2\]. From Fig. \[fig:2\] we find that the correlation time between the Stokes and anti-Stokes photons is less than 30 ns.
![(Color online) Time-resolved coincidence counts between the Stokes and anti-Stokes photons. The data are accumulated for about 1000 seconds and then normalized in time. $\tau$ is the relative delay between the Stokes and anti-Stokes photons. The delay between the Stokes and anti-Stokes photons is caused by the time needed to generate the anti-Stokes photons, which is mainly determined by the Rabi frequency of the coupling field[@balic:183601]. []{data-label="fig:2"}](fig2){width="8.3cm"}
In order to evaluate the quantum correlation of the OAM states, we measure the coincidence counts for various displacements of the holograms. Figure \[fig:3\] shows the results when H1 is fixed at various displacements while the displacement of H2 is swept. Every point is obtained as $N=\sum_{\tau =2 \rm ns}^{32 \rm ns}(N(\tau)-bg)/bg$, where $N(\tau)$ is the counting rate of each bin and $bg$ is the background counting rate, obtained by averaging the coincidences between the Stokes and anti-Stokes photons for $\tau>50$ ns. This guarantees that most of the correlated anti-Stokes photons are taken into account. Every point is accumulated over 500 seconds. The data are fitted with the square of the projection function[@Arlt:JOM:1998:1231]: $$\begin{aligned}
a(x0)&=&\int\!\!\!\int e^{-i
\arg(r\cos\varphi-x0, r\sin\varphi)}\nonumber\\
&&\times u_{AS}(r)u_{S}(r, \varphi)^*r\,\mathrm{d}
r\,\mathrm{d}\varphi\,,
\label{cocount}\end{aligned}$$ where $\arg(x,y)$ is the argument of the complex number $x+i\,y$, $e^{-i
\arg(r\cos\varphi-x0, r\sin\varphi)}$ represents the transmitting function of H2 with displacement of $x0$, $ u_{AS}(r)= E_{00}(r)$ is the field amplitude of the anti-Stokes photons collected by the single-mode fiber after being diffracted by the hologram, and $u_{S}(r,\varphi) = \cos\theta
E_{00}(r)+\sin\theta E_{01}(r,\varphi)$ is the field amplitude of the Stokes photons collected by the single-mode fiber. The superposition of the LG$_{00}$ and LG$_{01}$ modes can be controlled by the displacement of H1. Equation (\[cocount\]) gives the projection between the different OAM modes. In this paper the $u_{i}$s are the amplitudes of the Stokes and anti-Stokes photons respectively. This equation holds only if the collapse of the Stokes photon state makes the anti-Stokes photon state collapse into the corresponding state. Therefore, if Eq. (\[cocount\]) always holds, regardless of whether the Stokes photons collapse to stationary states or to superposition states, the Stokes and anti-Stokes photons must be in a quantum-correlated state. In Fig. \[fig:3\] (a), the red squares show the coincidence counts versus the displacement of H2 when the displacement of H1 is far larger than the waist of the Stokes photons, and the green dots show the results when the displacement of H1 is 0. The red line in Fig. \[fig:3\] (a) is fitted with $\theta=0$ and the green dashed line is fitted with $\theta=\pi/2$, meaning the Stokes photons are in the LG$_{00}$ and LG$_{01}$ modes respectively. This figure demonstrates that the collapse of the Stokes photon state into a stationary state leads the anti-Stokes photon state to collapse into the corresponding stationary state. Therefore this figure clearly indicates the correlation of OAM between the Stokes and anti-Stokes photons. However, such a correlation could be obtained even for a mixture of the $|0\rangle_{S}|0\rangle_{AS}$ and $|1\rangle_{S}|-1\rangle_{AS}$ states. To further demonstrate that the Stokes and anti-Stokes photons are in a quantum-correlated state, we displace H1 by a certain amount, which makes the collected Stokes photons be in the superposition states ${1}/{\sqrt{2}}(|0\rangle\pm |1\rangle)$, and then sweep H2. The results are shown in Fig. \[fig:3\] (b).
The data fit well with the theoretical prediction, which demonstrates that the anti-Stokes photon state collapses into the corresponding superposition states when the Stokes photon state collapses into the superposition states. Therefore the results shown in Fig. \[fig:3\] demonstrate that the Stokes and anti-Stokes photons are in strongly quantum correlated OAM states.
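A direct numerical evaluation of Eq. (\[cocount\]) reproduces this qualitative behavior. The sketch below uses $w=0.8$, the waist value of the fits; it is an illustration of the projection, not the fitting code. For $\theta=\pi/2$ the projection peaks at zero displacement of H2 and vanishes at large displacement, while for $\theta=0$ it is the other way around:

```python
import numpy as np

w = 0.8  # beam waist, as in the fits of Fig. 3

def a_proj(x0, theta, n_r=400, n_phi=400):
    """Numerically evaluate the projection a(x0) of Eq. (cocount)."""
    r = np.linspace(1e-6, 4 * w, n_r)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    R, PHI = np.meshgrid(r, phi)
    E00 = np.sqrt(2 / np.pi) / w * np.exp(-R**2 / w**2)
    E01 = E00 * (R * np.sqrt(2) / w) * np.exp(-1j * PHI)
    u_as = E00
    u_s = np.cos(theta) * E00 + np.sin(theta) * E01
    # arg(x, y) is the argument of x + i y, i.e. arctan2(y, x)
    phase = np.exp(-1j * np.arctan2(R * np.sin(PHI), R * np.cos(PHI) - x0))
    dA = R * (r[1] - r[0]) * (phi[1] - phi[0])
    return np.sum(phase * u_as * np.conj(u_s) * dA)

lg01_center = abs(a_proj(0.0, np.pi / 2))**2   # large: H2 undoes the OAM
lg01_far = abs(a_proj(10 * w, np.pi / 2))**2   # small: orthogonal modes
lg00_center = abs(a_proj(0.0, 0.0))**2         # small
lg00_far = abs(a_proj(10 * w, 0.0))**2         # large
```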
![(Color online) Coincidence counts versus the displacement of H2 for different displacements of H1. (a) shows the results when the Stokes photons are in the stationary states $|0\rangle$ (red squares) and $|1\rangle$ (green dots); (b) shows the results when the Stokes photons are in the superposition states $(|0\rangle\pm|1\rangle)/\sqrt{2}$. The data are fitted using the square of Eq. (\[cocount\]) with $w=0.8$ mm. []{data-label="fig:3"}](fig3){width="8.3cm"}
![(Color online) Graphical representation of the reconstructed density matrix. (a) is the real part and (b) is the imaginary part.[]{data-label="fig:4"}](fig4){width="6cm"}
To further demonstrate the entanglement of the Stokes and anti-Stokes photons, we perform a two-qubit state tomography[@PhysRevA.64.052312], and get the full state of the Stokes and anti-Stokes photons. The density matrix is reconstructed from the experimentally obtained coincidences with various combinations of the measurement basis. A graphical representation of the reconstructed density matrix is shown in Fig. \[fig:4\]. From the density matrix, the fidelity[@Nielsen:2000] to the maximally entangled state $|\Psi\rangle =
(|0\rangle_{S}|0\rangle_{AS} +
|1\rangle_{S}|-1\rangle_{AS})/\sqrt{2}$ is estimated to be about $\langle \Psi|\rho|\Psi\rangle=0.89$. The concurrence[@PhysRevLett.80.2245] estimated from the density matrix is about $0.81>0$, which clearly demonstrates that the Stokes and anti-Stokes photons are in an entangled state[@PhysRevLett.80.2245]. The entanglement of formation[@Nielsen:2000] is also estimated to be 0.74.
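The concurrence quoted here is the standard Wootters measure. For reference, a minimal implementation (not the authors' tomography pipeline) together with two sanity checks:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy            # spin-flipped state
    lam = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sqrt(np.abs(np.sort(lam.real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# |Psi> = (|0>_S|0>_AS + |1>_S|-1>_AS)/sqrt(2) written in a 2-qubit basis
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(psi, psi.conj())            # concurrence 1
rho_mixed = np.eye(4) / 4                       # maximally mixed: concurrence 0
```

Applied to the reconstructed $\rho$ of Fig. \[fig:4\], this formula yields the quoted value of about 0.81.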
The Stokes and anti-Stokes photons are not generated simultaneously in the SFWM. The atomic ensemble collapses into the state $\frac{1}{\sqrt{N}} \sum_j|a_1,a_2,\ldots,b_j,\ldots,a_N\rangle$ after emitting a Stokes photon, so the information of the Stokes photon is first stored in the atomic system. Later, the information of the atomic ensemble is retrieved by the coupling laser and an anti-Stokes photon is generated[@Scully:PRL:2006:010501; @inoue:053809]; the anti-Stokes photon carries the information of the atomic ensemble. The rate at which the anti-Stokes photon is generated is mainly determined by the Rabi frequency of the coupling laser[@balic:183601]. Therefore the entanglement of OAM between the Stokes and anti-Stokes photons might suggest the existence of entanglement of OAM between the Stokes photon and the atomic ensemble. Our work is different from the work of V. Boyer *et al.*[@boyer:143601]. In their work, a four-wave-mixing process[@boyer:143601:1] is used to generate spatially multimode quantum-correlated twin beams with finite OAM in a hot atomic vapor. Their experiment is not a spontaneous process and is not at the single-photon level. They also did not demonstrate entanglement between the beams.
We estimate that the main sources of error in this experiment are the following: the decay rate of the atoms is very large, which causes large background counts; the frequency of the lasers is not perfectly stable; there are also other LG modes in the diffraction besides the LG$_{00}$ and LG$_{01}$ modes and their superposition[@Arlt:JOM:1998:1231; @Vaziri:JOO:2002:S47]; and the superposition of the LG$_{00}$ and LG$_{01}$ states is obtained by shifting the hologram, which depends on the beam waist, so small fluctuations of the beam position also cause errors.
In summary, we have demonstrated the entanglement of OAM states between the Stokes and anti-Stokes photons generated via SFWM in a hot rubidium cell. The entanglement of the Stokes and anti-Stokes photons also suggests that the Stokes photon might be entangled with the hot atomic ensemble in spatial degrees of freedom (OAM in this paper).
We thank Pei Zhang for supplying computer-generated holograms and some useful discussion. We also thank Xi-Feng Ren for some useful discussion. This work is supported by National Fundamental Research Program(2006CB921907), National Natural Science Foundation of China(60621064, 10674126, 10674127), the Innovation funds from Chinese Academy of Sciences, and the Program for NCET.
---
abstract: 'While stochastic gradient descent (SGD) is a workhorse in machine learning, the learning properties of many of its practically used variants are hardly known. In this paper, we consider least squares learning and contribute to filling this gap, focusing on the effect and interplay of multiple passes, mini-batching and averaging, and in particular tail averaging. Our results show how these different flavors of SGD can be combined to achieve optimal learning errors, hence providing practical insights.'
author:
- 'Nicole Mücke[^1] , Gergely Neu[^2] and Lorenzo Rosasco[^3]'
bibliography:
- 'bib\_SGD.bib'
title:
- 'Beating SGD Saturation with Tail-Averaging and Minibatching'
---
Introduction
============
Stochastic gradient descent (SGD) provides a simple and yet stunningly efficient way to solve a broad range of machine learning problems. Our starting observation is that, while a number of variants including multiple passes over the data, mini-batching and averaging are commonly used, their combination and learning properties are studied only partially. The literature on convergence properties of SGD is vast, but usually only one pass over the data is considered, see, e.g., [@NemJudLan09]. In the context of nonparametric statistical learning, which we consider here, the study of one-pass SGD was probably first considered in [@SmaYao06] and then further developed in a number of papers (e.g., [@YinPon08; @YaoTar14; @orabona]). Another line of work derives statistical learning results for one-pass SGD with averaging from a worst-case sequential prediction analysis [@RSS12; @HK14; @RaShaSri11]. The idea of using averaging also has a long history going back to at least the works of [@R88] and [@PolJu92], see also [@ShaZha13] and references therein. More recently, averaging was shown to lead to larger, possibly constant, step-sizes, see [@BacMou13; @DieuBa16; @DieFlaBac17]. A different take on the role of (weighted) averaging was given in [@NeuRos18], highlighting a connection with ridge regression, a.k.a. Tikhonov regularization. A different flavor of averaging called *tail averaging* for one-pass SGD was considered in [@JKKNS18] in a parametric setting. The role of minibatching has also been considered and shown to potentially lead to linear parallelization speedups, see e.g. [@Cotter11] and references therein. Very few results consider the role of multiple passes for learning. Indeed, this variant of SGD is typically analyzed for the minimization of the empirical risk, rather than the actual population risk, see for example [@Ber97].
To the best of our knowledge, the first paper to analyze the learning properties of multipass SGD was [@RosVil15], where a cyclic selection strategy was considered. Other results for multipass SGD were then given in [@HarRecSin16] and [@LinCamRos16]. Our starting point is the set of results in [@LinRos17], where optimal rates for multipass SGD were derived, also considering the effect of mini-batching. Following the approach of this latter paper, multipass SGD with averaging but no minibatching was analyzed by [@PillRudBa18].
In this paper, we develop and improve the above results on two fronts. On the one hand, we consider for the first time the role of multiple passes, mini-batching, and averaging at once. On the other hand, we further study the beneficial effect of tail averaging. Both mini-batching and averaging are known to allow larger step-sizes; our results show that their combination allows even more aggressive parameter choices. At the same time, averaging has been shown to lead to slower convergence rates in some cases. In a parametric setting, averaging prevents linear convergence rates [@BacMou13; @DieFlaBac17]. In a nonparametric setting, it prevents exploiting the possible regularity of the solution [@DieuBa16], a phenomenon called [*saturation*]{} [@engl96]. In other words, uniform averaging can prevent optimal rates in a nonparametric setting. Our results provide a simple explanation of this effect, showing that it has a purely deterministic nature. Further, we show that tail averaging makes it possible to bypass this problem. These results parallel the findings of [@JKKNS18], which showed similar beneficial effects of tail-averaging and minibatching in the finite-dimensional setting. Following [@LinRos17], our analysis relies on the study of batch gradient descent and of the discrepancy between batch gradient descent and SGD, with the additional twist that it also considers the role of tail-averaging. The rest of the paper is organized as follows. In Section \[LS\_learn\], we describe the least-squares learning problem that we consider, as well as the different SGD variants we analyze. In Section \[appet\], we collect a number of observations shedding light on the role of uniform and tail averaging. In Section \[sec:main\], we present and discuss our main results. In Section \[numerics\], we illustrate our results via some numerical simulations. Proofs and technical results are deferred to the appendices.
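To make the combination of multiple passes, mini-batching, and tail averaging concrete, the following is a minimal least-squares sketch. It is our own illustration with arbitrary step-size and batch choices, not the estimator or parameter regime analyzed in the paper.

```python
import numpy as np

def sgd_tail_average(X, y, passes=5, batch=10, step=0.1, tail_frac=0.5, seed=0):
    """Multipass minibatch SGD for least squares, returning the uniform
    average of the last `tail_frac` fraction of the iterates (tail average)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    iterates = []
    for _ in range(passes):
        idx = rng.permutation(n)                      # reshuffle each pass
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            # minibatch gradient of 0.5 * mean((x^T w - y)^2)
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w = w - step * grad
            iterates.append(w.copy())
    k = int(len(iterates) * (1 - tail_frac))          # discard the head
    return np.mean(iterates[k:], axis=0)
```

Setting `tail_frac=1.0` recovers uniform averaging over all iterates, while a small tail fraction keeps only late, nearly converged iterates, which is the mechanism by which tail averaging avoids the saturation effect discussed above.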
Least Squares Learning with SGD {#LS_learn}
===============================
An appetizer: Averaging and Gradient Descent Convergence {#appet}
========================================================
Main Results and Discussion {#sec:main}
===========================
Numerical Illustration {#numerics}
======================
[**Acknowledgments**]{}\
\
NM is supported by the German Research Foundation under DFG Grant STE 1074/4-1. L. R. acknowledges the financial support of the AFOSR projects FA9550-17-1-0390 and BAA-AFRL-AFOSR-2016-0007 (European Office of Aerospace Research and Development), and the EU H2020-MSCA-RISE project NoMADS - DLV-777826.
Analysis {#sec:analysis}
========
[^1]: Institute for Stochastics and Applications, University of Stuttgart, [*nicole.muecke@mathematik.uni-stuttgart.de*]{}
[^2]: Universitat Pompeu Fabra, Barcelona, Spain, [ *gergely.neu@gmail.com*]{}
[^3]: LCSL, Massachusetts Institute of Technology & Istituto Italiano di Tecnologia & DIBRIS, Universita’ degli Studi di Genova, [*lrosasco@mit.edu*]{}
---
abstract: 'Compact steep-spectrum sources (CSSs) likely represent a population of young radio-loud active galactic nuclei (AGNs). Six CSSs have been identified as $\gamma$-ray emitting sources. We present a comprehensive analysis of their $\gamma$-ray emission observed with [*Fermi*]{}/LAT and establish their broadband spectral energy distributions (SEDs). We derive their jet properties by the SED fits with a two-zone leptonic model for radiations from the compact core and large-scale extended region, and explore the possible signature of a unification picture of jet radiation among subclasses of AGNs. We show that the observed $\gamma$-rays of CSSs with significant variability are contributed by the radiation of their compact cores via the inverse Compton process of the torus photons. The derived power-law distribution index of the radiating electrons is $p_1\sim1.5-1.8$, magnetic field strength is $B\sim0.15-0.6$ G, and Doppler boosting factor is $\delta\sim2.8-8.9$. Assuming that the jet is composed of $e^{\pm}$ pairs, the compact cores of CSSs are magnetized and have a high radiation efficiency, similar to that of flat spectrum radio quasars. The six CSSs on average have higher Eddington ratio and black hole mass than those non-GeV-detected CSSs, and they follow the correlation between the jet power in units of Eddington luminosity ($P^{e^{\pm}}_{\rm jet}/L_{\rm Edd}$) and Eddington ratio ($R_{\rm Edd}$) with other sub-classes of AGNs, $P^{e^{\pm}}_{\rm jet}/L_{\rm Edd}\propto R^{0.52\pm 0.03}_{\rm Edd}$, indicating that $R_{\rm Edd}$ would be a key physical driver for the unification scheme of AGN jet radiation.'
author:
- 'Jin Zhang, Hai-Ming Zhang, Ying-Ying Gan, Ting-Feng Yi, Jun-Feng Wang, En-Wei Liang'
title: 'Jet Properties of Compact Steep-Spectrum Sources and an Eddington-Ratio-Driven Unification Scheme of Jet Radiation in Active Galactic Nuclei'
---
Introduction {#sect:intro}
============
Compact steep-spectrum sources (CSSs), a subclass of active galactic nuclei (AGNs), are characterized by a compact angular size ($\leq1\arcsec-2\arcsec$) and a steep high-frequency spectrum at the radio band ($\alpha\geq0.5$, $F_{\nu}\propto\nu^{-\alpha}$; Fanti et al. 1990). The radio morphology of CSSs is typically characterized by fully developed radio lobes and a small linear size ($<15$ kpc), and CSSs make up a significant fraction ($\sim30\%$) of sources at 5 GHz (O’Dea 1998 for a review). Other radio properties, for example a low degree of polarization, complex morphology, and high surface brightness, were reported by van Breugel et al. (1984). Fanti et al. (1990) suggested that most CSSs ($\geq70\%$) are likely intrinsically small rather than the result of a projection effect, and represent an early stage of radio source evolution. They estimated the age of CSSs to be $\leq5\times10^{6}$ years from their occurrence rate, which is consistent with the dynamical timescale. The lack of a synchrotron break frequency in the mm-wavelength spectrum of the lobe component also supports the idea that CSSs are young sources with ages of $\leq10^{5}$ years (Kameno et al. 1995). However, some CSSs exhibit distorted morphology, which has been attributed to interaction with dense clouds in their environment (e.g., van Breugel et al. 1984; Wilkinson et al. 1984, 1991; Saikia et al. 1995), and there is direct evidence for interactions between kpc-scale jets and ambient gas (O’Dea 1998 for a review). Superluminal motion has been observed in a few CSSs (Cotton et al. 1997 for 3C 138; Taylor et al. 1995 for 3C 216; Gawroński & Kus 2006 for 3C 309.1), indicating that beaming may affect some orientation-dependent properties of CSSs (Saikia et al. 1995). The core radiation of CSSs contributes only a small fraction of the total flux ($\leq36\%$) in the radio band, and a large fraction of the radio emission is dominated by the extended lobes (Kameno et al. 1995). With very long baseline interferometry (VLBI) observations of 18 CSSs at 22 and 43 GHz, a correlation between variability at 22 GHz and spectral index at mm-wavelengths is observed, which can be explained by a two-component model with a flat-spectrum core and steep-spectrum lobes (Kameno et al. 1995).
It is generally thought that CSSs may eventually evolve into radio sources with large-scale jets, i.e., Fanaroff-Riley (FR) I and FR II radio galaxies (RGs) (O’Dea 1998; Polatidis & Conway 2003; Randall et al. 2011). According to the unification model for radio-loud (RL) AGNs, FR I and FR II RGs are the parent populations of blazars, with large viewing angles and small Doppler factors (Urry & Padovani 1995). A link between CSSs and narrow-line Seyfert 1 galaxies (NLS1s) has been reported in many studies (Oshlack et al. 2001; Komossa et al. 2006; Yuan et al. 2008; Caccianiga et al. 2014; Gu et al. 2015). CSSs and RL-NLS1s share the same radio properties (Komossa et al. 2006; Caccianiga et al. 2014), and Wu (2009) reported that the central black hole (BH) masses and Eddington ratios of CSSs are similar to those of NLS1s. Recent work further suggested that CSSs may be the parent population of RL-NLS1s (Caccianiga et al. 2014; Berton et al. 2016; Liao & Gu 2020).
The detection of $\gamma$-ray emission from RL-NLS1s by *Fermi*/LAT is convincing evidence for the existence of relativistic jets in this class of AGNs (Abdo et al. 2009; D’Ammando et al. 2012; Yao et al. 2015, 2019; Paliya et al. 2018; Yang et al. 2018; Paliya 2019). So far, five CSSs (3C 138, 3C 216, 3C 286, 3C 380, 3C 309.1) are included in the latest Fourth Fermi LAT source catalog (4FGL, Abdollahi et al. 2020), which covers 8 years of *Fermi*/LAT data in the energy range from 50 MeV to 1 TeV. An et al. (2016) reported that the $\gamma$-ray emitting source PKS 0202+149 (4C 15.05) is also a CSS. Hence six CSSs have likely been detected by *Fermi*/LAT. Different from the other $\gamma$-ray emitting AGNs, the core radiation of CSSs is very weak; in some CSSs no significant core is even observed. The radiation mechanism and origin of their $\gamma$-ray emission are still uncertain, and revealing them is therefore important for understanding the physics of CSSs. These $\gamma$-ray emitting CSSs are good candidates for investigating the jet properties of young AGNs.
In this paper, we comprehensively analyze the data observed by *Fermi*/LAT for these $\gamma$-ray emitting CSSs in Section 3, including the long-term light curves, the spectra, the counts maps, and the variability index. We also compile their broadband spectral energy distributions (SEDs) and fit them with a two-zone leptonic model in Section 4, then obtain their jet parameters and investigate the jet properties. Comparisons of jet properties between $\gamma$-ray emitting CSSs and other kinds of $\gamma$-ray emitting AGNs are presented in Section 5 to explore the intrinsic unification among these $\gamma$-ray emitting AGNs. Discussion and conclusions are given in Section 6.
GeV-selected CSSs
=================
Five CSSs were identified as $\gamma$-ray emitting candidates in 4FGL, i.e., 3C 138, 3C 216, 3C 286, 3C 380, and 3C 309.1. It was also proposed that the $\gamma$-ray emitting source 4C 15.05 may be classified as a CSS (An et al. 2016). We therefore have six CSSs in the sample, described in the following.
***3C 138*** is located at $z=0.759$ (PKS 0518+16, Spinrad et al. 1985). It has a steep spectrum of $\alpha=0.65$ straight up to 22 GHz with a turn-over at about 100 MHz (Fanti et al. 1990; Shen et al. 2001). The VLBI observations at 15 and 22 GHz show that its main jet consists of several knots extending about 400 mas at a position angle of 65$^{\circ}$, while a weak counter-jet extends about 250 mas in the opposite direction (Cotton et al. 1997 and references therein). The VLBI image at 5 GHz also exhibits two notable emission regions, i.e., the core and the eastern lobe, which are separated by 400 mas at a position angle of 70$^{\circ}$, with several discrete lower surface brightness regions between them, but no counter-jet was significantly detected at 5 GHz (Shen et al. 2001). Owing to its flattest spectral index and highest brightness temperature, component A is suggested to be the nucleus (Shen et al. 2005). Superluminal motion with an apparent velocity of 9.7$c$ was reported by Cotton et al. (1997), and was later revised to 3.3$c$ (Shen et al. 2001).
***3C 216*** is identified with the quasar 0906+430 at $z=0.67$ (Smith & Spinrad 1980). The VLBI observations indicate superluminal motion of $\sim4c$ and a small viewing angle of $\theta<20^{\circ}$ (Taylor et al. 1995 and references therein). A steep spectrum of $\alpha=0.79$ is observed at low frequencies, but it flattens above 5 GHz with $\alpha=0.29$ (Taylor et al. 1995). The VLBI Space Observatory Programme observations reveal a pc-scale structure of 3C 216 that can be well described by compact jet models (Paragi et al. 2000). Using the measured brightness temperature, Paragi et al. (2000) estimated a lower limit of the Doppler boosting factor of $\delta\sim3$ for the core-jet with a viewing angle of less than 19$^{\circ}$, and thus concluded that the observed small projected size of 3C 216 is probably caused by both interaction and projection effects. It displays high optical polarization and variability, similar to typical blazars (Angel & Stockman 1980). The bright core-lobe features of 3C 216 extend over 2.5$\arcsec$ (Pearson et al. 1985) and are embedded in a faint diffuse radio halo with a diameter of $\sim7\arcsec$ (Barthel et al. 1988; Taylor et al. 1995).
***3C 286*** is also named B1328+307, at a redshift of $z=0.849$ (Cohen et al. 1977). Its steep radio spectrum has a spectral index of $\alpha=0.61$ between 1.4 and 50 GHz, with a turnover at about 300 MHz, below which it is flat down to $\sim$75 MHz (An et al. 2017). The source displays a primary core and a second lobe $\sim$2.6 arcsec to the south-west (Spencer et al. 1989; An et al. 2017). The radio emission at 15 GHz of this source is rather stable, and no significant variation has been observed over the past 10 years. High polarization of the source has been detected in the radio band (Akujor & Garrington 1995). A compact bright nucleus associated with the radio core is also detected by the Hubble Space Telescope (deVries et al. 1997). Two compact components with comparable flux densities in the inner 10-mas region are resolved by the VLBI, and the more compact component, showing an inverted spectrum with a turnover between 5 and 8 GHz, is likely the core (An et al. 2017). A jet speed of $\sim0.5 c$ and an inclination angle of $\sim48^{\circ}$ are derived for 3C 286. The optical spectrum observed with the SDSS-BOSS clearly indicates that 3C 286 can be classified as a NLS1 (Berton et al. 2017).
***3C 309.1*** is located at $z=0.904$ (S5 1458+71, Burbidge & Burbidge 1969). The overall extent of 3C 309.1 is $2\arcsec$ and it has a steep spectrum of $\alpha=0.69$ between 1 and 10 GHz (Wilkinson 1972). Its extended structures have a steep spectrum with $\alpha=0.94\pm0.08$ between 100 MHz and 15 GHz and exhibit a large rotation in position angle (Kus et al. 1981). Forbes et al. (1990) reported that 3C 309.1 is surrounded by very massive cooling flows; however, this is inconsistent with the small rotation measures (Aaron et al. 1997). The image at 15 GHz observed with the VLBA shows a compact core and an extended component 20 mas to the south with some extended diffuse emission. The total fluxes at 8 and 14.5 GHz increased markedly from 1990 to 2000, and the corresponding spectral index also varied (Aller et al. 2003). Relativistic motion of a new blob near the core, with an apparent velocity of $7.0\pm0.5c$, was observed (Gawroński & Kus 2006).
***3C 380*** is located at $z=0.692$ (TXS 1828+487, Wilkinson et al. 1991). It has a steep radio spectrum with $\alpha=0.7$ between 300 MHz and 5 GHz, and its extent reaches $\sim1\arcsec$ (Readhead & Wilkinson 1980). Superluminal motion in the outer regions of the jet is observed and corresponds to an apparent velocity of $\sim8c$, indicating that the outer part of the VLBI-scale jet is within $10^{\circ}$ of the line-of-sight. However, the absence of strong variability at radio and optical bands may imply an intrinsic bend near the core-jet: the core-jet points $\sim30^{\circ}$ away while the overall source axis is within $10^{\circ}$ of the line-of-sight (Wilkinson et al. 1991). Using multi-epoch VLBI observations, Polatidis & Wilkinson (1998) revealed a curved pc-scale jet with complex substructure and an apparent acceleration from the core to $\sim100$ pc. They also suggested that 3C 380 is most likely a powerful FR II RG seen approximately end-on. With the space-VLBI observations, apparent superluminal motions are observed; however, neither pc-scale acceleration nor changes of the position angle were confirmed (Kameno et al. 2000). A one-sided jet is observed on both pc and kpc scales for 3C 380 (Gabuzda et al. 2014).
***4C 15.05*** is also known as NRAO 91 and PKS 0202+149. On the basis of the \[O ii\] $\lambda3727$ and \[Ne i\] $\lambda$3833 lines, it was estimated to lie at $z=0.833$ (Stickel et al. 1996), but a smaller redshift of $z=0.405$ was reported by Perlman et al. (1998). Recently, Jones et al. (2018) suggested that the neutral hydrogen absorption feature of this source agrees very well with the value of $z=0.833$. Its mean spectral index between 400 MHz and 8 GHz is $\alpha=0.33$ (Herbig & Readhead 1992), which is slightly steeper than that of blazars (Fan et al. 2010; Pei et al. 2019). This source displays a core and double lobes with a total projected size of $\sim$1.3 kpc. A core-jet structure on pc scales extends over a projected size of $\sim$25 pc at a position angle perpendicular to the kpc-scale structure (An et al. 2016). Significant apparent superluminal motion of $\sim16 c$ has been detected (An et al. 2016). 4C 15.05 had been identified as a $\gamma$-ray emitting AGN with EGRET (von Montigny et al. 1995).
*Fermi*/LAT Data Analysis
=========================
The *Fermi*/LAT, sensitive to photon energies greater than 20 MeV, has provided a powerful tool for monitoring AGNs in the $\gamma$-ray band (Ackermann et al. 2015). For this work, the data were taken from the Fermi Science Support Center (FSSC), covering the period from 2008 August 4 to 2019 July 16 (MET 239557417–584949858), approximately 11 years. The data analysis was performed with the publicly available software *fermitools* (ver. 1.0.0). Using the standard data quality selection criteria “$(DATA\_QUAL > 0) \&\& (LAT\_CONFIG == 1)$", events with energies from 100 MeV to 300 GeV are considered. In order to reduce contamination from the $\gamma$-ray Earth limb, the maximum zenith angle is set to 90$\degr$. Data within a $14\degr \times14\degr$ region of interest (ROI) centered on the source position are binned into 12 logarithmically spaced bins in energy, and a spatial bin of 0.1$\degr$ per pixel is used. The *$P8R3\_SOURCE\_V2$* set of instrument response functions (IRFs) is used. For the background model, we include the diffuse Galactic interstellar emission (IEM, $gll\_iem\_v07.fits$) and isotropic emission (“$iso\_P8R3\_SOURCE\_V2\_v1.txt$") templates released by the FSSC[^1], as well as the individual $\gamma$-ray sources listed in the 4FGL (Abdollahi et al. 2020). The normalization and spectral parameters of the discrete $\gamma$-ray sources within 8$\degr$ in the background model were kept free. The normalizations of the Galactic emission and the isotropic component were also kept free during the data analysis. We use the maximum likelihood test statistic (TS) to estimate the significance of $\gamma$-ray signals, defined as TS$=2(\ln\mathcal{L}_{1}-\ln\mathcal{L}_{0})$, where $\mathcal{L}_{0}$ is the likelihood of the background without the point source (null hypothesis) and $\mathcal{L}_{1}$ is the likelihood of the background including the point source.
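The likelihood-ratio construction of the TS can be sketched numerically as follows, using independent Poisson pixels as a toy stand-in for the binned likelihood that *fermitools* computes; the function name and templates are our own illustration, not the actual tool interface.

```python
import numpy as np
from scipy.stats import poisson

def ts_point_source(counts, bkg, src):
    """Likelihood-ratio test statistic TS = 2 (lnL1 - lnL0) for adding a
    point-source template `src` on top of a background model `bkg`,
    assuming independent Poisson counts per pixel."""
    lnL0 = poisson.logpmf(counts, bkg).sum()        # background-only model
    lnL1 = poisson.logpmf(counts, bkg + src).sum()  # background + source
    return 2.0 * (lnL1 - lnL0)
```

In the real analysis the source and background normalizations are fitted rather than fixed, but the TS is still this doubled log-likelihood difference between the two hypotheses.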
Note that the 4FGL point sources are based on 8 years of survey data; here we therefore test for new background sources using the tool *gtfindsrc*. Only one new background source, with TS$\sim$41.2 ($>5\sigma$) and a power-law spectrum, was found in the region of $3\degr\times3\degr$ centered on 3C 286. As illustrated in Figure \[TSmap\], the six CSSs are located within the $95\%$ containment of the associated 4FGL point sources, confirming that these CSSs are spatially associated with these 4FGL point sources. The information on these CSSs and the associated 4FGL point sources is given in Table 1.
The spectral shapes of the sources can be well described by the power-law spectral model, i.e., $dN(E)/dE = N_{0}(E/E_{0})^{-\Gamma_{\gamma}}$, where $\Gamma_{\gamma}$ is the photon spectral index. Note that the spectral model for 4FGL J0204.8+1513 (associated with 4C 15.05) in 4FGL is log-normal, but we find its TS$_{\rm curv}\sim1$, where TS$_{\rm curv}=2\log(\mathcal{L}_{\rm log-normal}/\mathcal{L}_{\rm power-law})$. The spectrum of a source is considered significantly curved if TS$_{\rm curv}>9$ (3$\sigma$ significance; Abdollahi et al. 2020). We therefore suggest that the power-law model represents its spectral shape better than the log-normal model, and we replace the log-normal model with the power-law model for this source.
The light curves are obtained from the fluxes derived with the power-law fits. The $\gamma$-ray light curves of these CSSs in time bins of 180 days with TS$\geq$9 (approximately corresponding to a $\sim3\sigma$ detection, Mattox et al. 1996) are presented in Figure \[LC\]. If TS$<9$, an upper limit is given (95$\%$ confidence level). The average luminosity over the past $\sim$11 years is also given in the figure for each source. Except for 3C 380 and 4C 15.05, only a few time bins in the long-term light curves of these CSSs satisfy TS$\geq$9, indicating that on average this class of AGNs has weak $\gamma$-ray emission compared with blazars. The detected data points spread on both sides of the average luminosity for 3C 380 and 4C 15.05; however, almost all the detected data points of the other four CSSs have luminosities higher than the average. We roughly estimate the variability amplitude of these sources with $F_{\max}/F_{\min}$, where $F_{\max}$ and $F_{\min}$ are respectively the highest and lowest fluxes in the long-term $\gamma$-ray light curves (excluding the time-bin data with TS$<$9). The largest and smallest values of $F_{\max}/F_{\min}$ are 7.5 for 3C 309.1 and 2.1 for 3C 216, respectively. These $\gamma$-ray emitting CSSs therefore generally do not show the fast and large flares seen in blazars.
Likelihood-based statistics are also the most common method to quantify variability (Nolan et al. 2012; Abdollahi et al. 2020; Peng et al. 2019; Xi et al. 2020). To gauge the variability of the sources, we follow the definition in 2FGL (Nolan et al. 2012) and compute the variability index (TS$_{\rm var}$) as $$\rm TS_{\rm var} = 2\sum_{i=1}^N [ log(\mathcal{L}_{i}(F_i)) - log(\mathcal{L}_{i}(F_{\rm glob}))],$$ where $F_i$ is the fitted flux for bin $i$, $\mathcal{L}_{i}(F_i)$ is the likelihood corresponding to bin $i$, and $F_{\rm glob}$ is the best-fit flux over the whole time range assuming a constant flux. Since we generate the light curves in time bins of 180 days, a source is considered probably variable if TS$_{\rm var}>45.82$, where TS$_{\rm var}=45.82$ corresponds to the $3\sigma$ confidence level of a $\chi^2_{N-1}$ distribution with $N-1=21$ degrees of freedom, $N$ being the number of time bins. Five of the six CSSs are variable by this criterion, i.e., TS$_{\rm var}=128.1$ for 3C 138, TS$_{\rm var}=64.3$ for 3C 286, TS$_{\rm var}=167.0$ for 3C 309.1, TS$_{\rm var}=149.1$ for 3C 380, and TS$_{\rm var}=211.4$ for 4C 15.05. Only 3C 216, with TS$_{\rm var}=11.8$, is not variable.
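The quoted threshold follows directly from the $\chi^2$ distribution; a minimal sketch (the function name is ours) that reproduces the 45.82 value for 22 time bins:

```python
from scipy.stats import chi2, norm

def tsvar_threshold(n_bins, n_sigma=3.0):
    """Variability-index threshold: under the constant-flux hypothesis,
    TS_var follows a chi^2 distribution with N-1 degrees of freedom.
    Return the value exceeded with the one-sided Gaussian-equivalent
    tail probability of `n_sigma`."""
    p = norm.sf(n_sigma)            # e.g. 1.35e-3 for 3 sigma
    return chi2.isf(p, df=n_bins - 1)
```

With $N=22$ bins (about 11 years in 180-day bins) this gives $\approx45.8$, matching the criterion used in the text.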
The photon spectral index ($\Gamma_{\gamma}$) as a function of luminosity ($L_{\gamma}$) is also illustrated in Figure \[LC\]. Distinct spectral variations are observed in these CSSs. The largest changes of $\Gamma_{\gamma}$ occur in 3C 286 and 4C 15.05, i.e., from 2.08$\pm$0.40 to 3.63$\pm$0.10 and from 1.85$\pm$0.01 to 2.90$\pm$0.02, respectively. Although no significant flux variation is observed in 3C 216, its photon spectral index also varies, from 2.05$\pm$0.04 to 2.86$\pm$0.29. Only 3C 138 seems to show the “harder when brighter" behavior that has been seen in $\gamma$-ray emitting blazars (e.g., Zhang et al. 2013, 2018a, 2020). A “steeper when brighter" tendency is displayed by 3C 286 and 3C 309.1. The Pearson correlation analysis yields a correlation coefficient of $r=0.95$ with a chance probability of $p=0.01$ for 3C 286, and $r=0.57$, $p=0.19$ for 3C 309.1. No correlated tendency between $\Gamma_{\gamma}$ and $L_{\gamma}$ is seen in 3C 216 and 3C 380. For 4C 15.05, the relation between $\Gamma_{\gamma}$ and $L_{\gamma}$ seems to change from anti-correlation into correlation. Excluding the four data points with $L_{\gamma}>5\times10^{46}$ erg s$^{-1}$, the Pearson correlation analysis yields $r=0.45$ and $p=0.09$ for 4C 15.05. Recently, a transition from softer-when-brighter to harder-when-brighter was also observed in the monthly $\gamma$-ray flux–index plot of 3C 273 (Kim et al. 2020). This transition may be due to a balance between acceleration and cooling of relativistic particles (Kim et al. 2020) or a shift of the inverse Compton peak in the SED (Shah et al. 2019).
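The quoted $r$ and $p$ values come from a standard Pearson analysis; a minimal sketch with made-up index–luminosity pairs (the arrays below are illustrative, not the measured values for any source):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical (photon index, log-luminosity) pairs, used only to show
# how the r and p values quoted per source are computed.
gamma_idx = np.array([2.1, 2.4, 2.6, 2.9, 3.2, 3.6])
log_L = np.array([45.2, 45.5, 45.6, 45.9, 46.1, 46.4])

r, p = pearsonr(gamma_idx, log_L)  # r > 0: a "steeper when brighter" trend
```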
As illustrated in Figure \[Gamma-L\], we also show the Fermi blazars in the $L_{\gamma}$–$\Gamma_{\gamma}$ plane, where the blazar data are taken from Ackermann et al. (2015) and belong to the clean sample with confirmed redshift, including 414 flat spectrum radio quasars (FSRQs), 162 high-frequency-peaked BL Lacs (HBLs), 69 intermediate-frequency-peaked BL Lacs (IBLs), and 68 low-frequency-peaked BL Lacs (LBLs). The radiation properties of the six CSSs in the GeV band are analogous to those of FSRQs; however, they on average have steeper spectra and lower $L_{\gamma}$ than FSRQs.
Broadband SED Modeling
======================
With the [*Fermi*]{}/LAT data and multi-wavelength data compiled from the literature and the ASI Science Data Center (ASDC)[^2], we establish the broadband SEDs of the six CSSs, as displayed in Figure \[SED\]. The SEDs at $\nu>10$ GHz are apparently similar to the SEDs of FSRQs and NLS1s (e.g., Zhang et al. 2014; Sun et al. 2015). The variability in the $\gamma$-ray band shown in Figure \[LC\] and the significant position offsets reported for 3C 216 (Paragi et al. 2000), 3C 138 (Shen et al. 2001), and 3C 309.1 (Ros & Lobanov 2001), together with the apparent superluminal motions in some CSSs, likely indicate that the $\gamma$-rays come from their compact core-jets, which still have relativistic bulk motion. The radio emission at $\sim$0.01–10 GHz in the SEDs should not be dominated by the compact core because of the significant synchrotron self-absorption of the radio emission from the core region; it would instead be attributed to emission from the large-scale extended regions, similar to large-scale hotspots and knots (e.g., Zhang et al. 2010, 2018b). We therefore employ a two-zone leptonic radiation model to fit the constructed SEDs.
The Compact Core Region
-----------------------
The core region is assumed to be a homogeneous sphere with radius $R$, magnetic field strength $B$, and Doppler factor $\delta$, where $\delta=1/[\Gamma(1-\beta\cos\theta)]$, $\Gamma$ is the bulk Lorentz factor, $\theta$ is the viewing angle, and $\beta=\sqrt{1-\Gamma^{-2}}$. The electron population distribution is taken as a broken power-law, i.e., $$N(\gamma )= N_{0}\left\{ \begin{array}{ll}
\gamma ^{-p_1} & \mbox{ $\gamma_{\rm min}\leq\gamma \leq \gamma _{\rm b}$}, \\
\gamma _{\rm b}^{p_2-p_1} \gamma ^{-p_2} & \mbox{ $\gamma _{\rm b} <\gamma <\gamma _{\rm max} $.}
\end{array}
\right.$$
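The Doppler factor (with $\beta=\sqrt{1-\Gamma^{-2}}$) and the broken power-law of Eq. (2) can be evaluated directly; a minimal numerical sketch (function names and the parameter values in the usage are ours, chosen only for illustration):

```python
import numpy as np

def doppler_factor(Gamma, theta_deg):
    """delta = 1 / [Gamma (1 - beta cos theta)], beta = sqrt(1 - 1/Gamma^2)."""
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    return 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(theta_deg))))

def electron_spectrum(gamma, N0, p1, p2, g_min, g_b, g_max):
    """Broken power-law electron distribution N(gamma) of Eq. (2);
    the normalization of the second branch makes N continuous at g_b."""
    gamma = np.atleast_1d(np.asarray(gamma, dtype=float))
    N = np.where(gamma <= g_b,
                 N0 * gamma**(-p1),
                 N0 * g_b**(p2 - p1) * gamma**(-p2))
    N[(gamma < g_min) | (gamma > g_max)] = 0.0
    return N
```

At $\theta=0$ the Doppler factor reduces to $\Gamma(1+\beta)$, its maximum value, which is a convenient sanity check of the implementation.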
The synchrotron (syn), synchrotron self-Compton (SSC), and inverse Compton (IC) scattering of external-field photons by the relativistic electrons are considered to model the broadband SEDs. The blue bump of the thermal emission from the accretion disk is prominent for 3C 138, 3C 286, 3C 309.1 and 3C 380, as shown in Figure \[SED\]. We use the standard accretion disk spectrum (Davis & Laor 2011) to explain this thermal emission (see also Zhang et al. 2015). The inner ($R_{\rm in}$) and outer ($R_{\rm out}$) radii and the inclination to the line of sight ($i$) of the accretion disk are taken as $R_{\rm in}=R_{\rm s}$ (Krolik & Hawley 2002)[^3], $R_{\rm out}=700R_{\rm s}$, and $\cos i=1$, where $R_{\rm s}$ is the Schwarzschild radius. The black hole mass ($M_{\rm BH}$), as listed in Table 3, is also fixed, and we then vary the Eddington ratio to model the emission from the accretion disk.
As displayed in Figure \[LC\], the $\gamma$-ray light curves in time bins of 180 days show slight variability for the sources, and thus we use the timescale of 180 days to constrain the size of the radiation region of the compact core. With $R=\delta c\Delta t/(1+z)\sim4.7\times10^{17}\delta/(1+z)$ cm, where $\Delta t=180$ days, the energy dissipation region should be outside the broad-line regions (BLRs) of these CSSs. In this case, photons from the torus provide the seed photons for the IC process of the relativistic electrons in the compact core. The energy density of the torus photon field in the comoving frame is $U^{'}_{\rm IR}=3\times10^{-4}\Gamma^2$ erg cm$^{-3}$ and the torus spectrum can be approximated by a blackbody peaking, in the comoving frame, at $3\times10^{13}\Gamma$ Hz (e.g., Cleary et al. 2007; Kang et al. 2014).
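The causality estimate for the emission-region size is a one-line computation; a minimal sketch (constant and function name are ours):

```python
C_CM_S = 2.998e10  # speed of light in cm/s

def blob_radius_cm(delta, z, dt_days=180.0):
    """Causality bound R <= delta * c * dt / (1 + z) from the observed
    variability timescale (here the 180-day light-curve binning)."""
    return delta * C_CM_S * dt_days * 86400.0 / (1.0 + z)
```

For $\delta=1$ and $z=0$ this gives $c\,\Delta t\approx4.7\times10^{17}$ cm, matching the prefactor quoted in the text.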
The parameters $p_1$ and $p_2$ are fixed and derived from the spectral indices of the observed SEDs, i.e., in the radio (or X-ray) band and the GeV $\gamma$-ray (or IR-optical) band, which are obtained by fitting the observed data with a power-law function (see also Zhang et al. 2012, 2013). $\gamma_{\rm min}$ is fixed at $\gamma_{\rm min}=1$ or constrained from the observed SEDs via the method reported in Tavecchio et al. (2000). $\gamma_{\rm max}$ is poorly constrained; it is estimated from the last observed point in the GeV energy band or taken as 10$^5$. We fix the value of the viewing angle and vary $\Gamma$ (obtaining the corresponding $\delta$), $B$, $N_0$ and $\gamma_{\rm b}$ to fit the broadband SEDs of the core region.
The Large-scale Extended Region
-------------------------------
The radio emission below $10^{10}$ Hz in the SEDs should be dominated by the radiation of the large-scale extended region. The extended region is also assumed to be a homogeneous sphere, with a radius roughly derived from the angular radius at the radio band as listed in Table 2. Even on pc scales, only slow motion is detected for some CSSs by the VLBI observations. Hence we do not consider relativistic effects for the large-scale extended regions and assume $\delta=\Gamma=1$ during the SED modeling. The electron distribution is also taken as Eq. (2). The cosmic microwave background (CMB) provides the seed photons for the IC process (IC/CMB), and the CMB energy density in the comoving frame is $U^{'}_{\rm CMB}=\frac{4}{3}\Gamma^2(1+z)^4U_{\rm CMB}$ (Dermer & Schlickeiser 1994; Georganopoulos et al. 2006), where $U_{\rm CMB}=4.2\times10^{-13}$ erg cm$^{-3}$. The syn+SSC+IC/CMB model under the equipartition condition ($U_{B}=U_{\rm e}$) is used to reproduce the radiation of the extended region, where $U_{B}$ and $U_{\rm e}$ are the energy densities of the magnetic field and the electrons, respectively.
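The comoving-frame CMB energy density used for the IC/CMB seed field is straightforward to evaluate; a minimal sketch (function name ours):

```python
U_CMB = 4.2e-13  # local CMB energy density, erg cm^-3

def u_cmb_comoving(z, Gamma=1.0):
    """CMB energy density in the jet comoving frame:
    U' = (4/3) * Gamma^2 * (1 + z)^4 * U_CMB
    (Dermer & Schlickeiser 1994); Gamma = 1 for the extended regions here."""
    return (4.0 / 3.0) * Gamma**2 * (1.0 + z)**4 * U_CMB
```

The strong $(1+z)^4$ scaling is why IC/CMB becomes relatively more important for higher-redshift extended regions.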
$p_1$ is fixed and derived by fitting the radio spectrum with a power-law function. $p_2$ cannot be constrained and is fixed at $p_2=4$. $\gamma_{\rm min}$ is fixed at $\gamma_{\rm min}=100$ or set to larger values to match the SEDs. $\gamma_{\rm max}$ is also poorly constrained and taken as $\gamma_{\rm max}=50\gamma_{\rm b}$. As illustrated in Figure \[SED\], there seems to be a break at around $10^{10}$ Hz in the broadband SEDs of these CSSs, and the emission below this break may be dominated by the extended regions. We thus assume that the synchrotron radiation peak of the large-scale extended regions is around $10^{10}$ Hz, which is used to constrain the values of $\gamma_{\rm b}$. We adjust the free parameters $\gamma_{\rm b}$ and $N_0$ to fit the SEDs below $10^{10}$ Hz of the six CSSs.
Results
-------
We fit the SEDs with the two-zone leptonic model, taking into account the Klein-Nishina effect and the absorption of high-energy $\gamma$-ray photons by the extragalactic background light (Franceschini et al. 2008). Note that the observed SEDs contain contributions from both the core and the extended region. The model parameters are poorly constrained, so we only search for parameter sets that can represent the SEDs; parameter uncertainties cannot be estimated in such an analysis. The results are illustrated in Figure \[SED\]. Under the equipartition condition and assuming $\delta=\Gamma=1$, the IC fluxes predicted by the model for the extended regions are much lower than the *Fermi*/LAT observation data, which implies that the $\gamma$-ray emission of these CSSs should come from the radiation of their compact cores. The model fitting parameters are given in Tables 2 and 3.
For the extended regions, the derived values of $\gamma_{\rm b}$, $B$, and $p_1$ are consistent with the values found for substructures in large-scale jets, as displayed in Figure \[Para-LSJ\], where the data for large-scale jet knots and hotspots are taken from Zhang et al. (2018b). The derived $\gamma_{\rm b}$ values of the large-scale extended regions in CSSs cluster at the lower end of the $\gamma_{\rm b}$ distribution. This is because we roughly treat the large-scale extended region of a CSS as a single zone, which is not exactly a knot or hotspot; the derived $\gamma_{\rm b}$ values are more similar to those in radio lobes (e.g., Takeuchi et al. 2012). $B$ of the large-scale extended regions in CSSs clusters at the high end of the distribution, as displayed in Figure \[Para-LSJ\](b), but it is consistent with the distribution for knots and hotspots whose broadband SEDs can be well explained by synchrotron radiation. The derived $p_1$ values of the large-scale extended regions in CSSs are consistent with those of the knots and hotspots, and are close to or slightly larger than 2, which can be explained by shock acceleration together with the cooling effect. This is consistent with the particle acceleration mechanisms in large-scale jets (e.g., Harris & Krawczynski 2006).
For the core regions, the derived Eddington ratio, $R_{\rm Edd}=L_{\rm disk}/L_{\rm Edd}$, where $L_{\rm Edd}$ and $L_{\rm disk}$ are the Eddington luminosity and the accretion disk luminosity, respectively, ranges from 0.03 to 0.90, as listed in Table 3. $p_1$ ranges from 1.5 to 1.8, smaller than the value $p=2$ expected from the shock acceleration mechanism (e.g., Kirk et al. 2000; Achterberg et al. 2001; Virtanen & Vainio 2005); $\gamma_{\rm b}$ ranges within $\sim$320–2000; $B$ narrowly clusters at 0.15–0.60 G; $\delta$ ranges from 2.8 to 8.9, while $\Gamma$ narrowly clusters at 2.4–5.5.
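The Eddington ratio defined above is straightforward to evaluate once $M_{\rm BH}$ and $L_{\rm disk}$ are known. A minimal sketch, assuming the standard hydrogen Eddington luminosity $L_{\rm Edd}\simeq1.26\times10^{38}\,(M/M_\odot)$ erg s$^{-1}$ and hypothetical input values (not those of Table 3):

```python
L_EDD_COEFF = 1.26e38  # erg s^-1 per solar mass (hydrogen Eddington luminosity)

def eddington_ratio(L_disk, M_bh_solar):
    """R_Edd = L_disk / L_Edd, with L_disk in erg s^-1 and the
    black-hole mass in solar masses."""
    return L_disk / (L_EDD_COEFF * M_bh_solar)

# Hypothetical illustration: a 1e9 M_sun BH with L_disk = 1e46 erg s^-1
# gives R_Edd ~ 0.08, inside the 0.03-0.90 range derived for the six CSSs.
r_edd = eddington_ratio(1e46, 1e9)
```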
Jet Power–Eddington Ratio Correlation and Unification of Jet Radiation for AGN Subclasses
=========================================================================================
In order to compare the jet properties of the core regions of these CSSs with those of other $\gamma$-ray emitting AGNs, we collect the jet parameter data, together with $L_{\rm disk}$ and $M_{\rm BH}$, of other $\gamma$-ray emitting AGNs from the literature, as given in Table 4. As shown in Figure \[para-core\](a), the $p_1$ values of CSSs are smaller than the value $p=2$ expected from the shock acceleration mechanism (e.g., Kirk et al. 2000; Achterberg et al. 2001; Virtanen & Vainio 2005), similar to those of most FSRQs and NLS1s. Therefore, magnetic reconnection may be an effective process of particle acceleration in these jets, since it can produce a flatter power-law particle spectrum (Zenitani & Hoshino 2001; Sironi & Spitkovsky 2014; Guo et al. 2015; Zhu et al. 2016). Correlations among $\gamma_{\rm b}$, $B$, and $\delta$ for these different kinds of $\gamma$-ray emitting AGNs are also illustrated in Figure \[para-core\]. In the $\delta-B$ plane, the values of $\delta$ for CSSs are similar to those of RGs and somewhat similar to those of NLS1s, while $B$ of CSSs is similar to that of BL Lacs and RGs. In the $\delta-\gamma_{\rm b}$ plane, $\gamma_{\rm b}$ of CSSs is close to that of RGs. Hence, the values of $\gamma_{\rm b}$, $B$, and $\delta$ for CSSs are closer to those of RGs than to those of the other $\gamma$-ray emitting AGNs. In the $B-\gamma_{\rm b}$ plane, no correlation between $B$ and $\gamma_{\rm b}$ is observed for either BL Lacs or FSRQs. However, there seems to be an anti-correlation tendency between $B$ and $\gamma_{\rm b}$ for all the $\gamma$-ray emitting AGNs taken together, with RGs and CSSs occupying the region between BL Lacs and FSRQs. The Pearson correlation analysis yields $r=-0.76$ and $p\sim0$.
We also plot the peak luminosity of synchrotron radiation ($L_{\rm s}$) against the peak frequency of synchrotron radiation ($\nu_{\rm s}$) for these $\gamma$-ray emitting AGNs, as displayed in Figure \[nus-Ls\](a). No trend associated with the “blazar sequence" is observed. The six CSSs cluster within the distribution area of FSRQs, and their $\nu_{\rm s}$ and $L_{\rm s}$ span narrow ranges.
Based on the fitting parameters of the core regions, we also calculate the powers carried by the non-thermal electrons ($P_{\rm e}$) and magnetic fields ($P_{B}$) using $P_{\rm i}=\pi R^2\Gamma^2cU_{\rm i}$, where $U_{\rm i}$ is the comoving-frame energy density of the electrons or the magnetic field. The radiation power ($P_{\rm r}$) is estimated from the bolometric luminosity ($L_{\rm bol}$, the non-thermal radiation of the compact core), i.e., $P_{\rm r}=\pi R^2\Gamma^2cU_{\rm r}=L_{\rm bol}\Gamma^2/4\delta^4$. The energy density of protons cannot be constrained by observations, and using a one-to-one ratio of electrons to protons to calculate the proton power depends strongly on the minimum energy of the electrons, which is generally poorly constrained for most AGNs (e.g., Zhang et al. 2014, 2015). Hence, we consider only the case of an electron–positron pair jet in the following discussion, i.e., $P^{e^{\pm}}_{\rm jet}=P_{\rm r}+P_{\rm e}+P_{B}$. The derived values of $P_{\rm e}$, $P_{B}$, $P_{\rm r}$, and $P^{e^{\pm}}_{\rm jet}$ for the six CSSs are given in Table 3.
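The power formulas above can be evaluated directly. The sketch below uses hypothetical parameter values, not the fitted values of Table 3; only the formulas themselves are from the text:

```python
import math

C_CGS = 2.998e10  # speed of light [cm s^-1]

def power_component(R, Gamma, U):
    """P_i = pi * R^2 * Gamma^2 * c * U_i for a component with
    comoving-frame energy density U_i (electrons or magnetic field)."""
    return math.pi * R**2 * Gamma**2 * C_CGS * U

def radiation_power(L_bol, Gamma, delta):
    """P_r = L_bol * Gamma^2 / (4 * delta^4)."""
    return L_bol * Gamma**2 / (4.0 * delta**4)

# Hypothetical illustrative values
R, Gamma, delta = 1e17, 4.0, 5.0        # radius [cm], bulk Lorentz factor, Doppler factor
U_e = 1e-2                              # electron energy density [erg cm^-3]
U_B = 0.3**2 / (8.0 * math.pi)          # B = 0.3 G  ->  U_B = B^2 / 8pi
P_e = power_component(R, Gamma, U_e)
P_B = power_component(R, Gamma, U_B)
P_r = radiation_power(1e46, Gamma, delta)   # L_bol = 1e46 erg s^-1 (illustrative)
P_jet = P_r + P_e + P_B                     # electron-positron pair jet power
```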
As displayed in Figure \[nus-Ls\](a), no correlation between $\nu_{\rm s}$ and $L_{\rm s}$ is observed. Replacing $L_{\rm s}$ with $P^{e^{\pm}}_{\rm jet}$, an anti-correlation tendency appears for all the data points, with $r=-0.54$ and $p=2.5\times10^{-8}$, as shown in Figure \[nus-Ls\](b). If we only consider the data of CSSs and blazars, the anticorrelation becomes stronger, i.e., $r=-0.77$ and $p=6.4\times10^{-13}$. This result may imply that the “sequence" behavior of $\gamma$-ray emitting AGNs is dominated by the jet power (see also Xue et al. 2017; Fan & Wu 2019). For these $\gamma$-ray emitting AGNs, $\nu_{\rm s}$ is related to the jet radiation processes and thus should be correlated with the jet power.
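Correlation coefficients of this kind are computed on the logarithms of the quantities. A sketch with synthetic data; the sample values are invented, and only the procedure mirrors the analysis:

```python
import numpy as np

def pearson_log(x, y):
    """Pearson r between log10(x) and log10(y)."""
    lx, ly = np.log10(x), np.log10(y)
    return np.corrcoef(lx, ly)[0, 1]

# Synthetic demonstration of an anti-correlation resembling the
# nu_s - P_jet trend (made-up points, not the sample of Table 4):
rng = np.random.default_rng(0)
log_nu = rng.uniform(12, 17, 50)                     # log10(nu_s / Hz)
log_P = 60.0 - 0.8 * log_nu + rng.normal(0, 0.5, 50) # log10(P_jet / erg s^-1)
r = pearson_log(10**log_nu, 10**log_P)               # strongly negative
```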
$P_{\rm r}$ and $P_{B}$ as a function of $P^{e^{\pm}}_{\rm jet}$ for these $\gamma$-ray emitting AGNs are illustrated in Figure \[Pr-Pjet\]. $P_{\rm r}$ is strongly correlated with $P^{e^{\pm}}_{\rm jet}$; the Pearson correlation analysis gives $r=0.91$ and $p\sim0$, and the linear fit in the log scale gives $\log P_{\rm r}=-(6.66\pm2.50)+(1.13\pm0.06)\log P^{e^{\pm}}_{\rm jet}$. Except for 4C 15.05, the five other CSSs, together with FSRQs, are located at the high-power end with high jet radiation efficiency ($P_{\rm r}$/$P^{e^{\pm}}_{\rm jet}$). Only 4C 15.05 and the RG NGC 6251 have very low jet radiation efficiency ($<0.01$), even lower than that of some BL Lacs. Because of the lack of emission lines, the redshift of 4C 15.05 is still under debate. As shown in Figure \[SED\], the two peak frequencies of its broadband SED can be well constrained: the synchrotron peak frequency ($\nu_{\rm s}\sim10^{12}$ Hz) and the IC peak frequency ($\nu_{\rm c}\sim10^{20}$ Hz). If the IC peak were produced by the SSC process, it would give $B\delta\sim\frac{\nu_{\rm s}^2}{2.8\times10^6\nu_{\rm c}}\sim0.005$ (Tavecchio et al. 1998); even if $\delta=1$, then $B\sim0.005$ G. So the SED of 4C 15.05 cannot be well explained by the single-zone syn+SSC model that works for most BL Lacs and RGs. As listed in Table 3, the jet power of 4C 15.05 is totally dominated by $P_{B}$. In this respect, it is similar to FSRQs.
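The SSC estimate above can be checked numerically; a minimal sketch using the quoted peak frequencies of 4C +15.05 (the function name is ours):

```python
def B_delta_ssc(nu_s, nu_c):
    """SSC-based estimate B * delta ~ nu_s^2 / (2.8e6 * nu_c) in Gauss
    (Tavecchio et al. 1998), with both peak frequencies in Hz."""
    return nu_s**2 / (2.8e6 * nu_c)

# For 4C +15.05: nu_s ~ 1e12 Hz, nu_c ~ 1e20 Hz
bd = B_delta_ssc(1e12, 1e20)   # ~ 4e-3, i.e. B ~ 0.005 G even if delta = 1
```

The result is far below the fitted $B\sim0.15$–0.60 G of the cores, which is why the single-zone syn+SSC interpretation fails for this source.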
$P_{B}$ is also correlated with $P^{e^{\pm}}_{\rm jet}$ for these $\gamma$-ray emitting AGNs, as presented in Figure \[Pr-Pjet\](b); the Pearson correlation analysis gives $r=0.88$ and $p\sim0$. We observe a larger dispersion at the low-power end of the $P_{B}$–$P^{e^{\pm}}_{\rm jet}$ relation. If only the data of CSSs, FSRQs, and NLS1s are considered, the correlation between $P_{B}$ and $P^{e^{\pm}}_{\rm jet}$ becomes stronger, with $r=0.92$ and $p\sim0$. On average, these three kinds of $\gamma$-ray emitting AGNs have larger $P_{B}$/$P^{e^{\pm}}_{\rm jet}$ than BL Lacs and RGs. As reported by Zhang et al. (2015), FSRQ jets and BL Lac jets have different compositions and radiation efficiencies, and the jet properties of NLS1s are intermediate between them, though more analogous to FSRQ jets. It seems likely that CSS jets are also highly magnetized with high radiation efficiency.
The emission of these $\gamma$-ray emitting AGNs is dominated by the jet radiation. In order to investigate the relation between the jet and the central engine, we plot $P_{\rm r}$ and $P^{e^{\pm}}_{\rm jet}$ as functions of $L_{\rm disk}$ and $M_{\rm BH}$, together with their relations in units of Eddington luminosity[^4]. In the $L_{\rm disk}$–$P_{\rm r}$ plane (Figure \[R\_edd-Pr\](a)), the distributions of CSSs roughly overlap with those of FSRQs; on average, however, $P_{\rm r}$ of CSSs is higher than that of FSRQs. Note that these $\gamma$-ray emitting AGNs form a sequence spanning six orders of magnitude in the $L_{\rm disk}$–$P_{\rm r}$ plane; a similar feature was observed for BL Lacs, NLS1s, and FSRQs in Zhang et al. (2015), but here FR I RGs further extend this sequence to the low-power end. Except for several BL Lacs and RGs and one NLS1, $P_{\rm r}$ of all the other sources is lower than their $L_{\rm disk}$. The Pearson correlation analysis yields $r=0.91$ and $p\sim0$. The linear fit in the log scale gives $\log P_{\rm r}=(13.39\pm1.48)+(0.69\pm0.03)\log L_{\rm disk}$, as shown in Figure \[R\_edd-Pr\](a). The derived slope is flatter than that in Zhang et al. (2015) but steeper than that in Paliya et al. (2017); note, however, that only blazars and NLS1s are considered in Zhang et al. (2015), and both $\gamma$-ray loud and $\gamma$-ray quiet blazars are included in Paliya et al. (2017). A similar feature is also observed in the $L_{\rm disk}$–$P^{e^{\pm}}_{\rm jet}$ plane (Figure \[R\_edd-Pr\](b)). The linear fit in the log scale yields $\log P^{e^{\pm}}_{\rm jet}=(21.27\pm1.35)+(0.53\pm0.03)\log L_{\rm disk}$ with $r=0.88$ and $p\sim0$. In this plane, most BL Lacs and RGs, together with three CSSs, have $P^{e^{\pm}}_{\rm jet}$ larger than $L_{\rm disk}$, and on average $P^{e^{\pm}}_{\rm jet}$ of CSSs is higher than that of the other AGNs.
Strong correlations are also observed when both quantities are expressed in units of Eddington luminosity, as illustrated in Figures \[R\_edd-Pr\](c) and \[R\_edd-Pr\](d). The linear fits in the log scale give $\log P_{\rm r}/L_{\rm Edd}=(-1.31\pm0.09)+(0.68\pm0.04)\log R_{\rm Edd}$ with $r=0.88$ and $p\sim0$, and $\log P^{e^{\pm}}_{\rm jet}/L_{\rm Edd}=(-0.93\pm0.07)+(0.52\pm0.03)\log R_{\rm Edd}$ with $r=0.86$ and $p\sim0$, respectively.
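The log-linear fits quoted here are ordinary least-squares fits in log space. A sketch that recovers a known slope and intercept from synthetic, noise-free data (the function name is ours):

```python
import numpy as np

def logfit(x, y):
    """Least-squares linear fit log10(y) = a + b * log10(x);
    returns (intercept a, slope b)."""
    b, a = np.polyfit(np.log10(x), np.log10(y), 1)
    return a, b

# Synthetic check: points drawn exactly from
# log P_r/L_Edd = -1.31 + 0.68 * log R_Edd should be recovered.
R_edd = np.logspace(-4, 0, 20)
P_ratio = 10**(-1.31 + 0.68 * np.log10(R_edd))
a, b = logfit(R_edd, P_ratio)
```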
No correlation of $P_{\rm r}$ or $P^{e^{\pm}}_{\rm jet}$ with $M_{\rm BH}$ is observed for these AGNs, as displayed in Figures \[M-Pr\](a) and \[M-Pr\](b). The range of $M_{\rm BH}$ for CSSs is similar to that of RGs and FSRQs, and NLS1s form the low-mass tail of the $\gamma$-ray emitting AGNs (Sun et al. 2015; Zhang et al. 2015; Berton et al. 2016). However, $P_{\rm r}$ and $P^{e^{\pm}}_{\rm jet}$ are strongly correlated with $M_{\rm BH}$ if only the data of FSRQs, NLS1s, and CSSs are considered. The Pearson correlation analysis yields $r=0.78$ and $p=2.3\times10^{-11}$ for the $M_{\rm BH}$–$P_{\rm r}$ relation and $r=0.78$ and $p=1.5\times10^{-11}$ for the $M_{\rm BH}$–$P^{e^{\pm}}_{\rm jet}$ relation. This is consistent with the jet power being connected to the BH mass (Heinz & Sunyaev 2003). Hence, the lower jet power of NLS1s might be a consequence of their lower BH masses compared with FSRQs and CSSs. The predicted jet power should depend on the spin, mass, and horizon magnetic field of the BH (Blandford & Znajek 1977; Ghisellini et al. 2014). The different relations between jet power and BH mass for the two kinds of blazars (FSRQs and BL Lacs) may indicate different dominant jet formation mechanisms (Zhang et al. 2014).
Although the dominant jet-launching mechanism may differ among these AGNs, operating via the Blandford–Payne (BP; Blandford & Payne 1982) and/or Blandford–Znajek (BZ; Blandford & Znajek 1977) mechanisms (Ghisellini & Celotti 2001; Zhang 2013; Zhang et al. 2014; Ghisellini et al. 2014), the ejected jet power and the jet radiation power are still connected with the disk luminosity and the Eddington ratio (e.g., Ghisellini et al. 2014; Zhang et al. 2015; Paliya et al. 2017). Different accretion disk structures and accretion rates may result in different dominant jet-launching mechanisms (Ghisellini & Celotti 2001; Zhang 2013). Hence, the accretion rate (see also Shen & Ho 2014) may be the fundamental parameter in a unified framework for the different types of $\gamma$-ray emitting AGNs (e.g., Zhang et al. 2015). The $P^{e^{\pm}}_{\rm jet}/L_{\rm Edd}-R_{\rm Edd}$ (or $P_{\rm r}/L_{\rm Edd}-R_{\rm Edd}$) correlation likely signals a possible unification picture of jet radiation among AGN subclasses.
In order to further investigate the properties of $\gamma$-ray emitting CSSs, we collect large sample data of CSSs and RL-NLS1s[^5], together with an RG sample, from the literature (Liao & Gu 2020; Viswanath et al. 2019; Berton et al. 2016). We plot $R_{\rm Edd}$ against $M_{\rm BH}$ for these sources together with the $\gamma$-ray emitting AGNs in Figure \[M-Redd\]. It has been proposed that CSSs may be the parent population of RL-NLS1s, but we find that the large CSS sample has higher $M_{\rm BH}$ and lower $R_{\rm Edd}$ than the large RL-NLS1 sample. We also find that, on average, the $\gamma$-ray detected NLS1s have high $M_{\rm BH}$ (but not high $R_{\rm Edd}$) within the large RL-NLS1 sample, likely indicating that $\gamma$-ray emission is more easily detected in RL-NLS1s with high BH mass. CSSs have higher $R_{\rm Edd}$ than these RGs on average. We find that the $\gamma$-ray emitting CSSs have higher $R_{\rm Edd}$ and $M_{\rm BH}$ than the other sources in the large CSS sample, which may be a helpful signal for finding more $\gamma$-ray emitting CSS candidates in the future. The high $R_{\rm Edd}$ and $M_{\rm BH}$ of $\gamma$-ray emitting CSSs are similar to those of FSRQs. The high $R_{\rm Edd}$ of $\gamma$-ray emitting CSSs may imply that their BHs are located in gas-rich environments that can sustain the high accretion rates.
Discussion and Conclusions
==========================
We analyzed $\sim$11 years of *Fermi*/LAT data for six CSSs; all of them are located within the 95% containment of the associated 4FGL point sources, confirming that they are spatially associated with these 4FGL point sources. By exploiting the *Fermi*/LAT observation data of the six CSSs, we obtained their long-term light curves, average spectra, and variability indices (TS$_{\rm var}$). The derived TS$_{\rm var}$ values indicate that five of the six CSSs are clearly variable sources. This is consistent with the derived long-term light curves, and their variability is accompanied by variations of the photon spectral index. Even with the 180-day time bin size, we found that for four CSSs only several time bins in the long-term light curves have detections with TS$\geq$9, indicating that these CSSs are very weak $\gamma$-ray emitting AGNs. We also compiled the broadband SEDs of the six CSSs from the literature and the ASDC; they can be well explained with the two-zone leptonic model. The steep radio spectra below 10 GHz come from the radiation of the large-scale extended regions, while the $\gamma$-ray emission should be dominated by the radiation of the compact cores. The derived values of $\gamma_{\rm b}$, $B$, and $p_1$ for the extended regions in CSSs are consistent with those of large-scale jet hotspots and knots in other AGNs. For the core regions, the derived beaming factors of these CSSs are smaller than those of the $\gamma$-ray emitting blazars. The flat electron spectra with $p_1\sim1.5-1.8$ likely imply that magnetic reconnection may be an effective process of particle acceleration in these jets, similar to some FSRQs and NLS1s (Zhu et al. 2016).
Based on the fitting parameters, we also calculated $P_{\rm e}$, $P_{B}$, $P_{\rm r}$, and $P^{e^{\pm}}_{\rm jet}$ of the core regions for the six CSSs and compared them with other $\gamma$-ray emitting AGNs. Except for 4C +15.05, the five other CSSs have high jet radiation power and jet radiation efficiency, but all of them have a high ratio of $P_{B}$ to $P^{e^{\pm}}_{\rm jet}$, i.e., highly magnetized core jets, similar to FSRQs and NLS1s. We found that for the different kinds of $\gamma$-ray emitting AGNs, $P_{\rm r}$ and $P^{e^{\pm}}_{\rm jet}$ in units of Eddington luminosity are strongly correlated with the Eddington ratio. In the $R_{\rm Edd}-P_{\rm r}/L_{\rm Edd}$ (or $R_{\rm Edd}-P^{e^{\pm}}_{\rm jet}/L_{\rm Edd}$) plane, CSSs, FSRQs, NLS1s, BL Lacs, and RGs form a sequence spanning more than six orders of magnitude. Hence, we propose that the Eddington ratio may be a key physical driver of a unification scheme for AGN jet radiation. Compared with a large CSS sample, the $\gamma$-ray emitting CSSs have higher Eddington ratios and $M_{\rm BH}$ than the other CSSs, which may be a helpful signal for finding more $\gamma$-ray emitting CSSs in the future.
This work is supported by the National Natural Science Foundation of China (grants 11973050, 11533003, 11863007, 11851304, U1731239 and U1831205), the National Key R&D Program of China (2016YFA0400702), and Guangxi Science Foundation (grants AD17129006 and 2018GXNSFGA281007).
![TS maps of the six CSSs. The light blue crosses represent the positions of the 4FGL point sources and the corresponding $68\%$ and $95\%$ containments are shown as green contours. The red points represent the positions of these CSSs (taken from Abdollahi et al. 2020). Note that a new background source (New source) needs to be added to the background model of 3C 286; the derived TS value for this new source is 41.2, assuming a power-law spectrum. The maps are created with a pixel size of 0.05 and smoothed with a Gaussian kernel ($\sigma=0.35\degr$).[]{data-label="TSmap"}](fig1a.eps "fig:") ![image](fig1b.eps) ![image](fig1c.eps)
![image](fig1d.eps) ![image](fig1e.eps) ![image](fig1f.eps)
![*Left panels*: long-term $\gamma$-ray light curves observed by the *Fermi*/LAT in time bins of 180 days. The open triangles indicate TS$<$9 for that time bin. The horizontal dashed lines represent the $\sim$11-year average luminosity of the sources observed by *Fermi*/LAT. *Right panels*: photon spectral index ($\Gamma_{\gamma}$) as a function of $\gamma$-ray luminosity ($L_{\gamma}$).[]{data-label="LC"}](fig2a.eps "fig:") ![image](fig2aa.eps) ![image](fig2b.eps) ![image](fig2bb.eps) ![image](fig2c.eps) ![image](fig2cc.eps)
![image](fig2d.eps) ![image](fig2dd.eps) ![image](fig2e.eps) ![image](fig2ee.eps) ![image](fig2f.eps) ![image](fig2ff.eps)
![The $\sim$11-year average $\Gamma_{\gamma}$ observed by *Fermi*/LAT as a function of the corresponding $L_{\gamma}$ for the six CSSs. The blazar data are taken from Ackermann et al. (2015). []{data-label="Gamma-L"}](fig3.eps)
![Observed SEDs with model fitting. The data marked as open black squares are taken from the ASI Science Data Center (ASDC). For 4C +15.05, the open black circles in the radio band are from Bloom et al. (1994). The red solid symbols in the $\gamma$-ray band indicate the average spectrum of the *Fermi*/LAT observations, where the down-triangles represent upper limits. The open blue circles in the panels of 3C 216, 3C 309.1, 3C 380, and 4C 15.05 represent the core fluxes, taken from the NASA/IPAC Extragalactic Database (NED). The open blue, red, and green circles in the panel of 3C 138 represent the fluxes from components A, B, and C, respectively (Shen et al. 2005). The open blue and red circles in the panel of 3C 286 represent the fluxes from components C1 and C2, respectively (An et al. 2017). The thick black solid lines are the sum of the emission from each component: synchrotron radiation (red lines), accretion disk (green lines), SSC process (blue lines), and EC process (magenta lines); the dashed and solid lines represent the radiation from the core and the extended region, respectively.[]{data-label="SED"}](fig4a.eps "fig:") ![image](fig4b.eps) ![image](fig4c.eps) ![image](fig4d.eps) ![image](fig4e.eps) ![image](fig4f.eps)
![Distributions of $\gamma_{\rm b}$, $B$, and $p_1$ for the large-scale extended regions in the six CSSs (red lines). The large sample data of large-scale jet knots and hotspots (black lines) are taken from Zhang et al. (2018b).[]{data-label="Para-LSJ"}](fig5a.eps "fig:") ![image](fig5b.eps) ![image](fig5c.eps)
![Distributions of $p_1$ and correlations among $B$, $\delta$, and $\gamma_{\rm b}$ for these $\gamma$-ray emitting AGNs. $p_1$ of NLS1s is from Paliya et al. (2019). More details of the data for BL Lacs (Zhang et al. 2012), FSRQs (Zhang et al. 2014, 2015), NLS1s (Sun et al. 2015; Yang et al. 2018), and RGs (Xue et al. 2017) are given in Table 4.[]{data-label="para-core"}](fig6a.eps "fig:") ![image](fig6b.eps) ![image](fig6c.eps) ![image](fig6d.eps)
![The peak luminosity ($L_{\rm s}$) of the synchrotron radiation and the jet power ($P^{e^{\pm}}_{\rm jet}$) as functions of the synchrotron peak frequency ($\nu_{\rm s}$) for these $\gamma$-ray emitting AGNs. Details of the data are given in Tables 3 and 4.[]{data-label="nus-Ls"}](fig7a.eps "fig:") ![](fig7b.eps "fig:")
![$P_{\rm r}$ and $P_{B}$ as functions of $P^{e^{\pm}}_{\rm jet}$ for these $\gamma$-ray emitting AGNs. A linear fit in log scale gives $\log P_{\rm r}=-(6.66\pm2.50)+(1.13\pm0.06)\log P^{e^{\pm}}_{\rm jet}$. Details of the data are given in Tables 3 and 4.[]{data-label="Pr-Pjet"}](fig8a.eps "fig:") ![](fig8b.eps "fig:")
![$P_{\rm r}$ and $P^{e^{\pm}}_{\rm jet}$ as functions of $L_{\rm disk}$ (*top panels*), together with the corresponding relations in units of the Eddington luminosity (*bottom panels*), for these $\gamma$-ray emitting AGNs. The solid lines are linear regression fits to all the sources (except the green stars). Details of the data are given in Tables 3 and 4.[]{data-label="R_edd-Pr"}](fig9a.eps "fig:") ![](fig9b.eps "fig:") ![](fig9c.eps "fig:") ![](fig9d.eps "fig:")
![$P_{\rm r}$ and $P^{e^{\pm}}_{\rm jet}$ as functions of $M_{\rm BH}$ for these $\gamma$-ray emitting AGNs. Details of the data are given in Tables 3 and 4.[]{data-label="M-Pr"}](fig10a.eps "fig:") ![](fig10b.eps "fig:")
![Eddington ratio as a function of $M_{\rm BH}$. The RL-NLS1 sample data are from Viswanath et al. (2019), the CSS sample data are from Liao & Gu (2020), and the RG sample data are from Berton et al. (2016); the data for the other $\gamma$-ray emitting AGNs are given in Tables 3 and 4.[]{data-label="M-Redd"}](fig11.eps)
[lccccccc]{} 3C 138&80.2912&16.6393&21.4&1.94$\pm$0.47&2.19$\pm$0.02&128.1(8.4)&4FGL J0521.2+1637\
3C 216&137.3897&42.8965&138.6&4.13$\pm$0.23&2.91$\pm$0.06&11.8(-1.6)&4FGL J0910.0+4257\
3C 286&202.7845&30.5092&54.2&2.32$\pm$0.16&2.75$\pm$0.10&64.3(4.5)&4FGL J1331.0+3032\
3C 309.1&224.7816&71.6722&235.5&3.97$\pm$0.11&2.67$\pm$0.02&167.0(10.2)&4FGL J1459.0+7140\
3C 380&277.3824&48.7462&2184.2&18.44$\pm$0.10&2.50$\pm$0.01&149.1(9.4)&4FGL J1829.5+4845\
4C 15.05&31.2182&15.2326&611.5&10.05$\pm$0.52&2.47$\pm$0.05&211.4(12.0)&4FGL J0204.8+1513\
[lccccccccccccccccccccc]{} 3C 138&0.759&7.44E17&0.6&2.8&5.5&18&1&531&5E5&139.8&1.8&3.46&&2.95&277&5E2&5E3&2.5E5&6E-03&2.2&4\
3C 216&0.67&1.01E18&0.4&3.6&3.6&15.9&1&1427&1E5&25.3&1.8&4.04&&17.54&66&1E2&2E4&1E6&8.8E-04&2.46&4\
3C 286&0.849&1.13E18&0.2&4.5&4.5&12.7&1&1666&3E5&5.1&1.6&4.5&&19.18&72&1E3&1.9E4&9.5E5&4.7E-04&2.22&4\
3C 309.1&0.904&8.75E17&0.22&3.6&2.8&15.5&50&1914&1E5&15&1.5&4.2&&15.65&77&5E2&1.5E4&7.5E5&4.3E-04&2.22&4\
3C 380&0.692&1.11E18&0.15&4.0&2.4&9.5&1&1845&1E5&40.1&1.68&4.0&&7.11&190&1E2&1.5E4&7.5E5&5E-03&2.4&4\
4C 15.05&0.833&2.26E18&0.38&8.9&5.0&3.9&20&320&6E4&0.36&1.7&3.6&&2.29&300&1E3&6E3&3E5&1.9E-03&2.0&4\
[lccccccccc]{} 3C 138&2.86E45&2.26E46&1.24E46&3.79E46&1.01E9&2.79E46&1.80E12&4.86E45&0.22\
3C 216&4.92E44&7.93E45&1.99E45&1.04E46&2.39E8&4.87E45&9.49E12&7.19E45&0.16\
3C 286&5.76E44&3.88E45&3.65E45&8.10E45&5.25E8&5.96E46&5.41E12&5.92E45&0.90\
3C 309.1&6.77E44&1.09E45&3.11E45&4.88E45&2.9E9&7.28E46&8.32E12&1.06E46&0.20\
3C 380&8.50E44&5.98E44&2.28E45&3.73E45&7.1E9&3.12E46&5.91E12&1.11E46&0.03\
4C 15.05&5.14E43&6.91E46&3.39E44&6.95E46&&1.93E45&1.37E12&1.26E46&\
[lccccccccccccc]{}\
BL Lacertae$^L$&0.069&19$^{+7}_{-1}$&0.5$^{+0.3}_{-0.33}$&1.74$^{+0.81}_{-0.33}$E3&1.2E14&1.16E45&7.25$^{+1.23}_{-1.61}$E42&1.79$^{+2.48}_{-0.78}$E44&1.12$^{+1.23}_{-0.66}$E43&1.98$^{+2.48}_{-0.78}$E44&7.38E43&8.61&1.44E-03\
BL Lacertae$^H$&&20$^{+6}_{-5.2}$&0.2$^{+0.27}_{-0.1}$&3.80$^{+0.91}_{-0.92}$E3&2.54E14&4.9E44&5.73$^{+3.00}_{-3.05}$E42&4.36$^{+2.41}_{-1.94}$E44&2.20$^{+1.47}_{-0.62}$E42&4.44$^{+2.41}_{-1.94}$E44&7.38E43&8.61&1.44E-03\
Mkn 421$^L$&0.031&29$^{+14}_{-14}$&0.14$^{+1.16}_{-0.09}$&1.05$^{+0.39}_{-0.57}$E5&1.23E17&7.49E44&1.88$^{+1.81}_{-1.81}$E42&5.96$^{+6.73}_{-5.16}$E43&5.12$^{+26.35}_{-1.73}$E42&6.66$^{+7.23}_{-5.17}$E43&1.03E43&8.67&1.75E-04\
Mkn 501$^L$&0.034&14$^{+9}_{-5}$&0.16$^{+0.44}_{-0.13}$&6.12$^{+4.91}_{-2.18}$E4&2.95E16&1.96E44&2.26$^{+1.61}_{-1.61}$E42&2.09$^{+6.74}_{-1.36}$E43&5.78$^{+7.44}_{-4.28}$E42&2.90$^{+6.78}_{-1.43}$E43&3.47E43&9.03&2.57E-04\
Mkn 501$^H$&&15$^{+14}_{-4.2}$&0.4$^{+0.8}_{-0.37}$&9.65$^{+15.69}_{-3.08}$E5&1.8E19&2.91E45&2.19$^{+1.62}_{-1.23}$E43&1.97$^{+17.3}_{-1.15}$E43&3.31$^{+4.31}_{-3.05}$E41&4.19$^{+17.40}_{-1.69}$E43&3.47E43&9.03&2.57E-04\
PKS 2005–489$^H$&0.071&42$^{+15}_{-15}$&0.09$^{+0.19}_{-0.06}$&1.75$^{+0.85}_{-0.51}$E4&5.63E15&2.19E45&3.41$^{+2.44}_{-2.44}$E42&1.66$^{+3.22}_{-0.91}$E44&9.59$^{+6.02}_{-6.03}$E43&2.66$^{+3.28}_{-1.10}$E44&2.35E43&8.76&3.24E-04\
1ES 1218+30.4&0.182&20$^{+13}_{-9.2}$&0.14$^{+0.72}_{-0.11}$&4.78$^{+3.26}_{-2.15}$E4&3.29E16&2.44E45&1.42$^{+1.31}_{-1.31}$E43&6.00$^{+12.08}_{-4.24}$E43&1.41$^{+3.24}_{-0.93}$E43&8.83$^{+12.57}_{-4.53}$E43&2.47E43&8.58&5.17E-04\
W Com$^L$&0.102&15$^{+8}_{-1}$&0.18$^{+0.12}_{-0.13}$&1.10$^{+0.59}_{-0.22}$E4&1.68E15&8.13E44&1.11$^{+0.19}_{-0.19}$E43&8.63$^{+11.6}_{-4.33}$E43&8.49$^{+8.43}_{-4.88}$E42&1.06$^{+1.16}_{-0.44}$E44&2.95E43&8.53$^{\rm P17}$&6.93E-04\
W Com$^H$&&14$^{+19}_{-0.5}$&0.17$^{+0.03}_{-0.15}$&9.36$^{+8.41}_{-0.57}$E3&1.16E15&1.01E45&3.51$^{+0.66}_{-0.66}$E43&2.41$^{+5.66}_{-0.34}$E44&5.75$^{+1.25}_{-3.26}$E42&2.82$^{+5.66}_{-0.34}$E44&2.95E43&8.53&6.93E-04\
PKS 2155–304$^L$&0.116&50$^{+18}_{-18}$&0.16$^{+0.44}_{-0.1}$&1.94$^{+0.78}_{-0.69}$E4&1.07E16&7.47E45&6.20$^{+4.47}_{-4.46}$E42&1.81$^{+2.20}_{-1.16}$E44&2.24$^{+3.07}_{-1.19}$E43&2.10$^{+2.22}_{-1.17}$E44&2.82E44&8.7$^{\rm P17}$&4.47E-03\
PKS 2155–304$^H$&&26$^{+16}_{-1}$&0.4$^{+0.13}_{-0.26}$&2.95$^{+0.97}_{-0.34}$E4&2.2E16&1.33E46&2.42$^{+0.31}_{-0.31}$E44&6.40$^{+4.11}_{-2.08}$E44&1.03$^{+0.49}_{-0.17}$E43&8.93$^{+4.13}_{-2.10}$E44&2.82E44&8.7&4.47E-03\
1ES 1959+650$^L$&0.048&11$^{+6}_{-1.8}$&1.1$^{+0.72}_{-0.52}$&6.67$^{+0.72}_{-1.00}$E4&2.64E17&8.91E44&1.37$^{+0.45}_{-0.46}$E43&2.55$^{+0.10}_{-0.76}$E43&7.04$^{+2.44}_{-2.44}$E43&1.10$^{+0.25}_{-0.26}$E44&1.91E43&8.7$^{\rm P17}$&3.02E-04\
1ES 1959+650$^H$&&12$^{+17}_{-1.3}$&0.25$^{+0.15}_{-0.23}$&1.19$^{+1.52}_{-0.19}$E6&1.48E19&1.81E45&3.73$^{+0.83}_{-0.83}$E43&3.00$^{+17.46}_{-1.17}$E44&5.15$^{+3.16}_{-4.02}$E42&3.42$^{+17.46}_{-1.17}$E44&1.91E43&8.7&3.02E-04\
PG 1553+113&0.3&32$^{+6}_{-6}$&0.13$^{+0.09}_{-0.04}$&1.59$^{+0.16}_{-0.23}$E4&4.87E15&4.11E46&7.64$^{+2.90}_{-2.91}$E43&2.69$^{+0.38}_{-0.63}$E44&2.64$^{+0.68}_{-0.10}$E44&6.09$^{+0.83}_{-0.70}$E44&3.98E44&8.7$^{\rm P17}$&6.32E-03\
1ES 1011+496&0.212&13$^{+12}_{-1.3}$&0.7$^{+0.55}_{-0.61}$&8.48$^{+8.58}_{-1.79}$E4&1.4E17&1.59E46&1.66$^{+0.34}_{-0.36}$E44&8.72$^{+31.47}_{-4.13}$E43&2.40$^{+2.61}_{-1.86}$E44&4.93$^{+4.11}_{-1.94}$E44&4.37E44&8.7$^{\rm P17}$&6.93E-03\
Mkn 180&0.045&6$^{+6}_{-0.7}$&0.4$^{+0.3}_{-0.25}$&3.25$^{+0.50}_{-0.64}$E4&1.96E16&7.54E43&4.64$^{+1.20}_{-1.20}$E42&6.16$^{+1.33}_{-2.62}$E42&4.77$^{+3.94}_{-3.94}$E42&1.56$^{+0.43}_{-0.49}$E43&3.31E42&8$^{\rm P17}$&2.63E-04\
RGB J0152+017&0.08&5$^{+13}_{-0}$&0.28$^{+0}_{-0.27}$&1.29$^{+2.31}_{-0.00}$E5&1.4E17&7.69E43&1.19$^{+0.20}_{-0.20}$E43&2.38$^{+14.02}_{-0.00}$E43&1.06$^{+0.00}_{-0.83}$E42&3.67$^{+14.02}_{-0.22}$E43&&&\
H1426+428&0.129&8.5$^{+7}_{-0.1}$&0.1$^{+0.04}_{-0.082}$&3.79$^{+2.82}_{-0.57}$E5&7.16E17&5.45E44&6.88$^{+0.69}_{-0.69}$E43&2.01$^{+5.55}_{-0.87}$E44&1.03$^{+0.84}_{-0.65}$E42&2.71$^{+5.55}_{-0.88}$E44&&&\
PKS 0548–322&0.069&6$^{+14}_{-0.4}$&0.6$^{+0}_{-0.58}$&1.85$^{+3.52}_{-0.00}$E5&4.43E17&2.03E44&1.10$^{+0.15}_{-0.15}$E43&7.54$^{+53.13}_{-0.49}$E42&7.79$^{+0.50}_{-6.41}$E42&2.63$^{+5.32}_{-0.66}$E43&&&\
1ES 2344+514&0.044&13$^{+9}_{-6}$&0.12$^{+0.3}_{-0.07}$&6.01$^{+1.15}_{-1.63}$E4&2.18E16&1.26E44&1.37$^{+1.79}_{-1.30}$E42&1.31$^{+0.03}_{-0.37}$E43&2.37$^{+0.08}_{-0.08}$E42&1.68$^{+0.18}_{-0.39}$E43&&&\
1ES 1101–232&0.186&12$^{+5}_{-1}$&1.05$^{+0.75}_{-0.6}$&4.51$^{+1.28}_{-0.91}$E4&1.45E17&3.48E45&3.91$^{+0.67}_{-0.67}$E43&8.57$^{+4.84}_{-3.84}$E42&1.02$^{+1.09}_{-0.23}$E44&1.50$^{+1.10}_{-0.24}$E44&&&\
3C 66A&0.44&24$^{+4}_{-4}$&0.2$^{+0.11}_{-0.07}$&2.21$^{+0.33}_{-0.26}$E4&5.38E15&4.01E46&5.40$^{+2.08}_{-1.82}$E44&1.60$^{+0.53}_{-0.34}$E45&4.02$^{+0.64}_{-0.92}$E43&2.18$^{+0.57}_{-0.39}$E45&&&\
PKS 1424+240&0.5&33$^{+8}_{-6}$&0.23$^{+0.23}_{-0.08}$&2.83$^{+0.31}_{-0.62}$E4&1.16E16&1.12E47&1.52$^{+0.56}_{-0.56}$E44&3.47$^{+0.43}_{-1.55}$E44&7.01$^{+5.76}_{-0.13}$E44&1.20$^{+0.58}_{-0.16}$E45&&&\
1ES 0806+524&0.138&12$^{+7}_{-4}$&0.32$^{+1}_{-0.21}$&2.68$^{+0.95}_{-1.07}$E4&7.73E15&7.49E44&1.19$^{+0.80}_{-0.80}$E43&9.71$^{+6.48}_{-6.94}$E43&4.12$^{+9.42}_{-1.10}$E43&1.50$^{+1.15}_{-0.71}$E44&&&\
PKS 0521–36&0.055&&&&&&1.7E44&5.01E43&7.59E42&2.28E44&1.09E44&8.6&2.18E-03\
PKS 0829+46&0.174&&&&&&8.51E43&5.25E43&5.13E43&1.89E44&8.31E43&8.68&1.38E-03\
PKS 0851+202&0.306&&&&&&2.19E44&1.95E44&3.02E43&4.44E44&1.57E44&8.86&1.72E-03\
TXS 0954+658&0.367&&&&&&8.13E43&3.31E44&8.91E42&4.21E44&6.21E43&8.53&1.46E-03\
PMN 1012+063&0.727&&&&&&8.51E43&5.25E43&5.13E43&1.89E44&1.81E44&8.5&4.55E-03\
PKS 1057–79&0.581&&&&&&5.62E44&6.03E44&2.4E43&1.19E45&1.47E45&8.8&1.85E-02\
PKS 1519–273&1.294&&&&&&4.79E44&1.26E44&8.91E44&1.5E45&8.4E44&8.8&1.06E-02\
PKS 1749+096&0.322&&&&&&1.1E44&1.07E44&9.33E43&3.1E44&1.26E45&8.66&2.19E-02\
S5 1803+78&0.68&&&&&&5.01E44&1.15E44&1.45E45&2.06E45&2E46&8.36&6.94E-01\
TXS 1807+698&0.051&&&&&&2.82E43&2.69E44&8.71E42&3.06E44&2.12E43&8.7&3.37E-04\
PKS 2240–260&0.774&&&&&&2.57E44&2.51E44&9.77E43&6.06E44&7.12E44&8.6&1.42E-02\
\
NGC 1218&0.02865&5.6&0.23&4.98E4&7.45E15&5.06E42&5.3E41&4.03E42&1.24E42&5.8E42&4.3E42&8.7$^{\rm R05}$&6.82E-05\
NGC 1275&0.01756&5.8&0.15&1.27E3&1.45E13&7.49E43&6.83E42&5.28E43&3.01E43&8.97E43&5E43&8.61$^{\rm B03}$&9.77E-04\
NGC 6251&0.02471&7.8&0.02&1.6E4&8.13E13&2.19E42&3.75E41&2.49E44&3.54E40&2.5E44&8.72E41&8.73$^{\rm M03}$&1.29E-05\
3C 120&0.03301&1.8&3.7&1.88E3&3.27E13&1.62E44&1.86E44&1.66E43&8.47E43&2.88E44&2.72E43&8.2$^{\rm B03}$&1.36E-03\
PKS 0625–35&0.05459&4.9&1.2&1.96E4&1.62E16&6.85E43&5.81E42&1.08E42&1.87E43&2.56E43&3.69E43&9.19$^{\rm B03}$&1.89E-04\
M 87&0.00428&3&0.1&1.04E4&9.52E13&2.51E41&9.94E40&8.42E42&8.1E40&8.6E42&9.92E40&9.81$^{\rm G09}$&1.22E-07\
Cen A&0.00183&1.2&4.1&9.09E2&1.87E13&1.81E42&6.7E42&2.84E42&8.72E41&1.04E43&1.92E41&8.38$^{\rm S05}$&6.37E-06\
Cen b&0.01292&4.8&0.1&3.04E3&1.93E13&2.17E42&5.54E41&3.66E43&1.31E41&3.73E43&7.39E41&&\
3C 111&0.0485&4.7&0.45&2.01E3&2.96E13&1.25E44&4.4E43&1.37E44&2.26E42&1.84E44&1.12E43&8.8$^{\rm K11}$&1.4E-04\
3C 207&0.6808&9.8&0.42&3.26E3&7.51E13&3.02E45&4.4E44&8.08E44&1.45E43&1.26E45&1.31E45&8.4$^{\rm S11}$&4.13E-02\
3C 380&0.692&8&0.9&1.01E3&2.38E13&6.86E45&1.58E45&8.33E44&2.89E43&2.44E45&1.51E45&9.851$^{\rm W02}$&1.69E-03\
Pictor A&0.03506&2.5&4.2&1.03E3&4.73E13&6.89E43&3.36E43&7.38E42&1.64E43&5.74E43&1.19E43&8.7$^{\rm K11}$&1.88E-04\
\
1H 0323+342$^1$&0.0629&2.8$\pm$0.6&3.7$\pm$1.4&883$\pm$361&2.51$\pm$2.31E13&4.83$\pm$3.34E44&1.19$\pm$0.52E44&2.09$\pm$2.00E43&1.82$\pm$1.80E43&1.59$\pm$0.59E44&4.54E44&7.3$^{\rm Z07}$&0.181\
1H 0323+342$^2$&&3.6$\pm$1.3&2.1$\pm$1.2&591$\pm$273&8.51$\pm$8.45E12&3.05$\pm$3.00E44&3.56$\pm$2.68E43&3.16$\pm$3.10E43&1.51$\pm$1.50E43&8.24$\pm$4.36E43&4.54E44&7.3&0.181\
1H 0323+342$^3$&&4.9$\pm$0.8&2.5$\pm$0.9&383$\pm$160&5.50$\pm$5.06E12&5.17$\pm$4.77E44&3.24$\pm$1.21E43&2.24$\pm$2.20E43&7.59$\pm$7.50E43&1.31$\pm$0.79E44&4.54E44&7.3&0.181\
1H 0323+342$^4$&&4.5$\pm$0.6&1.9$\pm$0.6&378$\pm$151&3.63$\pm$3.34E12&4.61$\pm$4.25E44&5.12$\pm$1.52E43&4.68$\pm$4.60E43&3.09$\pm$2.78E43&1.29$\pm$0.56E44&4.54E44&7.3&0.181\
1H 0323+342$^5$&&6.2$\pm$0.6&2.5$\pm$0.7&271$\pm$60&3.98$\pm$1.37E12&1.06$\pm$0.36E45&6.61$\pm$1.41E43&3.16$\pm$2.05E43&2.04$\pm$1.41E44&3.02$\pm$1.43E44&4.54E44&7.3&0.181\
PMN J0948+0022$^1$&0.5846&11$\pm$1.35&4$\pm$1.5&267$\pm$109&6.31$\pm$6.10E12&2.51$\pm$1.45E46&5.03$\pm$1.39E44&1.95$\pm$1.84E44&2.38$\pm$2.08E45&3.08$\pm$2.09E45&9.32E45&8.97$^{\rm V19}$&0.079\
PMN J0948+0022$^2$&&10.77$\pm$1.25&5.8$\pm$1.35&260$\pm$80&9.55$\pm$5.49E12&4.92$\pm$2.72E46&5.11$\pm$1.38E44&2.24$\pm$1.70E44&4.47$\pm$2.98E45&5.20$\pm$2.99E45&1.58E46&8.97&0.135\
PMN J0948+0022$^3$&&8.6$\pm$1.3&4.6$\pm$1.2&336$\pm$126&9.12$\pm$8.39E12&2.09$\pm$1.44E46&4.94$\pm$1.62E44&2.04$\pm$1.75E44&1.17$\pm$0.92E45&1.87$\pm$0.95E45&1.58E46&8.97&0.135\
PMN J0948+0022$^4$&&11.1$\pm$1&5.1$\pm$1.2&202$\pm$69&5.01$\pm$3.69E12&4.17$\pm$2.40E46&4.18$\pm$1.00E44&1.86$\pm$1.37E44&3.98$\pm$2.38E45&4.59$\pm$2.39E45&1.58E46&8.97&0.135\
PMN J0948+0022$^5$&&11.6$\pm$0.8&3.7$\pm$0.7&234$\pm$64&5.37$\pm$3.09E12&3.80$\pm$1.31E46&6.54$\pm$1.06E44&2.57$\pm$1.48E44&2.57$\pm$1.24E45&3.48$\pm$1.26E45&1.58E46&8.97&0.135\
PMN J0948+0022$^6$&&9.5$\pm$0.5&2.4$\pm$0.4&285$\pm$59&4.37$\pm$1.71E12&1.12$\pm$0.34E46&5.43$\pm$0.70E44&2.57$\pm$1.25E44&4.90$\pm$1.92E44&1.29$\pm$0.24E45&1.58E46&8.97&0.135\
PMN J0948+0022$^7$&&13.5$\pm$1.1&1.7$\pm$0.8&186$\pm$63&1.95$\pm$0.90E12&4.79$\pm$2.21E46&2.38$\pm$0.68E45&6.03$\pm$6.00E44&1.00$\pm$0.90E45&3.98$\pm$1.27E45&1.58E46&8.97&0.135\
PMN J0948+0022$^8$&&13.7$\pm$1.8&2.1$\pm$1.1&145$\pm$63&1.26$\pm$1.20E12&5.25$\pm$4.11E46&1.59$\pm$0.55E45&5.56$\pm$5.50E44&1.66$\pm$1.60E45&3.80$\pm$1.78E45&9.32E45&8.97&0.079\
PMN J0948+0022$^9$&&11.37$\pm$2.2&4.2$\pm$2&350$\pm$147&1.26$\pm$1.16E13&4.27$\pm$3.44E46&9.28$\pm$4.47E44&2.45$\pm$2.40E44&2.99$\pm$2.90E45&4.16$\pm$2.94E45&1.58E46&8.97&0.135\
PKS 1502+036&0.409&9.5$\pm$0.8&4.7$\pm$0.3&277$\pm$60&8.71$\pm$4.01E12&3.36$\pm$1.16E45&1.09$\pm$0.20E44&2.95$\pm$1.50E43&2.29$\pm$0.84E45&2.43$\pm$0.84E45&6.92E44&8.3$^{\rm C13}$&0.028\
SBS 0846+513&0.5835&7.4$\pm$0.8&2$\pm$0.6&366$\pm$84&4.68$\pm$1.61E12&2.13$\pm$0.44E45&3.23$\pm$0.73E44&1.05$\pm$0.65E44&1.58$\pm$1.17E44&5.86$\pm$1.52E44&3.22E44&7.86$^{\rm V19}$&0.035\
PKS 2004–447&0.24&6.4$\pm$0.5&4.1$\pm$0.6&359$\pm$69&1.00$\pm$0.35E13&1.06$\pm$0.25E45&4.03$\pm$0.68E43&1.95$\pm$0.81E43&4.57$\pm$2.00E44&5.17$\pm$2.00E44&3.7E43&6.73$^{\rm O01}$&0.055\
TXS 2116–077$^H$&0.26&5.9$\pm$0.7&3.7$\pm$0.8&269$\pm$81&8.10$\pm$0.00E12&4.81$\pm$0.00E44&3.08$\pm$0.00E43&3.46$\pm$0.00E43&4.18$\pm$0.00E44&4.83$\pm$0.00E44&2.88E44&7.94$^{\rm V19}$&0.026\
TXS 2116–077$^L$&&5.8$\pm$0.7&2.6$\pm$0.5&372$\pm$111&9.11$\pm$0.00E12&2.89$\pm$0.00E44&2.19$\pm$0.00E43&2.44$\pm$0.00E43&1.92$\pm$0.00E44&2.38$\pm$0.00E44&2.88E44&7.94&0.026\
TXS 0929+533&0.6&&&&&&9.12E44&9.77E43&1.1E44&1.12E45&5.01E45&8.01$^{\rm V19}$&3.9E-01\
GB6 J0937+5008&0.28&&&&&&3.24E44&2.24E44&8.13E42&5.56E44&5.13E43&7.56$^{\rm P19}$&1.12E-02\
TXS 0955+326&0.53&&&&&&1.02E45&1.55E44&2.34E44&1.41E45&1.7E46&8.7$^{\rm P19}$&2.7E-01\
FBQS J1102+2239&0.45&&&&&&3.31E45&3.72E44&6.03E44&4.29E45&1E44&7.78$^{\rm P19}$&1.32E-02\
PMN J1222+0413&0.97&&&&&&4.79E46&1.32E45&3.63E45&5.28E46&1.51E46&8.85$^{\rm P19}$&1.7E-01\
SDSS J1246+0238&0.36&&&&&&4.79E44&6.46E43&2.82E44&8.25E44&3.02E44&8.63$^{\rm V19}$&5.63E-03\
TXS 1419+391&0.49&&&&&&6.76E44&1.74E44&2.24E44&1.07E45&2.19E45&8.63$^{\rm V19}$&4.08E-02\
TXS 1518+423&0.48&&&&&&5.75E44&1.74E44&2.45E44&9.95E44&5.01E44&7.85$^{\rm P19}$&5.63E-02\
RGB J1644+263&0.14&&&&&&5.5E43&1.78E43&2.69E43&9.97E43&3.02E44&8.3$^{\rm V19}$&1.2E-02\
PMN J2118+0013&0.46&&&&&&4.57E44&6.31E43&2.57E44&7.77E44&8.71E44&7.98$^{\rm V19}$&7.26E-02\
\
3C 273&0.158&7.41$\pm$0.9&8.5$\pm$1.6&328$\pm$79&2.24$\pm$1.03E13&1.42$\pm$0.41E45&5.85$\pm$1.47E44&1.00$\pm$0.58E44&1.12$\pm$0.67E45&1.81$\pm$0.69E45&8.23E46&9.3&0.328\
3C 454.3&0.859&15.6$\pm$0.6&5.1$\pm$0.8&137$\pm$32&2.88$\pm$1.33E12&8.14$\pm$2.44E45&3.18$\pm$0.28E45&5.63$\pm$2.60E44&2.81$\pm$0.98E45&6.55$\pm$1.05E45&9.27E46&9.17&0.498\
PKS 0208–512&1.003&15.8$\pm$0.7&3.42$\pm$1.2&105$\pm$40&9.84$\pm$8.61E11&2.41$\pm$1.39E45&1.72$\pm$0.18E45&4.60$\pm$4.25E44&1.19$\pm$0.82E45&3.38$\pm$0.94E45&2.88E46&9.21&0.141\
PKS 0420–01&0.916&12.8$\pm$0.7&8.193$\pm$1.1&385$\pm$118&2.75$\pm$1.90E13&4.71$\pm$1.36E45&1.56$\pm$0.18E45&2.69$\pm$1.24E44&3.09$\pm$1.07E45&4.92$\pm$1.09E45&1.51E46&9.03&0.112\
PKS 0528+134&2.07&17.42$\pm$0.9&2.88$\pm$1.1&230$\pm$86&3.00$\pm$2.42E12&1.39$\pm$0.90E46&8.43$\pm$1.09E45&1.44$\pm$1.36E45&5.44$\pm$3.90E44&1.04$\pm$0.18E46&8.46E46&9.4&0.268\
B3 0650+453&0.933&14.1$\pm$1&1.325$\pm$0.3&111$\pm$37&4.04$\pm$2.97E11&5.08$\pm$2.81E44&1.52$\pm$0.23E45&4.43$\pm$3.49E44&1.17$\pm$0.63E44&2.08$\pm$0.42E45&1.05E46&8.17&0.566\
PKS 0727–11&1.589&20.6$\pm$1.2&5.38$\pm$1.05&254$\pm$61&1.00$\pm$0.46E13&1.19$\pm$0.41E46&4.74$\pm$0.52E45&3.16$\pm$1.75E44&5.01$\pm$2.19E45&1.01$\pm$0.23E46&3.92E46&&\
PKS 1127–145&1.184&13.1$\pm$0.8&11.5$\pm$1.2&213$\pm$32&1.15$\pm$0.32E13&1.81$\pm$0.46E46&2.04$\pm$0.28E45&2.66$\pm$0.98E44&5.11$\pm$1.67E45&7.42$\pm$1.70E45&1.41E47&9.18&0.741\
1Jy 1308+326&0.997&12.6$\pm$0.9&3.39$\pm$0.9&353$\pm$129&8.91$\pm$7.18E12&1.40$\pm$0.64E45&2.50$\pm$0.38E45&3.98$\pm$3.21E44&4.57$\pm$2.74E44&3.36$\pm$0.57E45&1.22E46&8.94&0.112\
PKS 1508–055&1.185&16.96$\pm$1.1&7$\pm$1.5&141$\pm$41&3.86$\pm$2.22E12&4.63$\pm$2.03E45&8.65$\pm$1.23E44&1.49$\pm$0.93E44&5.30$\pm$2.72E45&6.32$\pm$2.72E45&1.11E47&8.97&0.943\
PKS 1510–089&0.36&11$\pm$0.5&3.15$\pm$0.5&305$\pm$32&8.91$\pm$1.23E12&5.83$\pm$0.67E44&5.94$\pm$0.68E44&1.02$\pm$0.28E44&5.01$\pm$1.73E44&1.20$\pm$0.19E45&5.92E45&8.65&0.105\
TXS 1846+322&0.798&13.1$\pm$0.6&2.48$\pm$0.7&206$\pm$72&2.66$\pm$1.96E12&6.00$\pm$3.59E44&6.26$\pm$0.69E44&1.82$\pm$1.42E44&3.63$\pm$2.09E44&1.17$\pm$0.26E45&2.61E46&8.21&1.278\
PKS 2123–463&1.67&17.9$\pm$0.6&3.6$\pm$0.6&243$\pm$56&5.11$\pm$2.35E12&8.27$\pm$2.86E45&5.17$\pm$0.41E45&5.30$\pm$2.53E44&1.18$\pm$0.41E45&6.88$\pm$0.64E45&4.42E46&&\
TXS 2141+175&0.213&10.34$\pm$0.6&5.1$\pm$0.75&42$\pm$6&2.78$\pm$0.74E11&4.09$\pm$1.03E44&2.39$\pm$0.31E44&2.63$\pm$1.05E44&1.27$\pm$0.48E45&1.77$\pm$0.49E45&1.12E46&8.98&0.094\
PKS 2144+092&1.113&14.28$\pm$1&3.77$\pm$1.2&175$\pm$66&2.48$\pm$2.28E12&2.69$\pm$1.99E45&1.34$\pm$0.21E45&3.26$\pm$2.82E44&8.62$\pm$5.88E44&2.53$\pm$0.69E45&1.96E46&8.7$^{\rm P17}$&0.312\
PKS 2204–54&1.215&14.45$\pm$0.9&5.66$\pm$1.2&205$\pm$46&5.71$\pm$2.24E12&4.75$\pm$1.53E45&1.07$\pm$0.15E45&2.42$\pm$1.30E44&1.81$\pm$0.89E45&3.13$\pm$0.91E45&3.27E46&9$^{\rm P17}$&0.26\
PMN 2345–1555&0.621&13.778$\pm$1&2.555$\pm$0.8&141$\pm$55&1.38$\pm$1.27E12&4.71$\pm$2.71E44&4.15$\pm$0.70E44&1.43$\pm$1.20E44&5.72$\pm$3.90E44&1.13$\pm$0.41E45&2.63E45&8.16&0.145\
S4 0133+47&0.859&13.1$\pm$1.2&10.48$\pm$1.45&191$\pm$65&8.91$\pm$7.18E12&6.35$\pm$1.83E46&1.46$\pm$0.27E45&1.20$\pm$0.64E44&5.75$\pm$2.65E45&7.34$\pm$2.66E45&7.06E45&8.3&0.281\
PKS 0227–369&2.115&17.8$\pm$1&5.93$\pm$0.9&331$\pm$102&1.26$\pm$0.87E13&2.03$\pm$0.58E47&5.02$\pm$0.66E45&4.47$\pm$2.06E44&2.29$\pm$0.90E45&7.75$\pm$1.14E45&&&\
4C 28.07&1.213&14.6$\pm$1.1&6.88$\pm$1&223$\pm$51&8.13$\pm$3.74E12&6.87$\pm$2.69E46&1.46$\pm$0.23E45&2.04$\pm$1.08E44&2.75$\pm$1.14E45&4.41$\pm$1.17E45&5.91E46&9.22&0.283\
PKS 0347–211&2.944&26.2$\pm$1.5&10$\pm$1.5&222$\pm$68&1.12$\pm$0.78E13&5.61$\pm$1.94E47&4.92$\pm$0.64E45&1.95$\pm$1.03E44&1.91$\pm$0.70E46&2.42$\pm$0.70E46&7.08E46&9.78$^{\rm P17}$&0.093\
PKS 0454–234&1.003&19.98$\pm$1.9&6.6$\pm$0.8&184$\pm$56&7.94$\pm$5.49E12&8.72$\pm$2.21E46&2.14$\pm$0.41E45&1.32$\pm$0.67E44&1.15$\pm$0.48E46&1.37$\pm$0.48E46&6.59E45&9.17&0.035\
S4 0917+44&2.19&18.2$\pm$1.3&9.82$\pm$1.8&213$\pm$66&9.12$\pm$6.30E12&2.92$\pm$1.01E47&4.41$\pm$0.66E45&3.47$\pm$2.00E44&7.08$\pm$2.93E45&1.18$\pm$0.30E46&1.81E47&9.29&0.739\
4C 29.45&0.729&11.6$\pm$1&8.25$\pm$1.6&400$\pm$112&3.16$\pm$1.82E13&3.85$\pm$1.51E46&7.43$\pm$1.40E44&1.10$\pm$0.68E44&2.57$\pm$1.36E45&3.42$\pm$1.37E45&1.25E46&8.61&0.245\
3C 279&0.536&12$\pm$0.5&5.9$\pm$0.35&219$\pm$37&8.13$\pm$2.81E12&2.74$\pm$0.28E46&1.10$\pm$0.09E45&1.91$\pm$0.48E44&2.00$\pm$0.41E45&3.29$\pm$0.43E45&8.26E45&8.28&0.345\
PKS 1454–354&1.424&20.2$\pm$1.8&7.5$\pm$1.3&276$\pm$99&1.48$\pm$1.36E13&2.24$\pm$0.57E47&5.07$\pm$1.02E45&3.63$\pm$2.01E44&1.00$\pm$0.51E46&1.54$\pm$0.52E46&3.98E46&9.3$^{\rm P17}$&0.159\
PKS 1502+106&1.839&27$\pm$2.3&7.14$\pm$1.5&192$\pm$66&8.91$\pm$6.57E12&3.96$\pm$1.28E47&7.86$\pm$1.51E45&2.45$\pm$1.47E44&2.29$\pm$1.16E46&3.10$\pm$1.17E46&4.91E46&8.98&0.409\
B2 1520+31&1.487&20.8$\pm$1.6&4.3$\pm$0.9&283$\pm$93&1.00$\pm$0.69E13&5.34$\pm$1.54E46&2.54$\pm$0.49E45&2.04$\pm$1.22E44&3.47$\pm$1.84E45&6.22$\pm$1.90E45&2.16E46&8.92&0.207\
4C 66.20 &0.657&12.2$\pm$1.2&7.16$\pm$1.4&240$\pm$95&8.91$\pm$8.85E12&2.60$\pm$0.90E46&9.85$\pm$1.97E44&1.17$\pm$0.73E44&2.57$\pm$1.42E45&3.67$\pm$1.44E45&6.68E45&9.14&0.039\
PKS 2325+093&1.843&17.6$\pm$1.6&15.1$\pm$1.6&354$\pm$107&3.98$\pm$2.75E13&1.26$\pm$0.36E48&5.64$\pm$1.11E45&3.09$\pm$1.49E44&1.70$\pm$0.74E46&2.29$\pm$0.75E46&4.46E46&8.7&0.709\
Aaron, S. E., Wardle, J. F. C., & Roberts, D. H. 1997, Vistas in Astronomy, 41, 225 Abdo, A. A., Ackermann, M., Ajello, M., et al. 2009, , 707, L142 Abdollahi, S., Acero, F., Ackermann, M., et al. 2020, , 247, 33 Achterberg, A., Gallant, Y. A., Kirk, J. G., et al. 2001, , 328, 393 Ackermann, M., Ajello, M., Atwood, W. B., et al. 2015, , 810, 14 Akujor, C. E., & Garrington, S. T. 1995, , 112, 235 Aller, M. F., Aller, H. D., & Hughes, P. A. 2003, , 586, 33 An, T., Lao, B.-Q., Zhao, W., et al. 2017, , 466, 952 An, T., Cui, Y.-Z., Baan, W. A., et al. 2016, , 826, 190 Angel, J. R. P., & Stockman, H. S. 1980, , 18, 321 Barthel, P. D., Pearson, T. J., & Readhead, A. C. S. 1988, , 329, L51 Berton, M., Caccianiga, A., Foschini, L., et al. 2016, , 591, A98 Berton, M., Foschini, L., Caccianiga, A., et al. 2017, Frontiers in Astronomy and Space Sciences, 4, 8 Bettoni, D., Falomo, R., Fasano, G., et al. 2003, , 399, 869 Blandford, R. D., & Znajek, R. L. 1977, , 179, 433 Blandford, R. D., & Payne, D. G. 1982, , 199, 883 Bloom, S. D., Marscher, A. P., Gear, W. K., et al. 1994, , 108, 398 Burbidge, G. R., & Burbidge, E. M. 1969, , 222, 735 Caccianiga, A., Ant[ó]{}n, S., Ballo, L., et al. 2014, , 441, 172 Calderone, G., Ghisellini, G., Colpi, M., et al. 2013, , 431, 210 Celotti, A., Padovani, P., & Ghisellini, G. 1997, , 286, 415 Cleary, K., Lawrence, C. R., Marshall, J. A., et al. 2007, , 660, 117 Cohen, A. M., Porcas, R. W., Browne, I. W. A., et al. 1977, , 84, 1 Collin, S., Kawaguchi, T., Peterson, B. M., et al. 2006, , 456, 75 Cotton, W. D., Dallacasa, D., Fanti, C., et al. 1997, , 325, 493 D’Ammando, F., Orienti, M., Finke, J., et al. 2012, , 426, 317 Davis, S. W., & Laor, A. 2011, , 728, 98 de Vries, W. H., O’Dea, C. P., Baum, S. A., et al. 1997, , 110, 191 Dermer, C. D., & Schlickeiser, R. 1994, , 90, 945 Fan, J.-H., Yang, J.-H., Tao, J., et al. 2010, , 62, 211 Fan, X.-L., & Wu, Q. 2019, , 879, 107 Fanti, R., Fanti, C., Schilizzi, R. T., et al. 1990, , 231, 333 Forbes, D. 
A., Crawford, C. S., Fabian, A. C., et al. 1990, , 244, 680 Franceschini, A., Rodighiero, G., & Vaccari, M. 2008, , 487, 837 Gabuzda, D. C., Cantwell, T. M., & Cawthorne, T. V. 2014, , 438, L1 Gawro[ń]{}ski, M. P., & Kus, A. J. 2006, Proceedings of the 8th European VLBI Network Symposium, 20 Gebhardt, K., & Thomas, J. 2009, , 700, 1690 Georganopoulos, M., Perlman, E. S., Kazanas, D., et al. 2006, , 653, L5 Ghisellini, G., Tavecchio, F., Maraschi, L., et al. 2014, , 515, 376 Ghisellini, G., & Celotti, A. 2001, , 379, L1 Ghisellini, G., Tavecchio, F., Foschini, L., et al. 2011, , 414, 2674 Gu, M., Cao, X., & Jiang, D. R. 2001, , 327, 1111 Gu, M., Chen, Y., Komossa, S., et al. 2015, , 221, 3 Guo, F., Liu, Y.-H., Daughton, W., et al. 2015, , 806, 167 Harris, D. E., & Krawczynski, H. 2006, , 44, 463 Heinz, S., & Sunyaev, R. A. 2003, , 343, L59 Herbig, T., & Readhead, A. C. S. 1992, , 81, 83 Jones, K. M., Ghosh, T., & Salter, C. J. 2018, , 155, 254 Kameno, S., Inoue, M., Fujisawa, K., et al. 2000, , 52, 1045 Kameno, S., Inoue, M., Matsumoto, K., et al. 1995, , 47, 711 Kang, S.-J., Chen, L., & Wu, Q. 2014, , 215, 5 Kataoka, J., Stawarz, [Ł]{}., Takahashi, Y., et al. 2011, , 740, 29 Kim, D.-W., Trippe, S., & Kravchenko, E. V. 2020, , 636, A62 Kirk, J. G., Guthmann, A. W., Gallant, Y. A., et al. 2000, , 542, 235 Komossa, S., Voges, W., Xu, D., et al. 2006, , 132, 531 Krolik, J. H., & Hawley, J. F. 2002, , 573, 754 Kus, A. J., Wilkinson, P. N., & Booth, R. S. 1981, , 194, 527 Liao, M., & Gu, M. 2020, , 491, 92 Mattox, J. R., Bertsch, D. L., Chiang, J., et al. 1996, , 461, 396 Merloni, A., Heinz, S., & di Matteo, T. 2003, , 345, 1057 Nolan, P. L., Abdo, A. A., Ackermann, M., et al. 2012, , 199, 31 O’Dea, C. P. 1998, , 110, 493 Oshlack, A. Y. K. N., Webster, R. L., & Whiting, M. T. 2001, , 558, 578 Paliya, V. S. 2019, Journal of Astrophysics and Astronomy, 40, 39 Paliya, V. S., Marcotulli, L., Ajello, M., et al. 2017, , 851, 33 Paliya, V. S., Parker, M. L., Jiang, J., et al. 
2019, , 872, 169 Paliya, V. S., Ajello, M., Rakshit, S., et al. 2018, , 853, L2 Paragi, Z., Frey, S., Fejes, I., et al. 2000, , 52, 983 Pearson, T. J., Perley, R. A., & Readhead, A. C. S. 1985, , 90, 738 Pei, Z.-Y., Fan, J.-H., Bastieri, D., et al. 2019, Research in Astronomy and Astrophysics, 19, 070 Peng, F.-K., Zhang, H.-M., Wang, X.-Y., et al. 2019, , 884, 91 Perlman, E. S., Padovani, P., Giommi, P., et al. 1998, , 115, 1253 Polatidis, A. G., & Wilkinson, P. N. 1998, , 294, 327 Polatidis, A. G., & Conway, J. E. 2003, , 20, 69 Randall, K. E., Hopkins, A. M., Norris, R. P., et al. 2011, , 416, 1135 Readhead, A. C. S., & Wilkinson, P. N. 1980, , 235, 11 Rinn, A. S., Sambruna, R. M., & Gliozzi, M. 2005, , 621, 167 Ros, E., & Lobanov, A. P. 2001, 15th Workshop Meeting on European VLBI for Geodesy and Astrometry, 208 Saikia, D. J., Jeyakumar, S., Wiita, P. J., et al. 1995, , 276, 1215 Sbarrato, T., Ghisellini, G., Maraschi, L., et al. 2012, , 421, 1764 Shah, Z., Jithesh, V., Sahayanathan, S., et al. 2019, , 484, 3168 Shen, Y., Richards, G. T., Strauss, M. A., et al. 2011, , 194, 45 Shen, Y., & Ho, L. C. 2014, , 513, 210 Shen, Z.-Q., Shang, L.-L., Cai, H.-B., et al. 2005, , 622, 811 Shen, Z.-Q., Jiang, D. R., Kameno, S., et al. 2001, , 370, 65 Silge, J. D., Gebhardt, K., Bergmann, M., et al. 2005, , 130, 406 Sironi, L., & Spitkovsky, A. 2014, , 783, L21 Smith, H. E., & Spinrad, H. 1980, , 236, 419 Spencer, R. E., McDowell, J. C., Charlesworth, M., et al. 1989, , 240, 657 Spinrad, H., Djorgovski, S., Marr, J., et al. 1985, , 97, 932 Stickel, M., Rieke, G. H., Kuehr, H., et al. 1996, , 468, 556 Sun, X.-N., Zhang, J., Lin, D.-B., et al. 2015, , 798, 43 Takeuchi, Y., Kataoka, J., Stawarz, [Ł]{}., et al. 2012, , 749, 66 Tavecchio, F., Maraschi, L., & Ghisellini, G. 1998, , 509, 608 Tavecchio, F., Maraschi, L., Sambruna, R. M., et al. 2000, , 544, L23 Taylor, G. B., Ge, J., & O’Dea, C. P. 1995, , 110, 522 Urry, C. M., & Padovani, P. 
1995, , 107, 803 van Breugel, W., Miley, G., & Heckman, T. 1984, , 89, 5 Virtanen, J. J. P., & Vainio, R. 2005, , 621, 313 Viswanath, G., Stalin, C. S., Rakshit, S., et al. 2019, , 881, L24 von Montigny, C., Bertsch, D. L., Chiang, J., et al. 1995, , 440, 525 Wilkinson, P. N. 1972, , 160, 305 Wilkinson, P. N., Tzioumis, A. K., Benson, J. M., et al. 1991, , 352, 313 Wilkinson, P. N., Booth, R. S., Cornwell, T. J., et al. 1984, , 308, 619 Woo, J.-H., & Urry, C. M. 2002, , 579, 530 Wu, Q. 2009, , 398, 1905 Xi, S.-Q., Zhang, H.-M., Liu, R.-Y., et al. 2020, arXiv e-prints, arXiv:2003.07830 Xue, Z.-W., Zhang, J., Cui, W., et al. 2017, Research in Astronomy and Astrophysics, 17, 090 Yang, H., Yuan, W., Yao, S., et al. 2018, , 477, 5127 Yao, S., Yuan, W., Zhou, H., et al. 2015, , 454, L16 Yao, S., Komossa, S., Liu, W.-J., et al. 2019, , 487, L40 Yuan, W., Zhou, H. Y., Komossa, S., et al. 2008, , 685, 801 Zenitani, S., & Hoshino, M. 2001, , 562, L63 Zhang, H.-M., Zhang, J., Lu, R.-J., et al. 2018a, Research in Astronomy and Astrophysics, 18, 040 Zhang, H.-M., Wang, Z.-J., Zhang, J., et al. 2020, arXiv e-prints, arXiv:2003.11175 Zhang, J., Xue, Z.-W., He, J.-J., et al. 2015, , 807, 51 Zhang, J., Liang, E.-W., Zhang, S.-N., et al. 2012, , 752, 157 Zhang, J., Bai, J. M., Chen, L., et al. 2010, , 710, 1017 Zhang, J., Sun, X.-N., Liang, E.-W., et al. 2014, , 788, 104 Zhang, J., Du, S.-. shi ., Guo, S.-C., et al. 2018b, , 858, 27 Zhang, J., Zhang, S.-N., & Liang, E.-W. 2013, , 767, 8 Zhang, S.-N. 2013, Frontiers of Physics, 8, 630 Zhou, H., Wang, T., Yuan, W., et al. 2007, , 658, L13 Zhu, L., Zhang, S. N., & Tang, S. 2009, , 700, 1173 Zhu, Y.-K., Zhang, J., Zhang, H.-M., et al. 2016, Research in Astronomy and Astrophysics, 16, 170
[^1]: http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html
[^2]: https://tools.ssdc.asi.it/SED/
[^3]: The inner radiative edge of the accretion disk may lie at the marginally stable orbit radius, outside the Schwarzschild radius. However, $R_{\rm in}=4.5R_{\rm s}$ is adopted for 3C 286, since a smaller $R_{\rm in}$ would result in a very high disk luminosity and Eddington ratio.
[^4]: The NLS1 data (green stars) from Paliya et al. (2019) are not included in the correlation analysis and linear fits, since their $P_{\rm r}$ is estimated by a slightly different method.
[^5]: It has been suggested that the $M_{\rm BH}$ of NLS1s derived from emission lines may be underestimated (Collin et al. 2006; Zhu et al. 2009; Calderone et al. 2013; Viswanath et al. 2019); hence the $M_{\rm BH}$ values of RL-NLS1s in this paper are estimated by fitting the big blue bump spectrum with the standard accretion-disk model (Calderone et al. 2013; Viswanath et al. 2019).
---
abstract: |
  The purpose of this paper is to calculate explicitly the volumes of Siegel sets that are coarse fundamental domains for the action of ${\mathrm{SL} _n (\mathbb{Z})}$ on $\mathrm{SL} _n (\mathbb{R})$, so that we can compare these volumes with those of the fundamental domains of ${\mathrm{SL} _n (\mathbb{Z})}$ in $\mathrm{SL} _n (\mathbb{R})$, which are also computed here for any $n\geq 2$. An important feature of this computation is that it requires keeping track of the normalization constants of the Haar measures. We conclude that the ratio between the volumes of fundamental domains and the volumes of Siegel sets grows super-exponentially fast as $n$ goes to infinity. As a corollary, we obtain that this ratio gives a super-exponential lower bound, depending only on $n$, for the number of intersecting Siegel sets. We also give an upper bound for this number by applying some results on the heights of intersecting elements of ${\mathrm{SL} _n (\mathbb{Z})}$.\
**Keywords:** Arithmetic Groups, Siegel Sets, Coarse Fundamental Domains, Volumes.
author:
- Gisele Teixeira Paula
title: 'Comparison of Volumes of Siegel Sets and Fundamental Domains for $\mathrm{SL}_n (\mathbb{Z})$ '
---
Correspondence to be sent to: giseletp@impa.br
Introduction {#intro}
============
Siegel sets were first introduced in the study of quadratic forms by Siegel [@siegel2] in 1939, with some results following from previous works of Hermite and Korkine-Zolotareff. In a fundamental paper [@borelharish], Borel and Harish-Chandra generalised this notion and used Siegel domains to prove finiteness of covolumes of non-cocompact arithmetic subgroups.
The simple structure of Siegel sets, compared to that of actual fundamental domains, makes them appealing for applications. For example, in his recent paper [@young], R. Young exploited their properties to obtain new results in geometric group theory. Still, very little is known about the geometry of Siegel sets in general. In his book [@morris], Morris describes algebraically examples of Siegel sets not only for $\mathrm{SL} _n (\mathbb{R})$, with $n\geq 2$, but also for any semisimple Lie group $G$ with a given Iwasawa decomposition.
In this paper we recall one of the main properties of Siegel sets – the finiteness of their volumes. We evaluate these volumes explicitly in the basic case of Siegel sets for ${\mathrm{SL} _n (\mathbb{Z})}$ in $\mathrm{SL} _n (\mathbb{R})$ for any $n\geq 2$. We then compare these volumes with the actual covolumes of ${\mathrm{SL} _n (\mathbb{Z})}$. To this end, we have to deal with an essential difficulty related to the normalization of the Haar measure. For calculating the volumes of Siegel sets, the main difficulty is to find a nice way to describe the region of integration, which we solve with an appropriate change of coordinates. Most of the volume computations that followed Siegel’s original approach were not careful about the normalization constants, merely noting that they are computable and could be extracted from the proof. In Section \[domfund\], we follow Garrett’s notes on Siegel’s method [@garret] to compute the volumes of the quotients ${\mathrm{SL} _n (\mathbb{Z})}\backslash \mathrm{SL} _n (\mathbb{R})$ for $n\geq 3$ using induction and the volume of $\mathrm{SL}_2({\mathbb{Z}}) \backslash\mathrm{SL}_2({\mathbb{R}})$, which is computed in [@garret]. Our main goal here is to keep a careful track of the normalization constants. The main tools we use are the Poisson summation formula, the Iwasawa decomposition of $G$ and the choice of a good Haar measure normalization on each group. At the end of the section we discuss the relation between the normalization of the measure we used and the canonical normalization that comes from the metric associated to the Killing form on $\mathfrak{sl}_n({\mathbb{R}})$.
By comparing the volumes of Siegel sets and the volumes of fundamental domains of ${\mathrm{SL} _n (\mathbb{Z})}$, we conclude that, somewhat surprisingly, the ratio between them grows super-exponentially fast with $n$.
As an application of the computations presented here, in Section \[morr\] we show that given a Siegel set $\Sigma$ of ${\mathrm{SL} _n (\mathbb{Z})}$, we have an explicit lower bound for the number of elements $\gamma \in {\mathrm{SL} _n (\mathbb{Z})}$ such that $\gamma \Sigma$ intersects $\Sigma$. This bound is given by the ratio between $\mathrm{vol}(\Sigma)$ and $\mathrm{vol}(\mathrm{SL}_n({\mathbb{Z}}) \backslash\mathrm{SL}_n({\mathbb{R}}))$ – see Corollary \[corol1\]. We also give a proof that this result is consistent with a recent work of M. Orr [@martinorr], which generalizes a previous result of P. Habegger and J. Pila [@habegger] on the height of such elements $\gamma$, motivated by the study of Shimura varieties and their unlikely intersections. More precisely, Orr’s result gives, as a corollary, an upper bound for the number of intersecting Siegel sets while our work provides a lower bound for this number (see Corollary \[final\]).
It would be interesting to compute the volumes of Siegel sets in other cases, for example for the action of well known Bianchi groups $\Gamma_d = \mathrm{SL}_2(\mathcal{O}_d)$ on the hyperbolic three-dimensional space $\mathbb{H}^3$. In this case we should have to deal with another difficulty when describing Siegel sets, because of the fact that as $d$ grows the quotients $\Gamma_d \backslash \mathbb{H}^3$ have a growing number of cusps. It would be worth doing these computations in the future, and then comparing them to the results obtained in this paper.
The Iwasawa decomposition of $\mathrm{SL} _n (\mathbb{R})$. {#iwasawa}
==============================
Let $n\geq 2$, $G=\mathrm{SL}_n(\mathbb{R})$ and $\Gamma = \mathrm{SL}_n(\mathbb{Z})$. Consider the action of $\Gamma$ by left translations on $G$ and let
$$K = \mathrm{SO}_n;$$ $$A =\left\{\mbox{diag}(a_1,\ldots ,a_n); \displaystyle{ \prod_{i=1}^n{a_i} = 1} ; a_i > 0, \mbox{ for any } i=1,\ldots, n\right\};$$ $$N =\left\{(n_{ij})_{i,j} \in G ; n_{ii}=1 \mbox{ and } n_{ij}= 0 \mbox{ for } i>j\right\}.$$
The product map $$\Phi: K\times A \times N \longrightarrow G$$ $$(k,a,n)\mapsto kan$$ is a homeomorphism.
We can construct an inverse map for $\Phi$ by using the Gram-Schmidt orthonormalization process.
Take $g\in G$ and let $x_1, \ldots ,x_n$ be its columns. Then define inductively $y_1, \ldots ,y_n$ by $$y_1 = \frac{x_1}{\left\|x_1\right\|};$$ $$y_i = \frac{\widetilde{y}_i}{\left\|\widetilde{y}_i\right\|}, \mbox{ where } \widetilde{y}_i = x_i - \displaystyle{\sum_{l=1}^{i-1}{\left\langle x_i,y_l\right\rangle}y_l}; \mbox{ for } i = 2, \ldots, n.$$
Let $e_1, \ldots ,e_n$ be the standard orthonormal basis of $\mathbb{R}^n$. Then there exists a unique $k\in {\mathrm{SO} _n}$ such that $k(y_i) = e_i$, $\mbox{ for any } i = 1, \ldots, n$. Therefore $$k(\widetilde{y}_i) = k(\left\|\widetilde{y}_i\right\| y_i) = \left\|\widetilde{y}_i\right\| k(y_i) = \left\|\widetilde{y}_i\right\|e_i,\mbox{ for any } i=1, \ldots , n.$$ So there is a diagonal matrix $a = \mbox{diag}(\left\|\widetilde{y}_1\right\|, \ldots, \left\|\widetilde{y}_n\right\|)$, such that $$k(\widetilde{y}_i) =a(e_i), \mbox{ for any } i=1, \ldots , n.$$ Also, it is easy to see that $y_i \in \left\langle x_1,\ldots , x_i\right\rangle, \mbox{ for any } i=1, \ldots , n$. Thus we have: $$g^{-1}\widetilde{y}_i= g^{-1}(x_i - \displaystyle{\sum_{l=1}^{i-1}{\left\langle x_i,y_l\right\rangle}y_l}) \in g^{-1}x_i + g^{-1}\left\langle x_1, \ldots , x_{i-1}\right\rangle$$ $$\Rightarrow g^{-1}\widetilde{y}_i \in e_i + \left\langle e_1, \ldots , e_{i-1}\right\rangle.$$
From this, we conclude that there exists $u \in N $ such that $g^{-1}\widetilde{y}_i= u e_i$, for every $i= 1, \ldots, n$. Therefore, $$u^{-1}g^{-1}\widetilde{y}_i= e_i = a^{-1}k(\widetilde{y}_i), \mbox{ for any } i \Rightarrow u^{-1}g^{-1} = a^{-1}k \Rightarrow g=k^{-1}a u^{-1}.$$ It is easy to see now that $\mathrm{det}(a) =1$, so $a \in A$ and thus we can define a continuous inverse map $g\in G \mapsto (k^{-1}, a, u^{-1} ) \in K\times A \times N$.
The previous lemma gives us the Iwasawa decomposition $G=KAN$ of $\mathrm{SL} _n (\mathbb{R})$. Note that $K\cap A = K\cap N = A \cap N = \{I\}$ and that for this Iwasawa decomposition, $ AN = NA $ and $ K(AN) = (AN)K $ (see [@morris], page 148).
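Numerically, the Gram-Schmidt construction in the proof above amounts to a QR decomposition with signs adjusted so that the diagonal factor is positive. A minimal sketch, assuming NumPy (the function name `iwasawa` is ours):

```python
import numpy as np

def iwasawa(g):
    """Decompose g in SL_n(R) as g = k a u with k in SO(n),
    a positive diagonal, and u upper unipotent."""
    q, r = np.linalg.qr(g)
    s = np.sign(np.diag(r))          # fix signs so that diag(r) > 0
    k = q * s                        # scales the columns of q by s
    r = s[:, None] * r               # scales the rows of r by s
    a = np.diag(np.diag(r))
    u = np.linalg.solve(a, r)        # a^{-1} r is upper unipotent
    return k, a, u

# example: a random matrix rescaled to determinant one
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))
m[:, 0] *= np.sign(np.linalg.det(m))        # make det > 0
g = m / abs(np.linalg.det(m)) ** (1 / 4)    # now det g = 1
k, a, u = iwasawa(g)
```

The sign fix guarantees that $a$ has positive diagonal entries, so $k$ lands in $\mathrm{SO}_n$ whenever $\det g = 1$.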
Haar measure on $\mathrm{SL} _n (\mathbb{R})$. {#haar}
=================
Given a locally compact Hausdorff topological group $G$, a left invariant Haar measure on $G$ is, by definition, a regular Borel measure $\mu$ on $G$ such that for all $g \in G$ and all Borel sets $E \subset G$ we have $\mu (gE) = \mu (E)$. It is well known that every connected Lie group admits such a Haar measure. Moreover, it is unique up to scalar multiples. We can define analogously right-invariant Haar measures. See [@venka] for more results about Haar measures on Lie groups.
Since $G = \mathrm{SL} _n (\mathbb{R})$ is unimodular, i.e. the left and right invariant Haar measures coincide, and $dg$ is invariant under left translation by elements of $K$ and under right translation by elements of $AN$, the Haar measure of $G$ in $k, a, u$ coordinates is given by the product measure $dg=dk\, da\, du$, where $da$, $du$ and $dk$ are the Haar measures on $A$, $N$ and $K$, respectively. This means that for every compactly supported continuous function $f$ on $G$, we have $$\int_G{f(g) dg} = \int_K \int_{A} \int_{N} {f(kau) du \, da \, dk}.$$
It can be proved by induction on $n$ that the Haar measure on $N$ is given by $du= \displaystyle \prod_{i<j}{du_{ij}}$. It is usually convenient to change the order of integration on the variables $a$ and $u$, and to this end we can change the coordinates from $u$ to $v=aua^{-1}$. Then $v$ is also an upper triangular unipotent matrix of the form $$v = Id + \displaystyle \sum_{i<j\leq n} \frac{a_i}{a_j}u_{ij}E_{ij}.$$ It is easily seen that $dv= \displaystyle \prod_{i<j}dv_{ij} = \displaystyle \prod_{i<j}\frac{a_i}{a_j}du_{ij}$. This gives us $$\int_G{f(g) dg} = \int_K \int_{A} \int_{N} {f(kva) dv \, da \, dk}= \int_K \int_{N} \int_{A} {f(kau)\displaystyle \prod_{i<j}\frac{a_i}{a_j} da \, du \, dk}.$$ Also for convenience, we change coordinates from $ au $ to $ k^{-1}auk $ in the last integral. This has Jacobian equal to 1 (for each $ k\in K $), so we get:
$$\int_G{f(g) dg} = \int_{N} \int_{A}\int_K {f(auk)\displaystyle \prod_{i<j}\frac{a_i}{a_j} dk \, da \, du}.$$
In this work, we consider the following Haar measure on $K$: the isotropy group of $e_n =(0, \ldots, 0, 1)$ under the action of ${\mathrm{SO} _n}$ on ${\mathbb{S}}^{n-1}$ is isomorphic to $\mathrm{SO}_{n-1}$, so ${\mathbb{S}}^{n-1} \cong \mathrm{SO}_{n-1} \backslash {\mathrm{SO} _n}$. The natural map $\pi : {\mathrm{SO} _n}\rightarrow {\mathbb{S}}^{n-1}$ becomes a Riemannian submersion after rescaling by a factor of $\frac{1}{\sqrt{2}}$. Thus
$$\mathrm{vol}({\mathrm{SO} _n}) = 2^{\frac{1}{2}(n-1)} \mathrm{vol}(\mathbb{S}^{n-1})\cdot \mathrm{vol}(\mathrm{SO}_{n-1}).$$
By using induction and the fact that $\mathrm{vol}({\mathbb{S}}^{n-1}) = \frac{2\pi ^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}$, we obtain $$\mathrm{vol}({\mathrm{SO} _n}) = 2^{\frac{1}{4}n(n-1)} \mathrm{vol}(\mathbb{S}^{n-1})\cdot \mathrm{vol}(\mathbb{S}^{n-2}) \ldots \mathrm{vol}(\mathbb{S}^{1}) = 2^{(n-1)(\frac{n}{4}+1)} \prod^n_{i=2}{\frac{\pi^\frac{i}{2}}{\Gamma (\frac{i}{2})}}.$$
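The closed form above can be checked numerically against the recursion $\mathrm{vol}({\mathrm{SO} _n}) = 2^{\frac{1}{2}(n-1)} \mathrm{vol}(\mathbb{S}^{n-1}) \mathrm{vol}(\mathrm{SO}_{n-1})$. A small sketch (function names are ours):

```python
from math import gamma, pi, prod, isclose

def vol_sphere(m):
    """Volume (surface measure) of the unit sphere S^m in R^{m+1}."""
    return 2 * pi ** ((m + 1) / 2) / gamma((m + 1) / 2)

def vol_so_closed(n):
    """Closed form: 2^{(n-1)(n/4+1)} prod_{i=2}^n pi^{i/2} / Gamma(i/2)."""
    return 2 ** ((n - 1) * (n / 4 + 1)) * prod(
        pi ** (i / 2) / gamma(i / 2) for i in range(2, n + 1))

def vol_so_recursive(n):
    """Recursion vol(SO_n) = 2^{(n-1)/2} vol(S^{n-1}) vol(SO_{n-1})."""
    v = 1.0                          # vol(SO_1) = 1
    for m in range(2, n + 1):
        v *= 2 ** ((m - 1) / 2) * vol_sphere(m - 1)
    return v
```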
It remains to define a Haar measure on $A$. We claim that $da = \displaystyle \prod _{i=1}^{n-1} {\frac{da_i}{a_i}}$ is such a measure. Indeed, let $\phi: A \rightarrow {\mathbb{R}}^{n-1}$ be the map $$a = \left(
\begin{array}{ccccc}
a_{1} & 0 & \ldots & 0 & 0 \\
0 & a_{2} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & a_{n-1} & 0\\
0 & 0 & \cdots & 0 & \displaystyle \prod _{i=1}^{n-1}{a_i^{-1}}\\
\end{array}
\right) \mapsto (t_1 , \ldots , t_{n-1}) = (\log a_1, \ldots ,\log a_{n-1}).$$ As $\phi$ is a group isomorphism and Haar measure is preserved by isomorphisms we get that $da = \displaystyle \prod _{i=1}^{n-1} dt_i = \displaystyle \prod _{i=1}^{n-1} {\frac{da_i}{a_i}}$ is a Haar measure on $A$.
Siegel Sets for ${\mathrm{SL} _n (\mathbb{Z})}$. {#Siegelsets}
=================
Let $\Gamma$ be some group acting properly discontinuously on a topological space $X$. We call $\mathcal{F} \subset X$ a coarse fundamental domain for $\Gamma$ if:
- $\Gamma \mathcal{F} = X$;
- $\left\{\gamma \in \Gamma ; \gamma \mathcal{F} \cap \mathcal{F} \neq \emptyset \right\}$ is finite.
A Siegel set in $\mathrm{SL} _n (\mathbb{R})$ is a set ${\Sigma _{t,\lambda}}$ of the form $$\Sigma _{t,\lambda} = A_tN_\lambda K,$$ where $t,\lambda$ are positive real numbers, $$A_t = \left\{a \in A ; \frac{a_i}{a_{i+1}} \leq t , \mbox{ for any } i =1, \ldots , n-1\right\}$$ and $$N_\lambda = \left\{u \in N ; \left|u_{ij}\right|\leq \lambda , \mbox{ for any } 1 \leq i < j \leq n\right\}.$$
For certain parameters $t, \lambda$, the Siegel sets ${\Sigma _{t,\lambda}}$ are coarse fundamental domains for ${\mathrm{SL} _n (\mathbb{Z})}$. Another important property is that they have finite volume. Siegel sets can also be defined more generally for lattices in other semisimple Lie groups, as can be seen in Chapter 19 of Morris [@morris]. In many cases, a finite union of copies of Siegel sets glues together to form a coarse fundamental domain for a general lattice.
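For illustration, membership of a concrete matrix in $\Sigma_{t,\lambda}$ can be tested numerically: one writes $g = auk$ (the $ANK$ order used above) via a QR-based $KAN$ decomposition of $g^{-1}$ and checks the defining inequalities. A sketch assuming NumPy; the function names are ours:

```python
import numpy as np

def ank_coordinates(g):
    """Write g in SL_n(R) as g = a u k, via the QR-based
    K A N decomposition of g^{-1} = k1 a1 u1."""
    q, r = np.linalg.qr(np.linalg.inv(g))
    s = np.sign(np.diag(r))
    k1 = q * s                        # k1 in SO(n)
    r = s[:, None] * r
    a1 = np.diag(np.diag(r))
    u1 = np.linalg.solve(a1, r)
    a1_inv = np.linalg.inv(a1)
    # then a = a1^{-1}, u = (a1 u1 a1^{-1})^{-1}, k = k1^{-1}
    u = np.linalg.inv(a1 @ u1 @ a1_inv)
    return np.diag(a1_inv), u, k1.T

def in_siegel_set(g, t, lam, tol=1e-12):
    """Check whether g lies in Sigma_{t,lambda} = A_t N_lambda K."""
    d, u, _ = ank_coordinates(g)
    ratios_ok = all(d[i] / d[i + 1] <= t + tol for i in range(len(d) - 1))
    off_diag_ok = np.max(np.abs(u - np.eye(len(d)))) <= lam + tol
    return ratios_ok and off_diag_ok
```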
In this section we compute the volumes of the Siegel sets in $\mathrm{SL} _n (\mathbb{R})$. We will use the Haar measure on $G$ given in Section \[haar\] in $a, u, k$ coordinates.
$$\mathrm{vol}({\Sigma _{t,\lambda}}) = \frac{1}{2} \mathrm{vol}(\mathrm{SO}_n) (2\lambda)^{\frac{n(n-1)}{2}}\frac{t^{\frac{n(n^2-1)}{6}}}{((n-1)!)^2} .
\label{eq1}$$
$$\mathrm{vol}({\Sigma _{t,\lambda}}) = \int_{\left|u_{ij}\right| \leq \lambda }\int_{\frac{a_i}{a_{i+1}} \leq t} \int_{K} \prod_{i<j}{\frac{a_i}{a_j}}
dk \prod_{i=1}^{n-1} {\frac{da_i}{a_i}} \prod_{1\leq i <j \leq n}{du_{ij}}.$$
$$= \mathrm{vol}(K) (2\lambda)^{\frac{n(n-1)}{2}} \int_{\frac{a_i}{a_{i+1}} \leq t}{\prod_{i<j}{\frac{a_i}{a_j}}\prod_{i=1}^{n-1}{\frac{da_i}{a_i}}}.$$
To compute the integral over $a_1, \ldots, a_n$ (with the condition $\displaystyle \prod_{i=1}^{n}{a_i} = 1$), we change variables from $a_1, \ldots, a_n$ to the variables $$b_i = \frac{a_i}{a_{i+1}}, \mbox{ for any } i = 1, \ldots , n-1.$$
By elementary computation, we get $$\displaystyle \prod_{i<j}{\frac{a_i}{a_j}} = \prod_{i=1}^{n-1}{b_i ^{i(n-i)}}.$$ Moreover, as $a_i = b_i a_{i+1}$, the Jacobian of the change of coordinates from $a_i$ to $b_i$ is $\frac{1}{2a_1}$. The integral then becomes $$\int_{b_i \leq t} {\displaystyle \prod_{i=1}^{n-1}{b_i ^{i(n-i)}} \frac{1}{b_1 a_2b_2 a_3 \ldots b_{n-1} a_n} \frac{1}{2a_1}\prod_{i\leq n-1}{db_i}}$$ $$=\frac{1}{2} \int_{b_i \leq t} {\prod_{i=1}^{n-1}{b_i ^{[i(n-i)-1]}} \prod_{i\leq n-1}{db_i}} = \frac{1}{2} \prod_{i=1}^{n-1}{\frac{t^{ni-i^2}}{(ni - i^2)}} = \frac{1}{2} \frac{t^{\frac{n(n^2-1)}{6}}}{((n-1)!)^2}.$$
Thus we obtain $$\mathrm{vol}({\Sigma _{t,\lambda}}) = \frac{1}{2} \mathrm{vol}(\mathrm{SO}_n) (2\lambda)^{\frac{n(n-1)}{2}}\frac{t^{\frac{n(n^2-1)}{6}}}{((n-1)!)^2} ,$$ as claimed.
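The final product identity used in the proof, $\prod_{i=1}^{n-1} t^{i(n-i)}/(i(n-i)) = t^{n(n^2-1)/6}/((n-1)!)^2$, can be verified numerically for small $n$ (a sketch; function names are ours):

```python
from math import prod, sqrt, isclose

def product_form(n, t):
    """prod_{i=1}^{n-1} t^{i(n-i)} / (i(n-i)), as in the proof."""
    return prod(t ** (i * (n - i)) / (i * (n - i)) for i in range(1, n))

def closed_form(n, t):
    """t^{n(n^2-1)/6} / ((n-1)!)^2."""
    fact = prod(range(1, n))          # (n-1)!
    return t ** (n * (n * n - 1) / 6) / fact ** 2
```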
Borel proves in [@borel] the following theorem:
For $t\geq \frac{2}{\sqrt{3}}$ and $\lambda \geq \frac{1}{2}$, one has $\Sigma _{t,\lambda} \Gamma = G$. Moreover, $ \Sigma _{t,\lambda} $ is a coarse fundamental domain for $ \Gamma $ in $ G $.
The quotient $ {\Gamma\backslash G}$ has finite volume, which satisfies $$\mathrm{vol}({\Gamma\backslash G}) \prec e^{cn^3}, \mbox{ as } n \rightarrow \infty,$$ for some positive constant $ c $.
It is clear that $\mathrm{vol}({\Gamma\backslash G}) < \infty$, since $\Sigma _{t,\lambda}$ has finite volume and it contains a fundamental domain for ${\mathrm{SL} _n (\mathbb{Z})}$ if $t\geq \frac{2}{\sqrt{3}}$ and $\lambda \geq \frac{1}{2}$. Thus $\mathrm{vol}({\Gamma\backslash G}) \leq \mathrm{vol}(\Sigma_{t,\lambda})$ for these values of $ t $ and $ \lambda $.
By taking $\lambda=\frac{1}{2}$ and $t= \frac{2}{\sqrt{3}}$ in formula \[eq1\], we get that $$\mathrm{vol}(\Sigma_{{\scriptscriptstyle{\frac{2}{\sqrt{3}}, \frac{1}{2}}}}) = 2^{(n-1)(\frac{n}{4}+1)-1} \Big( \prod ^n_{i=2}{\frac{\pi^\frac{i}{2}}{\Gamma (\frac{i}{2})}} \Big) \frac{(\frac{2}{\sqrt{3}})^{\frac{n(n^2-1)}{6}}}{((n-1)!)^2}$$ $$=\frac{ 2^{\frac{2n^3 +3n^2+ 7n -24}{12}} \pi^{\frac{n^2+n-2}{4}}}{3^{\frac{n(n^2-1)}{12}} ((n-1)!)^2 \displaystyle \prod^n_{i=2}{\Gamma (\frac{i}{2})}}.$$ Using Stirling’s formula, this volume is easily seen to grow asymptotically like $e^{cn^3}$ for some positive constant $c$, and this finishes the proof.
On the other hand, as we will see in the next section, $\mathrm{vol}({\mathrm{SL} _n (\mathbb{Z})}\backslash \mathrm{SL} _n (\mathbb{R}))$ computed with respect to the same normalization of the Haar measure goes to zero as $n$ grows.
Volume of ${\mathrm{SL} _n (\mathbb{Z})}\backslash \mathrm{SL} _n (\mathbb{R})$. {#domfund}
===========
It is a well-known fact that $\mathrm{vol}({\mathrm{SL} _n (\mathbb{Z})}\backslash \mathrm{SL} _n (\mathbb{R}))$ is finite. Our goal is to calculate it, with respect to the same normalization of the Haar measure used in the previous section. The whole computation follows the original approach of Siegel [@siegel], but we have to be careful with the normalization constants. We use Poisson summation, induction and the previously known fact that $\mathrm{vol}(\mathrm{SL}_2({\mathbb{Z}}) \backslash \mathrm{SL}_2({\mathbb{R}})) = \sqrt{2}\zeta (2)$, which can be proved in a similar way (see [@garret], taking into account the different normalization of $\mathrm{vol}(\mathrm{SO} _2)$ that we are considering).
We first state the Poisson summation formula, which plays a fundamental role in the computations; the reader may refer to [@psf].
Given a lattice $\Lambda$ in ${\mathbb{R}}^n$, we define $\left|\Lambda \right|$ to be the covolume of $\Lambda$, i.e. the volume of ${\mathbb{R}}^n/ \Lambda$ and the dual lattice of $\Lambda$ by $$\Lambda^* = \left\{y \in {\mathbb{R}}^n; \left\langle x,y\right\rangle \in {\mathbb{Z}}\mbox{ for any } x \in \Lambda \right\}.$$
\[psf\] Given any lattice $\Lambda$ in ${\mathbb{R}}^n$, a vector $w \in {\mathbb{R}}^n$ and an admissible function $f:{\mathbb{R}}^n \rightarrow {\mathbb{R}}$ in $\mathcal{L}^1$, we have $$\displaystyle \sum_{x \in \Lambda}{f(x+w)} = \frac{1}{\left|\Lambda\right|}\sum_{t \in \Lambda^*}{e^{-2\pi i \left\langle w,t\right\rangle}\hat{f}(t)}.$$
Here, $\hat{f}(t) = \int_{{\mathbb{R}}^n}{f(x) e^{2\pi i\left\langle x,t\right\rangle} dx}$ is the Fourier transform of $f$, and admissibility of $f$ means that there exist constants $\epsilon, \delta > 0$ such that $\left|f(x)\right|$ and $\left|\hat{f}(x)\right|$ are bounded above by $\epsilon(1+\left|x\right|)^{-n-\delta}$.
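As a quick illustration, the formula can be checked numerically in dimension one for the self-dual Gaussian $f(x) = e^{-\pi x^2}$ and the lattice $\Lambda = \alpha{\mathbb{Z}}$, whose covolume is $\alpha$ and whose dual is $\frac{1}{\alpha}{\mathbb{Z}}$ (a sketch; function names are ours):

```python
from math import exp, cos, pi, isclose

def lhs(alpha, w, nmax=60):
    """sum_{x in Lambda} f(x + w) for Lambda = alpha*Z, f(x) = exp(-pi x^2)."""
    return sum(exp(-pi * (alpha * n + w) ** 2) for n in range(-nmax, nmax + 1))

def rhs(alpha, w, nmax=60):
    """Poisson side: (1/|Lambda|) sum over the dual lattice (1/alpha)Z of
    exp(-2 pi i w t) fhat(t); the Gaussian is its own Fourier transform,
    and the imaginary parts cancel in pairs m <-> -m, leaving a cosine."""
    return sum(cos(2 * pi * w * m / alpha) * exp(-pi * (m / alpha) ** 2)
               for m in range(-nmax, nmax + 1)) / alpha
```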
Let then $f \in \mathcal{L}^1$ be an admissible function on ${\mathbb{R}}^n$; we may further require $f$ to be a $ C^{\infty} $ function with compact support. We then define $F:G\rightarrow{\mathbb{R}}$ by $$F(g) = \displaystyle \sum_{v \in {\mathbb{Z}}^n}{f(vg)}.$$ Here we consider row vectors $v \in {\mathbb{R}}^n$ multiplied by elements of $ G $ on the right. Clearly, $F$ is left $\Gamma$-invariant, since ${\mathbb{Z}}^n \gamma = {\mathbb{Z}}^n$ for every $\gamma \in {\mathrm{SL} _n (\mathbb{Z})}$ under this right action.
Consider $\int_{{\Gamma\backslash G}}{F(g)dg}$. We will use this integral to calculate $\mathrm{vol}({\Gamma\backslash G})$.
Let $$Q = \mathrm{stab}_G(e) = \left\{ \left( \begin{array}{cc}
h & v \\
0 & 1 \\
\end{array} \right); h \in \mathrm{SL}_{n-1}({\mathbb{R}}), v \in {\mathbb{R}}^{n-1}\right\},$$ where $e=(0,\ldots, 0,1) \in {\mathbb{R}}^{n}$, and write $Q_{{\mathbb{Z}}} = Q \cap \Gamma $. Using linear algebra over ${\mathbb{Z}}$, note that $${\mathbb{Z}}^{n} - \{0\} = \displaystyle\bigcup_{\ell >0} \displaystyle\bigcup_{\gamma \in {Q_{\mathbb{Z}} \backslash \Gamma}} \ell e \gamma,$$ where $\ell$ runs over positive integers.
Then we can write $$\displaystyle \int_{{\Gamma\backslash G}}{F(g)dg} = \int_{{\Gamma\backslash G}}{f(0)dg} + {\int_{{\Gamma\backslash G}}{\sum_{\ell >0}\sum_{\gamma \in {Q_{\mathbb{Z}} \backslash \Gamma}}f(\ell e \gamma g)dg}}$$ $$= \mathrm{vol}({\Gamma\backslash G})f(0) + \sum_{\ell >0} \int_{Q_{{\mathbb{Z}}} \backslash G}{f(\ell eg)dg}.$$
For the second equality note that a fundamental domain for $Q_{{\mathbb{Z}}}$ in $G$ is the union of images of a fundamental domain for $\Gamma$ in $G$ by representatives of the classes in ${Q_{\mathbb{Z}} \backslash \Gamma}$. In addition, the decay condition on $f$ ensures that the integral over $Q_{{\mathbb{Z}}} \backslash G$ is finite. Indeed, in his article [@siegel45], Siegel proves that the function $ F(g) $ is integrable over a fundamental domain for $ {\Gamma\backslash G}$. On pages 344-345 of \[loc. cit.\] we can see that the integrals over $ Q_{\mathbb{Z}}\backslash G$ are also convergent (for any fixed $ \ell\in \mathbb{N} $). We observe that although he uses a different decomposition of $ G $ and a different normalization of the Haar measures, this does not change the finiteness of the integrals.
Write $$P = \left\{ \left( \begin{array}{cc}
h & * \\
0 & \frac{1}{\mbox{det}(h)} \\
\end{array} \right); h \in \mathrm{GL}_{n-1}({\mathbb{R}}) , \mbox{det}(h) > 0 \mbox{ and } * \in {\mathbb{R}}^{n-1} \right\};$$ $$N' = \left\{ \left( \begin{array}{cc}
I_{n-1} & v \\
0 & 1 \\
\end{array} \right); v \in {\mathbb{R}}^{n-1} \right\}, N'_{{\mathbb{Z}}} = N' \cap \Gamma.$$ $$M = \left\{ \left( \begin{array}{cc}
h & 0 \\
0 & 1 \\
\end{array} \right); h \in \mathrm{SL}_{n-1}( {\mathbb{R}}) \right\}, M_{{\mathbb{Z}}} = M \cap \Gamma;$$ $$A' = \left\{ \left( \begin{array}{cc}
t^{{\scriptscriptstyle\frac{1}{n-1}}}I_{n-1} & 0 \\
0 & t^{-1} \\
\end{array} \right); t > 0 \right\};$$
Note that $P = N'MA' \supset NA$, $Q= N'M$ and $G = N'MA'K$. However, this time $N'MA'$ intersects $K$ non-trivially, i.e. this is not an Iwasawa decomposition. The product $N'MA'K$ projects onto $G$ with fiber $\mathrm{SO}_{n-1}$. Therefore we get the following
For every left $Q_{\mathbb{Z}}$-invariant function $\Phi$, we have $$\displaystyle \int_{{Q_{\mathbb{Z}} \backslash G}}{\Phi (g)dg} = \frac{1}{\mathrm{vol}(\mathrm{SO}_{n-1})} \int_{Q_{{\mathbb{Z}}} \backslash ( N'MA'K)}{\Phi (n'ma'k)dn'\, dm\,da' \, dk}$$ where $dg$ is the Haar measure in $ G $ coming from its Iwasawa decomposition (as in Section \[Siegelsets\]).
Here $dn'$, $dm$, $da'$ and $dk$ are the left Haar measures on $N'$, $M$, $A'$ and $K$, respectively. We see that $ dn' = \displaystyle \prod_{i=1}^{n-1}dv_i $ and that $ M $ is isomorphic to $ \mathrm{SL}_{n-1}({\mathbb{R}}) $, so $ dm $ is the Haar measure of that group. This allows us to use induction in the calculations. On the other hand, $ A' $ is isomorphic to $ {\mathbb{R}}_{>0} $ via the isomorphism
t^{{\scriptscriptstyle\frac{1}{n-1}}}I_{n-1} & 0 \\
0 & t^{-1} \\
\end{array} \right) \in A' \mapsto t \in {\mathbb{R}}_{>0}.$$ Thus we have $ da'= \frac{dt}{t} $ where $dt$ is the usual measure in $ {\mathbb{R}}$.
Again it will be convenient to change the order of integration, by letting the variable $a' \in A'$ to be the last one. This will give us $d(a' q a'^{-1}) = t^n dq$, for $q = n'm \in N'M$. Indeed, for $ a' = \left( \begin{array}{cc}
t^{{\scriptscriptstyle\frac{1}{n-1}}}I_{n-1} & 0 \\
0 & t^{-1} \\
\end{array} \right) \in A'$ and $ q = \left( \begin{array}{cc}
h & v \\
0 & 1 \\
\end{array} \right) \in Q $, we have $ a'q (a')^{-1} = \left( \begin{array}{cc}
h & t^{\frac{n}{n-1}}v \\
0 & 1 \\
\end{array} \right)$, and thus the $ M $-contribution to the measure does not change, but the $ N' $-contribution is multiplied by $ (t^{\frac{n}{n-1}})^{n-1} = t^n $, so we obtain $d(a' q a'^{-1}) = t^n dq$ as stated.
Then, if we require $f$ to be $K$-invariant, the integral $\int_{{\Gamma\backslash G}} {F(g)dg}$ becomes equal to
$$\mathrm{vol}({\Gamma\backslash G})f(0) + \frac{1}{\mathrm{vol}(\mathrm{SO}_{n-1})}\sum_{\ell >0} \int_{Q_{{\mathbb{Z}}} \backslash ( N'MA' K)}{f(\ell en'ma'k) dn' \, dm\, da' \, dk}$$ $$= \mathrm{vol}({\Gamma\backslash G})f(0) + \frac{\mathrm{vol}(Q_{{\mathbb{Z}}} \backslash K )}{\mathrm{vol}(\mathrm{SO}_{n-1})}\sum_{\ell >0} \int_{Q_{{\mathbb{Z}}} \backslash ( N'M)}\int_{A'} {f(\ell en'ma')\displaystyle t^n da' \, dn' \, dm }.$$
We have $K \cap Q_{{\mathbb{Z}}} = \mathrm{SO}_{n-1}({\mathbb{Z}})$. Noting that $${\mathbb{S}}^{n-1} \cong \mathrm{SO}_{n-1} \backslash \mathrm{SO}_n \cong \frac{\mathrm{SO}_{n-1}({\mathbb{Z}}) \backslash \mathrm{SO}_n}{\mathrm{SO}_{n-1}({\mathbb{Z}}) \backslash \mathrm{SO}_{n-1}},$$ we get $$\mathrm{vol}({\mathbb{S}}^{n-1}) = \mathrm{vol}(\mathrm{SO}_{n-1} \backslash \mathrm{SO}_n) = \frac{\mathrm{vol}(\mathrm{SO}_{n-1}({\mathbb{Z}}) \backslash \mathrm{SO}_n)}{\mathrm{vol}(\mathrm{SO}_{n-1}({\mathbb{Z}}) \backslash \mathrm{SO}_{n-1})}.$$
As $\mathrm{SO}_{n}({\mathbb{Z}})$ acts properly and freely on $\mathrm{SO}_n$, for any $n \in \mathbb{N}$ we have that $$\mathrm{SO}_n \longrightarrow \mathrm{SO}_{n}({\mathbb{Z}}) \backslash \mathrm{SO}_n$$ is a finite covering with $\# \mathrm{SO}_{n}({\mathbb{Z}})$ sheets, which gives us $$\mathrm{vol}(\mathrm{SO}_n) = \#(\mathrm{SO}_{n}({\mathbb{Z}})) \mathrm{vol}(\mathrm{SO}_{n}({\mathbb{Z}}) \backslash \mathrm{SO}_n).$$ Altogether, we obtain: $$\mathrm{vol}(Q_{{\mathbb{Z}}} \backslash K ) = \mathrm{vol}(\mathrm{SO}_{n-1}({\mathbb{Z}}) \backslash \mathrm{SO}_n) = \frac{\mathrm{vol}({\mathbb{S}}^{n-1}) \mathrm{vol}(\mathrm{SO}_{n-1})}{\# (\mathrm{SO}_{n-1}({\mathbb{Z}}))}.$$
As the integrand is invariant under $N'M$ ($en'm = e$, for any $n' \in N'$ and $m \in M$) and the volume of $N'_{{\mathbb{Z}}} \backslash N'$ is $1$, this implies
$$\mathrm{vol}({\Gamma\backslash G})f(0) + \frac{\mathrm{vol}(Q_{{\mathbb{Z}}} \backslash K )}{\mathrm{vol}(\mathrm{SO}_{n-1})}\sum_{\ell >0} \int_{Q_{{\mathbb{Z}}} \backslash ( N'M)}\int_{A'} {f(\ell en'ma')\displaystyle t^n da'\, dn' \, dm}$$ $$=\mathrm{vol}({\Gamma\backslash G})f(0) + \frac{\mathrm{vol}(Q_{{\mathbb{Z}}} \backslash K )}{\mathrm{vol}(\mathrm{SO}_{n-1})} \mathrm{vol}(\mathrm{SL}_{n-1}( {\mathbb{Z}}) \backslash \mathrm{SL}_{n-1}( {\mathbb{R}})) \sum_{\ell >0} \int_{A'} \! \! \! \! \! {f(\ell ea')t^n da'}$$ $$= \mathrm{vol}({\Gamma\backslash G})f(0) + \frac{\mathrm{vol}({\mathbb{S}}^{n-1})}{\# (\mathrm{SO}_{n-1}({\mathbb{Z}}))} \mathrm{vol}(\mathrm{SL}_{n-1}( {\mathbb{Z}}) \backslash \mathrm{SL}_{n-1}( {\mathbb{R}})) \sum_{\ell >0} \int_{A'} f(\ell ea')t^n da'$$
By replacing $a' \in A'$ by $t \in {\mathbb{R}}_{>0}$ and using the description of $ da' $, we get to $$\mathrm{vol}({\Gamma\backslash G})f(0) + \frac{\mathrm{vol}({\mathbb{S}}^{n-1}) }{\# (\mathrm{SO}_{n-1}({\mathbb{Z}}))} \mathrm{vol}(\mathrm{SL}_{n-1}( {\mathbb{Z}}) \backslash \mathrm{SL}_{n-1}( {\mathbb{R}})) \sum_{\ell >0} \int_{0}^{\infty} f(\ell et)t^n \frac{dt}{t}.$$
By replacing $t$ by $\frac{t}{\ell}$, we obtain $$\mathrm{vol}({\Gamma\backslash G})f(0) + \frac{\mathrm{vol}({\mathbb{S}}^{n-1}) }{\# (\mathrm{SO}_{n-1}({\mathbb{Z}}))} \mathrm{vol}(\mathrm{SL}_{n-1}( {\mathbb{Z}}) \backslash \mathrm{SL}_{n-1}( {\mathbb{R}})) \sum_{\ell >0}\frac{1}{\ell ^n} \int_{0}^{\infty} f(et)t^n \frac{dt}{t}.$$
By using polar coordinates in ${\mathbb{R}}^n = \{ (v, t), v \in \mathbb{S}^{n-1}, t \in {\mathbb{R}}_{>0}\}$, we get $$\mathrm{vol}({\mathbb{S}}^{n-1})\int_{0}^{\infty} f(et)t^n \frac{dt}{t} = \int_{{\mathbb{S}}^{n-1}}\int_{0}^{\infty} f(v,t)t^{n-1} dtdv = \int_{{\mathbb{R}}^{n}} f(x)dx = \hat{f}(0).$$
Summarizing, what we have obtained so far is the following
The initial integral becomes $$\displaystyle \int_{{\Gamma\backslash G}}{F(g)dg} = \mathrm{vol}({\Gamma\backslash G})f(0) + \frac{\mathrm{vol}(\mathrm{SL}_{n-1}( {\mathbb{Z}}) \backslash \mathrm{SL}_{n-1}( {\mathbb{R}}))}{\# (\mathrm{SO}_{n-1}({\mathbb{Z}}))} \zeta(n) \hat{f}(0),$$ where $\zeta(n) = \displaystyle \sum_{\ell \geq 1}{\frac{1}{\ell^n}}$ is the Riemann zeta function.
The previous result allows us to compute explicitly the value of $ \mathrm{vol}({\Gamma\backslash G}) $: $$\mathrm{vol}({\Gamma\backslash G}) = \frac{ \mathrm{vol}(\mathrm{SL}_{n-1}( {\mathbb{Z}}) \backslash \mathrm{SL}_{n-1}({\mathbb{R}}))}{\# (\mathrm{SO}_{n-1}({\mathbb{Z}}))} \zeta(n) = \sqrt{2} \displaystyle \prod_{i=2}^{n}\zeta(i) \prod_{i=1}^{n-1}{\frac{1}{\# (\mathrm{SO}_{i}( {\mathbb{Z}}))}} .$$
For every $g \in G$, we are going to apply the Poisson summation formula to the lattice $\Lambda = \left\{vg;v \in {\mathbb{Z}}^n \right\}$ in ${\mathbb{R}}^n$, the vector $w=0$ and the initial function $f$. Note that $\Lambda^* = \left\{vg^*; v \in {\mathbb{Z}}^n \right\}$, where $g^* = ^{\top}\! \! g^{-1}$. Then we get $$F(g) = \displaystyle \sum_{v \in {\mathbb{Z}}^n}{f(vg)} = \displaystyle \sum_{v \in {\mathbb{Z}}^n}{\hat{f}(vg^*)} = \hat{F}(g^*), \mbox{ for any } g \in G.$$
The automorphism $g \mapsto g^*$ preserves the measure on $G$ and stabilizes $\Gamma$, so we can do an analogous computation with the roles of $f$ and $\hat{f}$ reversed. Since $\hat{\hat{f}}(0) = f(0)$ and $\int_{\Gamma \backslash G}{F(g)dg} = \int_{\Gamma \backslash G}{\hat{F}(g)dg}$, we obtain $$\mathrm{vol}({\Gamma\backslash G})f(0) + \frac{ \mathrm{vol}(\mathrm{SL}_{n-1}( {\mathbb{Z}}) \backslash \mathrm{SL}_{n-1}( {\mathbb{R}}))}{\# (\mathrm{SO}_{n-1}({\mathbb{Z}}))} \zeta(n) \hat{f}(0) = \int_{{\Gamma\backslash G}}\!\!\!{F(g)dg}$$ $$=\int_{\Gamma \backslash G}\!\!\!{\hat{F}(g)dg} = \mathrm{vol}({\Gamma\backslash G})\hat{f}(0) + \frac{\mathrm{vol}(\mathrm{SL}_{n-1}( {\mathbb{Z}}) \backslash \mathrm{SL}_{n-1}( {\mathbb{R}}))}{\# (\mathrm{SO}_{n-1}({\mathbb{Z}}))} \zeta(n) f(0).$$
By requiring additionally that $f$ satisfies $f(0) \neq \hat{f}(0)$ and using induction on $n$, we obtain the desired result.
We observe that for every $i\in \mathbb{N}$, $\# (\mathrm{SO}_{i}( {\mathbb{Z}})) = 2^{i-1}i!$. Indeed, the group $\mathrm{SO}_{i}( {\mathbb{Z}})$ consists of monomial matrices whose nonzero entries are equal to $\pm 1$ and which have determinant equal to $1$. The first condition gives us $2^i i!$ matrices. Now if we look at the surjective group homomorphism $$\mathrm{det}: B=\{\mbox{monomial matrices with nonzero entries} \in \{\pm 1\}\} \rightarrow \{\pm 1 \},$$ we get $B/Ker(\mathrm{det}) \cong \{\pm 1 \}$, which implies $$\# (\mathrm{SO}_{i}( {\mathbb{Z}})) = \#(Ker (\mathrm{det})) = \frac{\#(B)}{2} = \frac{2^{i}i!}{2} = 2^{i-1}i! .$$
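The count $\# (\mathrm{SO}_{i}({\mathbb{Z}})) = 2^{i-1}i!$ can be confirmed by brute force for small $i$, using the fact that the determinant of a signed permutation matrix is the sign of the permutation times the product of the entry signs (a sketch; function names are ours):

```python
from itertools import permutations, product
from math import prod, factorial

def perm_sign(p):
    """Sign of a permutation, via its inversion count."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def order_so_z(n):
    """|SO_n(Z)|: the number of n x n signed permutation matrices
    of determinant 1; det = sign(permutation) * product of signs."""
    return sum(1 for p in permutations(range(n))
                 for signs in product((1, -1), repeat=n)
                 if perm_sign(p) * prod(signs) == 1)
```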
Thus we have proved the following
The explicit volume of $ {\Gamma\backslash G}$, with respect to the Haar measures described in Section \[haar\], is given by $$\mathrm{vol}({\mathrm{SL} _n (\mathbb{Z})}\backslash \mathrm{SL} _n (\mathbb{R})) =\sqrt{2} \displaystyle \prod_{i=2}^{n}\zeta(i) \displaystyle \prod_{i=1}^{n-1}{\frac{1}{2^{i-1}i!}} = \frac{\displaystyle \prod_{i=2}^{n}\zeta(i)}{2^{{\scriptscriptstyle\frac{n^2-3n+1}{2}}} \displaystyle \prod_{i=1}^{n-1} i!} .
\label{eq2}$$
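The closed form can be checked numerically: it should reproduce the base case $\mathrm{vol}(\mathrm{SL}_2({\mathbb{Z}})\backslash\mathrm{SL}_2({\mathbb{R}})) = \sqrt{2}\zeta(2)$ and satisfy the recursion $\mathrm{vol}_n = \zeta(n)\,\mathrm{vol}_{n-1}/\#(\mathrm{SO}_{n-1}({\mathbb{Z}}))$ used in the induction. A sketch (function names and the truncated zeta are ours):

```python
from math import sqrt, pi, factorial, prod

def zeta(s, terms=10_000):
    """Riemann zeta for real s > 1: partial sum plus Euler-Maclaurin tail."""
    n = terms
    return sum(k ** -s for k in range(1, n + 1)) \
        + n ** (1 - s) / (s - 1) - n ** -s / 2

def vol_quotient(n):
    """sqrt(2) * prod_{i=2}^n zeta(i) * prod_{i=1}^{n-1} 1/(2^(i-1) i!)."""
    return sqrt(2) * prod(zeta(i) for i in range(2, n + 1)) \
        * prod(1.0 / (2 ** (i - 1) * factorial(i)) for i in range(1, n))
```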
It is not difficult to see that this volume goes to zero like $e^{-c'n^2}$ as $n$ grows, where $c'$ is a positive constant. This is completely different from the growth of the volumes of Siegel sets described by formula \[eq1\]. We conclude that although the geometry of a Siegel set is simpler than that of an actual fundamental domain for a lattice, their volumes can differ dramatically as $n$ grows. Thus we should be careful when replacing fundamental domains of a lattice by simpler structures such as Siegel sets, since some of their relevant geometric features, e.g. the volume, may behave very differently from those of fundamental domains.
As a consequence of Sections \[Siegelsets\] and \[domfund\], we obtain:
The ratio between the volumes of the minimal Siegel sets $\Sigma = \Sigma_{{\scriptscriptstyle{\frac{2}{\sqrt{3}}, \frac{1}{2}}}}$ for $ {\mathrm{SL} _n (\mathbb{Z})}$ and the actual fundamental domains for these groups in $ {\mathrm{SL} _n (\mathbb{R})}$ is given by $$C(n) = \frac{\mathrm{vol}(\Sigma)}{\mathrm{vol}({\Gamma\backslash G})} = \frac{2^{{\scriptscriptstyle\frac{2n^3+ 9n^2-11n-18}{12}}}\pi^{{\scriptscriptstyle\frac{n^2+n-2}{4}}} \displaystyle \prod_{i=1}^{n-1}{i!}}{3^{{\scriptscriptstyle\frac{n^3-n}{12}}}((n-1)!)^2 \displaystyle \prod_{i=2}^{n}{\Gamma(\frac{i}{2})} \displaystyle \prod_{i=2}^n{\zeta(i)}}.$$
Moreover, $ C(n) \sim e^{\tilde{c}n^3} $ for some constant $\tilde{c}$ that does not depend on $n$.
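The ratio can also be computed directly from formula \[eq1\] and the volume of the quotient, which makes its rapid growth visible already for small $n$ (a sketch; function names and the truncated zeta are ours):

```python
from math import gamma, pi, sqrt, factorial, prod

def zeta(s, terms=10_000):
    """Riemann zeta for real s > 1 (partial sum plus tail correction)."""
    n = terms
    return sum(k ** -s for k in range(1, n + 1)) \
        + n ** (1 - s) / (s - 1) - n ** -s / 2

def vol_siegel(n, t=2 / sqrt(3), lam=0.5):
    """(1/2) vol(SO_n) (2 lam)^{n(n-1)/2} t^{n(n^2-1)/6} / ((n-1)!)^2."""
    vol_so = 2 ** ((n - 1) * (n / 4 + 1)) * prod(
        pi ** (i / 2) / gamma(i / 2) for i in range(2, n + 1))
    return 0.5 * vol_so * (2 * lam) ** (n * (n - 1) / 2) \
        * t ** (n * (n * n - 1) / 6) / factorial(n - 1) ** 2

def vol_quotient(n):
    """sqrt(2) * prod_{i=2}^n zeta(i) * prod_{i=1}^{n-1} 1/(2^(i-1) i!)."""
    return sqrt(2) * prod(zeta(i) for i in range(2, n + 1)) \
        * prod(1.0 / (2 ** (i - 1) * factorial(i)) for i in range(1, n))

ratio = {n: vol_siegel(n) / vol_quotient(n) for n in range(2, 9)}
```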
A natural question arising here is the following: “How is our normalization of the Haar measure related to the canonical normalization defined by using the Killing form on $\mathfrak{sl}_n({\mathbb{R}})$?”
To answer this question we can compare our formula with a result of Harder [@harder], who computed the volume of ${\mathrm{SL} _n (\mathbb{Z})}\backslash X$, where $X$ is the symmetric space $\mathrm{SL} _n (\mathbb{R})/{\mathrm{SO} _n}$. In order to do this comparison, note that by equation \[eq2\] we have $$\mathrm{vol}({\mathrm{SL} _n (\mathbb{Z})}\backslash X) =\frac{\mathrm{vol}({\Gamma\backslash G})}{\mathrm{vol}({\mathrm{SO} _n})} = \frac{\sqrt{2} \displaystyle \prod_{i=1}^{n-1}{\frac{1}{2^{i-1}i!}}\displaystyle \prod_{i=2}^{n}{\zeta(i)}}{2^{(n-1)(\frac{n}{4}+1)} \displaystyle \prod^n_{i=2}{\frac{\pi^\frac{i}{2}}{\Gamma (\frac{i}{2})}}}.
\label{eq3}$$
By Harder’s formula, we obtain that this volume in the canonical normalization is given by $$\mathrm{vol}_1 ({\mathrm{SL} _n (\mathbb{Z})}\backslash X) = \frac{\displaystyle \prod_{i=1}^{n-1}{i!}\displaystyle \prod_{i=2}^{n}{\zeta(i)}}{(2\pi)^{{\scriptscriptstyle\frac{n(n+3)}{2}}} 2^{\tau}n!},
\label{eq4}$$ where $\tau = n$ if $n$ is odd and $\tau = n-1$ if $n$ is even.
We see that these volumes differ by a factor given by $$C_1(n) = \frac{\mathrm{vol}_1 ({\mathrm{SL} _n (\mathbb{Z})}\backslash X)}{\mathrm{vol}({\mathrm{SL} _n (\mathbb{Z})}\backslash X)}= \frac{2^{{\scriptscriptstyle\frac{n^2-5n-2}{4} - \tau }} \Bigl(\displaystyle \prod_{i=1}^{n-1}{i!}\Bigl)^2 }{ n!\pi^{{\scriptscriptstyle\frac{n^2+5n+2}{4}}} \displaystyle \prod^n_{i=2}{\Gamma \Bigl(\frac{i}{2}\Bigl)}},$$
where $\tau = n$ if $n$ is odd and $\tau = n-1$ if $n$ is even. We note that, again by Stirling's formula, $ C_1(n) $ grows asymptotically with $ n $ like $ e^{\kappa n^2} $ for some positive constant $ \kappa $.
The same renormalization can be applied to in order to obtain the volumes of Siegel sets in the symmetric spaces with respect to the standard normalization of the measure.
Bounding the number of intersecting domains {#morr}
===========================================
Another relevant consequence of this work is the following corollary:
\[corol1\] Let $N$ be the cardinality of the set $\mathcal{I} :=\left\{\gamma \in \Gamma ; \gamma \Sigma \cap \Sigma \neq \emptyset\right\}$, where $\Sigma = \Sigma_{{\scriptscriptstyle{\frac{1}{2}, \frac{2}{\sqrt{3}}}}}$ . Then $N\geq C(n) = \frac{\mathrm{vol}(\Sigma)}{\mathrm{vol}({\Gamma\backslash G})}$.
As $\Sigma$ is a Siegel set, it must contain a fundamental domain $\mathcal{F}$ for $\Gamma$. We claim that $\Sigma \subset \underset{{\scriptscriptstyle \gamma \in \mathcal{I}}}{\bigcup} \gamma\mathcal{F}$.
Indeed, given $x \in \Sigma$, if $x\in \mathcal{F}$, there is nothing to prove. If $x \notin \mathcal{F}$, then as the images of $\mathcal{F}$ tessellate $\mathrm{SL} _n (\mathbb{R})$, we must have $x \in \gamma \mathcal{F}$ for some $\mathrm{Id} \neq \gamma \in \Gamma$. As $\gamma \mathcal{F} \subset \gamma \Sigma$, we obtain $x \in \gamma \Sigma \cap \Sigma$, and thus $\gamma \in \mathcal{I}$. Therefore the inclusion above holds.
From this we obtain $N \mathrm{vol}({\Gamma\backslash G}) = N \mathrm{vol}(\mathcal{F}) \geq \mathrm{vol}(\Sigma)$ and thus $N \geq C(n)$, as stated.
In his recent work [@martinorr], Martin Orr shows in a more general setting that given a reductive algebraic group $G$ defined over $\mathbb{Q}$, a general Siegel set $\Sigma \subset G({\mathbb{R}})$ for some arithmetic subgroup $\Gamma \subset G(\mathbb{Q})$, and $\theta \in G(\mathbb{Q})$, there exists an upper bound for the height of elements $\gamma\in \Gamma$ such that $\theta \Sigma \cap \gamma \Sigma \neq \emptyset$. The height of an element is defined by: $$H(\gamma) = \displaystyle \max_{1\leq i,j\leq n} H(\gamma_{ij}),$$ where given a rational number $a/b$, $H(a/b)$ is defined as the maximum of the absolute values of $a$ and $b$. Orr shows that, given any element $\gamma$ of the set $$\Sigma_{N,D} := \Sigma \Sigma^{-1}\cap \left\{\gamma \in G(\mathbb{Q}), \mathrm{det} \gamma \leq N\mbox{ and the denominators of } \gamma \mbox{ are } \leq D\right\},$$ there exists some constant $C_1$, depending on the group $G$, on the Siegel set $\Sigma$ and on the way the group $G$ is embedded in some $GL_n({\mathbb{R}})$, such that $$H(\gamma) \leq C_1N^nD^{n^2},$$ where $N = \left|\mathrm{det} \gamma\right|$ and $D$ is the maximum of the denominators of entries of $\gamma$. Note that for $\Gamma = {\mathrm{SL} _n (\mathbb{Z})}$, the set $\mathcal{I}$ defined above is contained in $\Sigma_{N,D}$.
In this section we compare this result with ours, i.e., we examine what happens in the case $G= \mathrm{SL} _n (\mathbb{R})$ and $\Gamma = {\mathrm{SL} _n (\mathbb{Z})}$. Note that in this case, for any $\gamma \in \Gamma$, we have $N= \left|\mathrm{det} \gamma\right| = 1$ and also $D = 1$, because the entries of $\gamma$ are all integers. Thus Orr's result gives, for this case, $$H(\gamma)\leq C_1(n).$$ By definition, the height of an element $\gamma \in {\mathrm{SL} _n (\mathbb{Z})}$ is equal to $\left|\gamma\right|_{max}$. Therefore, his result becomes $$\left|\gamma \right|_{max} \leq C_1(n), \mbox{ for any } \gamma \in \Sigma \Sigma^{-1}.$$
By Example $1.6$ on page $5$ of [@sarnack], the set $\left\{\gamma \in {\mathrm{SL} _n (\mathbb{Z})};\left\|\gamma \right\| \leq C_1(n) \right\}$ has cardinality of asymptotic order $c_nC_1(n)^{(n^2-n)}$, with $c_n \rightarrow 0$ as $n\rightarrow\infty$. Thus if $n$ is sufficiently large, we can suppose that $c_n < \epsilon$ for some fixed $\epsilon>0$. Therefore, we have $$\left|\left\{\gamma \in {\mathrm{SL} _n (\mathbb{Z})};\left\|\gamma \right\| \leq C_1(n) \right\}\right| \prec C_1(n)^{(n^2-n)},$$ where the notation $f(n) \prec g(n)$ means that there exists a positive constant $C$ such that $f(n)\leq Cg(n)$ for sufficiently large $n$.
Note that the result in [@sarnack] is proved for the Euclidean norm $\left\|.\right\|$ in $M_{n\times n}$ and we know that $\left\|\gamma\right\| \leq n\left|\gamma\right|_{max}$. Thus $$\left|\left\{\gamma \in {\mathrm{SL} _n (\mathbb{Z})};\left|\gamma\right|_{max} \leq C_1(n) \right\}\right| \prec (n C_1(n))^{(n^2-n)}.$$
We are going to show that $$C_1(n)\leq e^{\frac{n^2-1}{2} \ln(n)}.$$ From this we obtain that $\left|\mathcal{I}\right| \prec e^{\frac{n^4}{2}\ln(n)}$. Hence we have:
\[final\] For $\Gamma = {\mathrm{SL} _n (\mathbb{Z})}$ in $\mathrm{SL} _n (\mathbb{R})$ and $\mathcal{I}$ defined above, there exist constants $c_1, c_2 >0$ such that $$e^{c_1 n^3} \leq \left|\mathcal{I}\right| \leq e^{c_2 n^4 \ln(n)}.$$
In order to obtain the second inequality we adapt the proofs in [@martinorr] for the $\mathrm{SL} _n (\mathbb{R})$ case, with the difference that we give explicit values for the constants.
Let $\gamma \in {\mathcal{I}}$. From this element, we can define:
- A partition of ${\left\{1,\ldots,n\right\}}$ (with respect to $\gamma$) is a list of disjoint subintervals of ${\left\{1,\ldots,n\right\}}$, which we call components, whose union is all of ${\left\{1,\ldots,n\right\}}$ and such that:
- $\gamma $ is block upper triangular with respect to the chosen partition;
- $\gamma $ is not block upper triangular with respect to any other finer partition of ${\left\{1,\ldots,n\right\}}$;
- A leading entry of $\gamma$ is a pair $(i,j) \in {\left\{1,\ldots,n\right\}}^2$ such that $\gamma_{ij}$ is the leftmost non-zero entry of the $i$-th row of $\gamma$.
For a concrete description of the possible partitions in the $\mathrm{GL}_3$ case, see Section 3.2 of [@martinorr].
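As an illustration of the two definitions above (our own, not part of the proof), the leading entries and the finest block-upper-triangular interval partition of an integer matrix can be computed directly; indices are 0-based here and the example matrix is hypothetical:

```python
def leading_entries(g):
    """Leading entries of g: for each row i, the leftmost non-zero entry (i, j)."""
    out = []
    for i, row in enumerate(g):
        for j, v in enumerate(row):
            if v != 0:
                out.append((i, j))
                break
    return out

def finest_partition(g):
    """Finest interval partition of {0,...,n-1} w.r.t. which g is block upper
    triangular: a cut after index k is allowed iff the lower-left block
    g[k+1:, :k+1] vanishes."""
    n = len(g)
    cuts = [k for k in range(n - 1)
            if all(g[i][j] == 0 for i in range(k + 1, n) for j in range(k + 1))]
    comps, start = [], 0
    for k in cuts + [n - 1]:
        comps.append(list(range(start, k + 1)))
        start = k + 1
    return comps

# hypothetical element of SL_3(Z): block upper triangular with components {0,1}, {2}
g = [[0, 1, 5],
     [-1, 0, 2],
     [0, 0, 1]]
print(leading_entries(g))   # [(0, 1), (1, 0), (2, 2)]
print(finest_partition(g))  # [[0, 1], [2]]
```

The lower-left entry $g_{10} = -1$ blocks the cut after index 0, so the first component is $\{0, 1\}$ and no finer interval partition works, matching the definition.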
We will make use of the following lemma whose proof can be found in [@martinorr]:
\[lema1\] If $i, j$ are in the same component, then there exists a sequence of indices $i_1, \ldots, i_s$ such that $i_1 =i$, $i_s = j$ and $$(*) \mbox{ For every } p \leq s-1, \mbox{ either } i_p \leq i_{p+1} \mbox{ or } (i_p, i_{p+1}) \mbox{ is a leading entry.}$$
In the proof of the following lemmas for the $\mathrm{GL}_n$ case, Martin Orr uses the notation $A\ll B$ meaning that there exists a constant $C$, depending on $n$, such that $\left|A\right| \leq C\left|B\right|$. Our point here is to compute such constants so that we can make explicit the value of $C_1(n)$.
\[lema2\] If $(i,j)$ is a leading entry of $\gamma$, then $\alpha_j \leq \sqrt{n} \beta_i $.
For any $\gamma \in \Sigma \Sigma^{-1}$, we can write $\gamma= \nu \beta \kappa \alpha^{-1}\mu^{-1}$, with $\kappa \in {\mathrm{SO} _n}$, $\nu, \mu \in N_{\frac{1}{2}}$ and $\alpha, \beta \in A_{\frac{2}{\sqrt{3}}}$. This gives us the equation $\gamma \mu \alpha = \nu \beta \kappa$. We will compare the lengths of the $i$-th rows on each side of this equation.
As $\kappa \in {\mathrm{SO} _n}$, multiplying by $\kappa$ on the right does not change the length of each row. Expanding out the lengths, we obtain $$\displaystyle \sum_{p=1}^{n}{\Big(\sum_{q=1}^{n}{\gamma_{iq} \mu_{qp}}\Big)^2\alpha_{p}^2} = \sum_{p=1}^{n}{\nu_{ip}^2\beta_p^{2}}.$$
As $\nu$ is upper triangular, the non-zero terms on the right hand side of the last equation must have $p \geq i$. By the definition of $A_t$, for all $p \geq i$ we have $$\beta_p \leq \frac{1}{t^{(p-i)}}\beta_i \leq \beta_i,$$ where in the second inequality we used that $t=\frac{2}{\sqrt{3}}$ and $p\geq i $ imply $ \frac{1}{t^{(p-i)}} \leq 1$. Since $\nu \in N_{\frac{1}{2}}$, $\left|\nu_{ip}\right| \leq 1$ for any $i,p$. Altogether, $$\sum_{p=1}^{n}{\nu_{ip}^2\beta_p^{2}} \leq \sum_{p=i}^{n}{\beta_p^{2}} \leq (n-i+1)\beta_i^2 \leq n \beta_i^2.$$
On the other hand, looking at the left hand side of the equation, we obtain: $$\Big(\sum_{q=1}^{n}{\gamma_{iq} \mu_{qj}}\Big)^2\alpha_{j}^2 \leq \displaystyle \sum_{p=1}^{n}{\Big(\sum_{q=1}^{n}{\gamma_{iq} \mu_{qp}}\Big)^2\alpha_{p}^2}.$$ As $(i,j)$ is a leading entry, we can only have $\gamma_{iq} \neq 0$ if $q \geq j$. But as $\mu$ is upper triangular, $\mu_{qj} \neq 0$ implies $q\leq j$. Thus the only non-zero term in the first sum is the one for $q=j$, which gives $$\Big(\sum_{q=1}^{n}{\gamma_{iq} \mu_{qj}}\Big)^2\alpha_{j}^2 = \gamma_{ij}^2\mu_{jj}^2\alpha_j^2 = \gamma_{ij}^2 \alpha_j^2.$$ Note that $\gamma_{ij}\neq 0$ and, as $\gamma$ has integer entries, $\left|\gamma_{ij}\right| \geq 1$, which implies $\gamma_{ij}^2\geq 1$.
Altogether, we obtain $$\alpha_j^2 \leq \alpha_j^2\gamma_{ij}^2 \leq n\beta_i^2 \Rightarrow \alpha_j \leq \sqrt{n}\beta_i,$$ which concludes the proof.
\[lema3\] For all $k\in {\left\{1,\ldots,n\right\}}$, $\alpha_k \leq \sqrt{n} \beta_k $.
We claim that there must exist a leading entry $(i,j)$ such that $j\leq k \leq i$. To prove this, notice that as $\gamma$ is invertible, there must exist $i\geq k$ such that the $i$-th row of $\gamma$ contains a non-zero entry in the $k$-th column or to its left (otherwise the leftmost $k$ columns of $\gamma$ would have rank less than $k$). Choose $j$ so that $(i,j)$ is the leading entry of $\gamma$ in the $i$-th row; it then satisfies $j\leq k$ as claimed.
By Lemma \[lema2\] and by the definition of $A_t$ we obtain $$\alpha_k \leq \frac{1}{t^{(k-j)}}\alpha_j \leq \sqrt{n} \beta_i \leq \sqrt{n}\frac{1}{t^{(i-k)}}\beta_k \leq \sqrt{n} \beta_k.$$
\[lema4\] For all $j\in {\left\{1,\ldots,n\right\}}$, $\beta_j \leq (\sqrt{n})^{n-1} \alpha_j $.
As $\alpha$ and $\beta$ are diagonal with positive real entries, we have (by using Lemma \[lema3\] in the inequality) $$\beta_j \mathrm{det}(\alpha) = \beta_j \displaystyle \prod_{k=1}^n{\alpha_k} \leq \beta_j \alpha_j (\sqrt{n})^{n-1}\prod_{k\neq j}{\beta_k} = (\sqrt{n})^{n-1} \alpha_j \mathrm{det}(\beta).$$ But as $\mathrm{det}(\beta) = \mathrm{det}(\alpha)= 1$, $$\beta_j \leq (\sqrt{n})^{n-1} \alpha_j$$ and the lemma is proved.
\[lema5\] If $i$ and $j$ are in the same component, $\beta_j \leq (\sqrt{n})^{n^2-1}\alpha_{i}$.
We can apply Lemma \[lema1\] to obtain a sequence $i_1=i, \ldots, i_s=j$ such that for any $p\in \left\{1,\ldots,s\right\},$ we have either $i_p\leq i_{p+1}$ or $(i_p,i_{p+1})$ is a leading entry. We take this subsequence as the smallest possible.
If $i_p\leq i_{p+1}$ then as $\alpha \in A_t$ and $(\sqrt{n})^{n}\geq 1$, we get $$\frac{\alpha_{i_p}}{\alpha_{i_{p+1}}}\geq t^{i_{p+1}-i_p}\geq 1 \Rightarrow \alpha_{i_{p+1}}\leq \alpha_{i_{p}} \leq (\sqrt{n})^{n}\alpha_{i_{p}}.$$ On the other hand, if $(i_p,i_{p+1})$ is a leading entry then by Lemmas \[lema2\] and \[lema4\] we have $$\alpha_{i_{p+1}}\leq \sqrt{n}\beta_{i_p} \leq (\sqrt{n})^{n}\alpha_{i_p}.$$ Applying the last inequality successively, we get $$\alpha_j = \alpha_{i_s} \leq (\sqrt{n})^{n(s-1)}\alpha_{i}.$$ Now we just apply Lemma \[lema4\] and notice that $s\leq n$ to obtain $$\beta_j\leq (\sqrt{n})^{n-1}\alpha_{j}\leq (\sqrt{n})^{n^2-1}\alpha_{i}.$$
We write $$Q = \left\{g \in G; g \mbox{ is block upper triangular according to the components of }\gamma \right\};$$ $$L = \left\{g \in G; g \mbox{ is block diagonal according to the components of }\gamma \right\}.$$
We claim that $\kappa\in L$. Indeed, as the matrices $\gamma, \mu, \alpha, \beta$ and $\nu$ are in $Q$ by construction, we also have $\kappa \in Q$. On the other hand, if a matrix is block upper triangular and orthogonal, then it is block diagonal. Thus $\kappa \in L$.
\[lema6\] If $i, j \in {\left\{1,\ldots,n\right\}}$, then $\left|\gamma_{ij} \right| \leq C_1(n) = n^{\frac{n^2-1}{2}}$.
Write $\gamma = \nu\beta\kappa\alpha^{-1}\mu^{-1}$. Because $\alpha, \beta$ are diagonal, the $pq$-th entry of $\beta\kappa\alpha^{-1}$ is $\beta_p\kappa_{pq}\alpha_q^{-1}$.
If $p$ and $q$ are not in the same component, then as $\kappa \in L$, we get that $\kappa_{pq} = 0$. On the other hand, if they are in the same component, then by Lemma \[lema5\] $$\left|\beta_p\kappa_{pq}\alpha_q^{-1}\right| \leq \left|\kappa_{pq}\right| (\sqrt{n})^{n^2-1}.$$ By the definition of ${\mathrm{SO} _n}$, $\left|\kappa\right|_{max}\leq 1$ for every $\kappa\in {\mathrm{SO} _n}$. Therefore $$\left|\beta_p\kappa_{pq}\alpha_q^{-1}\right| \leq (\sqrt{n})^{n^2-1}.$$
Since $\mu, \nu \in N_{\frac{1}{2}}$, we have $\left|\mu\right|_{max}, \left|\nu\right|_{max} \leq 1.$ Altogether, we obtain $$\left|\gamma_{ij} \right| \leq (\sqrt{n})^{n^2-1}.$$
Therefore we conclude that $H(\gamma) \leq C_1(n)$, where $$C_1(n) = (\sqrt{n})^{n^2-1} = e^{\frac{n^2-1}{2} \ln(n)},$$ and this finishes the proof of Corollary \[final\].
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Professor Mikhail Belolipetsky for several suggestions on the development of this paper and also on the text. I also thank Paul Garrett and Martin Orr for their very helpful work and for always answering my emails with good suggestions, and Cayo Dória for helping me to understand some topics better. Finally, I also thank the referee for carefully reading the paper and for giving suggestions that improved the presentation of the results.
Borel, A.: Introduction aux Groupes Arithmétiques. *Hermann*, Paris (1969).
Borel, A. and Harish-Chandra: Arithmetic subgroups of algebraic groups. *Ann. of Math.* 75, 485–535 (1962).
Duke, W.; Rudnick, Z. and Sarnak, P.: Density of Integer Points on Affine Homogeneous Varieties. *Duke Math. J.* 71(1), 143–179 (1993).
Garrett, P.: Volume of ${\mathrm{SL} _n (\mathbb{Z})}\backslash \mathrm{SL} _n (\mathbb{R})$ and $Sp_n({\mathbb{Z}})\backslash Sp_n({\mathbb{R}})$. Paul Garrett's homepage http://www-users.math.umn.edu/$\sim$garrett/m/v/volumes.pdf (2014). Accessed 07 November 2017.
Habegger, P. and Pila, J.: Some unlikely intersections beyond André–Oort. *Compos. Math.* 148, 1–27 (2012).
Harder, G.: A Gauss-Bonnet formula for discrete arithmetically defined groups. *Ann. Sci. Éc. Norm. Supér.* $4^e$ série, tome 4, n° 3, 409–455 (1971).
Morris, D. W.: Introduction to Arithmetic Groups. *Deductive Press*, United States (2015).
Orr, M.: Height bounds and the Siegel property. Preprint, arXiv:1609.01315v3 (2016).
Siegel, C.L.: Einführung in die Theorie der Modulfunktionen n-ten Grades. *Math. Ann.* 116, 617–657 (1939).
Siegel, C.L.: A Mean Value Theorem in Geometry of Numbers. *Ann. of Math.* 45(2) (1945).
Siegel, C.L.: Lectures on the Geometry of Numbers. *Springer-Verlag*, Berlin (1989).
Stein, E. M. and Weiss, G. L.: Introduction to Fourier Analysis on Euclidean Spaces. *Princeton Math. Ser.* 32, *Princeton Univ. Press*, Princeton, NJ (1971).
Venkataramana, T.N.: Lattices in Lie Groups. In: Workshop on Geometric Group Theory, India (2010).
Young, R.: The Dehn function of $\mathrm{SL} _n (\mathbb{Z})$. *Ann. of Math.* 177, 969–1027 (2013).
---
abstract: 'This paper addresses how to improve the computational efficiency and estimation reliability of cascading outage analysis. We first formulate a cascading outage as a Markov chain with a specific state space and transition probability by leveraging the Markov property of cascading outages. This provides a rigorous formulation that allows analytic investigation of cascading outages in the framework of standard mathematical statistics. We then derive a sequential importance sampling (SIS) based simulation strategy for cascading outage simulation and blackout risk analysis with theoretical justification. Numerical experiments show that the proposed SIS strategy can significantly bring down the number of simulations and reduce the estimation variance of cascading outage analysis compared with the traditional Monte Carlo simulation strategy.'
author:
- 'Jinpeng Guo, Feng Liu, Jianhui Wang, Junhao Lin, and Shengwei Mei, [^1] [^2] [^3] [^4]'
bibliography:
- 'ESORef0915.bib'
title: 'Towards High-Efficiency Cascading Outage Simulation and Analysis in Power Systems: A Sequential Importance Sampling Approach'
---
Cascading outage; Markov chain; sequential importance sampling; blackout risk.
Introduction
============
A cascading outage is a sequence of component outages triggered by one or several initial disturbances or failures of system components [@r1; @r2]. In certain extreme conditions, cascading outages can lead to unacceptably serious consequences; the blackouts that have occurred in power systems worldwide in recent years are a case in point [@r4; @r5]. Although the probability of a blackout due to a cascading outage is tiny, the catastrophic consequences and vast influential range have drawn great attention to the investigation of cascading outages, especially in large-scale interconnected power systems.
Due to the random nature of cascading outages, statistics and probability analysis are extensively deployed as basic mathematical tools to analyze cascading outages based on historical data [@r6; @r7; @r8]. However, it is difficult to acquire accurate and adequate data in practice, as blackouts are essentially rare events and very limited information has been recorded to date. In this regard, several high-level statistical models have been proposed for analyzing cascading outages, such as the CASCADE model [@r9] and the branching process model [@r10; @r11]. These models aim to capture the macroscopic features of the overall system in the statistical sense while omitting the details of the cascading outage process. To gain closer insight into cascading outages, researchers have instead chosen to bring such details back, including the uncertain occurrence of initial disturbances, the action of protection, and the dispatch of the control center. This consequently results in various blackout models, such as the hidden failure model [@r12; @r13], the ORNL-PSerc-Alaska (OPA) model [@r14; @r15], and the AC-OPA model [@r16], to name a few. As this kind of approach is capable of analyzing the cascading outage process in a detailed way, it is expected that the mechanisms behind cascading outages can be exploited by carrying out massive simulations on these models.
Regarding every simulation as an independent and identically distributed (i.i.d.) sample, simulation-based cascading outage analysis is essentially a statistical analysis based on a sample set produced by Monte Carlo simulations (MCS). In the past decade, the MCS approach has contributed a lot to revealing the underlying physical mechanism of cascading outages and has been popularly used. However, the intrinsic deficiency of the MCS seriously limits its practicability and deployment. The main obstacle stems from the notorious “curse of computational dimensionality”. It is recognized that a realistic large-scale power system is always composed of numerous components, such as transmission lines, transformers, and generators. The possible evolutionary paths of cascading outages diverge dramatically. Hence a specific cascading outage with serious consequences is indeed an extremely rare event. In this context, MCS analysis turns out to be computationally intractable, as a huge number of simulations are required to achieve a reliable estimation of the probability distribution of cascading outages. Empirical results also confirm that the estimation variance can remain unacceptably large even if thousands of simulations have been conducted for a system with only tens of buses. This crucial issue, however, has not been treated seriously enough in the literature, and the reliability of MCS-based blackout analysis could be overestimated to a large extent. This motivates two essential questions: 1) how many simulations are required to guarantee the reliability of the estimation? and 2) can the number of simulations be effectively reduced without degrading the reliability of the estimation?
In [@r17], a condition is proposed to characterize the relationship between the estimation accuracy and the sample size of the MCS, answering the first question. The second question, however, remains open to date. This paper aims to bridge the gap both theoretically and algorithmically. Noting that the theoretic results given in [@r17] are built on the standard MCS, it is intuitive to expect that the sample size could be shrunk by adopting certain advanced sampling techniques instead of the naive Monte Carlo sampling strategy. In the literature, importance sampling (IS) is an effective method to improve the efficiency of the MCS [@r18; @r19], which has already been successfully deployed in various fields including power systems, such as security analysis for power grids [@r20] and risk management in electricity markets [@r21]. It has also been intuitively used in cascading outage simulations in a heuristic manner [@r28; @r29; @r30; @r31]. Nevertheless, due to the absence of a solid mathematical formulation of cascading outages, it is difficult to carry out the analytic investigation in a rigorous fashion. It is also unknown what the scope and conditions of applying the IS strategy to cascading outage simulations are, and how to set the parameters of the IS in simulations.
Sequential importance sampling (SIS) is an extension of the IS method, which decomposes the IS into a sequence of sampling steps to facilitate the implementation for multi-period random processes [@r18; @r22]. Inspired by the success of IS/SIS in diverse fields, this paper applies the SIS to derive a novel simulation strategy for achieving high-efficiency and reliable cascading outage analysis. The main contributions of this paper are twofold:
1. The process of cascading outage in power systems is formulated as a Markov chain. Differing from the current formulations in the literature, we specifically define the state space and transition probability associated with the Markov chain, resulting in a well-defined analytic model of cascading outages. This formulation allows rigorous mathematical statistical analysis.
2. Benefiting from the proposed analytic formulation, a high-efficiency cascading outage simulation strategy is derived based on the SIS with theoretical guarantees. Taking full advantage of the Markov property of cascading outages, it is capable of considerably reducing both the number of simulations and the estimation variance.
We demonstrate that the proposed formulation and simulation strategy outperform the traditional MCS strategy using the standard IEEE 300-bus system and a real provincial power grid in China. The rest of this paper is organized as follows: traditional blackout modeling and MCS-based analysis are briefly reviewed in Section II. Section III gives the new formulation of cascading outages based on the Markov chain. Then the SIS-based simulation strategy and the associated theoretic analysis are presented in Section IV. Case studies in Section V show the benefits and efficiency of the proposed simulation strategy. Finally, Section VI concludes the paper with remarks.
MCS-based Analysis on Cascading Outages
========================================
In cascading outage analysis, load shedding is usually adopted to evaluate the severity of the cascading outage. To characterize the load shedding distribution as a consequence of cascading outages, various kinds of blackout models have been built by emulating the cascading outage process in a “[*descriptive*]{}” way. Massive simulations on such models can provide a number of i.i.d. samples for statistical analyses. This approach is essentially based on the MCS if one simply regards each simulation as a sample. Though there are many kinds of blackout models, the simulation principles are quite similar. In simple terms, at the $j$-th stage of the $i$-th sampling of a cascading outage, the blackout model determines the outage probability of each component in the system at the next stage, depending on the system state $x_j^i$ and other related factors, such as weather and maintenance conditions. Then the outage components at stage $j$ are sampled and the system state $x_j^i$ transits to ${x_{j+ 1}^{i }}$. Repeating the above steps until no new outages occur, one simulation is completed. It gives a sample of the load shedding $Y$[^5], denoted by $y_M^i$.
Define the sample set as ${Y_M}: = \{ y_M^i,i = 1, \cdots, {N_M}\} $, obtained after $N_M$ simulations. Then the probability distribution of load shedding can be estimated by statistics based on $Y_M$. We care about the probability of a given event $A$ that the load shedding is greater than a certain level $Y_{0} $. The unbiased estimation of the true probability $\mu (A)$ is given by $$\label{eq1}
\tilde{\mu} (A)=\frac{1}{N_M} \sum \nolimits_{i=1}^{N_M}\delta _{\{ {y_M^i} \ge Y_{0} \} }$$ where $\delta _{\{ \cdot \} } $ is the indicator function of the set $\{ y^{i}_M \ge Y_{0} \} $, i.e., $\delta _{\{ y^{i}_M \ge Y_{0} \} } =1$ if $ y^{i}_M \ge Y_{0}$; otherwise, $\delta _{\{ y^{i}_M \ge Y_{0} \} } =0$. It is easy to see that ${\delta _{\{ \cdot \} }} = \delta _{\{ \cdot \} }^2$.
The variance of the estimation on $N_M$ samples is given by $$\label{eq2}
\sigma ^2(A) = D(A)={\frac{1}{N_M}\left( \mu (A)(1-\mu (A)) \right)}$$
In addition to probability distribution of the load shedding, another important indicator in the cascading outage analysis is the blackout risk. Theoretically, the blackout risk of a power system can be defined as the expectation of such load shedding greater than the given level, $Y_0$. That is $$\label{eq3}
Risk(Y_0)=\mathbb{E} (Y \cdot \delta _{\{ Y \ge Y_{0} \} } )$$ Similar to the probability estimation, it can be estimated by $$\label{eq4}
\tilde Ris{k}(Y_0) =\frac{1}{{{N_M}}}\sum\nolimits_{i = 1}^{{N_M}} {y_M^i \times {\delta _{\{ y_M^i \ge {Y_0}\} }}}$$
The definition of the blackout risk in represents the risk of cascading outages with serious consequences. It is closely related to the well-known risk measures, value at risk (*VaR*) and conditional value at risk (*CVaR*) [@r23; @r24]. Actually, the risk measure *Risk* defined in equals $(1-\alpha )$ times *CVaR*$_\alpha$, provided the load shedding level $Y_{0}$ is taken as *VaR*$_\alpha$ with confidence level $\alpha$.
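As a minimal sketch of the estimators (1), (2) and (4), the quantities $\tilde{\mu}(A)$, its variance, and $\tilde{Risk}(Y_0)$ can be computed directly from a sample set; the heavy-tailed synthetic samples below are a hypothetical stand-in for actual simulator output:

```python
import random

def mc_estimates(samples, y0):
    """Plain-MCS estimates of mu(A) = P(Y >= y0) (eq. 1), the estimator
    variance (eq. 2), and the blackout risk Risk(y0) (eq. 4)."""
    n = len(samples)
    mu = sum(1 for y in samples if y >= y0) / n      # eq. (1)
    var = mu * (1 - mu) / n                          # eq. (2)
    risk = sum(y for y in samples if y >= y0) / n    # eq. (4)
    return mu, var, risk

# synthetic heavy-tailed load-shedding samples (MW); hypothetical, not OPA output
random.seed(1)
samples = [random.paretovariate(2.0) * 10 for _ in range(20000)]
mu, var, risk = mc_estimates(samples, y0=100.0)
print(f"mu={mu:.4f}  std={var ** 0.5:.5f}  risk={risk:.2f} MW")
```

Since $Risk(Y_0) = \mathbb{E}(Y \cdot \delta_{\{Y \ge Y_0\}}) \ge Y_0\,\mu(A)$, the printed risk is always at least $Y_0$ times the estimated probability, which is a quick consistency check.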
With the sample set obtained by repeatedly carrying out simulations on the cascading outage model, the probability of load shedding and the blackout risk can be estimated by using and , respectively. However, it should be noted that, to achieve an acceptably small estimation variance, a tremendous number of samples is usually required even if the system merely has tens or hundreds of buses. To illustrate this, we use the IEEE 300-bus system as an example. Based on the OPA model, the probability of load shedding is estimated by using based on 10 groups of simulations, where each group contains 2000 i.i.d. simulations. As shown in Fig. \[fig.1\], the variance of the probability estimation is quite large. It is also found that the simulations capture few events with load shedding larger than 800MW, showing that the traditional MCS approach might be neither efficient nor reliable enough to cope with cascading outage analysis in large-scale power systems. To the best of our knowledge, this issue has not been paid enough attention in the literature.
![Probability estimation of the load shedding[]{data-label="fig.1"}](image1.eps){width="40.00000%"}
A Markov Chain Based Formulation
=================================
A cascading outage is always triggered by one or several initial disturbances or componentwise failures. As a consequence, the protection devices and the control center begin to take actions, and the system state then changes sequentially according to these actions. Such state changes happen in a random way, implying that a cascading outage can be formulated as a stochastic process. To this end, the state space of the cascading outage as well as the associated state-transition probability have to be defined appropriately. In this paper, the system configuration is taken as the random state variable. Note that the system configuration defined here is generic, and can incorporate either controlled or uncontrolled changes, such as line tripping, shunt capacitor switching and On-Load Tap Changer regulation. We denote $X_j$ as the state variable at stage $j$ of a cascading outage. All possible system states span the state space, denoted by $\mathcal X$.
Assume the system has $N_c$ components and denote $N:=\{1,2,\cdots n\}(n\in\mathbb{Z^+})$ as the total stages of cascading outages. Then an $n$-stage cascading outage can be defined below.
\[cascadingoutage\] An $n$-stage cascading outage is a stochastic sequence $
Z:= \{ {X_1, X_2, ..., X_j, ..., X_n}, \forall j\in N, X_j\in {\mathcal X} \}
$ with respect to the random state space $\mathcal X$ and a given joint probability distribution $f(X_{n} ,\cdots X_{2} ,X_{1} )$.
In the above definition, $j$ is the stage label of the cascading outage, while $n$ is the total number of stages, or the *length* of the cascading outage. State variable $X_{j} $ is a discrete random vector with the dimension of $N_c$. Each element of $X_j$ stands for a state of the corresponding component at stage $j$ during the cascading outage. Correspondingly, $\mathcal X$ is a $N_c$-dimensional state space. Moreover, denoting the number of possible states of component $k$ by $s^k$, there is $$\label{eq5}
|\mathcal X|= \prod \limits _{k = 1}^{{N_c}} {{s^k}}$$ where $|\mathcal X|$ denotes the number of elements in $\mathcal X$.
For simplicity, we abuse the notation $Z:=\{X_j^N\}$ to denote a cascading outage. Then the joint probability distribution $f(X_{n} ,\cdots X_{2} ,X_{1} )$ is simplified into $f(Z)$. On the other hand, since the number of components in the system is finite, the number of possible stochastic sequences representing the cascading outages is finite as well. We denote $\mathcal{Z}$ as the set of all possible cascading outages in a system. Thus, $|\mathcal{Z}|$ is finite.
It is worthy of noting that, the joint probability distribution $f(Z)$ is practically difficult to obtain, even if the probability distribution functions (PDFs) of individual components are known. Next we show this issue can be circumvented by using the intrinsic Markov properties of cascading outages.
In Definition 1, for a given $n$-stage cascading outage, the associated load shedding is a random variable that is a function of the stochastic sequence $\{X_j^N\}$, denoted by $Y=h(X_{1} ,\cdots, X_{n} )=h(Z)$.
Note that in a cascading outage process, all the actions of protections, controls and operations at arbitrary stage $i$ are completely determined by the previous stage $i-1$. In this context, the cascading outage $\{ X_j^N\} $ in the definition above is indeed a Markov chain. Then invoking the conditional probability formula and the Markov property, the joint probability distribution $f(Z)$ should satisfy $$\label{eq6}
\begin{array}{rcl}
f(Z) & = &f({X_n}, \cdots, {X_2},{X_1}) \\
&= &f_n({X_n}|{X_{n - 1}} \cdots {X_1})\cdot f_{n-1}({X_{n - 1}}|{X_{n - 2}} \cdots {X_1}) \\
& & \cdots f_2(X_2|X_1)\cdot f_1({X_1}) \\
&= &f_n({X_n}|{X_{n - 1}})\cdot f_{n-1}({X_{n - 1}}|{X_{n - 2}}) \cdots f_1({X_1})
\end{array}$$ where $f_{j+1}({X_{j+1}}|{X_j})$ is the related conditional probability.
Assume that in the sampling process, $x^i_j$ is the sample of the state at stage $j$ of the $i$-th sampling, while the length of the cascading outage in the $i$-th sampling is $n^i$. Then we have $$\label{eq7}
\begin{aligned}
& {\mathbf{Pr}({X_{n^i}} = {x^i_{n^i}}|{X_{{n^i} - 1}} = {x^i_{{n^i} - 1}}, \cdots, {X_2} = {x^i_2},{X_1} = {x^i_1})} \\
=&{ \mathbf{Pr}({X_{n^i}} = {x^i_{n^i}}|{X_{{n^i} - 1}} = {x^i_{{n^i} - 1}})} \\
\end{aligned}$$
Eqs. and mathematically indicate that a cascading outage can be simulated following the sequential conditional probabilities, rather than directly using the joint probability distribution. Specifically, denote by ${F^i_{j}}$ the set of outage components at stage $j$ of the $i$-th sampling of the cascading outage, and by $\bar {F}^i_{j}$ the set of normal components after stage $j$ of the $i$-th sampling. Let $$\label{eq8}
p_{j,k}^i = {\varphi _k}(x_j^i)$$ be the outage probability of component $k$ at stage $j$ of the $i$-th sampling, where $\varphi _k$ is the corresponding PDF. Then the transition probability, $\hat{p}^i_{j,j+1}$, from state ${x^i_{j}}$ to state ${x^i_{j + 1}}$ is $$\label{eq9}
{\hat{p}^{i}_{j,j + 1}} = f({x^i_{j + 1}}|{x^i_{j}}) = \prod\limits_{k \in {F^i_{j}}} {p^i_{j,k}} \prod\limits_{k \in {\bar{F}^i_{j}}} {(1 - p^i_{j,k})}$$
Based on , the probability of the $i$-th sample of the cascading outage (the complete path), denoted by $p_c^i$, is given by $$\label{eq10}
{p_c^{i}} = \prod\limits_{j = 1}^{n^i-1} {{\hat{p}^i_{j,j + 1}}}$$
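As a concrete reading of the two products above, the sketch below (our own illustration, with hypothetical toy inputs) evaluates the stage transition probability $\hat{p}^i_{j,j+1}$ and the complete-path probability $p_c^i$:

```python
import numpy as np

def transition_prob(p, outaged):
    """Stage transition probability: components in the outage set F
    fail with probability p_k, the remaining (normal) components
    survive with probability 1 - p_k.
    p: per-component outage probabilities at this stage.
    outaged: boolean mask, True for components failing at this stage."""
    return np.prod(np.where(outaged, p, 1.0 - p))

def path_prob(stage_probs, stage_outages):
    """Probability of a complete cascade path: the product of the
    stage-to-stage transition probabilities."""
    return np.prod([transition_prob(p, f)
                    for p, f in zip(stage_probs, stage_outages)])
```

For instance, with two components and outage probabilities $[0.1, 0.2]$, the stage in which only the first component fails has probability $0.1 \times 0.8 = 0.08$.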
\[R1\] This sequential treatment has actually been used heuristically in most cascading outage simulations, albeit without justification of its validity. By strictly defining cascading outages as a Markov chain with appropriate state space and transition probability distribution, our work provides not only a justification for such an extensively-used treatment, but also a solid mathematical foundation for deriving efficient cascading outage simulation strategies and carrying out theoretical analysis, as we discuss in Section IV.
\[R2\] Eq. indicates that the probability of a cascading outage can be very small, as it is the product of a series of small probabilities. In particular, a cascading outage with a severe consequence usually involves many stages with very small probabilities, resulting in an extremely small probability. This is the main reason why blackout events can hardly be captured by traditional MCS. As a consequence, insufficient samples of rare events may further give unreliable estimates of the blackout risk, with biased expectation and/or large variance. Theoretically, this problem cannot be alleviated effectively in large-scale systems by merely increasing the number of simulations, as the size of the state space $\mathcal X$ expands dramatically when the number of system components increases (according to Eq. ).
Cascading Outage Simulation Based on SIS
========================================
Importance Sampling for Cascading Outage Simulations
----------------------------------------------------
To improve the sampling efficiency and suppress the estimation variance, the importance sampling (IS) technique is recognized as an effective tool. Its basic idea is to sample the stochastic process under a proposal joint probability distribution $g(X_{n} ,X_{n-1} ,\cdots , X_{1} )$ ($g(Z)$ for short) instead of the true joint probability distribution $f(Z)$. Specifically, the probability of an arbitrary possible cascading outage under the proposal joint probability distribution needs to be positive, i.e., $g(Z)>0, Z \in \mathcal Z$. Then, after $N_{s}$ i.i.d. simulations, we can obtain a sample set of cascading outages, $Z_s:=\{z_s^i, i =1, 2, \cdots, N_s\}$, where ${z_s^i} = \{{x^i_{1}}, x^i_2, \cdots, {x^i_{n^i}}\}$ is the $i$-th sample of cascading outages, $n^i$ is the length of the sampled cascading outage in the $i$-th simulation, and $x^i_j$ is the sampled state at stage $j$ of the $i$-th simulation. Afterward, we can obtain the sample set of load shedding, $Y_s:=\{ y^i_{s}, i=1,\cdots, N_{s} \}$, where $y^i_{s}=h(z^i_{s})$. For simplicity, we abuse the notation $\delta_{Y_0}$ throughout to stand for $\delta_{\{ h(Z) \ge Y_0\}}$. As $|\mathcal Z|$ is finite, the true probability of event $A$ defined previously is given by $$\label{eq11}
\mu (A) =\sum\limits_{Z \in \mathcal Z} {{\delta _{ Y_0}}f(Z)}$$
As the true probability $\mu(A)$ cannot be obtained accurately, we usually estimate it through Eq. based on $N_M$ samples given by the MCS under the original joint probability distribution $f(Z)$. The variance of estimation, $D(A)$, is given by .
We are interested in the expectation and variance based on the IS under the proposal probability distribution $g(Z)$. To this end, we let $w(Z)=f(Z)/g(Z)$, yielding $$\label{eq12}
\mu (A)= \sum _{Z \in \mathcal Z} {{\delta _{{Y_0} }}w(Z)} g(Z)$$
As the IS with the proposal joint probability distribution $g(Z)$ is deployed, the unbiased estimate of $\mu (A)$ based on $N_{s}$ samples becomes $$\label{eq13}
{{\tilde {\mu}} _{IS}(A)=
\frac{1}{{{N_{s}}}}\left(\sum\limits_{i = 1}^{{N_{s}}} {{w}({z^i_s}) }\cdot {\delta _{\{ {y^i_s} \ge {Y_0}\} }}\right)}$$ where $w(z^i_s)>0$ is the sampling weight subject to $$\label{eq14}
{w}({z_s^i}) = \frac{{f({z^i_s})}}{{g({z^i_s})}}$$
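To illustrate the estimator $\tilde {\mu}_{IS}(A)$ and the weight $w(z_s^i)=f/g$ with a minimal self-contained example, the sketch below replaces the cascade model by a hypothetical scalar toy: load shedding $Y$ is exponentially distributed under the true $f$, and the proposal $g$ is a heavier-tailed exponential that makes the rare event $\{Y \ge Y_0\}$ far more frequent. The distributions and parameter values are our assumptions, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: Y ~ Exponential(1) under f; rare event A = {Y >= Y0}.
# Proposal g: Exponential with mean eta > 1 stretches the tail, so rare
# samples appear often; the weight w = f/g removes the resulting bias.
Y0, eta, N = 8.0, 4.0, 20000
y = rng.exponential(eta, N)                    # draw from proposal g
w = np.exp(-y) / (np.exp(-y / eta) / eta)      # importance weight f(y)/g(y)
mu_is = np.mean(w * (y >= Y0))                 # IS estimate of P(Y >= Y0)
mu_true = np.exp(-Y0)                          # exact tail probability
```

With these values the exact probability is $e^{-8}\approx 3.4\times 10^{-4}$; plain MCS with the same $N$ would see only a handful of tail samples, while roughly $e^{-2}N$ proposal samples land in the tail here.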
Moreover, the variance of the probability estimation is $$\label{eq15}
\begin{array}{ll} {{D _{IS}}(A)} &{ = \frac{{\mathbb{E}{{\{ {\delta _{{Y_0} }}w(Z) - \mathbb{E}[{\delta _{{Y_0} }}w(Z)]\} }^2}}}{{{N_{s}}}} } \\
&{ = \frac{{\mathbb{E}\{ {{[{\delta _{{Y_0} }}w(Z)]}^2}\} - {{\{ \mathbb{E}[{\delta _{{Y_0} }}w(Z)]\} }^2}}}{{{N_{s}}}}}\\
&{ = \frac{{\sum\limits_{Z \in \mathcal Z} {\delta _{{Y_0} }^2{w^2}(Z)g(Z)} - {{[\sum\limits_{Z \in \mathcal Z} {{\delta _{{Y_0} }}w(Z)g(Z)} ]}^2}}}{{{N_{s}}}} } \end{array}$$
Let $$\label{eq16}
{w_0} = \frac{{\sum\limits_{Z \in \mathcal Z} {\delta _{{Y_0} }^2{w^2}(Z)g(Z)} }}{{\sum\limits_{Z \in \mathcal Z} {\delta _{{Y_0} }^2w(Z)g(Z)} }}$$ Then substituting and into yields $$\label{eq17}
\begin{array}{ll} {D _{IS} (A)} &{=\frac{1}{{{N_{s}}}} \left( {{w_0}\sum\limits_{z \in {\mathcal Z}} {{\delta _{{Y_0} }}w(z)g(z)} - {\mu^2}(A)} \right)} \\ &{=\frac{1}{N_{s} } \left( w_{0} \mu(A)-\mu^{2} (A) \right)} \end{array}$$
Next we present some important propositions.
\[p1\] Given $g(Z)$, $w(Z)$ and $\mathcal Z$, there must be $${w_0} \in [\mathop {\min}\limits_{Z \in {\mathcal Z}} w(Z),\mathop {\max }\limits_{Z \in {\mathcal Z}} w(Z)]$$
Since $w(Z)$ and $g(Z)$ are non-negative, we have $${w_0} \le \frac{{\sum\limits_{Z \in \mathcal Z} {\delta _{{Y_0} }^2w(Z)g(Z)\mathop {\max}\limits_{Z \in \mathcal Z} w(Z)} }}{{\sum\limits_{Z \in \mathcal{Z} } {\delta _{{Y_0} }^2w(Z)g(Z)} }} = \mathop {\max}\limits_{Z \in \mathcal Z} w(Z)$$ Similarly, we have $${w_0} \ge \frac{{\sum\limits_{Z \in \mathcal Z} {\delta _{{Y_0} }^2w(Z)g(Z)\mathop {\min}\limits_{Z \in \mathcal Z} w(Z)} }}{{\sum\limits_{Z \in \mathcal Z} {\delta _{{Y_0} }^2w(Z)g(Z)} }} = \mathop {\min }\limits_{Z \in \mathcal Z} w(Z)$$
\[p2\] Let $D _{IS} (A)$ and $D (A)$ be the variances of the probability estimation of event $A$ defined previously by using the IS and the MCS, respectively. If $N_s=N_M$, then $D _{IS} (A)<D (A)$ holds if and only if the proposal joint probability distribution $g(Z)$ satisfies $w_0<1$, or equivalently, $$\label{eq18}
w_{0} \mu(A)<\mu(A)$$
\[p3\] Let $D _{IS} (A)$ and $D (A)$ be the variances of the probability estimation of event $A$ defined previously by using the IS and the MCS, respectively. If $D_{IS}(A)=D(A)$, then $N _{IS}<N_M$ holds if and only if the proposal joint probability distribution $g(Z)$ satisfies $w_0<1$, or equivalently, $$\label{eq19}
w_{0} \mu(A)<\mu(A)$$
It is easy to prove Proposition 2 and Proposition 3 by directly comparing with .
Proposition 1 guarantees the existence of $w_0$, while Propositions 2 and 3 give the necessary and sufficient conditions under which the IS can reduce the sample size and the estimation variance compared with those of the MCS. In practice, it may be difficult to check the conditions or . A more convenient way is to use the following sufficient condition: $$\label{eq20}
g(Z)>f(Z), \forall Z\in \{Z \in \mathcal Z|\; h(Z)>Y_0\}$$
A similar conclusion can be drawn for blackout risk assessment. Given $g(Z)$ for the IS, the blackout risk is $$\label{eq21}
Risk_{IS} (Y_0) = \mathbb{E}(Y \cdot w(Z)\cdot \delta_{Y_0})$$ The estimation of blackout risk based on $N_s$ samples is $$\label{eq22}
\tilde Ris{k_{IS}}(Y_0) = \frac{1}{{{N_{s}}}}\left( \sum\limits_{i = 1}^{{N_{s}}} {y_s^i w(z_s^i){\delta _{\{ y_s^i \ge {Y_0}\} }}}\right)$$
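The risk estimator $\tilde{Risk}_{IS}(Y_0)$ can be exercised on the same kind of scalar toy model (again a hypothetical exponential example of ours, not the paper's cascade model), where the exact tail risk $\mathbb{E}[Y;\, Y\ge Y_0]=(Y_0+1)e^{-Y_0}$ is available in closed form for checking:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: load shedding Y ~ Exponential(1) under the true f;
# proposal g is Exponential with mean eta > 1 (assumed values below).
Y0, eta, N = 8.0, 4.0, 20000
y = rng.exponential(eta, N)
w = np.exp(-y) / (np.exp(-y / eta) / eta)   # importance weight f/g
risk_is = np.mean(y * w * (y >= Y0))        # IS estimate of E[Y; Y >= Y0]
risk_true = (Y0 + 1.0) * np.exp(-Y0)        # exact tail risk
```

The same weighted samples thus serve both the probability estimate and the risk estimate; only the integrand changes.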
Then the estimation variances of and are given by $$\label{eq23}
D (R) = \frac{{\sum\limits_{Z \in \mathcal Z} {{h^2}(Z){\delta _{{Y_0} }}f(Z)} - {Risk(Y_0)}^2}}{N_M}$$ and $$\label{eq24}
{D _{IS}}(R) = \frac{{\sum\limits_{Z \in \mathcal Z} {{w^2}(Z){h^2}(Z){\delta _{{Y_0} }}g(Z)} - {Risk(Y_0)}^2}}{{{N_{s}}}}$$ respectively. According to and , the condition for variance reduction can be obtained accordingly.
The theoretical analysis indicates that the IS can reduce both the sample size and the estimation variance, provided an appropriately selected proposal probability distribution $g(Z)$. Considering the unbiasedness of the estimates given by the IS and the MCS, the lower variance indicates that the IS has better estimation performance than the MCS [@r26].
Sequential Importance Sampling based Simulation Strategy
--------------------------------------------------------
Similar to , for $g(Z)$ we have $$\label{eq25}
\begin{array}{ll}
g(Z)&=g(X_{n} , \cdots, X_{2} ,X_{1} )\\
&{=g_n(X_{n} |X_{n-1} )\cdot g_{n-1}(X_{n-1} |X_{n-2} ) \cdots g_1(X_{1} )}
\end{array}$$ This means that the proposal joint probability distribution $g(Z)$ can be chosen sequentially at the individual stages of a cascading outage. Thus the problem of choosing $g(Z)$ turns into one of sequentially choosing the series $g_{j+1}(X_{j+1} |X_{j} )$. For the purpose of acquiring more information about cascading outages with severe load shedding, $g_{j+1}(X_{j+1} |X_{j} )$ should be carefully chosen to amplify the probability of cascading outages in future stages relative to the original $f_{j+1}(X_{j+1} |X_{j} )$. Heuristically, we modify the outage probability of components given in into $$\label{eq26}
q_{j,k}^i = \min (\eta p_{j,k}^i,\max (\varphi _k))$$ where $q^i_{j,k}$ is the modified outage probability of the component, and $\eta$ is the SIS parameter, which stands for the amplification factor of the component's outage probability. Correspondingly, the modified transition probability becomes $$\label{eq27}
\hat q_{j,j + 1}^i = \prod\limits_{k \in F_j^i} {q_{j,k}^i} \prod\limits_{k \in \bar F_j^i} {(1 - q_{j,k}^i)}$$
For the $i$-th sample, the original load shedding probability $p_c^i $ is given by while the modified probability $q_c^i $ is given by $$\label{eq28}
q_c^i = \prod\limits_{j = 1}^{{n^i-1}} {\hat q_{j,j + 1}^i}$$ The corresponding sampling weight is $$\label{eq29}
w(z_s^i) = \frac{{p_c^i}}{{q_c^i}} = \prod\limits_{j = 1}^{{n^i-1}} {\frac{{\hat p_{j,j + 1}^i}}{{\hat q_{j,j + 1}^i}}}$$
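A per-stage sketch of the SIS modification: amplify the outage probabilities, sample the outages under the modified probabilities, and return the ratio $\hat p^i_{j,j+1}/\hat q^i_{j,j+1}$ that accumulates into the weight $w(z_s^i)$. The function and variable names are ours, and the scalar `p_max` plays the role of $\max(\varphi_k)$:

```python
import numpy as np

def sis_stage(p, p_max, eta, rng):
    """One SIS stage: amplify each component's outage probability
    (q = min(eta * p, p_max)), sample the outages under q, and return
    the outage mask together with this stage's weight factor
    p_hat / q_hat entering the overall sampling weight."""
    q = np.minimum(eta * p, p_max)                    # amplified probabilities
    outaged = rng.random(p.size) < q                  # sample under q
    p_hat = np.prod(np.where(outaged, p, 1.0 - p))    # true transition prob.
    q_hat = np.prod(np.where(outaged, q, 1.0 - q))    # modified transition prob.
    return outaged, p_hat / q_hat
```

With $\eta = 1$ the modified and true probabilities coincide, so every stage weight factor is exactly 1, recovering plain MCS.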
Simulating cascading outages with the sampling weights given by , the load shedding probability and the blackout risk can be estimated using and , respectively. According to the previous analyses, both the number of simulations and the variance of the estimates can be reduced, provided appropriately selected sampling weights.
To guarantee high sampling efficiency of the SIS, $\eta$ should be chosen carefully so that or is satisfied. Unfortunately, this is not a trivial task because, in light of the necessary and sufficient condition, $w_0$ cannot be known a priori. However, noticing that $p^i_{j,k}$ in and and $q^i_{j,k}$ in are usually very small, we have $(1-p^i_{j,k}) \approx (1-q^i_{j,k}) \approx 1$. This implies that the following condition holds $$w(z_s^i) = \prod\limits_{j = 1}^{{n^i}} {\frac{{\hat p_{j,j + 1}^i}}{{\hat q_{j,j + 1}^i}}} \approx \prod\limits_{j = 1}^{{n^i}} {\frac{{\prod\nolimits_{k \in F_j^i} {p_{j,k}^i} }}{{\prod\nolimits_{k \in F_j^i} {q_{j,k}^i} }}}$$ for most samples. Thus, if $\eta$ is selected such that $\eta>1$, the sufficient condition holds approximately. Numerical experiments empirically support this conclusion.
Algorithm
---------
The algorithm of the SIS based strategy is given as follows:
------------------------------------------------------------------------
- [**Step 1: Data preparation.**]{} Initialize the system data and parameters. Specifically, choose $\eta>1$.
- [**Step 2: Sampling states.**]{} For the $i$-th sampling, according to the system state, $ x_j^i $ at stage $j$ , and the outage probability of components based on and , simulate the component outages and acquire the new state $ x_{j+1}^i$ at the next stage. Afterward, calculate the state transition probability and the sampling weight using and , respectively.
- [**Step 3: Termination judgment.**]{} If $x_j^i$ is the same as $x_{j+1}^i$, the $i$-th cascading outage simulation is completed at stage $j$ and the $i$-th sample $z_s^i = \{ x_1^i, \cdots, x_j^i\}$ is obtained. If all $N_{s}$ simulations are completed, the sampling process ends; otherwise, let $i=i+1$ and go back to Step .
- [**Step 4: Data analysis.**]{} According to and , estimate the probability of load shedding and blackout risk.
------------------------------------------------------------------------
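Steps 1-3 above can be sketched as a single loop. The toy system below (fixed per-stage outage probabilities, failed components stay failed, cascade ends at the first stage with no new outage) is a hypothetical stand-in for a real power-system simulator; all names and parameter values are ours. Note that the terminating no-outage transition is also weighted, which keeps each path weight equal to $f/g$:

```python
import numpy as np

def simulate_cascades(p0, eta, p_max, n_samples, rng, max_stages=50):
    """SIS sketch on a toy system.  p0: base per-component outage
    probabilities; eta: SIS amplification; p_max: cap on the modified
    probabilities.  Returns per-sample (n_failed, weight) pairs."""
    results = []
    for _ in range(n_samples):
        failed = np.zeros(p0.size, dtype=bool)
        weight = 1.0
        for _ in range(max_stages):
            p = np.where(failed, 0.0, p0)        # only live components can fail
            q = np.minimum(eta * p, p_max)       # amplified probabilities
            new = rng.random(p.size) < q         # sample outages under q
            p_hat = np.prod(np.where(new, p, 1.0 - p))
            q_hat = np.prod(np.where(new, q, 1.0 - q))
            weight *= p_hat / q_hat              # accumulate f/g along the path
            if not new.any():                    # Step 3: state unchanged, stop
                break
            failed |= new
        results.append((int(failed.sum()), weight))
    return results
```

Step 4 then estimates the load shedding probability and risk from the weighted samples, exactly as in the weighted-mean estimators of the previous subsection.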
In addition to the IS/SIS, the SPLITTING method has been used to effectively improve rare-event analysis in power systems [@r32; @r33; @r34]. Its main idea is to divide the path of cascading outages into multiple sub-paths to dramatically increase the probability of the rare events of interest. Similar to the IS/SIS, its simulation settings and parameters must be tuned carefully. As the SPLITTING method is still essentially an MCS-based method, the simulations for each sub-path may still need a huge number of samples when the state space is large. Interestingly, this problem can be surmounted by using the IS/SIS. This further motivates an improved approach that combines both the IS/SIS and the SPLITTING methods, which is our ongoing work.
Case Studies
============
In this section, numerical experiments are carried out on two systems based on the simplified OPA model without slow dynamics [@r14]. One test system is the IEEE 300-bus system with a total load of $24,000$ MW, while the other is a real provincial power grid in China, with $1,122$ buses, $1,792$ transmission lines or transformers, and a total load of $52,000$ MW.
Case 1: IEEE 300-bus System
---------------------------
### Efficiency of Probability Distribution Estimation
In this case, the probability of load shedding in the IEEE 300-bus system is estimated by using the MCS and the SIS, respectively. The sample size of the MCS is 50,000 while that of the SIS is only 2,000, as the MCS requires many more samples to achieve a small estimation variance. As mentioned previously, both the MCS and the SIS strategies give unbiased estimates of the load shedding probability. According to the estimation results shown in Fig. \[fig.2\], the two strategies output almost the same estimates of the probability distribution when the load shedding is less than 1,000 MW. This result confirms that the SIS simulation strategy can achieve a given estimation accuracy with a much smaller number of simulations, and thus it is more efficient than the MCS strategy.
![Probability estimation of the load shedding with MCS and SIS[]{data-label="fig.2"}](image2.eps){height="4.0cm" width="6.5cm"}
In terms of load shedding greater than 1,000 MW (the corresponding probability is less than $10^{-4}$), the MCS fails to find any event in 50,000 simulations and cannot provide estimates for such very rare events. On the contrary, the SIS strategy successfully captures many rare events with load shedding as large as 1,400 MW in only 2,000 simulations (the corresponding probability is nearly $10^{-8}$). This indicates that the SIS strategy can considerably facilitate capturing very rare events of cascading outages even with far fewer simulations. It also implies that blackout risk analysis based on the MCS might not be reliable enough, since the captured rare events are usually far from sufficient.
### Variance of Probability Distribution Estimation
In this case, we compare the variance of the probability estimates given by the two strategies (see Fig. \[fig.3\]) in the IEEE 300-bus system. Since the true variance of the probability estimate cannot be obtained directly, the sample variance is used as a surrogate. Take the MCS as an example. Denote by $\tilde{\mu}^m (A)$ the estimate from the $m$-th sample set; then the sample variance is $\tilde D(A)=\frac{1}{{{m_{\max }} - 1}}\sum\limits_{m = 1}^{{m_{\max }}} {{{[{\tilde\mu^m}(A) - (\frac{1}{{{m_{\max }}}}\sum\limits_{m = 1}^{{m_{\max }}} {{\tilde\mu^m}(A)} )]^2}}}$, where ${m_{\max }}$ is the number of i.i.d. sample sets, which is set as 75 here.
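The sample-variance surrogate over the $m_{\max}$ i.i.d. sample sets is simply the unbiased variance of the per-set estimates; a minimal sketch:

```python
def sample_variance(estimates):
    """Unbiased sample variance of i.i.d. probability estimates,
    one estimate per independent sample set; used as a surrogate
    for the true variance of the estimator."""
    m = len(estimates)
    mean = sum(estimates) / m
    return sum((e - mean) ** 2 for e in estimates) / (m - 1)
```

Dividing by $m_{\max}-1$ rather than $m_{\max}$ makes the surrogate unbiased for the estimator variance.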
For comparison, the sample sizes of the MCS and the SIS are both set as 2,000. The SIS parameter is selected as $\eta=1.5$. As shown in Fig. \[fig.3\], the estimation variance of the SIS is lower than that of the MCS. The equivalent sampling weight bound $w_0P(A)$ is given in Fig. \[fig.4\]. It shows that the sufficient condition is satisfied almost everywhere, empirically verifying the theoretical analysis in Remark 4.
![Variance of probability estimations with MCS and SIS[]{data-label="fig.3"}](image3.eps){height="4.0cm" width="6.5cm"}
![$w_0P(A)$ v.s. $P(A)$[]{data-label="fig.4"}](image4.eps){height="4.0cm" width="6.5cm"}
Fig. \[fig.5\] shows how the estimation variances decrease as the sample size increases. Here, the probability is estimated from cascading outages with load shedding larger than (a) 650 MW, (b) 750 MW and (c) 850 MW, respectively. As shown in Fig. \[fig.5\], the estimation variances of the SIS decrease much faster than those of the MCS, demonstrating that the SIS simulation strategy is capable of achieving more reliable estimates with far fewer simulations.
![Convergence of the variance of the probability estimation[]{data-label="fig.5"}](image5.eps){height="7.6cm" width="50.00000%"}
### Impacts of the SIS Parameters $\eta$
In this case, we analyze the influence of the SIS parameter $\eta$ on the estimation variance of cascading outages in the IEEE 300-bus system (see Fig. \[fig.6\]). Here, $\eta$ is selected as 1.2, 1.5 and 2, respectively, while the other conditions are the same as in the previous cases. It is found that $\eta$ affects the probability estimation of cascading outages in two ways. On the one hand, as a larger $\eta$ is adopted, more detailed information about the rare events can be captured. From Fig. \[fig.6\], it is observed that the SIS with $\eta=2$ obtains blackout samples with load shedding even over 2,000 MW (the corresponding probability is nearly $10^{-16}$), while the SIS with a smaller $\eta$, say 1.2 or 1.5, does not capture such rare events.
![ Variance of probability estimation with different SIS parameters[]{data-label="fig.6"}](image6.eps){height="5.6cm" width="53.00000%"}
On the other hand, while more rare-event samples are captured, the estimation variance of normal events with lower load shedding increases. In this case, the SIS with $\eta=2$ exhibits a larger variance in the probability estimates for load shedding less than 600 MW than either the SIS with smaller parameters or the MCS. However, when $\eta$ is scaled down to 1.5 or 1.2, the variance of the probability estimates of normal events drops to the same level as the MCS, albeit far fewer rare events can be found. This case empirically indicates that a larger SIS parameter can facilitate capturing more rare events with higher load shedding, at the expense of increasing the estimation variance of normal events. This expense, nevertheless, is acceptable, as we are mainly concerned with potential blackouts with quite large load shedding. This feature of the SIS also allows one to purposely adjust the resolution of cascading outage analysis according to desired levels of load shedding by carefully tuning the SIS parameter.
### Blackout Risk Estimation
In this part, we deploy the SIS and the MCS simulation strategies to analyze the blackout risk defined in , where the load shedding level $Y_0$ is set as 750 MW. The mean value and the variance are obtained based on 75 sample sets. In each sample set, 2,000 simulations are carried out with the SIS and the MCS, separately. The curves of the mean value and the variance versus the sample size are shown in Fig. \[fig.7\], showing that the SIS can significantly improve both the efficiency and the reliability of blackout risk analysis.
Case 2: A Real Power System
---------------------------
To further demonstrate the practicality of the SIS based strategy, we compare it with the MCS strategy on a large real power grid in China. The sample size is still set as 2,000. Similar to the previous case, both strategies give unbiased estimates. Due to space limitations, the results on the unbiasedness of the estimates are omitted here, while the estimation variances of the load shedding probability and the blackout risk are shown in Tab. \[t1\] and Tab. \[t2\], respectively.
In this case, the SIS outperforms the MCS again. For small ${Y_0}$, the estimation variance of the SIS is smaller than that of the MCS. As ${Y_0}$ increases, the difference becomes more and more significant. When ${Y_0}$ is large enough, say $4,000$ MW in this case, the MCS cannot obtain any effective samples to carry out statistical analysis of rare events, while the SIS is still effective in capturing those rare events. This case further shows that the proposed SIS strategy can remarkably improve the efficiency and the reliability of cascading outage analysis compared with the traditional MCS strategy, especially when extremely rare blackouts are involved.
${Y_0}$(MW) 1,000 2,000 3,000 4,000 5,000 6,000
------------- -------- ----------------- ----------------- ----------------- ---------------- ------------------
MCS $5.6$ $5.6{e^{ - 1}}$ $9.6{e^{ - 2}}$ - - -
SIS $7.8 $ $4.1{e^{ - 1}}$ $4.8{e^{ - 3}}$ $4.3{e^{ - 3}}$ $2.8{e^{ -5}}$ $2.0{e^{ - 11}}$
: Estimation variance of load loss probability ($\times 10^{-7}$)[]{data-label="t1"}
${Y_0}$(MW) 1,000 2,000 3,000 4,000 5,000 6,000
------------- -------- ----------------- ----------------- ----------------- ----------------- ------------------
MCS $1.13$ $0.21$ $6.1{e^{ - 2}}$ - - -
SIS $0.77$ $2.7{e^{ - 2}}$ $7.5{e^{ - 3}}$ $9.4{e^{ - 5}}$ $8.1{e^{ - 5}}$ $1.3{e^{ - 10}}$
: Estimation variance of blackout risk with MCS and SIS[]{data-label="t2"}
Conclusion
==========
In this paper, we have formulated a cascading outage in power systems as a Markov chain with specific state space and transition probability, based on which we have further derived a sequential importance sampling strategy for cascading outage simulations. Theoretical analysis and case studies show that
1. The Markov chain based formulation of cascading outages is well defined, which admits standard and strict stochastic analysis. With the formulation, it is expected that more powerful analytic tools can be applied.
2. The SIS based simulation strategy can significantly enhance the computational efficiency and the estimation reliability of cascading outage analysis.
3. The SIS based simulation strategy can dramatically improve the capability of capturing very rare events in cascading outage simulations.
Whereas the Markov chain based formulation and the SIS based simulation strategy are derived for cascading outage analysis in power systems in this paper, we believe they could provide a generic framework for cascading outage analysis of a broad class of complex networks. Our ongoing work is to quantitatively characterize the confidence bounds of the estimation results of SIS based simulation strategies.
Acknowledgment {#acknowledgment .unnumbered}
==============
The authors would like to thank S. H. Low and L. Guo for very helpful discussions.
[^1]: Manuscript received XXX, XXXX; revised XXX, XXX. *(Corresponding author: Feng Liu)*.
[^2]: Jinpeng Guo, Feng Liu, and Shengwei Mei are with the Department of Electrical Engineering, Tsinghua University, Beijing, 100084, China (e-mail: lfeng@tsinghua.edu.cn).
[^3]: Jianhui Wang is with the Energy Systems Division, Argonne National Laboratory, Argonne, IL 60439 USA (e-mail: jianhui.wang@anl.gov)
[^4]: Junhao Lin is with the Department of Electrical and Electronic Engineering, University of Hong Kong, HKSAR, Hong Kong (e-mail: jhlin@eee.hku.hk).
[^5]: Here, load shedding, $Y$, is recognized as a random variable, as we will strictly define later on.
---
abstract: 'A general real-space multigrid algorithm MIKA (Multigrid Instead of the K-spAce) for the self-consistent solution of the Kohn-Sham equations appearing in the state-of-the-art electronic-structure calculations is described. The most important part of the method is the multigrid solver for the Schrödinger equation. Our choice is the Rayleigh quotient multigrid method (RQMG), which applies directly to the minimization of the Rayleigh quotient on the finest level. Very coarse correction grids can be used, because there is in principle no need to represent the states on the coarse levels. The RQMG method is generalized for the simultaneous solution of all the states of the system using a penalty functional to keep the states orthogonal. Special care has been taken to optimize the iterations towards the self-consistency and to run the code in parallel computer architectures. The scheme has been implemented in multiple geometries. We show examples from electronic structure calculations employing nonlocal pseudopotentials and/or the jellium model. The RQMG solver is also applied for the calculation of positron states in solids.'
address: 'Laboratory of Physics, Helsinki University of Technology, P.O. Box 1100, FIN-02015 HUT, FINLAND'
author:
- 'T. Torsti, M. Heiskanen, M. J. Puska, and R. M. Nieminen'
title: 'MIKA: a multigrid-based program package for electronic structure calculations'
---
=10000
[2]{}
Introduction {#sec:introduction}
============
The goal of computational materials science, and also that of the modeling of man-made nanoscale structures, is to calculate the various chemical and/or physical properties from first principles. This requires solving the electronic (and ionic) structure of the system in question. The density-functional theory (DFT) [@kohn98] makes a huge step towards this goal by casting the intractable problem of many interacting electrons to that of noninteracting particles under the influence of an effective potential. However, in order to apply DFT in practice one has to resort to approximations for electron exchange and correlation, such as the local-density approximation (LDA) or the generalized-gradient approximation (GGA). Moreover, in the case of systems consisting of hundreds or more atoms it is still a challenge to solve the ensuing Kohn-Sham equations numerically efficiently.
We have developed a real-space multigrid method called MIKA (Multigrid Instead of the K-spAce) for the numerical solution of the Kohn-Sham equations [@mgarticle1]. In real-space methods[@beckrev; @arias; @waghmare], the values of the wave functions and potentials are represented on three-dimensional point grids, and the partial differential equations are discretized using finite differences. Multigrid methods[@brandt1; @beckrev] overcome the critical slowing-down (CSD) phenomenon occurring with basic real-space relaxation methods. Several approaches employing the multigrid idea have appeared during recent years[@briggs; @ancilotto; @fattebert2; @wang1].
From the different multigrid methods available for the solution of the Schrödinger equation, we have picked up the Rayleigh Quotient Multigrid (RQMG) method introduced by Mandel and McCormick [@McCormick]. This approach differs from full-approximation-storage[@brandt2; @beck1; @wang1; @costiner] (FAS) methods, as well as from those methods[@briggs], where the eigenproblem is linearized.
In the RQMG method the coarse grid relaxation passes are performed so that the Rayleigh quotient calculated on the [*fine*]{} grid will be minimized. In this way there is no requirement for the solution to be well represented on a coarse grid and the coarse grid representation problem is avoided. Mandel and McCormick[@McCormick] introduced the method for the solution of the eigenpair corresponding to the lowest eigenvalue. We have generalized it to the simultaneous solution of a desired number of lowest eigenenergy states by developing a scheme which keeps the eigenstates separated by the use of a penalty functional[@mgarticle1].
Numerical Methods {#sec:methods}
=================
In our RQMG application the coarse grid relaxations are performed by the so-called coordinate relaxation method. It solves the discretized eigenproblem $$H u = \lambda B u$$ by minimizing the Rayleigh quotient $$\label{Ray}
\frac{\langle u\arrowvert H\arrowvert u\rangle}
{\langle u\arrowvert B\arrowvert u\rangle}.$$ Above, $H$ and $B$ are matrix operators chosen so that the Schrödinger equation discretized on a real-space point grid with spacing $h$ is satisfied to a chosen order $O(h^n)$. In Eq. (\[Ray\]) $u$ is a vector containing the wave function values at the grid points. In the relaxation method, the current estimate $u$ is replaced by $ u' = u + \alpha d$, where the search vector $d$ is simply chosen to be unity in one grid point and to vanish in all other points, and $\alpha$ is chosen to minimize the Rayleigh quotient. This leads to a simple [^1] quadratic equation for $\alpha$. A complete coordinate relaxation pass is then obtained by performing the minimization at each point in turn and these passes can be repeated until the lowest state is found with desired accuracy.
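A minimal dense-matrix sketch of this coordinate relaxation (our own illustration, not the MIKA implementation): for the search direction that is unity at point $i$, writing $a=\langle u|H|u\rangle$, $b=(Hu)_i$, $c=H_{ii}$, $p=\langle u|B|u\rangle$, $q=(Bu)_i$, $r=B_{ii}$, setting the derivative of the Rayleigh quotient along the line to zero gives the quadratic $(cq-br)\alpha^2+(cp-ar)\alpha+(bp-aq)=0$, and the root giving the smaller quotient is taken:

```python
import numpy as np

def coordinate_relaxation_pass(H, B, u):
    """One coordinate-relaxation sweep for the lowest eigenpair of
    H u = lambda B u: at each point i, update u -> u + alpha*e_i with
    alpha minimizing the Rayleigh quotient <u|H|u>/<u|B|u>."""
    rq = lambda a, b, c, p, q, r, al: ((a + 2*b*al + c*al**2)
                                       / (p + 2*q*al + r*al**2))
    for i in range(len(u)):
        a, p = u @ H @ u, u @ B @ u
        b, q = (H @ u)[i], (B @ u)[i]
        c, r = H[i, i], B[i, i]
        # stationarity quadratic: A2*al^2 + A1*al + A0 = 0
        A2, A1, A0 = c*q - b*r, c*p - a*r, b*p - a*q
        if abs(A2) < 1e-14 * max(abs(A1), 1.0):
            roots = [] if abs(A1) < 1e-14 else [-A0 / A1]
        else:
            disc = A1*A1 - 4*A2*A0
            if disc < 0:
                continue
            roots = [(-A1 + s*np.sqrt(disc)) / (2*A2) for s in (1.0, -1.0)]
        if roots:
            # keep the root that yields the smaller Rayleigh quotient
            u[i] += min(roots, key=lambda al: rq(a, b, c, p, q, r, al))
    return u
```

Repeated sweeps drive the quotient toward the lowest eigenvalue; for instance, for the 1D discrete Laplacian with $B=I$ the quotient approaches $2-\sqrt{2}$ on a three-point grid.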
Naturally, also the coordinate relaxation suffers from CSD because of the use of local information only in updating $u$ in a certain point. In order to avoid it one applies the multigrid idea. In the multigrid scheme by Mandel and McCormick[@McCormick] the crucial point is that [*coarse*]{} grid coordinate relaxation passes are performed so that the Rayleigh quotient calculated on the [*fine*]{} grid will be minimized. In this way there is no requirement for the solution to be well represented on a coarse grid. In practice, a coarse grid search substitutes the fine grid solution by $$\label{rqmgchgeq}
u_f' = u_f + \alpha I_c^f e_c,$$ where the subscripts $f$ and $c$ stand for the fine and coarse grids, respectively, and $I_c^f$ a prolongation operator interpolating the coarse grid vector to the fine grid. The Rayleigh quotient to be minimized is then $$\begin{aligned}
\label{rqmgeq}
& \frac{\langle u_f + \alpha I_c^f d_c \arrowvert H_f \arrowvert
u_f + \alpha I_c^f d_c \rangle}
{\langle u_f + \alpha I_c^f d_c \arrowvert B_f \arrowvert
u_f + \alpha I_c^f d_c \rangle} = \qquad \qquad \qquad \qquad \nonumber \\
&\qquad \qquad \qquad
\frac{ \langle u_f \arrowvert H_f u_f \rangle
+ 2\alpha \langle I_f^c H_f u_f \arrowvert d_c \rangle
+ \alpha^2 \langle d_c \arrowvert H_c d_c \rangle
}
{ \langle u_f \arrowvert B_f u_f \rangle
+ 2\alpha \langle I_f^c B_f u_f \arrowvert d_c \rangle
+ \alpha^2 \langle d_c \arrowvert B_c d_c \rangle
}.\end{aligned}$$ The second form is obtained by relating the coarse grid operators, $H_c$ and $B_c$, with the fine grid ones, $H_f$ and $B_f$, by the Galerkin condition $$\label{galerkincond}
H_c = I_f^c H_f I_c^f; \quad
B_c = I_f^c B_f I_c^f; \quad
I_f^c = \left(I_c^f\right)^T.$$ The key point to note is that when $H_f u_f$ and $B_f u_f$ are provided from the fine grid to the coarse grid, the remaining integrals can be calculated on the coarse grid itself. Thus one really applies coordinate relaxation on the coarse grids to minimize the *fine level* Rayleigh quotient. This is a major departure from the earlier methods, which to some extent rely on the ability to represent the solution of some coarse grid equation on the coarse grid itself. Here, on the other hand, one can calculate the *exact* change in the Rayleigh quotient due to *any* coarse grid change, no matter how coarse the grid itself is. There is no equation whose solution would have to be representable.
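The essential algebraic point, that the fine-grid Rayleigh quotient change caused by a coarse-grid correction is computable with coarse-grid work once $I_f^c H_f u_f$ is available, can be checked directly. A sketch with an assumed linear-interpolation prolongation:

```python
# Sketch of the Galerkin coarse-grid construction: with
# H_c = I_f^c H_f I_c^f and I_f^c = (I_c^f)^T, any coarse-grid change
# e_c produces a fine-grid energy change that is evaluated exactly on
# the coarse grid.  Linear interpolation is assumed for I_c^f here.
import numpy as np

nf = 17                                    # fine grid points (2*nc - 1)
nc = (nf + 1) // 2
Hf = 2.0 * np.eye(nf) - np.eye(nf, k=1) - np.eye(nf, k=-1)

# prolongation: coincident fine point gets weight 1, neighbours 1/2
P = np.zeros((nf, nc))
for j in range(nc):
    P[2 * j, j] = 1.0
    if 2 * j - 1 >= 0:
        P[2 * j - 1, j] = 0.5
    if 2 * j + 1 < nf:
        P[2 * j + 1, j] = 0.5

Hc = P.T @ Hf @ P                          # Galerkin condition

rng = np.random.default_rng(1)
uf, ec = rng.standard_normal(nf), rng.standard_normal(nc)

# fine-grid quadratic form after the correction uf' = uf + P ec ...
lhs = (uf + P @ ec) @ Hf @ (uf + P @ ec)
# ... equals fine-grid constants plus purely coarse-grid terms once
# the restriction P.T @ (Hf @ uf) has been passed down:
rhs = uf @ Hf @ uf + 2 * (P.T @ (Hf @ uf)) @ ec + ec @ Hc @ ec
print(np.isclose(lhs, rhs))                # True: the identity is exact
```

The identity holds for arbitrarily coarse grids, which is exactly why no coarse-grid representability of the solution is required.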
In the MIKA package we have generalized the RQMG method to the simultaneous solution of several mutually orthogonal eigenpairs. The separation of the different states is divided into two or three subtasks. First, in order to make the coarse grid relaxations converge towards the desired state we apply a penalty functional scheme. Given the current approximations for the $k$ lowest eigenfunctions, the next lowest, $(k+1)$’th state is updated by minimizing the functional $$\label{rqmgneq}
\frac{\langle u_{k+1}\arrowvert H\arrowvert u_{k+1}\rangle}
{\langle u_{k+1}\arrowvert B\arrowvert u_{k+1}\rangle}
+ \sum\limits_{i=1}^{k}
q_i \frac{\left|\langle u_i | u_{k+1}\rangle\right|^2}
{\langle u_i | u_i\rangle \cdot
{\langle u_{k+1} | u_{k+1}\rangle}.$$ The minimization of this functional is equivalent to imposing the orthonormality constraints against the lower $k$ states when $q_i \rightarrow \infty$. By increasing the shifts $q_i$ any desired accuracy can be obtained, but in order to obtain a computationally efficient algorithm a reasonable finite value should be used, for example $$q_i = (\lambda_{k+1}-\lambda_i) + Q,$$ where $Q$ is a sufficiently large positive constant. In our test calculations $Q$ is of the order of $Q=0.5\ldots 2$ Ha.
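One way to see why a finite shift suffices: for $B$ equal to the identity and a single converged lower state $u_1$, the penalized functional is exactly the Rayleigh quotient of the shifted operator $H + q\,u_1 u_1^T/\langle u_1|u_1\rangle$, whose lowest eigenvalue equals $\lambda_2$ as soon as $q > \lambda_2 - \lambda_1$. A toy check of this observation:

```python
# The penalty term q <u1|u>^2 / (<u1|u1><u|u>) added to the Rayleigh
# quotient of H equals the Rayleigh quotient of the rank-one-shifted
# operator Hp below; its lowest eigenvalue is lambda_2 once the shift
# pushes lambda_1 above lambda_2.  Illustrative sketch with B = I.
import numpy as np

n = 12
H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
evals, evecs = np.linalg.eigh(H)
u1 = evecs[:, 0]                                         # converged lowest state
q = (evals[1] - evals[0]) + 1.0                          # Q = 1, within 0.5..2

Hp = H + q * np.outer(u1, u1) / (u1 @ u1)
print(np.isclose(np.linalg.eigvalsh(Hp)[0], evals[1]))   # True
```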
The substitution (\[rqmgchgeq\]) is introduced in the functional (\[rqmgneq\]) and the minimization with respect to $\alpha$ leads again to a quadratic equation. This time the coefficients contain terms due to the penalty part.
While the penalty functional keeps the states separated on the coarse levels, we apply a simple relaxation method (Gauss-Seidel) on the finest level. The Gauss-Seidel method converges to the nearest eigenvalue, so ideally no additional orthogonalizations would be needed. In practice, however, we use Gram-Schmidt orthogonalizations and subspace rotations[@mgarticle1]. Nevertheless, the number of fine-grid orthogonalizations remains quite modest, for example in comparison with the conjugate-gradient search of eigenpairs employing only the finest grid [@seitsonen].
The Kohn-Sham equations have to be solved self-consistently, [*i.e.*]{}, the wave functions solved from the single-particle equation determine, via the density (solution of the Poisson equation and the calculation of the exchange-correlation potential), the effective potential for which they should again be solved. Approaching this self-consistency requires an optimized strategy, so that the numerical accuracy of the wave functions and of the potential increases in balance, enabling the most efficient convergence [@wang1; @waghmare]. Our strategy in MIKA for the self-consistency iterations is illustrated in Fig. \[fig:strategy\]. The Poisson equation for the Coulomb potential is also solved by the multigrid method.
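The structure of such a self-consistency loop can be sketched schematically. The following toy (a one-dimensional model with a hypothetical mean-field term $c\,n(x)$ standing in for the Hartree and exchange-correlation potentials, and plain linear mixing in place of the optimized strategy of Fig. \[fig:strategy\]) shows the cycle of solve, density build, and potential update:

```python
# Schematic self-consistency loop (sketch, not the MIKA strategy):
# solve the single-particle problem, build the density, update the
# effective potential with an assumed mean-field term c*n(x), and mix
# linearly until the potential stops changing.
import numpy as np

n, h, c, mix = 32, 0.3, 1.0, 0.3
x = (np.arange(n) - n / 2) * h
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
v_ext = 0.5 * x**2

v_eff = v_ext.copy()
for it in range(200):
    _, phi = np.linalg.eigh(T + np.diag(v_eff))     # single-particle solve
    dens = phi[:, 0] ** 2 / h                       # lowest-orbital density
    v_new = v_ext + c * dens                        # new effective potential
    diff = np.max(np.abs(v_new - v_eff))
    if diff < 1e-12:                                # self-consistency reached
        break
    v_eff = (1 - mix) * v_eff + mix * v_new         # linear mixing
print(it, diff)
```

In the real calculation each "solve" step is itself only partially converged (one or a few V-cycles), which is the balance referred to above.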
Examples {#sec:results}
========
We have demonstrated[@mgarticle1] the performance of the MIKA scheme in calculating the electronic structures of small molecules and solid-state systems described by pseudopotentials. As a typical application, Fig. \[fig:deep\_state\] shows the electron density of the so-called deep state localized at a neutral, ideal vacancy in bulk Si. It was shown that an accuracy of 1 meV for the total energy was reached after three or four V-cycles, and that the amount of CPU time needed was of the same order as when applying state-of-the-art plane-wave codes. We obtained an average convergence rate of approximately one decade per self-consistency iteration. This is of the same order as the rates reported by Wang and Beck [@wang1] in their FAS scheme or by Kresse and Furthmüller [@kresse2] in their plane-wave scheme employing self-consistency iterations. The convergence rate of one decade per self-consistency iteration is better than that obtained by Ancilotto [*et al.*]{} [@ancilotto] in the FMG scheme and much better than the rate reached in the linearized multigrid scheme by Briggs [*et al.*]{} [@briggs].
We have also applied the RQMG method to the calculation of positron states in solids. Fig. \[fig:positrons\] shows how the delocalized positron state in perfect $\alpha$-quartz is trapped into a Si vacancy. Positron states are a particularly simple case for our method, because only the lowest-energy wave function needs to be calculated in a given potential, so that no orthogonalizations or penalty functionals are needed. Moreover, in a simple scheme an electron density calculated without the influence of the positron can be used as the starting point [@pos_rev]. However, even for positron states the superior performance of the multigrid method in comparison with straightforward relaxation schemes is evident. For example, we have calculated the positron state in the Si vacancy in bulk Si using a supercell containing 1727 atoms. The solution of the positron wave function using the RQMG method took less than a minute of CPU time on a typical workstation. To put this in proper context, J. E. Pask [*et al.*]{} [@Pask] report a similar calculation, based on the finite-element method but without multigrid acceleration, for a supercell containing 4096 Cu atoms. The result, converged to within 1 ps, took ‘just 14.3 hr’ of CPU time.
We have also applied the MIKA scheme to two-dimensional problems for quantum dots employing the current-spin-density functional theory (CSDFT), see Ref. [@Henri]. Moreover, we have implemented the RQMG method in cylindrical coordinates, enabling very efficient and accurate calculations for atomic chains, or for systems which can be described using axisymmetric jellium models. Fig. \[fig:cylinder\] shows a selected wave function of a system where a chain of four carbon atoms is sandwiched between two planar jellium leads.
Summary and outlook {#sec:conclusions}
===================
In the MIKA program package the RQMG method introduced by Mandel and McCormick [@McCormick] is generalized to the simultaneous solution of a desired number of lowest eigenenergy states. The approach can be viewed as belonging to a third group of multigrid methods, in addition to FAS and techniques where the eigenproblem is linearized. In principle, one can use arbitrarily coarse grids in RQMG, whereas in the other multigrid methods one has to be able to represent all the states also on the coarsest grid.
We are convinced that our method will compete with the standard plane-wave methods for electronic structure calculations. However, some straightforward programming is still required. The implementation of the Hellmann-Feynman forces, required for the optimization of ionic structures, is under way.
During the RQMG V-cycle, the states are all relaxed simultaneously and independently of each other. A parallelization over states would therefore be natural to implement on a shared memory architecture. We have parallelized the MIKA codes over k-points, and over real-space domains. The domain decomposition is the appropriate method for distributed memory parallel computers.
We acknowledge the contributions by Henri Saarikoski, Paula Havu, Esa Räsänen, Tero Hakala, and Sampsa Riikonen in sharing their experience of the use of the MIKA package in different applications and preparing the figures \[fig:positrons\] (T.H.) and \[fig:cylinder\] (P.H.). T.T. acknowledges financial support by the Vilho, Yrjö and Kalle Väisälä Foundation. This research has been supported by the Academy of Finland through its Centre of Excellence Programme (2000 - 2005).
W. Kohn, Rev. Mod. Phys. [**71**]{}, 1253 (1998).
M. Heiskanen, T. Torsti, M. J. Puska, and R. M. Nieminen, Phys. Rev. B [**63**]{}, 245106 (2001).
T. L. Beck, Rev. Mod. Phys. [**72**]{}, 1041 (2000).
T. A. Arias, Rev. Mod. Phys. [**71**]{}, 267 (1999).
U. V. Waghmare, H. Kim, I. J. Park, N. Modine, P. Maragakis, and E. Kaxiras, Comp. Phys. Comm. [**137**]{}, 341 (2001).
A. Brandt, Math. Comp. [**31**]{}, 333 (1977).
E. L. Briggs, D. J. Sullivan, and J. Bernholc, Phys. Rev. B [**52**]{}, 5471 (1995); [*ibid.*]{} [**54**]{}, 14362 (1996).
F. Ancilotto, P. Blandin, and F. Toigo, Phys. Rev. B [**59**]{}, 7868 (1999).
J.-L. Fattebert, J. Comput. Phys. [**149**]{}, 75 (1999).
J. Wang and T. L. Beck, J. Chem. Phys. [**112**]{}, 9223 (2000).
J. Mandel and S. McCormick, J. Comput. Phys. [**80**]{}, 442 (1989).
A. Brandt, S. F. McCormick, and J. W. Ruge, SIAM J. Sci. Stat. Comput. [**4**]{}, 244 (1983).
T. L. Beck, K. A. Iyer, and M. P. Merrick, Int. J. Quantum Chem. [**61**]{}, 341 (1997).
S. Costiner and S. Ta’asan, Phys. Rev. E [**52**]{}, 1181 (1995).
A. P. Seitsonen, M. J. Puska, and R. M. Nieminen, Phys. Rev. B [**51**]{}, 14057 (1995).
G. Kresse and J. Furthmüller, Comput. Mat. Sci. [**6**]{}, 15 (1996).
M. J. Puska and R. M. Nieminen, Rev. Mod. Phys. [**66**]{}, 841 (1994).
J. E. Pask, B. M. Klein, P. A. Sterne, and C. Y. Fong, Comp. Phys. Comm. [**135**]{}, 1 (2001).
H. Saarikoski, M. J. Puska, and R. M. Nieminen, in these conference proceedings.
[^1]: For the sake of simplicity, the wave-functions, and thus $\alpha$, are here assumed real. We have implemented the complex case as well.
---
abstract: 'In this work we present an analysis of the production and signature of the neutral Higgs boson $H_2^0$ in the version of the 3-3-1 model containing heavy leptons, at the ILC (International Linear Collider) and CLIC (Compact Linear Collider). The production rate is found to be significant for the direct production $e^{-} e^{+} \rightarrow H_{2}^{0} Z$. We also study the possibility of identifying it using its respective branching ratios.'
author:
- 'J. E. Cieza Montalvo$^1$'
- 'C. A. Morgan Cruz, R. J. Gil Ramírez, G. H. Ramírez Ulloa, A. I. Rivasplata Mendoza$^2$'
- 'M. D. Tonasse$^{3}$[^1]'
title: 'Neutral 3-3-1 Higgs Boson Through $e^{+}e^{-}$ Collisions '
---
INTRODUCTION \[introd\]
=======================
The Higgs sector still remains one of the least constrained parts of the standard model (SM) [@wsg], but it plays a fundamental role by explaining how the particles gain masses: an isodoublet scalar field is responsible for the spontaneous breakdown of the gauge symmetry, the process through which the spectrum of all particles is generated. This process of mass generation is the so-called [*Higgs mechanism*]{}, which plays a central role in gauge theories.
The SM provides a very good description of all the phenomena related to hadron and lepton colliders. This includes the Higgs boson, which appears as an elementary scalar and arises through the breaking of the electroweak symmetry. The Higgs boson is an important prediction of several quantum field theories and is crucial to our understanding of the Universe; on 4 July 2012 the discovery of a Higgs boson with a mass of about 126 GeV was announced [@atlas1; @atlas11]. In this model, the Higgs field receives a vacuum expectation value (VEV), $v \simeq 246$ GeV, which breaks the electroweak gauge symmetry and gives masses to the fundamental fermions and gauge bosons.
However, the standard model does not predict the number of scalar multiplets of the theory, and since it leaves many questions open, there are several well-motivated extensions of it containing neutral and charged Higgs bosons. For example, if a Grand Unified Theory (GUT) contains the standard model at high energies, then the Higgs bosons associated with GUT symmetry breaking must have masses of order $M_{X} \sim {\cal O} (10^{15})$ GeV. Supersymmetry [@supers] provides a solution to this hierarchy problem through the cancellation of the quadratic divergences via the contributions of fermionic and bosonic loops [@cancell]. Moreover, the Minimal Supersymmetric extension of the Standard Model (MSSM) can be derived as an effective theory from supersymmetric Grand Unified Theories [@sgut]. Another promising class of models is the one based on the $SU(3)_{C}\otimes SU(3)_{L} \otimes U(1)_{N}$ (3-3-1 for short) semisimple symmetry group [@PT93]. In this model the new leptons do not require new generations, as occurs in most of the heavy-lepton models [@FH99]. This is a chiral electroweak model whose left-handed charged heavy leptons, which we denote by $P_a$ $=$ $E$, $M$ and $T$, together with the associated ordinary charged leptons and their respective neutrinos, are accommodated in SU(3)$_L$ triplets.
These models emerge as an alternative solution to the problem of unitarity violation at high energies in processes such as $e^-e^- \to W^-V^-$, induced by right-handed currents coupled to a vector boson $V^-$. The usual way to circumvent this problem is to assign particular values to the model parameters in order to cancel the amplitude of the process, but in Ref. [@PP92] an elegant solution was proposed, assuming the presence of a doubly charged vector boson. The simplest electroweak gauge model that naturally realizes a doubly charged gauge boson is based on the SU(3)$\otimes$U(1) symmetry [@PP92]. As a consequence of the extended gauge symmetry, the model is compelled to accommodate a much richer Higgs sector.
The main feature of the 3-3-1 model is that it predicts the correct number of fermion families. This is because, contrary to the standard model, the 3-3-1 model is anomalous in each generation separately; the anomalies cancel only if the number of families is a multiple of three. In addition, if we take into account that the asymptotic-freedom condition of QCD is valid only if the number of quark generations is less than five, we conclude that the number of generations is exactly three [@LS01]. Another good feature is that the model predicts an upper bound for the Weinberg mixing angle, $\sin^{2} {\theta_W} < 1/4$. Therefore, the evolution of $\theta_W$ to high values leads to an upper bound on the new mass scale between 3 TeV and 4 TeV [@JJ97].
In this work we are interested in a version of the 3-3-1 model whose scalar sector has only three Higgs triplets [@PT93]. The text is organized as follows. In Sect. \[sec2\] we give the relevant features of the model. In Sect. \[sec3\] we compute the total cross section of the process $e^{-} e^{+} \rightarrow H_{2}^{0} Z$, and Sect. \[sec4\] contains our results and conclusions.
Basic facts about the 3-3-1 model {#sec2}
==================================
The three Higgs triplets of the model are $$\begin{aligned}
\eta & = & \left(\begin{array}{c} \eta^0 \\ \eta_1^- \\ \eta_2^+ \end{array}\right) \quad \rho = \left(\begin{array}{c} \rho^+ \\ \rho^0 \\ \rho^{++}
\end{array}\right) \quad \chi = \left(\begin{array}{c} \chi^- \\
\chi^{--} \\ \chi^0 \end{array}\right)\end{aligned}
$$ transforming as $\left({\bf 3}, 0\right)$, $\left({\bf 3}, 1\right)$ and $\left({\bf 3}, -1\right)$, respectively.
The neutral scalar fields develop the vacuum expectation values (VEVs) $\langle\eta^0\rangle \equiv v_\eta$, $\langle\rho^0\rangle \equiv v_\rho$ and $\langle\chi^0\rangle \equiv v_\chi$, with $v_\eta^2 + v_\rho^2 = v_W^2 = (246 \mbox{ GeV})^2$. The pattern of symmetry breaking is $\mbox{SU(3)}_L \otimes\mbox{U(1)}_N \stackrel{\langle\chi\rangle}{\longmapsto}\mbox{SU(2)}_L\otimes\mbox{U(1)}_Y\stackrel{\langle\eta, \rho\rangle}{\longmapsto}\mbox{U(1)}_{\rm em}$ and so, we can expect $v_\chi \gg v_\eta, v_\rho$. The $\eta$ and $\rho$ scalar triplets give masses to the ordinary fermions and gauge bosons, while the $\chi$ scalar triplet gives masses to the new fermions and new gauge bosons. The most general, gauge invariant and renormalizable Higgs potential is $$\begin{aligned}
V\left(\eta, \rho, \chi\right) & = & \mu_1^2\eta^\dagger\eta + \mu_2^2\rho^\dagger\rho + \mu_3^2\chi^\dagger\chi + \lambda_1\left(\eta^\dagger\eta\right)^2 + \lambda_2\left(\rho^\dagger\rho\right)^2 + \lambda_3\left(\chi^\dagger\chi\right)^2 + \nonumber \\
&& \left(\eta^\dagger\eta\right)\left[\lambda_4\left(\rho^\dagger\rho\right) + \lambda_5\left(\chi^\dagger\chi\right)\right] + \lambda_6\left(\rho^\dagger\rho\right)\left(\chi^\dagger\chi\right) + \lambda_7\left(\rho^\dagger\eta\right)\left(\eta^\dagger\rho\right) + \nonumber \\
&& \lambda_8\left(\chi^\dagger\eta\right)\left(\eta^\dagger\chi\right) + \lambda_9\left(\rho^\dagger\chi\right)\left(\chi^\dagger\rho\right) + \lambda_{10}\left(\eta^\dagger\rho\right)\left(\eta^\dagger\chi\right) + \nonumber \\
&& \frac{1}{2}\left(f\epsilon^{ijk}\eta_i\rho_j\chi_k + {\mbox{H. c.}}\right).
\label{pot}\end{aligned}$$
Here $\mu_i$ $\left(i = 1, 2, 3\right)$ and $f$ are constants with dimension of mass and the $\lambda_i$ $\left(i = 1, \dots, 10\right)$ are dimensionless constants. $f$ and $\lambda_3$ are negative from the positivity of the scalar masses. The term proportional to $\lambda_{10}$ violates lepto-baryonic number, and therefore it was not considered in the analysis of Ref. [@TO96] (other analyses of the 3-3-1 scalar sector are given in Ref. [@AK] and references cited therein). We can notice that this term contributes to the mass matrices of the charged scalar fields, but not to the neutral ones. However, it can be checked that in the approximation $v_\chi \gg v_\eta, v_\rho$ we can still work with the masses and eigenstates given in Ref. [@TO96]. Here this term is important for the decay of the lightest exotic fermion. Therefore, we will keep it in the Higgs potential (\[pot\]).
As usual, symmetry breaking is implemented by shifting the scalar neutral fields $\varphi = v_\varphi + \xi_\varphi + i\zeta_\varphi$, with $\varphi$ $=$ $\eta^0$, $\rho^0$, $\chi^0$. Thus, the physical neutral scalar eigenstates $H^0_1$, $H^0_2$, $H^0_3$ and $h^0$ are related to the shifted fields as
$$\begin{aligned}
\left(\begin{array}{c} \xi_\eta \\ \xi_\rho \end{array}\right) \approx
\frac{1}{v_W}\left(\begin{array}{cc} v_\eta & v_\rho \\ v_\rho & -v_\eta
\end{array}\right)\left(\begin{array}{c} H^0_1 \\ H^0_2 \end{array}\right),&& \\
\xi_\chi \approx H^0_3, \qquad \zeta_\chi \approx h^0,&&
\label{eign}\end{aligned}$$
and in the charged scalar sector we have $$\begin{aligned}
\eta^+_1 \approx \frac{v_\rho}{v_W}H^+_1, \qquad \rho^+ \approx \frac{v_\eta}{v_W}H_2^+, && \\ \chi^{++} \approx \frac{v_\rho}{v_\chi}H^{++}, &&
\label{eigc}\end{aligned}$$\[eig\] with the condition that $v_\chi \gg v_\eta, v_\rho$ [@TO96].
The matter fields form the three SU(3)$_L$ triplets
$$\begin{aligned}
\psi_{aL} = \left(\begin{array}{c} \nu^\prime_{\ell a} \\ \ell^\prime_a \\ P^\prime_a \end{array}\right), \nonumber && \\ Q_{1L} = \left(\begin{array}{c} u^\prime_1 \\ d^\prime_1 \\ J_1 \end{array}\right), \qquad Q_{\alpha L} = \left(\begin{array}{c} J^\prime_\alpha \\ u^\prime_\alpha \\ d^\prime_\alpha \end{array}\right), &&
\label{fer}\end{aligned}$$
transforming as $\left({\bf 3}, 0\right)$, $\left({\bf 3}, 2/3\right)$ and $\left({\bf 3}^*, -1/3\right)$, respectively, where $\alpha = 2, 3$. In Eqs. (\[fer\]) $P_a$ are heavy leptons, $\ell^\prime_a = e^\prime, \mu^\prime, \tau^\prime$. The model also predicts the exotic quark $J_1$, which carries $5/3$ units of elementary electric charge, and $J_2$ and $J_3$ with $-4/3$ each. The numbers $0$, $2/3$ and $-1/3$ in Eqs. (\[fer\]) are the U(1)$_N$ charges. We also have the right-handed counterparts of the left-handed matter fields, $\ell^\prime_R \sim \left({\bf 1}, -1\right)$, $P^\prime_R \sim \left({\bf 1}, 1\right)$, $U^\prime_R \sim \left({\bf 1}, 2/3\right)$, $D^\prime_R \sim \left({\bf 1}, -1/3\right)$, $J^\prime_{1R} \sim \left({\bf 1}, 5/3\right)$ and $J^\prime_{2,3R} \sim \left({\bf 1}, -4/3\right)$, where $U = u, c, t$ and $D = d, s, b$ for the ordinary quarks.
The Yukawa Lagrangians that respect the gauge symmetry are $$\begin{aligned}
{\cal L}^Y_\ell & = & -G_{ab}\overline{\psi_{aL}}\ell^\prime_{bR}\rho - G^\prime_{ab}\overline{\psi_{aL}}P^\prime_{bR}\chi + {\mbox{H. c.}}, \\
{\cal L}^Y_q & = & \sum_a\left[\overline{Q_{1L}}\left(G_{1a}U^\prime_{aR}\eta + \tilde{G}_{1a}D^\prime_{aR}\rho\right) + \sum_\alpha\overline{Q_{\alpha L}}\left(F_{\alpha a}U^\prime_{aR}\rho^* + \tilde{F}_{\alpha a}D^\prime_{aR}\eta^*\right)\right] + \cr && +\sum_{\alpha\beta}F^J_{\alpha\beta}\overline{Q_{\alpha L}}J^\prime_{\beta R}\chi^* + G^J\overline{Q_{1L}}J_{1R}\chi + {\mbox{ H. c.}}.
\label{yuk}\end{aligned}$$
Here, the $G$’s, $\tilde{G}$’s, $F$’s and $\tilde{F}$’s are Yukawa coupling constants with $a, b = 1, 2, 3$ and $\alpha = 2, 3$.
It should be noticed that the ordinary quarks couple only through $H^0_1$ and $H^0_2$. This is because these physical scalar states are linear combinations of the interaction eigenstates $\eta$ and $\rho$, which break the SU(2)$_L$$\otimes$U(1)$_Y$ symmetry to U(1)$_{\rm em}$. On the other hand, the heavy leptons and exotic quarks couple in the scalar sector only through $H^0_3$ and $h^0$, [*i. e.*]{}, through the Higgs fields that induce the symmetry breaking of SU(3)$_L$$\otimes$U(1)$_N$ to SU(2)$_L$$\otimes$U(1)$_Y$. The Higgs particle spectrum consists of ten physical states: three scalars ($H_{1}^{0}, H_{2}^{0}, H_3^{0}$), one neutral pseudoscalar $h^0$, and six charged Higgs bosons, $H_{1}^{\pm}, H_{2}^{\pm}, H^{\pm\pm}$.
In this work we study the production of the neutral Higgs boson $H_2^0$, which can be radiated from a $Z^{\prime}$ boson at $e^{+} e^{-}$ colliders such as the International Linear Collider (ILC) ($\sqrt{s} = 1500$ GeV) and the Compact Linear Collider (CLIC) ($\sqrt{s} = 3000$ GeV).
CROSS SECTION PRODUCTION {#sec3}
========================
We begin with the direct production of the Higgs boson $H_{2}^{0}$, that is, $e^{-} e^{+} \rightarrow H_{2}^{0} Z$. This process takes place via the exchange of a virtual $Z^{\prime}$ boson in the $s$ channel; it can also proceed through $H_{1}^{0}$ and $H_{2}^{0}$ exchange, but the contribution of these channels is small due to the small coupling of the Higgs $H_{2}^{0}$ to electrons. The term involving the $Z$ boson is absent, because there is no coupling between the $Z$ and $H_{2}^{0} Z$. Then, using the interaction Lagrangians, Eqs. ($2$) and ($10$), we obtain the differential cross section.
$$\begin{aligned}
\left (\frac{d \hat{\sigma}}{d\cos \theta} \right )_{H_{2}^{0} Z} & = &\frac{\beta_{H_{2}^{0}} \alpha^{2} \pi}{32 \sin^{4} \theta_{W} \cos^{2} \theta_{W} \, s} \ \frac{\Lambda_{ZZ^{\prime} H_{2}^{0}}^2}{\left|s- M_{Z'}^{2}+ iM_{Z'} \Gamma_{Z'}\right|^{2}}
\Biggl \{ (2M_{Z}^{2}+ \frac{2tu}{M_{Z}^{2}}- 2t- 2u + 2s) \nonumber \\
&& (g_{V'}^{e^{2}}+ g_{A'}^{e^{2}}) \Biggr \} , \nonumber \\
\label{DZZ'H}\end{aligned}$$
where $\beta_{H_{2}^{0}}$ is the Higgs velocity in the c.m. frame of the process, given by $$\beta_{H_{2}^{0}} = \frac{ \left [\left( 1- \frac{(m_{Z}+ m_{H_{2}^{0}})^{2}}{s} \right) \left(1- \frac{(m_{Z}- m_{H_{2}^{0}})^{2}}{s} \right) \right ]^{1/2}}{1-\frac{m_{Z}^{2}-m_{H_{2}^{0}}^{2}}{s}} \ \ ,$$
and $t$ and $u$ are
$$t = m_{Z}^{2} - \frac{s}{2} \Biggl \{ \left(1+ \frac{m_{Z}^{2}- m_{H}^{2}}{s}\right)- \cos \theta \left [\left( 1- \frac{(m_{Z}+ m_{H})^{2}}{s} \right) \left(1- \frac{(m_{Z}- m_{H})^{2}}{s} \right) \right ]^{1/2}\Biggr \},$$
$$u = m_{H}^{2} - \frac{s}{2} \Biggl \{ \left(1- \frac{m_{Z}^{2}- m_{H}^{2}}{s}\right)+ \cos \theta \left [\left( 1- \frac{(m_{Z}+ m_{H})^{2}}{s} \right) \left(1- \frac{(m_{Z}- m_{H})^{2}}{s} \right) \right ]^{1/2}\Biggr \},$$ where $\theta$ is the angle between the Higgs boson and the incident electron in the CM frame.
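As a consistency check of the kinematics above (a sketch; $m_{H_2^0} = 1100$ GeV is an illustrative trial value): for massless electrons the invariants must satisfy $s + t + u = m_Z^2 + m_{H}^2$ at any scattering angle, and $0 < \beta_{H_2^0} < 1$.

```python
# Numerical check of the t, u and beta expressions quoted above.
import numpy as np

def lam_half(s, mz, mh):
    # [(1 - (mz+mh)^2/s)(1 - (mz-mh)^2/s)]^(1/2), as in beta, t and u
    return np.sqrt((1 - (mz + mh) ** 2 / s) * (1 - (mz - mh) ** 2 / s))

def invariants(s, mz, mh, ct):
    t = mz**2 - s / 2 * ((1 + (mz**2 - mh**2) / s) - ct * lam_half(s, mz, mh))
    u = mh**2 - s / 2 * ((1 - (mz**2 - mh**2) / s) + ct * lam_half(s, mz, mh))
    return t, u

s, mz, mh = 3000.0**2, 91.19, 1100.0        # CLIC energy, m_Z from the text
beta = lam_half(s, mz, mh) / (1 - (mz**2 - mh**2) / s)
for ct in (-1.0, -0.3, 0.0, 0.7, 1.0):
    t, u = invariants(s, mz, mh, ct)
    # massless-electron constraint: s + t + u = m_Z^2 + m_H^2
    assert np.isclose(s + t + u, mz**2 + mh**2)
assert 0.0 < beta < 1.0
print(beta)
```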
The primes $\left(^\prime\right)$ refer to the $Z'$ boson: $\Gamma_{Z'}$ [@ct2005; @cieto02] is the total width of the $Z'$ boson and $g_{V', A'}^{e}$ are the 3-3-1 lepton coupling constants; $s$ is the square of the center-of-mass energy of the $e^{-} e^{+}$ system, $g= \sqrt{4 \ \pi \ \alpha}/\sin \theta_{W}$, and $\alpha$ is the fine-structure constant, which we take equal to $\alpha=1/128$. For the $Z^\prime$ boson we take $M_{Z^\prime} = \left(1.5 - 3\right)$ TeV, since $M_{Z^\prime}$ is proportional to the VEV $v_\chi$ [@TO96; @PP92; @fra92]. For the standard model parameters we assume the Particle Data Group values, [*i. e.*]{}, $M_Z = 91.19$ GeV, $\sin^2{\theta_W} = 0.2315$, and $M_W = 80.33$ GeV [@Nea10]; $t$ and $u$ are the kinematic invariants given above. We have also defined $\Lambda_{ZZ^{\prime} H_{2}^{0}}$ as the coupling constant of the $Z^{\prime}$ boson to the $Z$ boson and the Higgs $H_{2}^{0}$, and $\Lambda_{e \bar{e} Z^{\prime}}$ as the coupling constant of the $Z^{\prime}$ to $e \bar{e}$:
$$\begin{aligned}
\left(\Lambda_{e\bar{e}Z^\prime}\right)_\mu & \approx & -i\frac{g}{2 \sqrt{1-s_W^2}} \gamma_\mu\left[g_{V^\prime}^{e} - g_{A^\prime}^{e} \gamma_5\right], \\
\left(\Lambda_{ZZ^\prime H_2^0}\right)_{\mu\nu} & \approx & \frac{g^2}{\sqrt{3}\left(1 - 4s_W^2\right)}\frac{v_\eta v_\rho}{v_W}g_{\mu\nu},
\label{eigc}\end{aligned}$$
\[eigthen\]
RESULTS AND CONCLUSIONS {#sec4}
=======================
Here we present the cross section for the process $e^+ e^- \rightarrow H_2^0 Z$ at the ILC ($1.5$ TeV) and CLIC ($3$ TeV). All calculations were done according to [@TO96; @cnt2], from which we obtain the following representative values for the parameters and the VEVs: $\lambda_{1} =0.3078$, $\lambda_{2}=1.0$, $\lambda_{3}= -0.025$, $\lambda_{4}= 1.388$, $\lambda_{5}=-1.567$, $\lambda_{6}= 1.0$, $\lambda_{7} =-2.0$, $\lambda_{8}=-0.45$, $v_{\eta}=195$ GeV, and $\lambda_{9}=-0.90 \ (-0.76,-0.71)$ corresponding to $v_\chi= 1000 \ (1500, 2000)$ GeV. These parameters and VEVs are used to estimate the particle masses given in Table \[tab1\].
Differently from what we did in the paper [@ct2005], where arbitrary parameters were taken, in this work we take the representative values given above for the parameters and the VEV, together with the fact that the mass $m_{H_1^0}$ has already been measured [@atlas1; @atlas11]. It is remarkable that the cross sections were calculated so as to guarantee the approximation $-f \simeq v_\chi$ [@TO96; @cnt2]. It must be taken into consideration that the branching ratios of $H_2^0$ depend on the parameters of the 3-3-1 model, which determine the size of the several decay modes.
$f$ $v_{\chi}$, $m_{J_1}$ $m_E$ $m_M$ $m_{H_3^0}$ $m_{h^0}$ $m_{H_1^0}$ $m_{H_2^0}$ $m_{H^\pm_2}$ $m_V$ $m_U$ $m_{Z^\prime}$ $m_{J_{2, 3}}$
--------- ----------------------- ------- -------- ------------- ----------- ------------- ------------- --------------- -------- -------- ---------------- ----------------
-1008.3 1000 148.9 875 2000 1454.6 126 1017.2 183 467.5 464 1707.6 1410
-1499.7 1500 223.3 1312.5 474.34 2164.32 125.12 1525.8 387.23 694.12 691.76 2561.3 2115
-1993.0 2000 297.8 1750 632.45 2877.07 125.12 2034.37 519.39 922.12 920.35 3415.12 2820
: \[tab1\] Values for the particle masses used in this work. All the values in this Table are given in GeV. Here, $m_{H^{\pm\pm}} =
500$ GeV and $m_T = 2v_\chi$.
The Higgs $H_{2}^{0}$ in the 3-3-1 model does not couple to a pair of standard gauge bosons; it couples to quarks, leptons, the $Z Z^{\prime}$ and $Z^{\prime} Z^{\prime}$ gauge boson pairs, the $H_{1}^{-} H_{1}^{+}$, $H_{2}^{-} H_{2}^{+}$, $h^{0} h^{0}$ and $H_{1}^{0} H_{3}^{0}$ Higgs boson pairs, the $V^{-}V^{+}$ charged bosons, the $U^{--} U^{++}$ doubly charged bosons, the $H_{1}^{0} Z$ and $H_{1}^{0} Z'$ states, and the $H^{--} H^{++}$ doubly charged Higgs bosons [@ct2005]. The mass of the Higgs $H_{2}^{0}$ can reach $1017.2$ GeV for $v_\chi = 1000$ GeV, $1525.8$ GeV for $v_\chi = 1500$ GeV, and $2034.37$ GeV for $v_\chi = 2000$ GeV, so the Higgs $H_2^{0}$ is a heavy particle.
The masses of the exotic boson $Z^{\prime}$ taken in Table \[tab1\] are in accord with the estimates of the Tevatron, which probes $Z^{\prime}$ masses in the 923-1023 GeV range [@tait], while the reach of the LHC extends to higher masses, that is $1$ TeV $< M_{Z^{\prime}} \leq 5$ TeV [@freitas]; for ATLAS at $8$ TeV, with an integrated luminosity of approximately $20$ fb$^{-1}$, the probed mass range is $2$ TeV $\leq m_{Z^{\prime}} \leq 3$ TeV [@atlas1; @atlas2014].
ILC - Events
------------
Considering that the expected integrated luminosity for the ILC will be of order of $500$ fb$^{-1}$, the expected statistics are the following: the ILC gives a total of $\simeq 1.68 \times 10^5 \ (4.83 \times 10^4)$ events per year if we take the mass of the Higgs boson $m_{H_2^0}= 1100 \ (1300)$ GeV ($\Gamma_{H_2^0} = 878.25 \ (1091.33)$ GeV) and $v_{\chi}=1000$ GeV, see Fig. \[fig1\]. These values are in accord with Table \[tab1\].
![Total cross section for the process $e^+ e^- \rightarrow H_2^0 Z$ as a function of $m_{H^{0}_{2}}$ for the ILC at 1.5 TeV and $v_{\chi}=1.5$ TeV. []{data-label="fig1"}](Figure1.eps)
To obtain event rates we multiply the production cross sections by the respective branching ratios. The signal for $H_{2}^{0}Z$ production with $m_{H_{2}^{0}}= 1100 \ (1300)$ GeV and $v_{\chi}=1000$ GeV is $H_{2}^{0} Z \rightarrow Z H_{1}^{0} Z$, with $BR(H^{0}_{2} \to Z H_{1}^{0}) = 39.5 \ (43.4) \% $ [@ctrg2013] and $BR(Z \to b \bar{b}) = 15.2 \% $. The $H_{1}^{0}$ then decays into $W^{+} W^{-}$, with $BR(H^{0}_{1} \to W^{+} W^{-}) = 23.1 \% $, followed by the leptonic decays $W^{+} \to \ell^{+} \nu$ and $W^{-} \to \ell^{-} \bar{\nu}$ with $BR(W \to \ell \nu) = 10.8 \%$. We would then have approximately $\simeq 4 \ (1)$ events per year at the ILC for the signal $b\bar{b} b \bar{b} \ell^{+} \ell^{-} X$.
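The event-rate bookkeeping above amounts to multiplying the quoted production totals by the branching-ratio chain; making it explicit:

```python
# Event-rate chain for the ILC numbers quoted in the text:
# N = N_prod * BR(H2 -> Z H1) * BR(Z -> bb)^2 * BR(H1 -> WW) * BR(W -> l nu)^2
n_prod    = [1.68e5, 4.83e4]     # m_H2 = 1100 (1300) GeV, v_chi = 1000 GeV
br_h2_zh1 = [0.395, 0.434]
br_z_bb, br_h1_ww, br_w_lnu = 0.152, 0.231, 0.108

events = [n * b * br_z_bb**2 * br_h1_ww * br_w_lnu**2
          for n, b in zip(n_prod, br_h2_zh1)]
print([round(e) for e in events])   # [4, 1], as quoted
```

The same chain, with the appropriate production totals and $BR(H^0_2 \to Z H^0_1)$ values, reproduces the CLIC estimates given below.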
The statistics for $v_{\chi}=1500 \ (2000)$ GeV give no result, because there is not enough energy to produce the Higgs boson $H_2^0$. We can therefore see that the number of signal events at the ILC is insignificant.
CLIC - Events
-------------
Considering that the expected integrated luminosity for the CLIC collider will be of order of $3000$ fb$^{-1}$/yr, we obtain a total of $\simeq 3.1 \times 10^5 \ (2.8 \times 10^5)$ events per year if we take the mass of the Higgs boson $m_{H_2^0}= 1100 \ (1300)$ GeV and $v_{\chi}=1000$ GeV, see Fig. \[fig2\]. Considering the same signal as above for $H_2^0 Z$ production, that is, $H_{2}^{0} Z \rightarrow Z H_{1}^{0} Z$, with $BR(H^{0}_{2} \to Z H_{1}^{0}) = 39.5 \ (43.4) \% $ [@ctrg2013] and $BR(Z \to b \bar{b}) = 15.2 \% $, and with $H_{1}^{0}$ decaying into $W^{+} W^{-}$, $BR(H^{0}_{1} \to W^{+} W^{-}) = 23.1 \% $, followed by the leptonic decays $W^{+} \to e^{+} \nu$ and $W^{-} \to e^{-} \bar{\nu}$ with $BR(W \to e \nu) = 10.8 \%$, we would have approximately $\simeq 8 \ (8)$ events per year at CLIC for the signal $b\bar{b} b \bar{b} \ell^{+} \ell^{-} X$.
The statistics for $v_{\chi}=1500$ GeV give a total of $\simeq 9.8 \times 10^5 \ (8.1 \times 10^5)$ events per year at CLIC if we take the mass of the Higgs boson $m_{H_{2}^{0}}= 1600 \ (1800)$ GeV, respectively. These values are in accord with Table \[tab1\]. Taking into account the same signal as above, that is, $H_{2}^{0} Z \rightarrow Z H_{1}^{0} Z$, with $BR(H^{0}_{2} \to Z H_{1}^{0}) = 44.2 \ (45.9) \% $ [@ctrg2013], $BR(Z \to b \bar{b}) = 15.2 \% $, $BR(H^{0}_{1} \to W^{+} W^{-}) = 23.1 \% $, and $BR(W \to e \nu) = 10.8 \%$, we would have approximately $\simeq 27 \ (23)$ events per year at CLIC for the same signal $b\bar{b} b \bar{b} e^{+} e^{-} X$.
![Total cross section for the process $e^+ e^- \rightarrow H_2^0 Z$ as a function of $m_{H^{0}_{2}}$ for the CLIC at 3.0 TeV and $v_{\chi}=1.0$ TeV (solid line), $v_\chi = 1.5$ TeV (dash-dot line), $v_\chi = 2.0$ TeV (dashed line).[]{data-label="fig2"}](Figure2.eps){width="1.1\columnwidth"}
With respect to the vacuum expectation value $v_{\chi}=2000$ GeV, the masses $m_{H_{2}^{0}}= 2100\,(2300)$ GeV give a total of $\simeq 2.9 \times 10^5\,(2.0 \times 10^5)$ $H_{2}^{0}$ events per year. Taking the same signal as above, $b\bar{b} b \bar{b} \ell^{+} \ell^{-} X$, with the branching ratios $\operatorname{BR}(H^{0}_{2} \to Z H_{1}^{0}) = 46.4\,(47.3)\,\%$ [@ctrg2013], $\operatorname{BR}(Z \to b \bar{b}) = 15.2\,\%$, $\operatorname{BR}(H^{0}_{1} \to W^{+} W^{-}) = 23.1\,\%$ and $\operatorname{BR}(W \to e \nu) = 10.8\,\%$, we will have approximately $\simeq 8\,(6)$ events per year.
The main background to this signal is $Z W^{+} W^{-} Z$, whose cross section is $1.17 \times 10^{-3}$ pb at $\sqrt{s}=3$ TeV. Considering that the two $Z$ bosons decay into $b \bar{b}$ with $\operatorname{BR}(Z \rightarrow b \bar{b}) = 15.2\,\%$, followed by the leptonic decay of the $W$ bosons with $\operatorname{BR}(W \rightarrow e \nu) = 10.8\,\%$, we would have approximately $\simeq 1$ background event, compared with $\simeq 8\,(8)$ signal events for $m_{H_{2}^{0}}= 1100\,(1300)$ GeV and $v_{\chi}=1000$ GeV.
Therefore the statistical significance is $\simeq 2.66\,(2.66)\,\sigma$ for $m_{H_{2}^{0}}= 1100\,(1300)$ GeV and $v_{\chi}=1000$ GeV, which is too low for detection of the signal. On the other hand, for $v_{\chi}=1500$ GeV and $m_{H_{2}^{0}}= 1600\,(1800)$ GeV we have a $\simeq 5.10\,(4.70)\,\sigma$ discovery in the $b\bar{b} b \bar{b} e^{+} e^{-} X$ final state, while for $v_{\chi}=2000$ GeV and $m_{H_{2}^{0}}= 2100\,(2300)$ GeV the significance drops to $\simeq 2.66\,(2.27)\,\sigma$, so that the signals are too small to be observed.
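The quoted significances are consistent with the simple estimator $S/\sqrt{S+B}$ applied to the yields above with $\simeq 1$ background event (this estimator is our inference, not stated explicitly in the text; small differences are rounding):

```python
import math

def significance(s, b):
    # Naive statistical significance S / sqrt(S + B)
    return s / math.sqrt(s + b)

# signal yields from the text, with ~1 background event in each case
for s in (8, 27, 23, 6):
    print(round(significance(s, 1), 2))
# -> 2.67, 5.1, 4.69, 2.27 (the text quotes 2.66, 5.10, 4.70, 2.27)
```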
To extract the signal from the background we must select the $b \bar{b}$ channel using b-flavour identification techniques. Then both the $Z$ produced together with the $H_{2}^{0}$ and the $Z$ coming from the decay of $H_{2}^{0}$ would appear as peaks in the invariant mass distribution of the b-quark pairs. The charged lepton track from the $W$ decay and a cut on the missing transverse momentum, ${p\!\!\slash}_{T} > 20$ GeV, allow for a very strong reduction of the backgrounds.
The $H_{2}^{0} Z$ pair will also decay into $t \bar{t}\, \ell^{+} \ell^{-}$. The relevant branching ratios are $\operatorname{BR}(H^{0}_{2} \to t \bar{t}) = 5.1\,(4.1)\,\%$ [@ctrg2013] and $\operatorname{BR}(Z \to \ell^{+} \ell^{-}) = 10.2\,\%$ for a Higgs boson mass $m_{H_{2}^{0}}= 1100\,(1300)$ GeV and $v_{\chi}=1000$ GeV. The $t \bar{t}$ pair decays into $b \bar{b} W^{+} W^{-}$ with $\operatorname{BR}(t \to b W) = 99.8\,\%$, followed by the leptonic decay of the $W$ boson with $\operatorname{BR}(W \to e \nu) = 10.75\,\%$; we would then have approximately $\simeq 19\,(13)$ events per year at CLIC for the signal $b\bar{b} e^{-} e^{+} \ell^{+} \ell^{-} X$. For the vacuum expectation value $v_{\chi}=1500$ GeV, with $\operatorname{BR}(H_{2}^{0} \rightarrow t \bar{t}) = 2.8\,(2.3)\,\%$ [@ctrg2013] and the same parameters and branching ratios given above, we would have for $m_{H_{2}^{0}}= 1600\,(1800)$ GeV a total of $\simeq 32\,(22)$ events per year at CLIC for the same signal. With respect to the vacuum expectation value $v_{\chi}=2000$ GeV, for the masses $m_{H_{2}^{0}}= 2100\,(2300)$ GeV, the same signal $b\bar{b} e^{-} e^{+} \ell^{+} \ell^{-} X$ with $\operatorname{BR}(H_{2}^{0} \rightarrow t \bar{t}) = 1.7\,(1.5)\,\%$ [@ctrg2013] yields approximately $\simeq 6\,(4)$ events per year.
Taking again the irreducible background for the process $t \bar{t}Z\rightarrow b \bar{b} e^{+} e^{-} \ell^{+} \ell^{-} X$ and using CompHep [@pukhov], we obtain a cross section of $1.67 \times 10^{-3}$ pb, which gives $\simeq 6$ background events. With $\simeq 19\,(13)$ signal events per year for $m_{H_{2}^{0}}= 1100\,(1300)$ GeV and $v_{\chi}=1000$ GeV, corresponding to $\simeq 3.80\,(2.98)\,\sigma$, we have evidence at the $\simeq 3.80\,\sigma$ level in the $b\bar{b} e^{-} e^{+} \ell^{+} \ell^{-} X$ final state. On the other hand, for $v_{\chi}=1500$ GeV we have $\simeq 32\,(22)$ events for $m_{H_{2}^{0}}= 1600\,(1800)$ GeV, corresponding to $\simeq 5.19\,(4.16)\,\sigma$, that is, a $\simeq 5.19\,\sigma$ discovery in the $b \bar{b} e^{+} e^{-} \ell^{+} \ell^{-} X$ final state. For $v_{\chi}=2000$ GeV we have $\simeq 6\,(4)$ events for $m_{H_{2}^{0}}= 2100\,(2300)$ GeV, corresponding to $\simeq 1.73\,(1.27)\,\sigma$, which is too low for the signals to be detected. To improve the statistical significance of a signal we impose the following cuts: we isolate a hard lepton from the $W$ decay with $p_{T}^{\ell}> 20$ GeV, cut on the missing transverse momentum ${p\!\!\slash}_{T} > 20$ GeV, and apply the $Z$ window cut $|m_{\ell^{+} \ell^{-}} - m_{Z}| > 10$ GeV, which removes events where the leptons come from $Z$ decay [@aguila]. However, all these scenarios can only be settled by careful Monte Carlo work to determine the size of the signal and background.
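The same bookkeeping applies to the $t\bar t$ channel. The sketch below (again a hypothetical cross-check built only from the numbers in the text) reproduces the $\simeq 19$ events and the $\simeq 3.80\,\sigma$ estimate against the 6 background events:

```python
import math

def tt_channel_events(n_h2z, br_h2_tt,
                      br_z_ll=0.102, br_t_bw=0.998, br_w_enu=0.1075):
    # b bbar e+ e- l+ l- X: H2 -> t tbar -> b bbar W+ W-, both W's leptonic,
    # with the associated Z decaying to l+ l-
    return n_h2z * br_h2_tt * br_z_ll * br_t_bw**2 * br_w_enu**2

def significance(s, b):
    # Naive estimator S / sqrt(S + B)
    return s / math.sqrt(s + b)

n = tt_channel_events(3.1e5, 0.051)   # m_H2 = 1100 GeV, v_chi = 1000 GeV
print(round(n), round(significance(round(n), 6), 2))  # -> 19 3.8
```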
We also mention that initial state radiation (ISR) and beamstrahlung (BS) strongly affect the behaviour of the production cross section around the resonance peaks, modifying both its shape and its size [@nicro]. Fig. \[fig3\] therefore shows the cross section with and without ISR + BS around the resonance point $m_{Z^\prime} = 2561.3$ GeV for CLIC. As can be seen, the peak of the resonance shifts to the right and is lowered as a result of the ISR + BS effects.
![Total cross section for the process $e^+ e^- \rightarrow H_2^0 Z$ as a function of center of mass energy ($\sqrt{s}$) with and without ISR + BS (dashed line and solid line respectively) for CLIC at $v_\chi=1.5$ TeV.[]{data-label="fig3"}](Figure3.eps){width="1.1\columnwidth"}
In summary, we have shown in this work that, in the context of the 3-3-1 model, the signatures of the neutral Higgs boson $H_2^0$ can be significant at the CLIC collider: taking $v_{\chi}=1500$ GeV, $m_{H_{2}^{0}}=1600\,(1800)$ GeV and a luminosity of 3000 fb$^{-1}$, we have a $\simeq 5.10\,(4.70)\,\sigma$ discovery in the $b\bar{b} b \bar{b} e^{+} e^{-} X$ final state and $\simeq 5.19\,(4.16)\,\sigma$ in the $b \bar{b} e^{+} e^{-} \ell^{+} \ell^{-} X$ final state.
[99]{}
S. Glashow, Nucl. Phys. [**20**]{} (1961) 579; A. Salam, in Elementary Particle Theory, ed. N. Svartholm, (1968); S. Weinberg, Phys. Rev. Lett. [**19**]{} (1967) 1264.
G. Aad [*et al.*]{}, Phys. Lett. B [**710**]{}, 49 (2012).
G. Aad [*et al.*]{}(ATLAS Collaboration), Phys. Lett. B [**716**]{}, 1 (2012).
J. Wess and B. Zumino, Nucl. Phys. [**B70**]{} (1974) 39.
J. Wess and B. Zumino, Phys. Lett. [**49B**]{} (1974) 52; J. Iliopoulos and B. Zumino, Nucl. Phys. [**B76**]{} (1974) 310; S. Ferrara, J. Iliopoulos and B. Zumino, Nucl. Phys. [**B77**]{} (1974) 413; E. Witten, Nucl. Phys. [**B188**]{} (1981) 513.
S. Dimopoulos and H. Georgi, Nucl.Phys. [**B193**]{} (1985) 150; S. Dimopoulos, S. Raby, and F. Wilczek, Phys. Rev. [D24]{} (1981) 1681; L. Ibañez and G. G. Ross, Phys. Lett. [**105B**]{} (1981) 439.
V. Pleitez and M. D. Tonasse, Phys. Rev. D [**48**]{}, 2353 (1993).
P. H. Frampton, P. Q. Hung and M. Sher, Phys. Rep. [**330**]{}, 263 (2000); F. del Aguila and J.A. Aguilar-Saavedra, Nucl. Phys. [**B813**]{}, 22 (2009); J.E. Cieza Montalvo, O. J.P. Eboli and S.F. Novaes, Phys. Rev. D [**46**]{}, 181 (1992).
F. Pisano and V. Pleitez, Phys. Rev. D [**46**]{}, 410 (1992); R. Foot, O. F. Hernandez, F. Pisano and V. Pleitez, [*ibid*]{} [**47**]{}, 4158 (1993).
H. N. Long and D. V. Soa, Nucl. Phys. B [**601**]{}, 361 (2001).
P. Jain and S. D. Joglekar, Phys. Lett. B [**407**]{}, 151 (1997); D. Ng, Phys Rev. D [**49**]{}, 4805 (1994); A. G. Dias, R. Martinez and V. Pleitez, [*Concerning the Landau pole in 3-3-1 models*]{}, Report number [hep-ph/0407141]{}.
M. D. Tonasse, Phys. Lett. B [**381**]{}, 191 (1996).
N. T. Anh, N. A. Ky, and H. N. Long, Int. J. Mod. Phys. A [**16**]{}, 541 (2001); A. Belyaev, M. Drees, O. J. P. Éboli, J. K. Mizukoshi, and S. F. Novaes, Phys. Rev. D [**60**]{}, 075008 (1999 ).
J. E. Cieza Montalvo and M. D. Tonasse, Phys. Rev. D [**71**]{}, 095015 (2005).
J. E. Cieza Montalvo and M. D. Tonasse, Nucl. Phys. [**B623**]{}, 325 (2002).
P. H. Frampton, Phys. Rev. Lett. [**69**]{}, 2889 (1992).
J. Beringer [*et al.*]{} (Particle Data Group), Phys. Rev. D [**86**]{}, 010001 (2012).
J. E. Cieza Montalvo, Nelson V. Cortez and M. D. Tonasse, Phys. Rev. D [**76**]{}, 117703 (2007).
T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**99**]{}, 171802 (2007); V. M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Lett. [**695B**]{} (2011) 88-94.
arXiv:1405.4123v2 \[hep-ex\], arXiv:1308.5874v1 \[hep-ex\]
A. Freitas, Phys. Rev. D [**70**]{}, 015008 (2004).
J. E. Cieza Montalvo, R. J. Gil Ramírez, G. H. Ramírez Ulloa, A. I. Rivasplata Mendoza and M. D. Tonasse, Phys. Rev. D [**88**]{}, 095020 (2013).
A. Pukhov [*et al.*]{}, hep-ph/9908288.
F. del Aguila, J. A. Aguilar-Saavedra, Nucl. Phys. B 813 (2009) 22; A. G. Akeroyd, Cheng-Wei Chiang, Naveen Gaur, JHEP 1011 (2010) 005.
E. A. Kuraev and V. S. Fadin, Sov. J. Nucl. Phys. [**41**]{}, 466 (1985), \[Yad. Fiz. [**41**]{}, 733 (1985)\]; O. Nicrosini and Luca Trentadue, Phys. Lett. B [**196**]{} 551 (1987); Pisin Chen, Phys. Rev. D [**46**]{}, 1186 (1992); Kaoru Yokoya and Pisin Chen, lecture given at the US-CERN Accelerator School, Hilton Head, Report No. KEK 91-2, 1991 (unpublished); Orhan Cakir, New J.Phys.[**8**]{}, 145 (2006).
Elena Accomando, [*et al.*]{}, Phys. Rev. D 83, 075012(2011).
[^1]: Permanent address: Universidade Estadual Paulista, [*Campus*]{} Experimental de Registro, Rua Nelson Brihi Badur 430, 11900-000 Registro, SP, Brazil
---
abstract: |
Motivated in part by the several observed anomalies involving CP asymmetries of B and $B_s$ decays, we consider the Standard Model with a 4th sequential family (SM4) which seems to offer a rather simple resolution. We initially assume T-invariance by taking the up and down-quark $4 \times 4$ mass matrix to be real. Following Friedberg and Lee (FL), we then impose a “hidden" symmetry on the unobserved (“hidden") up and down-quark SU(2) states. The hidden symmetry for four generations ensures the existence of two zero-mass eigenstates, which we take to be the $(u,c)$ and $(d,s)$ states in the up and down-quark sectors, respectively. Then, we simultaneously break T-invariance and the hidden symmetry by introducing two phase factors in each sector. This breaking mechanism generates the small quark masses $m_u,~m_c$ and $m_d,~m_s$, which, along with the orientation of the hidden symmetry, determine the size of CP-violation in the SM4. For illustration we choose a specific physical picture for the hidden symmetry and the breaking mechanism that reproduces the observed quark masses, mixing angles and CP-violation, and at the same time allows us to further obtain very interesting relations/predictions for the mixing angles of $t$ and $t^\prime$. For example, with this choice we get $V_{td} \sim (V_{cb}/V_{cd} - V_{ts}/V_{us}) + {\cal O}(\lambda^2)$ and $V_{t^\prime b} \sim V_{t^\prime d} \cdot (V_{cb}/V_{cd})$, $V_{t b^\prime} \sim V_{t^\prime d} \cdot (V_{ts}/V_{us})$, implying that $V_{t^\prime d} > V_{t^\prime b},V_{t b^\prime}$. 
We furthermore find that the Cabibbo angle is related to the orientation of the hidden symmetry and that the key CP-violating quantity of our model at high energies, $J_{SM4} \equiv {\rm Im} \left( V_{tb} V_{t^\prime b}^\star V_{t^\prime b^\prime} V_{t b^\prime}^\star \right)$, which is the high-energy analogue of the Jarlskog invariant of the SM, is proportional to the light-quark masses and the measured CKM angles: $| J_{SM4} | \sim A^3 \lambda^5 \times ( \sqrt{ m_u/m_t } + \sqrt{ m_c/m_{t^\prime} } - \sqrt{ m_d/m_b } + \sqrt{ m_s/m_{b^\prime}}) \sim 10^{-5}$, where $A \sim 0.81$ and $\lambda=0.2257$ are the Wolfenstein parameters. Other choices for the orientation of the hidden symmetry and/or the breaking mechanism may lead to different physical outcomes. A general solution, obtained numerically, will be presented in a forthcoming paper.
---
[**Extended Friedberg Lee hidden symmetries, quark masses and CP-violation with four generations**]{}
Shaouly Bar-Shalom$^{a}$[^1], David Oaknin$^{a}$[^2], Amarjit Soni$^b$[^3]
*$^a$Physics Department, Technion-Institute of Technology, Haifa 32000, Israel*
*$^b$Theory Group, Brookhaven National Laboratory, Upton, NY 11973, USA*
Introduction
============
In spite of the success of the Standard Model (SM) in explaining almost all of the observed phenomena in particle physics, it does not address some fundamental issues, such as the hierarchy problem, dark matter, the matter-antimatter asymmetry in the universe, etc. Also unexplained are the issues in flavor physics, such as the hierarchy of fermion masses and the number of families. There are strong indications, from both the theoretical and experimental points of view, that some of these unresolved questions are related to new physics, perhaps at the nearby TeV scale. It is, therefore, hoped that, with the LHC turning on very soon, we will get a first-hand glimpse of the new physics at the TeV scale and new hints from nature about some of these issues, in particular the physics of flavor.
In this paper we wish to study some of the fundamental unresolved issues of flavor within a simple extension of the SM in which a fourth sequential family of fermions is added - the SM4. Indeed, the four-generation scenario can play an important role in flavor physics [@oldsoni], and has recently gained new interest as it might shed light on baryogenesis and on CP-violation in K, B and $B_s$ decays [@hou2006; @soni4gen; @CPbaryo1; @CPbaryo2; @kribs]. This model, which can be regarded as an effective low-energy description of some more fundamental underlying theory at higher energy, retains all the features of the SM with three generations (which from here on we will denote as SM3), except that it brings into existence the new heavy fermionic members $t^\prime$ and $b^\prime$, which form the 4th quark doublet, and a similar leptonic doublet, where the “neutrino" of the 4th family must also be rather heavy, with mass $\gsim M_Z/2$. This may well be an important clue that the underlying nature of the 4th family is quite different from that of the first three families. This line of thinking may in fact lead to a dark matter candidate [@0310006].
The addition of the fourth generation to the SM3 means that the CKM matrix can now potentially have six independent real parameters/angles and three physical CP-violating phases [@jarlskog]. The two additional phases (with respect to the SM3) provide new sources of CP-violation and may, thus, give rise to new CP-violating effects. Indeed, in a recent paper [@soni4gen], it was shown that a fourth family of quarks with $m_{t^\prime}$ in the range of $\sim 400 - 600$ GeV provides a simple and perhaps rather natural explanation for the several indications of new physics [@newsoni] that have been observed involving CP asymmetries in b-quark systems, and this in fact forms an important motivation for our work. Such heavy fermionic states point to the interesting possibility that the 4th family may play a role in dynamical electroweak symmetry breaking (EWSB), since the mechanism of dynamical mass generation seems to require such heavy masses [@dynEWSB; @dynEWSB2]. In addition, as mentioned above, the new CP-violating phases may play an important role in generating the baryon asymmetry in the universe [@CPbaryo1; @CPbaryo2], which is difficult to address within the SM3.
We note in passing that a 4th generation of quarks (and leptons) with such heavy masses is not ruled out by precision electroweak constraints, but rather requires that correspondingly the Higgs has to be heavier, $\gsim$ 300 GeV [@kribs].
In a recent paper [@FL] that also partly motivated the present work, Friedberg and Lee (FL) suggested a very interesting new approach to the generation of CP-violation and quark masses in the SM3: a weakly broken symmetry operating on the SU(2) (weak) fermionic states relates the smallness of CP-violation to the smallness of the light-quark masses $m_d$ and $m_u$. More specifically, they imposed a “hidden" symmetry on the weak states of the quarks (referred to henceforth as the “hidden" frame), which is then weakly broken by small CP-phases that generate the non-zero masses of the light quarks u and d. They found a very interesting relation between CP-violation and the light-quark masses: $$J_{SM} \propto \sqrt{\frac{m_d m_s}{m_b^2}} \label{FLrelation}~,$$ where $J_{SM}$ is the Jarlskog invariant responsible for CP-violation in the SM3 [@jarlskog].
The main appealing feature of the FL mechanism is that the CP-violating phases are the small parameters that control the breaking of the hidden symmetry and are, therefore, the generators of the small masses of the first generation quarks. Unlike the conventional SM3 picture, the FL mechanism gives a physical meaning to the rotations of the quark fields (i.e., from the weak basis to the physical mass eigenstates basis) in the up and down quark sector separately, since there is an independent hidden symmetry for each sector.
As we will show in this paper, the idea of FL and their main result in Eq. \[FLrelation\] is extremely interesting when applied to the SM4 case, and our extension will give it predictive power. In particular, with an appropriate choice of a hidden symmetry, it allows one to generate [*all four masses*]{} of the $u,d,c$ and $s$-quarks in terms of the masses of the four heavy quarks $b,t,b^\prime$ and $t^\prime$ and the new CP-phases. It also gives distinct predictions for the 4th generation mixing angles and for the size of CP-violation in this theory, subject to the constraints coming from existing data on the SM3’s $3 \times 3$ CKM matrix and quark masses. Thus, the hidden symmetry framework for the SM4 case can be directly tested in collider experiments. In particular, we give distinct predictions for the new mixing angles and for the size of the new CP-violating quantities associated with the dynamics of the 4th generation quarks.
On the other hand, the construction of a hidden symmetry for the SM4 case, and the generation of the four light-quark masses in conjunction with T-violation, is more challenging and analytically involved than in the case of the SM3. This is mainly because the parameter space of the hidden symmetry in the SM4 case is much broader and because, as opposed to the FL mechanism for the SM3, where the CP-phases generate only the masses of the 1st generation fermions, here we use the new CP-phases (of the SM4) as generators of all four light-quark masses $m_d,m_u,m_s,m_c$, which makes it more difficult to find a physical solution. To put it another way, our hidden symmetry for the SM4 case defines a plane in which the theory is invariant, whereas for three families the symmetry is “one-dimensional", i.e., defines a direction/vector.
In order to spell out our notation and the general formalism of the hidden symmetry and its breaking mechanism within the SM4, we first consider the $4 \times 4$ up and down-quark Yukawa terms in the SM4 (after EWSB):
$$\begin{aligned}
{\cal M}(q^{u,d}) =
\left(q^{u,d}_1,~q^{u,d}_2,~q^{u,d}_3,~q^{u,d}_4 \right) M(q^{u,d})
\left( \begin{array}{c}
q^{u,d}_1 \\ q^{u,d}_2 \\ q^{u,d}_3 \\ q^{u,d}_4 \end{array} \right) ~,\end{aligned}$$
where $q^{u,d}_i$, $i = 1-4$, are the hidden SU(2) quark states of the SM4, and $M(q^{u,d})$ are the corresponding mass matrices in the hidden frame basis.
As our zeroth-order approximation we assume invariance under time reversal, thus taking $M_0(q^{u,d})$ (the subscript $0$ will henceforth denote zeroth-order quantities) to be real and symmetric. We can then extend FL’s idea to the case of the SM4 by “doubling" the hidden symmetry in each quark sector (in the following we drop the indices $u$ and $d$; unless stated otherwise, it is understood that the discussion below applies to both the up and down sectors):
$$\begin{aligned}
&&q_1 \to q_1 + \delta^1_z z + \delta^1_t t~, \nonumber \\
&&q_2 \to q_2 + \delta^2_z z + \delta^2_t t ~, \nonumber \\
&&q_3 \to q_3 + \delta^3_z z + \delta^3_t t ~, \nonumber \\
&&q_4 \to q_4 + \delta^4_z z + \delta^4_t t \label{HS} ~,\end{aligned}$$
where $z$ and $t$ are space-time independent constants of Grassmann algebra anticommuting with the Dirac field operators, and $\delta^i_z,~\delta^i_t$ are c-numbers.
Since $M_0(q)$ is a real symmetric $4 \times 4$ matrix, it is characterized in general by 10 real parameters. However, imposing the hidden symmetry in Eq. \[HS\] eliminates 2 of the 10 parameters. The hidden symmetry of Eq. \[HS\] ensures (under the invariance of ${\cal M}_0(q^{u,d})$) the existence of two massless quark states in each sector, which we will identify as $m_u$ and $m_c$ (in the up-quark sector) and as $m_d$ and $m_s$ (in the down-quark sector). The corresponding two massless eigenvectors of $M_0(q)$ are thus identified as the zeroth-order $u$ and $c$ states, $v_u^0$ and $v_c^0$ (with $m_u^0,~m_c^0=0$) and in the down-quark sector as the zeroth-order $d$ and $s$ states, $v_d^0$ and $v_s^0$ (with $m_d^0,~m_s^0=0$). That is, since nature proves to have a large hierarchical mass structure in the quark sector, we will consider the SM4 in the chiral limit for the first two generations of quarks - $m_{u,d,c,s} =0$. Accordingly, the two massive eigenvectors are identified as the zeroth-order $t$ and $t^\prime$ states (or $b$ and $b^\prime$ states) $v_t^0$ and $v_{t^\prime}^0$ , (or $v_b^0$ and $v_{b^\prime}^0$) with masses (i.e., eigenvalues) $m^0_t,~m^0_{t^\prime}$ (or $m^0_b,~m^0_{b^\prime}$). In particular, it is easy to show that in the hidden basis $\{q_1,q_2,q_3,q_4\}$ the massless eigenvectors span a 2-dimensional subspace of the form:
$$\begin{aligned}
v_u^0,v_c^0 \in \left( \begin{array}{c}
\delta^1_z \\ \delta^2_z \\ \delta^3_z \\ \delta^4_z \end{array} \right) ,
\left( \begin{array}{c}
\delta^1_t \\ \delta^2_t \\ \delta^3_t \\ \delta^4_t \end{array} \right) ~,\end{aligned}$$
and similarly in the down-quark sector.
The next step towards establishing the complete physical picture of quark masses and mixings is to simultaneously break T-invariance and the hidden symmetry by inserting two new phase factors into $M_0$, in each sector. In the following we will construct a general framework that defines the hidden symmetry in the SM4 scenario in a form that emphasizes the underlying geometrical picture, and, then, give a concrete physical example for the breaking mechanism.
Hidden symmetry, T-invariance and the zeroth-order spectrum for the SM4
=======================================================================
In a generalization of the FL idea to the case of the SM4, let us assume, at the first stage that the zeroth-order mass matrix $M_0$ is real and invariant under the following translational symmetry (we will denote this symmetry as Hidden Symmetry 1, HS1) $$\begin{aligned}
\nonumber
q_1 & \rightarrow & q_1 + c_\theta z ~,\nonumber \\
q_2 & \rightarrow & q_2 + s_\theta c_\phi z ~,\nonumber \\
q_3 & \rightarrow & q_3 + s_\theta s_\phi c_\omega z ~, \nonumber \\
q_4 & \rightarrow & q_4 + s_\theta s_\phi s_\omega z \label{HS1} ~.\end{aligned}$$ where $c_\theta,s_\theta = \cos\theta, \sin\theta$ etc., and $z$ is a space-time independent constant of Grassmann algebra anticommuting with the Dirac fields.
This symmetry guarantees that the vector $$\begin{aligned}
Q_1 = c_\theta q_1 + s_\theta c_\phi q_2 + s_\theta s_\phi c_\omega q_3 + s_\theta s_\phi s_\omega q_4 \label{Q0} ~,\end{aligned}$$ is a massless eigenstate of the theory, as under the HS1 it transforms as $Q_1 \rightarrow Q_1 + z$. On the other hand, the three orthogonal (to $Q_1$) vectors $$\begin{aligned}
Q_2 &=& -s_\theta q_1 + c_\theta c_\phi q_2 + c_\theta s_\phi c_\omega q_3 + c_\theta s_\phi s_\omega q_4 \nonumber \\
Q_3 &=& -s_\phi q_2 + c_\phi c_\omega q_3 + c_\phi s_\omega q_4 \nonumber \\
Q_4 &=& -s_\omega q_3 + c_\omega q_4 \label{Q123}~,\end{aligned}$$ are invariant under the HS1, i.e., $Q_i \to Q_i$ for $i=2,3,4$. The rotation from the hidden frame $\{q_1,q_2,q_3,q_4\}$ to the HS1 frame $\{Q_1,Q_2,Q_3,Q_4\}$ can be written as $Q_i = R_{ij} q_j$, thus defining the real unitary matrix $R$:
$$\begin{aligned}
R =\left( \begin{array}{cccc}
c_\theta & s_\theta c_\phi & s_\theta s_\phi c_\omega & s_\theta s_\phi s_\omega \\
-s_\theta & c_\theta c_\phi & c_\theta s_\phi c_\omega & c_\theta s_\phi s_\omega \\
0 & -s_\phi & c_\phi c_\omega & c_\phi s_\omega \\
0 & 0 & -s_\omega & c_\omega
\end{array} \right) \label{Rmatrix}~.\end{aligned}$$
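As a quick numerical illustration (a sketch, not part of the derivation), the matrix $R$ can be built directly from Eq. \[Rmatrix\] and checked to be orthogonal, so that under HS1 only the component along $Q_1$ is shifted:

```python
import numpy as np

def R_matrix(theta, phi, omega):
    """Rotation from the hidden frame {q_i} to the HS1 frame {Q_i}, Eq. (Rmatrix)."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    cw, sw = np.cos(omega), np.sin(omega)
    return np.array([
        [ ct,  st*cp, st*sp*cw, st*sp*sw],
        [-st,  ct*cp, ct*sp*cw, ct*sp*sw],
        [0.0, -sp,    cp*cw,    cp*sw   ],
        [0.0,  0.0,  -sw,       cw      ],
    ])

R = R_matrix(0.3, 0.7, 1.1)              # arbitrary test angles
assert np.allclose(R @ R.T, np.eye(4))   # the Q_i form an orthonormal frame

# the HS1 shift (delta_z^1, ..., delta_z^4) is the first row of R,
# so in the Q-frame the translation acts as Q -> Q + e_1 z:
assert np.allclose(R @ R[0], [1.0, 0.0, 0.0, 0.0])
```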
Demanding translational invariance under HS1 of Eq. \[HS1\], $M_0$ has only one massless eigenstate (the state $Q_1$). Thus, in order to enforce the chiral limit for the first two generations, we will demand that the zeroth-order mass matrix is invariant under an additional translation operation, which is operational in the HS1 frame $\{Q_1,Q_2,Q_3,Q_4\}$ and which we will name Hidden Symmetry 2 (HS2). Without loss of generality, we assume that HS2 is orthogonal to HS1 as follows:
$$\begin{aligned}
\nonumber
Q_1 & \rightarrow & Q_1 ~, \nonumber \\
Q_2 & \rightarrow & Q_2 + c_\zeta t ~, \nonumber\\
Q_3 & \rightarrow & Q_3 + s_\zeta c_\eta t ~, \nonumber \\
Q_4 & \rightarrow & Q_4 + s_\zeta s_\eta t \label{HS2} ~.\end{aligned}$$
The additional symmetry HS2 guarantees that the vector $$P_1 = c_\zeta Q_2 + s_\zeta c_\eta Q_3 + s_\zeta s_\eta Q_4
\label{P0} ~,$$ which is orthogonal to $Q_1$, is also massless.
The most general form of the Yukawa term ${\cal M}_{0}$ that is invariant under the independent translations in both directions HS1 and HS2, can then be written as: $$\begin{aligned}
{\cal M}_{0} = \alpha | c_\eta Q_4 - s_\eta Q_3 |^2 +
\beta | c_\zeta Q_4 - s_\zeta s_\eta Q_2 |^2 + \gamma | c_\zeta Q_3 - s_\zeta c_\eta Q_2|^2
\label{zeroth_order_matrx}~,\end{aligned}$$ and this defines the quark mass matrix $M_0$. Recall that, since $M_0$ is invariant under HS1 and HS2, two of its four eigenstates, i.e., $Q_1$ and $P_1$, are necessarily massless.
Before deriving the full zeroth-order system (i.e., 2 non-zero masses and 4 states), we wish to point out the mapping of our double hidden symmetry (HS1 and HS2) to the generic parameterizations of the hidden symmetry in Eq. \[HS\]. In particular, using the definition for HS1 and HS2 in Eqs. \[HS1\] and \[HS2\], respectively, and the fact that $q = R^{-1}Q$, we obtain the overall hidden symmetry for the SM4 case: $$\begin{aligned}
\nonumber
q_1 & \rightarrow & q_1 + c_\theta z - s_\theta c_\zeta t ~, \nonumber \\
q_2 & \rightarrow & q_2 + s_\theta c_\phi z + \left[c_\theta c_\phi c_\zeta - s_\phi s_\zeta c_\eta \right] t ~, \nonumber \\
q_3 & \rightarrow & q_3 + s_\theta s_\phi c_\omega z +
\left[c_\theta s_\phi c_\omega c_\zeta + c_\phi c_\omega s_\zeta c_\eta - s_\omega s_\zeta s_\eta \right] t ~, \nonumber \\
q_4 & \rightarrow & q_4 + s_\theta s_\phi s_\omega z +
\left[c_\theta s_\phi s_\omega c_\zeta + c_\phi s_\omega s_\zeta c_\eta + c_\omega s_\zeta s_\eta \right] t \label{fullHS} ~,\end{aligned}$$ from which one can extract the hidden symmetry parameters $\delta_z^i$ and $\delta_t^i$ of Eq. \[HS\], as a function of the angles which define the orientations of HS1 and HS2 with respect to the hidden frame $\{ q_1,q_2,q_3,q_4 \}$.
Note that the expression for ${\cal M}_{0}$ in Eq. \[zeroth\_order\_matrx\] contains five angles: the two (explicit) angles $\zeta,\eta$ associated with the orientation of HS2 with respect to the HS1 frame $\{Q_1, Q_2, Q_3, Q_4 \}$ and the three angles $\theta,\phi,\omega$ associated with the orientation of HS1 with respect to the hidden frame $\{q_1, q_2, q_3, q_4 \}$, which enter through the rotation $Q=Rq$. Thus, along with the parameters $\alpha,\beta$ and $\gamma$, ${\cal M}_{0}$ in Eq. \[zeroth\_order\_matrx\] is parameterized by 8 real parameters (in each sector) as required when imposing the double hidden symmetry (see discussion above). However, there is one non-physical angle in [*each sector*]{} which results from the fact that the two orthogonal states $Q_1,P_1$ are massless at zeroth-order and are, therefore, indistinguishable. This can be easily understood by considering the geometrical interpretation of the hidden symmetry in the SM4 case. In particular, the double hidden symmetry (HS1+HS2) defines a plane in the hidden frame $\{q_1, q_2, q_3, q_4 \}$ under which the theory is invariant. This is the plane spanned by the two orthogonal vectors $Q_1$ and $P_1$. We, therefore, have the freedom to make any unitary transformation in the $Q_1-P_1$ plane/subspace (in both up and down-quark sectors) without affecting the physical picture. This allows us to eliminate one angle in each of the $(v_d^0,v_s^0)$ and $(v_u^0,v_c^0)$ subspaces. Thus, without loss of generality we find it convenient to choose $\omega=\pi/2$ in both sectors, which sets $Q_4=q_3$ and $Q_1,Q_2,Q_3 \perp q_3$. This is analogous to a gauge condition in a vector field theory as also identified in [@FL]. Note that even though at each sector the massless states $(v_d^0,v_s^0)$ and $(v_u^0,v_c^0)$ are indistinguishable at the zeroth-order, as we will see in the next section, after breaking the hidden symmetry this degeneracy is removed, and those (now massive) states become well defined.
We are now ready to derive the mass spectrum and the $4 \times 4$ CKM matrix at zeroth-order, i.e., without T-violation. Recall that, by construction, there are two massless states, given by $Q_1$ and $P_1$. In order to find the 2 massive states we can apply the original FL formulae for three generations to the $\{ Q_2, Q_3, Q_4 \}$ subspace. As in [@FL], we find that the eigensystem of $M_{0}$ depends only on two linear combinations of $\alpha,\beta,\gamma$, so that one of these three parameters can be “gauged away". Following the choice of FL in [@FL], we eliminate the parameter $\gamma$ using the “gauge" condition (i.e., this has no effect on the physical outcome): $$\begin{aligned}
\frac{\beta}{\gamma} = 1 \label{gauge1}~.\end{aligned}$$
Using this condition, we diagonalize the mass matrix $M_0$ and find that the two massive states are: $$\begin{aligned}
P_2 &=& -s_\zeta Q_2 + c_\zeta c_\eta Q_3 + c_\zeta s_\eta Q_4 ~, \nonumber \\
P_3 &=& -s_\eta Q_3 + c_\eta Q_4 \label{P23}~,\end{aligned}$$ with masses: $$\begin{aligned}
m_{P_2}&=& \beta ~, \\
m_{P_3}&=&\alpha + c_\zeta^2 \beta =
\alpha + c_\zeta^2 m_{P_2} \label{M23}~.\end{aligned}$$
Note that, for $m_{P_3} \gg m_{P_2}$ and/or $c_\zeta \to 0$, we have $m_{P_3} \approx \alpha$ and $m_{P_2} \approx \beta$ (see below).
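These statements are easy to verify numerically. The sketch below (an illustration with arbitrary test values, not part of the derivation) builds $M_0$ in the $\{Q_2,Q_3,Q_4\}$ subspace from Eq. \[zeroth\_order\_matrx\] with the gauge choice $\beta=\gamma$ and confirms the spectrum $\{0,\ \beta,\ \alpha+c_\zeta^2\beta\}$ with eigenvectors $P_1,P_2,P_3$:

```python
import numpy as np

alpha, beta = 5.0, 2.0            # arbitrary test values; gamma = beta (gauge choice)
zeta, eta = 0.4, 0.9
cz, sz = np.cos(zeta), np.sin(zeta)
ce, se = np.cos(eta), np.sin(eta)

# the three directions appearing in M_0, written in the (Q2, Q3, Q4) basis
va = np.array([0.0, -se, ce])     # c_eta Q4 - s_eta Q3
vb = np.array([-sz*se, 0.0, cz])  # c_zeta Q4 - s_zeta s_eta Q2
vc = np.array([-sz*ce, cz, 0.0])  # c_zeta Q3 - s_zeta c_eta Q2

M0 = alpha*np.outer(va, va) + beta*np.outer(vb, vb) + beta*np.outer(vc, vc)

P1 = np.array([cz, sz*ce, sz*se])   # massless state (protected by HS2)
P2 = np.array([-sz, cz*ce, cz*se])  # mass beta
P3 = np.array([0.0, -se, ce])       # mass alpha + cz^2 * beta

assert np.allclose(M0 @ P1, np.zeros(3))
assert np.allclose(M0 @ P2, beta * P2)
assert np.allclose(M0 @ P3, (alpha + cz**2 * beta) * P3)
```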
Thus the complete set of eigenstates of $M_0$ at zeroth-order becomes quite simple, as it is given by $\{ Q_1, P_1, P_2, P_3 \}$ with masses $\{ 0, 0, m_{P_2}, m_{P_3} \}$, which we henceforth identify (in each sector) as the zeroth-order quark states: $$\begin{aligned}
\{ v_d^0, v_s^0, v_b^0, v_{b^\prime}^0 \}
&\equiv& \{ Q_1^d, P_1^d, P_2^d, P_3^d \} \label{rel1}~, \\
\{ v_u^0, v_c^0, v_t^0, v_{t^\prime}^0 \}
&\equiv& \{ Q_1^u, P_1^u, P_2^u, P_3^u \} \label{rel2}~,\end{aligned}$$ with masses $m_d^0=m_s^0=m_u^0=m_c^0=0$ and: $$\begin{aligned}
&&m_b^0 =\beta_d,~ m_{b^\prime}^0 \approx \alpha_d ~, \nonumber \\
&&m_t^0 =\beta_u,~ m_{t^\prime}^0 =
\alpha_u + c_{\zeta_u}^2 m_t^0 \label{zeromasses}~,\end{aligned}$$ where the superscripts $d$ and $u$ distinguish between the parameters in the down-quark and up-quark sectors, respectively. Note that since T-violation is responsible for generating the light-quark masses, it is a small perturbation to the T-invariant zeroth-order spectrum. Thus for all practical purposes we can set: $m_b \approx m_b^0$, $m_{b^\prime} \approx m_{b^\prime}^0$, $m_t \approx m_t^0$ and $m_{t^\prime} \approx m_{t^\prime}^0$ (see also below).
Using the orientation of the HS1 frame $Q_i$ with respect to the hidden frame $q_i$, i.e., $Q=Rq$ with $R$ given in Eq. \[Rmatrix\], and the orientation of the states $P_{2},P_{3}$ with respect to the $\{ Q_2, Q_3, Q_4 \}$ subframe (as given in Eq. \[P23\]), we can write the set of four eigenstates in each sector in terms of the weak (hidden) states $q_i$ (as required in order to derive the zeroth-order (real) $4\times 4$ CKM matrix): $$\begin{aligned}
\left( \begin{array}{c}
v_d^0 \\ v_s^0 \\ v_b^0 \\ v_{b^\prime}^0 \end{array} \right) =
\left( \begin{array}{c}
R^d_{1i} \\ A^d_i \\ B^d_i \\ C^d_i \end{array} \right) q^d_i ~,~
\left( \begin{array}{c}
v_u^0 \\ v_c^0 \\ v_t^0 \\ v_{t^\prime}^0 \end{array} \right) =
\left( \begin{array}{c}
R^u_{1i} \\ A^u_i \\ B^u_i \\ C^u_i \end{array} \right) q^u_i \label{rel3} ~,\end{aligned}$$ where the superscripts $u$ and $d$ are again added in order to distinguish between the angles associated with the up and down-quark sectors, respectively. Also, $$\begin{aligned}
A^d_i &\equiv& \cos\zeta_d \cdot R^d_{2i} + \sin\zeta_d\cdot \cos\eta_d\cdot R^d_{3i} + \sin\zeta_d\cdot \sin\eta_d\cdot R^d_{4i} \label{ai} ~ \\
B^d_i &\equiv& -\sin\zeta_d \cdot R^d_{2i} + \cos\zeta_d\cdot\cos\eta_d\cdot R^d_{3i} + \cos\zeta_d\cdot \sin\eta_d\cdot R^d_{4i} \label{bi} ~, \\
C^d_i &\equiv& -\sin\eta_d\cdot R^d_{3i} + \cos\eta_d\cdot R^d_{4i} \label{ci}~,\end{aligned}$$ and similarly for $A^u_i,B^u_i,C^u_i$ using $R^u$ and $\zeta_u,\eta_u$.
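Since $R$ is orthogonal and the $\zeta,\eta$ rotation of Eq. \[P23\] is orthogonal as well, the four rows $\{R_{1i}, A_i, B_i, C_i\}$ form an orthonormal set. A minimal numerical sketch of this property (Python/NumPy is our choice here, not the paper's; $R$ is a random orthogonal stand-in for the matrix of Eq. \[Rmatrix\]):

```python
import numpy as np

# Sketch: for any orthogonal R and any angles (zeta, eta), the rows
# {R_1, A, B, C} defined in Eqs. (ai)-(ci) form an orthonormal set.
# NumPy uses 0-based indexing, so R[1] is the row R_{2i}, etc.
rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal 4x4 matrix
zeta, eta = rng.uniform(0, 2 * np.pi, 2)
cz, sz, ce, se = np.cos(zeta), np.sin(zeta), np.cos(eta), np.sin(eta)

A = cz * R[1] + sz * ce * R[2] + sz * se * R[3]
B = -sz * R[1] + cz * ce * R[2] + cz * se * R[3]
C = -se * R[2] + ce * R[3]

T = np.vstack([R[0], A, B, C])
orthonormality_defect = np.max(np.abs(T @ T.T - np.eye(4)))
```

The defect is zero up to rounding, confirming that $\{Q_1, P_1, P_2, P_3\}$ is a complete orthonormal basis in each sector.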
Then denoting by $D_0 = (v_d^0, v_s^0, v_b^0, v_{b^\prime}^0 )$ and $U_0 = (v_u^0, v_c^0, v_t^0, v_{t^\prime}^0)$ the unitary matrices that diagonalize the real and symmetric mass matrices in the down and up-quark sectors, respectively: $$\begin{aligned}
D_0^\dagger M_0(q^d) D_0 &=& {\rm diag}(0,0,m_b^0,m_{b^\prime}^0) ~, \\
U_0^\dagger M_0(q^u) U_0 &=& {\rm diag}(0,0,m_t^0,m_{t^\prime}^0) ~,\end{aligned}$$ we can obtain the $4\times 4$ zeroth-order CKM matrix of the SM4 (i.e., without T-violation): $$\begin{aligned}
V^0(CKM) = U_0^\dagger D_0 \label{CKM0}~.\end{aligned}$$ The general expression for $V^0(CKM)$ in terms of the angles that define the hidden symmetry in the up and down-quark sectors is too cumbersome to write here. Let us, therefore, choose a specific physical orientation of the hidden symmetry, in which the direction of HS2 is partly fixed by setting $\zeta = \omega = \pi/2$ in each sector (recall that the angle $\omega = \pi/2$ was fixed in a manner similar to choosing a gauge). This orientation is physically viable in the sense that it reproduces the observed light-quark masses and the measured CKM mixing angles. It will be used in the next sections to demonstrate the general mechanism for breaking the hidden symmetry and T-invariance and the corresponding generation of the light-quark masses.
In particular, using Eqs. \[rel1\]-\[CKM0\] with $\zeta =\omega=\pi/2$ we obtain: $$\begin{aligned}
V_{ud}^0 &=& c_{\theta_u} c_{\theta_d} + s_{\theta_u} s_{\theta_d} \cos(\phi_u-\phi_d) ~,\nonumber\\
V_{us}^0 &=& s_{\theta_u} c_{\eta_d} \sin(\phi_u-\phi_d) ~, \nonumber \\
V_{ub}^0 &=& c_{\theta_u} s_{\theta_d} - s_{\theta_u} c_{\theta_d} \cos(\phi_u-\phi_d) ~,\nonumber\\
V_{ub^\prime}^0 &=& - s_{\theta_u} s_{\eta_d} \sin(\phi_u-\phi_d) ~,\nonumber\\
V_{cd}^0&=&- s_{\theta_d} c_{\eta_u} \sin(\phi_u-\phi_d) ~,\nonumber\\
V_{cs}^0 &=& s_{\eta_u} s_{\eta_d} + c_{\eta_u} c_{\eta_d} \cos(\phi_u-\phi_d) ~,\nonumber\\
V_{cb}^0&=& c_{\eta_u} c_{\theta_d} \sin(\phi_u-\phi_d) ~,\nonumber\\
V_{cb^\prime}^0 &=& s_{\eta_u} c_{\eta_d} - c_{\eta_u} s_{\eta_d} \cos(\phi_u-\phi_d) ~,\nonumber\\
V_{td}^0&=& c_{\theta_d} s_{\theta_u} - s_{\theta_d} c_{\theta_u} \cos(\phi_u-\phi_d) ~,\nonumber\\
V_{ts}^0&=& -c_{\eta_d} c_{\theta_u} \sin(\phi_u-\phi_d) ~,\nonumber\\
V_{tb}^0 &=& s_{\theta_u} s_{\theta_d} + c_{\theta_u} c_{\theta_d} \cos(\phi_u-\phi_d) ~,\nonumber\\
V_{tb^\prime}^0&=& s_{\eta_d} c_{\theta_u} \sin(\phi_u-\phi_d) ~,\nonumber\\
V_{t^\prime d}^0&=& s_{\theta_d} s_{\eta_u} \sin(\phi_u-\phi_d) ~,\nonumber\\
V_{t^\prime s}^0&=&s_{\eta_d} c_{\eta_u} - c_{\eta_d} s_{\eta_u} \cos(\phi_u-\phi_d) ~,\nonumber\\
V_{t^\prime b}^0&=& - s_{\eta_u} c_{\theta_d} \sin(\phi_u-\phi_d) ~,\nonumber\\
V_{t^\prime b^\prime}^0 &=& c_{\eta_d} c_{\eta_u} + s_{\eta_d} s_{\eta_u} \cos(\phi_u-\phi_d) \label{CKMel}~.\end{aligned}$$
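Since $U_0$ and $D_0$ are orthogonal at zeroth order, the matrix built from Eq. \[CKMel\] must be orthogonal for any choice of the angles. A numerical sketch of this consistency check (an illustration we add here, not part of the original derivation):

```python
import numpy as np

# Build V^0 entry-by-entry from Eq. (CKMel) for random hidden-symmetry
# angles and verify it is orthogonal, as a real 4x4 mixing matrix must be.
rng = np.random.default_rng(0)
th_u, th_d, eta_u, eta_d, dphi = rng.uniform(0, 2 * np.pi, 5)
cu, su = np.cos(th_u), np.sin(th_u)
cd, sd = np.cos(th_d), np.sin(th_d)
ceu, seu = np.cos(eta_u), np.sin(eta_u)
ced, sed = np.cos(eta_d), np.sin(eta_d)
c, s = np.cos(dphi), np.sin(dphi)  # cos/sin of (phi_u - phi_d)

V0 = np.array([
    [cu * cd + su * sd * c, su * ced * s, cu * sd - su * cd * c, -su * sed * s],
    [-sd * ceu * s, seu * sed + ceu * ced * c, ceu * cd * s, seu * ced - ceu * sed * c],
    [cd * su - sd * cu * c, -ced * cu * s, su * sd + cu * cd * c, sed * cu * s],
    [sd * seu * s, sed * ceu - ced * seu * c, -seu * cd * s, ced * ceu + sed * seu * c],
])

unitarity_defect = np.max(np.abs(V0 @ V0.T - np.eye(4)))
```

The defect vanishes to machine precision for arbitrary angles, so Eq. \[CKMel\] is indeed a consistent zeroth-order mixing matrix.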
From these expressions we can find the size of some of the hidden symmetry angles in terms of the observed $3 \times 3$ CKM elements and, also, several interesting and surprising relations/predictions for the mixing angles of the 4th generation quarks with the first 3 generations: $$\begin{aligned}
- \tan\theta_u &=& \frac{V_{us}}{V_{ts}} = \frac{V_{ub^\prime}}{V_{tb^\prime}} \label{eqV1}~,\\
- \tan\theta_d &=& \frac{V_{cd}}{V_{cb}} = \frac{V_{t^\prime d}}{V_{t^\prime b}} \label{eqV2}~,\\
- \tan\eta_u &=& \frac{V_{t^\prime d}}{V_{cd}} ~,\\
- \tan\eta_d &=& \frac{V_{u b^\prime}}{V_{us}} \label{eqV4} ~,\end{aligned}$$ implying $V_{t^\prime d} > V_{t^\prime b}$ and $V_{ub^\prime} > V_{tb^\prime}$, opposite to the hierarchical pattern observed in the SM3’s $3 \times 3$ block.
In addition, taking $V_{ts}^2/V_{us}^2 \sim V_{cb}^2/V_{cd}^2 \ll 1$, $V_{ud} \sim 1 - \lambda^2/2$ and $V_{cs} \sim 1 - \lambda^2/2$, where $\lambda \sim 0.2257$ is the Wolfenstein parameter [@PDG], we find that $\phi_u-\phi_d$ is the Cabibbo angle (i.e., the Wolfenstein parameter) with: $$\begin{aligned}
\sin(\phi_u - \phi_d) &\sim& \lambda \sim 0.2257 ~, \\
\cos(\phi_u - \phi_d) &\sim& V_{ud} - {\cal O}(\lambda^2) ~,\end{aligned}$$ and $$\begin{aligned}
c_{\theta_d} &\sim& \frac{V_{cb}}{V_{cd}} \sim {\cal O}(\lambda) \label{ctetd}~\\
c_{\theta_u} &\sim& \frac{V_{ts}}{V_{us}} \sim {\cal O}(\lambda) \label{ctetu}~\\
\cos(\eta_u - \eta_d) &\sim& V_{cs} - {\cal O}(\lambda^2) \label{ceta}~,\end{aligned}$$ also implying that $\eta_u \sim \eta_d$. This in turn gives: $$\begin{aligned}
V_{t^\prime b^\prime} &\sim& V_{cs} ~\\
V_{u b^\prime} &\sim& V_{t^\prime d} \label{vubprime}~.\end{aligned}$$
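As a sanity check on these order-of-magnitude relations, one can plug in measured CKM magnitudes. The values below are approximate PDG-era central values and are an assumption on our part:

```python
# Quick numeric reading of Eqs. (ctetd)-(ctetu) with measured CKM
# magnitudes (approximate central values, assumed for illustration).
V_cd, V_cb = 0.2252, 0.0412
V_us, V_ts = 0.2253, 0.0403
lam = 0.2257  # Wolfenstein parameter

c_theta_d = V_cb / V_cd  # ~ 0.18, i.e. of order lambda
c_theta_u = V_ts / V_us  # ~ 0.18, i.e. of order lambda
```

Both cosines come out near $0.8\lambda$, consistent with the ${\cal O}(\lambda)$ scaling quoted above.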
Furthermore, for the top-quark mixing angles we get: $$\begin{aligned}
V_{tb} &\sim& 1 - {\cal O}(\lambda^2) ~,\\
V_{td} &\sim& \left( \frac{V_{cb}}{V_{cd}} - \frac{V_{ts}}{V_{us}} \right) + {\cal O}(\lambda^2) \label{eqV6} ~.\end{aligned}$$
In the next sections we will use this physical setup to break T-invariance and derive the CP-violating parameters of the model.
T-violation and hidden symmetry breaking mechanism
==================================================
There are, of course, several ways to break the hidden symmetry without breaking T-invariance. Here we wish to extend the attractive mechanism for the simultaneous breaking of both the hidden symmetry and T-invariance that was suggested by Friedberg and Lee in [@FL] for the SM3, and to formulate the general breaking mechanism for the SM4 case.
In particular, when the hidden symmetry and T-invariance are broken simultaneously, the massless states $v_d^0,v_s^0,v_u^0,v_c^0$ (which were protected by the hidden symmetry) acquire a mass which is directly related to the size of the phases responsible for T-violation: two CP-violating phases in the up-quark sector are needed to generate the masses $m_u$ and $m_c$, while two CP-violating phases in the down-quark sector generate the masses $m_d$ and $m_s$. Since we know that $m_{u,c} \ll m_{t,t^\prime}$ and $m_{d,s} \ll m_{b,b^\prime}$, we can treat the effect of T-violation as a perturbation to the zeroth-order (T-invariant) approximation in both the down and up-quark sectors.
In what follows we will describe the breaking mechanism using the generic notation outlined in the previous section, which holds for both down and up-quark sectors. The application of the results below to a specific sector is straightforward.
In order to break the hidden symmetry we rewrite the zeroth-order Yukawa term ${\cal M}_0$ in terms of its eigenstates:
$$\begin{aligned}
{\cal M}_0 = \sum_i m_i^0 |v_i^0|^2 = m_{P_2} |P_2|^2 + m_{P_3} |P_3|^2~,\end{aligned}$$
where we have used the fact that $m_{Q_1}=m_{P_1}=0$. This gives (see Eqs. \[rel1\],\[rel2\] and \[rel3\]): $$\begin{aligned}
(M_0)_{ij} = m_{P_2} B_i B_j + m_{P_3} C_i C_j \label{M0bc}~,\end{aligned}$$ where we have dropped the superscripts $d$ or $u$ in the coefficients $B^{d,u}_i$ and $C^{d,u}_i$ (as defined in Eqs. \[bi\] and \[ci\]), so that the expression above applies to both down and up sectors. T-invariance and the hidden symmetry can then be broken by inserting a phase in any one of the non-diagonal entries of $(M_0)_{ij}$ as follows: $$\begin{aligned}
(\Delta M)_{ij} = \left( m_{P_2} B_i B_j + m_{P_3} C_i C_j \right) \cdot
\left( e^{i \delta_{ij}} - 1 \right) ~, j > i ~;~
(\Delta M)_{ji} = (\Delta M)_{ij}^\star
\label{deltaM}~,\end{aligned}$$ such that $$\begin{aligned}
M = M_0 + \Delta M ~.\end{aligned}$$ We assume that $\delta_{ij} \ll 1$, hence, $\Delta M \ll M_0$ so that $\Delta M$ can be treated as a perturbation. As we shall demonstrate in the next section, in the minimal setup, two such phase insertions (in each sector) are required in different locations in $M_0$ in order to break both HS1 and HS2 and to generate the observable masses of the first 2 light generations of quarks. Thus, we can write the overall T-violating term as: $$\begin{aligned}
\Delta M \equiv \Delta M_z + \Delta M_t ~,\end{aligned}$$ where $\Delta M_z$ and $\Delta M_t$ contain the new phases that break HS1 and HS2, respectively, each given by the generic form in Eq. \[deltaM\]. The T-violating mass term $\Delta M$ then shifts the zeroth-order masses and states. Using perturbation theory, these shifts are given in the general case without degeneracies by:
$$\begin{aligned}
\Delta m_q &\equiv& m_q - m_q^0 = (v_q^0)^\dagger \Delta M v_q^0 \label{dmass}~,\end{aligned}$$
$$\begin{aligned}
\Delta v_q &\equiv& v_q- v_q^0 = \sum_{q\neq q^\prime} \frac{(v_{q^\prime}^0)^\dagger
\Delta M v_q^0}{m_q^0-m_{q^\prime}^0} v_{q^\prime}^0 \label{dvec}~,\end{aligned}$$
where $m_q^0$ and $v_q^0$ are the zeroth-order masses and states (i.e., $v_q^0$ and $v_{q^\prime}^0$ stand for any one of the vectors $Q_1,P_1,P_2,P_3$ in either the up or down-sector), $\Delta m_q$ are the mass shifts due to the breaking of the hidden symmetry, and $\Delta v_q$ contains the imaginary terms $\propto i \sin\delta_{ij}$ from which the physical T-violating elements of the $4 \times 4$ CKM matrix are constructed.
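Equations \[dmass\] and \[dvec\] are standard first-order perturbation theory. A minimal numerical sketch (the matrices below are random stand-ins for $M_0$ and $\Delta M$, chosen only to illustrate the expected ${\cal O}(\delta^2)$ accuracy of the first-order mass shift):

```python
import numpy as np

# For a small Hermitian perturbation dM on top of a non-degenerate real
# symmetric M0, the first-order shift <v_q^0| dM |v_q^0> tracks the exact
# eigenvalue shift up to corrections of order delta^2.
rng = np.random.default_rng(2)
S = rng.normal(size=(4, 4))
M0 = (S + S.T) / 2 + np.diag([0.0, 10.0, 50.0, 200.0])  # well-separated spectrum
delta = 1e-3
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
dM = delta * (H + H.conj().T) / 2                       # Hermitian, O(delta)

m0, v0 = np.linalg.eigh(M0)
m_exact = np.linalg.eigvalsh(M0 + dM)
m_first = m0 + np.array([np.real(v0[:, q].conj() @ dM @ v0[:, q])
                         for q in range(4)])

max_err = np.max(np.abs(m_exact - m_first))  # expected to be O(delta^2)
```

The residual error is of order $\delta^2 \sim 10^{-6}$, far below the first-order shifts themselves.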
In our case, however, the states $Q_1$ and $P_1$ are degenerate. Thus, in order to find the physical masses ($m_{\pm}$) and their corresponding physical states ($v_{\pm}$) in the $Q_1 - P_1$ subspace, we need to diagonalize the following $2 \times 2$ perturbation mass matrix in the $Q_1-P_1$ subspace: $$\begin{aligned}
\Delta m(Q_1,P_1) =\left( \begin{array}{cc}
Q_1^\dagger \Delta M Q_1 & Q_1^\dagger \Delta M P_1 \\
P_1^\dagger \Delta M Q_1 & P_1^\dagger \Delta M P_1 \end{array} \right) \equiv
\left( \begin{array}{cc}
\Delta m_{QQ} & \Delta m_{QP} \\
\Delta m_{PQ} & \Delta m_{PP} \end{array} \right)
\label{deltaQP}~,\end{aligned}$$ where $ \Delta m_{QP} = (\Delta m_{PQ})^\dagger$ and $\Delta m_{QQ},~\Delta m_{PP}$ are real. That is, after breaking T-invariance, the physical masses and states of the first two generations are given by: $$\begin{aligned}
m_\pm &=& \frac{\Delta m_{QQ} +\Delta m_{PP}}{2}
\left[ 1 \pm \sqrt{1 - \frac{4 \left( \Delta m_{QQ} \Delta m_{PP} -
\Delta m_{QP} \Delta m_{PQ} \right)}{\left( \Delta m_{QQ} +\Delta m_{PP} \right)^2}} \right] \label{mpm}~,\end{aligned}$$ and $$\begin{aligned}
v_+ &=& \frac{1}{\sqrt{ \left| \Delta m_{QP} \right|^2 + \left(m_+ - \Delta m_{QQ} \right)^2}}
\left[ \Delta m_{QP} Q_1 + \left(m_+ - \Delta m_{QQ} \right) P_1 \right] \nonumber ~\\
v_- &=& \frac{1}{\sqrt{ \left|\Delta m_{PQ} \right|^2 + \left(m_- - \Delta m_{PP} \right)^2}}
\left[ \left(m_- - \Delta m_{PP} \right) Q_1 + \Delta m_{PQ} P_1 \right] \label{vpm}~.\end{aligned}$$ The corresponding corrections/shifts to the physical states are still calculated from Eq. \[dvec\], where now $v_q^0,v_{q^\prime}^0 \in \{v_-,v_+,P_2,P_3\}$. In particular, let us further define the “perturbation matrix": $$\begin{aligned}
(v_q^0)^\dagger \Delta M v_{q^\prime}^0 \equiv i P_{q q^\prime}
\label{pijdef}~,\end{aligned}$$ where $q \neq q^\prime$ and, to ${\cal O}(\delta)$, $P_{q q^\prime}$ are real and $P_{q^\prime q} = - P_{q q^\prime}$. That is, $(v_{q^\prime}^0)^\dagger \Delta M v_q^0 = \left[(v_q^0)^\dagger \Delta M v_{q^\prime}^0 \right]^\dagger = -i P_{q q^\prime}$, where $q,q^\prime \in d,s,b,b^\prime$ in the down-quark sector and $q,q^\prime \in u,c,t,t^\prime$ in the up-quark sector. Also note that the perturbation matrix is diagonal in the $(v_- - v_+)$ subspace to ${\cal O}(\delta)$ (i.e., $P_{d s} = P_{sd} \approx {\cal O}(\delta^2)$ and $P_{uc} = P_{cu} \approx {\cal O}(\delta^2)$).
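The closed form for $m_\pm$ in Eq. \[mpm\] is just the eigenvalue formula for the Hermitian $2 \times 2$ block of Eq. \[deltaQP\]. A quick check with arbitrary illustrative entries (not values from the model):

```python
import numpy as np

# Verify Eq. (mpm) against exact diagonalization of the 2x2 Hermitian
# block in the degenerate Q1-P1 subspace. Entries are arbitrary.
dm_QQ, dm_PP = 0.3, 0.7
dm_QP = 0.2 + 0.1j
block = np.array([[dm_QQ, dm_QP], [np.conj(dm_QP), dm_PP]])

trace = dm_QQ + dm_PP
disc = np.sqrt(1 - 4 * (dm_QQ * dm_PP - abs(dm_QP) ** 2) / trace**2)
m_plus = trace / 2 * (1 + disc)
m_minus = trace / 2 * (1 - disc)

exact = np.sort(np.linalg.eigvalsh(block))  # ascending: [m_minus, m_plus]
```

Both roots agree with the exact eigenvalues, as they must, since $\Delta m_{QP}\Delta m_{PQ} = |\Delta m_{QP}|^2$.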
In the next section, for simplicity we will consider the case where the perturbation is diagonal in the $Q_1-P_1$ subspace, i.e., $\Delta m_{QP} =0$ in Eq. \[deltaQP\], so that $v_-=Q_1$ and $v_+ = P_1$. In this simple case we can use Eqs. \[dvec\] and \[pijdef\] to obtain the ${\cal O}(\delta)$ shifts, $\Delta v_q$, to the zeroth-order states $(v_d^0, v_s^0, v_b^0, v_{b^\prime}^0 )$ and $(v_u^0, v_c^0, v_t^0, v_{t^\prime}^0)$ (as defined in Eqs. \[rel3\]-\[ci\]): $$\begin{aligned}
\begin{array}{cc}
\Delta v_d = i \left( \frac{P_{d b}}{m_b} v_b^0 +
\frac{P_{d b^\prime}}{m_{b^\prime}} v_{b^\prime}^0 \right) &
\Delta v_u = i \left( \frac{P_{u t}}{m_t} v_t^0 +
\frac{P_{u t^\prime}}{m_{t^\prime}} v_{t^\prime}^0 \right) \\
\Delta v_s = i \left( \frac{P_{s b}}{m_b} v_b^0 +
\frac{P_{s b^\prime}}{m_{b^\prime}} v_{b^\prime}^0 \right) &
\Delta v_c = i \left( \frac{P_{c t}}{m_t} v_t^0 +
\frac{P_{c t^\prime}}{m_{t^\prime}} v_{t^\prime}^0 \right) \\
\Delta v_b = i \left( \frac{P_{d b}}{m_b} v_d^0 +
\frac{P_{s b}}{m_b} v_s^0 +
\frac{P_{b b^\prime}}{m_{b^\prime} - m_b} v_{b^\prime}^0 \right) &
\Delta v_t = i \left( \frac{P_{u t}}{m_t} v_u^0 +
\frac{P_{c t}}{m_t} v_c^0 +
\frac{P_{t t^\prime}}{m_{t^\prime} - m_t} v_{t^\prime}^0 \right) \\
\Delta v_{b^\prime} = i \left( \frac{P_{d b^\prime}}{m_{b^\prime}} v_d^0 +
\frac{P_{s b^\prime}}{m_{b^\prime}} v_s^0 +
\frac{P_{b b^\prime}}{m_{b^\prime} - m_b} v_{b}^0 \right) &
\Delta v_{t^\prime} = i \left( \frac{P_{u t^\prime}}{m_{t^\prime}} v_u^0 +
\frac{P_{c t^\prime}}{m_{t^\prime}} v_c^0 +
\frac{P_{t t^\prime}}{m_{t^\prime} - m_t} v_{t}^0 \right)
\label{deltav}
\end{array}\end{aligned}$$ such that, to ${\cal O}(\delta)$, the physical states are given by $v_q = v_q^0 + \Delta v_q$. The corresponding ${\cal O}(\delta)$ corrections to $v_q^0$ in the general case where the perturbation is not diagonal in the $Q_1-P_1$ subspace, can be easily derived from the expressions for $v_\pm$ in Eq. \[vpm\] and the shifts $\Delta v_q$ in Eq. \[deltav\] above. For example, $$\begin{aligned}
\Delta v_- &=& \frac{1}{\sqrt{ \left|\Delta m_{PQ} \right|^2 + \left(m_- - \Delta m_{PP} \right)^2}}
\left[ \left(m_- - \Delta m_{PP} \right) \cdot \Delta v_d + \Delta m_{PQ} \cdot \Delta v_s \right]\end{aligned}$$ where $\Delta v_{d,s}$ are given in Eq. \[deltav\].
The physical (T-violating) $4 \times 4$ CKM matrix elements are, therefore, given symbolically by ($u$ and $d$ stand for any of the up and down-quark states, respectively): $$\begin{aligned}
V_{ud} = (v_u)^\dagger \cdot v_d = V_{ud}^0 + (\Delta v_u)^\dagger \cdot v_d^0 +
v_u^0 \cdot \Delta v_d ~,\end{aligned}$$ where $V_{ud}^0 = (v_u^0)^T \cdot v_d^0$ is the zeroth-order CKM matrix element and the terms $[ (\Delta v_u)^\dagger \cdot v_d^0 ],~[v_u^0 \cdot \Delta v_d]$, which are also functions of the zeroth-order CKM elements, are readily obtained from Eq. \[deltav\] above. For example, in the simple case where $v_-=Q_1$ and $v_+ = P_1$, $V_{ud}$ (i.e., now the (11) element of $V$) is given by $$\begin{aligned}
V_{ud} = V_{ud}^0 + i \left[
\frac{P_{d b}}{m_b} V_{u b}^0 +
\frac{P_{d b^\prime}}{m_{b^\prime}} V_{u b^\prime}^0
-
\frac{P_{u t}}{m_t} V_{td}^0 -
\frac{P_{u t^\prime}}{m_{t^\prime}} V_{t^\prime d}^0
\right] + {\cal O}(\delta^2) \label{Vudfull}~.\end{aligned}$$
Note that the zeroth-order elements $V_{ud}^0$, given in Eq. \[CKMel\], are a good approximation to the magnitude of the physical CKM angles (i.e., up to corrections of ${\cal O}(\delta^2)$, where $\delta$ is any one of the CP-violating phases).
A physical framework for T-violation
====================================
In the previous two sections we have described the general features of the hidden symmetry and the generic mechanism of breaking T-invariance and generating the corresponding light-quark masses simultaneously with the breaking of the hidden symmetry in the case of SM4. In this section we would like to give a concrete physical example (i.e., compatible with all relevant known data) which is relatively simple analytically, thereby providing insight into the physical picture. Our chosen setup below illustrates the power of this mechanism in predicting the new mixing angles and phases associated with the 4th generation of quarks and the size of CP-violation of the theory.
As in the previous section, here also we consider a specific orientation for the hidden symmetry, where the direction of HS2 is partly fixed by setting $\zeta = \pi/2$ in each sector. The hidden symmetry is then broken by inserting the phases in the $12$ and $34$ elements of the mass matrix $M_0$, such that:
$$\begin{aligned}
\Delta M_z = (\Delta M)_{12} ~,~ \Delta M_t = (\Delta M)_{34} ~,\end{aligned}$$
where $(\Delta M)_{ij}$ is defined in Eq. \[deltaM\]. Note that with $\omega=\zeta=\pi/2$ we have $B_1 = s_\theta$, $B_2 = -c_\theta c_\phi$, $B_3=0$, $B_4 = -c_\theta s_\phi$, $C_1=0$, $C_2 = s_\eta s_\phi$, $C_3 = - c_\eta$ and $C_4 = - s_\eta c_\phi$ (see Eqs. \[bi\] and \[ci\]). Thus, the overall T-violating term, $\Delta M = \Delta M_z + \Delta M_t$, is given by: $$\begin{aligned}
\Delta M =\left( {\tiny{ \begin{array}{cccc}
0 & - m_{P_2} s_\theta c_\theta c_\phi \left( e^{i \delta_{12}} - 1 \right) & 0 & 0 \\
- m_{P_2} s_\theta c_\theta c_\phi \left( e^{-i \delta_{12}} - 1 \right) & 0 & 0 & 0 \\
0 & 0 & 0 & m_{P_3} s_\eta c_\eta c_\phi \left( e^{i \delta_{34}} - 1 \right) \\
0 & 0 & m_{P_3} s_\eta c_\eta c_\phi \left( e^{-i \delta_{34}} - 1 \right) & 0 \end{array} }} \right) \label{deltaMzeta}~.\end{aligned}$$ For simplicity and without loss of generality, we will further take $s_{\phi} \ll 1$ for $\phi =\phi_d \sim \phi_u$ (recall that $\cos(\phi_u - \phi_d) \sim V_{ud} \sim 1$ implying $\phi_u \sim \phi_d$, see previous section), which allows us to obtain a relatively compact analytical picture. In particular, one simplification that arises with this choice, is that the perturbation in the $Q_1-P_1$ subspace, $\Delta m(Q_1,P_1)$ in Eq. \[deltaQP\], is approximately diagonal so that $m_- \approx \Delta m_{QQ}$, $m_+ \approx \Delta m_{PP}$ and the corresponding states are $v_- \approx Q_1$, $v_+ \approx P_1$ in each sector. In particular, $\Delta M$ in Eq. \[deltaMzeta\] generates the following light-quark masses (we now add the superscripts $d$ and $u$ to distinguish between the angles in the down and up-quark sectors): $$\begin{aligned}
m_d &\approx& 2 m_b s_{\theta_d}^2 c_{\theta_d}^2 \left( 1 - \cos\delta_{12}^d \right) \label{md}~,\\
m_s &\approx& 2 m_{b^\prime} s_{\eta_d}^2 c_{\eta_d}^2 \left( 1 - \cos\delta_{34}^d \right) \label{ms}~,\\
m_u &\approx& 2 m_t s_{\theta_u}^2 c_{\theta_u}^2 \left( 1 - \cos\delta_{12}^u \right) \label{mu}~,\\
m_c &\approx& 2 m_{t^\prime} s_{\eta_u}^2 c_{\eta_u}^2 \left( 1 - \cos\delta_{34}^u \right) \label{mc}~.\end{aligned}$$ where (see Eq. \[zeromasses\] and set $\zeta = \pi/2$): $$\begin{aligned}
m_b \approx \beta_d,~m_{b^\prime} \approx \alpha_d,~m_t \approx \beta_u,
~m_{t^\prime} \approx \alpha_u ~.\end{aligned}$$
As expected, we cannot reproduce the physical light-quark mass spectrum if any of the phases $\delta_{ij}$ above vanishes. Note also that, since $\eta_u \sim \eta_d$ and $\theta_u \sim \theta_d$ (see Eqs. \[ctetd\] and \[ceta\]), we can also use the expressions in Eqs. \[md\]-\[mc\] for the light-quark mass terms to relate the phases in one sector to the phases in the other sector: $$\begin{aligned}
\frac{\delta_{12}^d}{\delta_{12}^u} &\sim & \sqrt{\frac{m_d m_t}{m_u m_b}} \sim 10 ~,\\
\frac{\delta_{34}^d}{\delta_{34}^u} &\sim & \sqrt{\frac{m_s m_{t^\prime}}{m_c m_{b^\prime}}} \sim 0.3~,\end{aligned}$$ where we have taken $m_{t^\prime}/m_{b^\prime} \sim 1$.
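These order-of-magnitude ratios can be checked numerically. The quark masses below (in GeV) are approximate present-day central values and are an assumption on our part, not necessarily the inputs used in the text:

```python
import math

# Numeric check of the two phase ratios above, with rough quark masses (GeV).
m_u, m_d, m_s, m_c = 0.0022, 0.0047, 0.095, 1.27
m_b, m_t = 4.18, 173.0
m_tprime = 2 * m_t           # the text's illustrative choice m_t' ~ 2 m_t
m_bprime = m_tprime - 55.0   # and m_b' ~ m_t' - 55 GeV

ratio_12 = math.sqrt(m_d * m_t / (m_u * m_b))            # ~ 10
ratio_34 = math.sqrt(m_s * m_tprime / (m_c * m_bprime))  # ~ 0.3
```

With these inputs one finds $\delta_{12}^d/\delta_{12}^u \sim 9$ and $\delta_{34}^d/\delta_{34}^u \sim 0.3$, matching the quoted estimates.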
Finally, for our chosen orientation with $\zeta=\pi/2$ and $\phi \ll 1$, the $P_{q q^\prime}$ elements required to calculate the imaginary terms of the $4 \times 4$ CKM elements (see Eqs. \[pijdef\]-\[Vudfull\]) are given by (to first order in $\delta_{ij}$): $$\begin{aligned}
P_{d b} &=& m_b s_{\theta_d} c_{\theta_d} \sin\delta_{12}^d ~, \nonumber \\
P_{s b^\prime} &=& m_{b^\prime} s_{\eta_d} c_{\eta_d} \sin\delta_{34}^d ~, \nonumber\\
P_{u t} &=& m_t s_{\theta_u} c_{\theta_u} \sin\delta_{12}^u ~, \nonumber\\
P_{c t^\prime} &=& m_{t^\prime} s_{\eta_u} c_{\eta_u} \sin\delta_{34}^u
\label{pij} ~,\end{aligned}$$ and all other $P_{q q^\prime}$ elements vanish. Using the expressions for the light-quark masses in Eqs. \[md\]-\[mc\], we can re-express the elements of the perturbation matrix $P_{q q^\prime}$ in Eq. \[pij\] above in terms of the CP-phases and the light-quark masses: $$\begin{aligned}
P_{d b} &\approx& \sqrt{m_d m_b} \cos\left( \frac{1}{2} \delta_{12}^d \right) ~, \nonumber\\
P_{s b^\prime} &\approx& \sqrt{m_s m_{b^\prime}} \cos\left( \frac{1}{2} \delta_{34}^d \right) ~, \nonumber\\
P_{u t} &\approx& \sqrt{m_u m_t} \cos\left( \frac{1}{2} \delta_{12}^u \right) ~, \nonumber\\
P_{c t^\prime} &\approx& \sqrt{m_c m_{t^\prime}} \cos\left( \frac{1}{2} \delta_{34}^u \right) \label{pij2}~.\end{aligned}$$
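The step from Eq. \[pij\] to Eq. \[pij2\] uses $\sin\delta = 2\sin(\delta/2)\cos(\delta/2)$ together with the mass relations of Eqs. \[md\]-\[mc\]; in fact the rewriting is an exact identity. A numeric spot-check (angle and mass values are arbitrary, assumed for illustration):

```python
import math

# With m_d = 2 m_b s^2 c^2 (1 - cos d) as in Eq. (md), the element
# P_db = m_b s c sin(d) of Eq. (pij) equals sqrt(m_d m_b) cos(d/2)
# identically, which is the form quoted in Eq. (pij2).
m_b, s_theta, d = 4.18, 0.9, 0.2
c_theta = math.sqrt(1 - s_theta**2)

m_d = 2 * m_b * s_theta**2 * c_theta**2 * (1 - math.cos(d))
P_db_direct = m_b * s_theta * c_theta * math.sin(d)
P_db_rewritten = math.sqrt(m_d * m_b) * math.cos(d / 2)
```

The two expressions agree to machine precision for any $0 < \delta < \pi$.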
CP-invariants with four generations
===================================
As in the SM3, CP-violation in the SM4 can also be parameterized using CP-invariants à la the Jarlskog invariant $J_{SM}$ of the SM3 [@jarlskog]. Indeed, as was shown in [@jarlskog], the invariant measure of CP-violation in the four-family case can be expressed in terms of four “copies” of $J_{SM}$ (of which only three are independent): $J_{123},~J_{124},~J_{134}$ and $J_{234}$, where the indices indicate the generation numbers. In this language one identifies $J_{SM}$ with $J_{123}$, although the two are not quite the same, since $J_{SM}$ is no longer a valid CP-violation measure in the SM4.
A generic derivation of the four $J_{ijk}$ copies in terms of the quark masses and CKM mixing angles is quite complicated and we are unable to give it in a compact analytical form. There are several useful general formulations in the literature for the parametrization of CP-violation in the SM4 [@jarlskog; @CPinvariants], but none is at the level of simplification required for an analytical study of CP-violation in our model. A numerical study of the CP-violating quantities in our model is, however, straightforward following the prescription of the previous sections. This will be presented elsewhere [@our-2nd-FLpaper].
On the other hand, as was observed more than ten years ago [@branco] and noted again recently in [@CPbaryo1], in the chiral limit $m_{u,d,s,c} \to 0$, CP-violation in the SM4 effectively “shrinks” to the CP-violation picture of a three-generation model involving the 4th generation heavy quarks. This chiral limit, which is in the spirit of our current study, is clearly applicable at high energies of order the EW scale and above. Moreover, it allows us to derive a compact analytical estimate for the expected size of CP-violation in our model.[^4]
As was shown in [@branco], in the chiral limit there is no CP-violation within the three-family SM3, so all CP-violating effects are attributed to the new physics, in our case to the fourth generation of quarks. The key CP-violating quantity in this limit can be written as [@branco]: $$\begin{aligned}
J_{SM4} = {\rm Im} \left( V_{tb} V_{t^\prime b}^\star V_{t^\prime b^\prime} V_{t b^\prime}^\star \right)
\label{b2}~,\end{aligned}$$ since this is the only CP-violating quantity that survives when one takes the limit $m_{u,d,s,c} \to 0$.
Thus, in order to get some insight for the expected size of CP-violation in our model, it is sufficient to derive an estimate for $J_{SM4}$. In particular, we will calculate $J_{SM4}$ for the specific orientation used in the previous section, i.e., for the case $\zeta=\pi/2$ and $\phi \ll 1$.
Using the $P_{q q^\prime}$ factors of Eq. \[pij2\] and based on Eq. \[Vudfull\], we can calculate (to ${\cal O}(\delta)$) the relevant complex CKM elements which enter $J_{SM4}$ in Eq. \[b2\]: $$\begin{aligned}
V_{tb} &\approx& V_{tb}^0 + i \left[ V_{td}^0 \sqrt{ \frac{m_d}{m_b} }
\cos\left(\frac{1}{2} \delta_{12}^d \right) -
V_{ub}^0 \sqrt{ \frac{m_u}{m_t} }
\cos\left(\frac{1}{2} \delta_{12}^u \right) \right] ~, \\
V_{t^\prime b} &\approx& V_{t^\prime b}^0 + i \left[ V_{t^\prime d}^0 \sqrt{ \frac{m_d}{m_b} }
\cos\left(\frac{1}{2} \delta_{12}^d \right) -
V_{cb}^0 \sqrt{ \frac{m_c}{m_{t^\prime}} }
\cos\left(\frac{1}{2} \delta_{34}^u \right) \right] ~, \\
V_{t b^\prime} &\approx& V_{t b^\prime}^0 + i \left[ V_{t s}^0 \sqrt{ \frac{m_s}{m_{b^\prime}} }
\cos\left(\frac{1}{2} \delta_{34}^d \right) -
V_{u b^\prime}^0 \sqrt{ \frac{m_u}{m_{t}} }
\cos\left(\frac{1}{2} \delta_{12}^u \right) \right] ~, \\
V_{t^\prime b^\prime} &\approx& V_{t^\prime b^\prime}^0 + i \left[ V_{t^\prime s}^0 \sqrt{ \frac{m_s}{m_{b^\prime}} }
\cos\left(\frac{1}{2} \delta_{34}^d \right) -
V_{c b^\prime}^0 \sqrt{ \frac{m_c}{m_{t^\prime}} }
\cos\left(\frac{1}{2} \delta_{34}^u \right) \right] ~.\end{aligned}$$ We can now estimate the size of CP-violation in our model, which can manifest itself in high-energy processes involving $t^\prime$ and $b^\prime$ exchanges. In particular, since the zeroth-order CKM elements are a good approximation for the magnitude of physical elements, we set $V_{ij}^0 \sim V_{ij}$ and use the results and relations obtained for the CKM elements in the previous sections (see Eqs. \[eqV1\]-\[eqV6\]): $V_{tb} \sim 1$, $V_{t^\prime b^\prime} \sim V_{cs} \sim 1$, $V_{t^\prime b} \sim V_{u b^\prime} \times (V_{cb}/V_{cd})$ and $V_{t b^\prime} \sim V_{u b^\prime} \times (V_{ts}/V_{us})$. We then obtain: $$\begin{aligned}
J_{SM4} \approx
&& V_{u b^\prime} \frac{V_{ts}}{V_{us}} \times \left[
V_{c b} \sqrt{ \frac{m_c}{m_{t^\prime}} }
\cos\left(\frac{1}{2} \delta_{34}^u \right) -
V_{u b^\prime} \sqrt{ \frac{m_d}{m_{b}} }
\cos\left(\frac{1}{2} \delta_{12}^d \right) \right] + \nonumber \\
&&V_{u b^\prime} \frac{V_{cb}}{V_{cd}} \times \left[
V_{u b^\prime} \sqrt{ \frac{m_u}{m_{t}} }
\cos\left(\frac{1}{2} \delta_{12}^u \right) -
V_{ts} \sqrt{ \frac{m_s}{m_{b^\prime}} }
\cos\left(\frac{1}{2} \delta_{34}^d \right) \right]
\label{moreb2}~.\end{aligned}$$ Setting $V_{cb} \sim - V_{ts} \sim A \lambda^2$ and $V_{ts}/V_{us} \sim V_{cb}/V_{cd} \sim - A \lambda$ (consistent with their measured values [@PDG], where $A \sim 0.81$ and $\lambda = 0.2257$ is the Wolfenstein parameter), and taking $V_{u b^\prime} \sim V_{cb} \sim A \lambda^2$ and $m_{t^\prime} \sim 2 m_t$, $m_{b^\prime} \sim m_{t^\prime} - 55~{\rm GeV}$, consistent with the electroweak precision tests [@kribs; @0902.4883], we obtain: $$\begin{aligned}
\left| J_{SM4} \right| \sim A^3 \lambda^5 \times
\left[
\sqrt{ \frac{m_u}{m_{t}} }
+
\sqrt{ \frac{m_c}{m_{t^\prime}} }
-
\sqrt{ \frac{m_d}{m_{b}} }
+
\sqrt{ \frac{m_s}{m_{b^\prime}} }
\right] \sim
10^{-5} \label{jsm4} ~,\end{aligned}$$ where we have used $\cos(\delta_{12}^d/2) \sim \cos(\delta_{34}^d/2) \sim \cos(\delta_{12}^u/2) \sim \cos(\delta_{34}^u/2) \sim 1$ for the numerical estimate (see below). Indeed, with the above chosen values for the CKM elements and the 4th generation quark masses, all four phases are fixed by the requirement that they reproduce the corresponding light-quark masses as given in Eqs. \[md\]-\[mc\]. In particular, according to Eqs. \[md\]-\[mc\] and the relations between the hidden symmetry angles and the CKM elements as given by Eqs. \[eqV1\]-\[eqV4\], we have: $$\begin{aligned}
\cos\left(\delta_{34}^u \right) &\sim& 1 - \frac{m_c}{2 m_{t^\prime} \frac{V_{u b^\prime}^2}{V_{cd}^2}} \sim 0.945 ~, \\
\cos\left(\delta_{12}^d \right) &\sim& 1 -\frac{m_d}{2 m_{b}
\frac{V_{c b}^2}{V_{cd}^2} } \sim 0.98 ~, \\
\cos\left(\delta_{34}^d \right) &\sim& 1 -\frac{m_s}{2 m_{b^\prime}
\frac{V_{u b^\prime}^2}{V_{us}^2} } \sim 0.995 ~, \\
\cos\left(\delta_{12}^u \right) &\sim& 1 -\frac{m_u}{2 m_{t}
\frac{V_{ts}^2}{V_{us}^2} } \sim 0.9998 ~,\end{aligned}$$ consistent with our perturbative description of CP-violation.
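The estimate in Eq. \[jsm4\] and the four $\cos\delta$ values above can be reproduced with a short numerical evaluation. The quark masses below (in GeV) are approximate present-day central values, an assumption on our part rather than the paper's exact inputs:

```python
import math

# Rough numerical evaluation of Eq. (jsm4) and the cos(delta) values above,
# with the parameter choices stated in the text and assumed quark masses (GeV).
A, lam = 0.81, 0.2257
V_us = V_cd = lam
V_cb = V_ts = V_ubprime = A * lam**2          # the text's choice V_ub' ~ V_cb
m_u, m_d, m_s, m_c = 0.0022, 0.0047, 0.095, 1.27
m_b, m_t = 4.18, 173.0
m_tprime = 2 * m_t                            # m_t' ~ 2 m_t
m_bprime = m_tprime - 55.0                    # m_b' ~ m_t' - 55 GeV

bracket = (math.sqrt(m_u / m_t) + math.sqrt(m_c / m_tprime)
           - math.sqrt(m_d / m_b) + math.sqrt(m_s / m_bprime))
J_SM4 = A**3 * lam**5 * bracket               # comes out ~ 1.5e-5

cos_d34u = 1 - m_c / (2 * m_tprime * (V_ubprime / V_cd) ** 2)  # ~ 0.945
cos_d12d = 1 - m_d / (2 * m_b * (V_cb / V_cd) ** 2)            # ~ 0.98
cos_d34d = 1 - m_s / (2 * m_bprime * (V_ubprime / V_us) ** 2)  # ~ 0.995
cos_d12u = 1 - m_u / (2 * m_t * (V_ts / V_us) ** 2)            # ~ 0.9998
```

With these inputs $|J_{SM4}| \sim 1.5 \times 10^{-5}$, and all four phases remain small, consistent with the perturbative treatment of T-violation.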
From Eq. \[jsm4\] we see that as the CP-violating phases $\delta_{12}^d,\delta_{34}^u \to 0$, both $m_d$ and $m_c$ approach zero and, therefore, also $J_{SM4} \to 0$. Note also that, for our chosen orientation of the hidden symmetry, we have $J_{SM4} \sim 10^{-5} \sim J_{SM}$, i.e., the SM4 analogue of the SM3’s Jarlskog invariant at high energies and the measured SM3 Jarlskog invariant are of similar size. These results demonstrate the high predictive power of our model for the description of CP-violation and the generation of the light-quark masses in the SM4. In particular, once the magnitudes of the mixing angles and the masses of the 4th generation quarks are measured, our model gives a very distinct prediction for the expected size of CP-violation in the SM4, which can be directly confirmed at high-energy collider experiments. In a forthcoming paper [@our-2nd-FLpaper], we will perform a full numerical study and scan the complete range of the free parameter space of our model, subject to the relevant existing data. We will also suggest ways to test our model at the LHC and at future machines such as a Super-B factory and the International Linear Collider.
Summary
=======
Motivated by the recent hints of CP anomalies in the B-system and by the idea of Friedberg and Lee (FL) in [@FL], we have presented a new framework for CP-violation and the generation of the light-quark masses in the SM with four families - the SM4.
We have applied the basic ingredients of the FL mechanism to the SM4 case, by constructing an extended (double) hidden symmetry suitable for four families which defines the zeroth-order states in the up and down-quark sectors and which ensures T-invariance. We then outlined the breaking mechanism of both the hidden symmetry and T-invariance in the SM4 case, from which we obtained the CP-violating measure and the physical states in this model. We have shown that this mechanism, when applied to the SM4, can be highly predictive and can be tested in future experiments. In particular, we gave one physically relevant example for the predictive power of our model by choosing a specific orientation of the hidden symmetry. This allowed us to analytically derive the physical (observed) quark states, and to give a prediction for the size of the mixing angles between the 4th generation and the first three generations of the SM3 and for the size of CP-violation associated with the 4th generation quarks.
A complete numerical study of our model, which explores the full phase-space of viable hidden symmetries for the SM4 and the corresponding range of the expected size of CP-violation and of the 4th generation mixing angles, is in preparation and will be presented in [@our-2nd-FLpaper].
[***Acknowledgments:***]{} We thank Gad Eilam for discussions. The work of AS is supported in part by the US DOE contract No. DE-AC02-98CH10886.
[99]{}
W.S. Hou, R.S. Willey and A. Soni, Phys. Rev. Lett. [**58**]{}, 1608 (1987) \[Erratum-ibid. [**60**]{}, 2337 (1988)\]; W.S. Hou, A. Soni and H. Steger, Phys. Rev. Lett. [**59**]{}, 1521 (1987); W.S. Hou, A. Soni and H. Steger, Phys. Lett. [**B192**]{}, 441 (1987).
Wei-Shu Hou, Makiko Nagashima, Andrea Soddu, Phys. Rev. [**D72**]{}, 115007 (2005); Wei-Shu Hou, Makiko Nagashima, Andrea Soddu, Phys. Rev. Lett. [**95**]{}, 141601 (2005); A. Arhib and W.S. Hou, JHEP [**0607**]{}, 009 (2006); Wei-Shu Hou, Hsiang-nan Li, Satoshi Mishima, Makiko Nagashima, Phys. Rev. Lett. [**98**]{}, 131801 (2007);
A. Soni, A. Kumar Alok, A. Giri, R. Mohanta and S. Nandi, arXiv:0807.1971.
W.S. Hou, arXiv:0803.1234; W.S. Hou, arXiv:0810.3396.
R. Fok and G.D. Kribs, Phys. Rev. [**D78**]{}, 075023 (2008).
Graham D. Kribs, Tilman Plehn, Michael Spannowsky, Timothy M.P. Tait, Phys. Rev. [**D76**]{}, 075016 (2007).
See [*e.g.*]{}, G.E. Volovik, Pisma Zh. Eksp. Teor. Fiz. [**78**]{}, 1203 (2003), JETP Lett. [**78**]{}, 691 (2003).
C. Jarlskog, Phys. Rev. [**D36**]{}, 2128 (1987).
E. Lunghi and A. Soni, arXiv:0903.5059 \[hep-ph\]; E. Lunghi and A. Soni, Phys. Lett. [**B666**]{}, 162 (2008); E. Lunghi and A. Soni, JHEP [**0709**]{}, 053 (2007).
H. Pagels and D. Stoker, Phys. Rev. [**D20**]{}, 2947 (1979); Hong-Jian He, Christopher T. Hill, Timothy M.P. Tait, Phys. Rev. [**D65**]{}, 055006 (2002); B. Holdom, JHEP [**0608**]{}, 076 (2006); G. Burdman and L.D. Rold, JHEP [**0712**]{}, 086 (2007).
For unitarity issues associated with such heavy quarks, see: M.S. Chanowitz, Phys. Lett. [**B352**]{}, 376 (1995); M.S. Chanowitz, M. Furman and I. Hinchliffe, Phys. Lett. [**B78**]{}, 285 (1978).
R. Friedberg and T.D. Lee, Ann. of Phys. [**323**]{}, 1087 (2008); R. Friedberg and T.D. Lee, Ann. of Phys. [**323**]{}, 1677 (2008).
C. Amsler [*et al.*]{}, “The Review of Particle Physics”, Phys. Lett. [**B667**]{}, 1 (2008).
S. Bar-Shalom, D. Oaknin and A. Soni, in preparation.
C. Jarlskog and R. Stora, Phys. Lett. [**B208**]{}, 268 (1988); M. Gronau, A. Kfir and R. Loewy, Phys. Rev. Lett. [**56**]{}, 1538 (1986); O.W. Greenberg, Phys. Rev. [**D32**]{}, 1841 (1985); D.D. Wu, Phys. Rev. [**D33**]{}, 860 (1986); F. del Aguila and J.A. Aguilar-Saavedra, Phys. Lett. [**B386**]{}, 241 (1996).
F. del Aguila, J.A. Aguilar-Saavedra and G.C. Branco, Nucl. Phys. [**B510**]{}, 39 (1998).
M. Bobrowski, A. Lenz, J. Riedl and J. Rohrwild, arXiv:0902.4883 \[hep-ph\].
[^1]: Electronic address: shaouly@physics.technion.ac.il
[^2]: Electronic address: d1306av@gmail.com
[^3]: Electronic address: soni@bnl.gov
[^4]: Note that, although there is no CP-violation in our model in the chiral limit $m_{u,d,s,c} \to 0$ (which is our zeroth-order approximation), we can use the CP-violating quantities obtained in [@branco] in this limit, since those are given in terms of the physical mixing angles. In our model, the imaginary parts of these mixing angles are proportional to the very small light-quark masses.
---
abstract: 'Over 100 trigonometric parallaxes and proper motions for masers associated with young, high-mass stars have been measured with the Bar and Spiral Structure Legacy Survey, a Very Long Baseline Array key science project, the European VLBI Network, and the Japanese VERA project. These measurements provide strong evidence for the existence of spiral arms in the Milky Way, accurately locating many arm segments and yielding spiral pitch angles ranging from about $7^\circ$ to $20^\circ$. The widths of spiral arms increase with distance from the Galactic center. Fitting axially symmetric models of the Milky Way with the 3-dimensional position and velocity information and conservative priors for the solar and average source peculiar motions, we estimate the distance to the Galactic center, $\Ro$, to be $8.34\pm0.16$ kpc, a circular rotation speed at the Sun, $\To$, to be $240\pm8$ , and a rotation curve that is nearly flat ( a slope of $-0.2\pm0.4$ ) between Galactocentric radii of $\approx5$ and 16 kpc. Assuming a “universal” spiral galaxy form for the rotation curve, we estimate the thin disk scale length to be $2.44\pm0.16$ kpc. With this large data set, the parameters and are no longer highly correlated and are relatively insensitive to different forms of the rotation curve. If one adopts a theoretically motivated prior that high-mass star forming regions are in nearly circular Galactic orbits, we estimate a global solar motion component in the direction of Galactic rotation, $\V=14.6\pm5.0$ . While and $\V$ are significantly correlated, the sum of these parameters is well constrained, $\To+\V = 255.2\pm5.1$ , as is the angular speed of the Sun in its orbit about the Galactic center, $(\To+\V)/\Ro = 30.57\pm0.43$ .
These parameters improve the accuracy of estimates of the accelerations of the Sun and the Hulse-Taylor binary pulsar in their Galactic orbits, significantly reducing the uncertainty in tests of gravitational radiation predicted by general relativity.'
author:
- 'M. J. Reid, K. M. Menten, A. Brunthaler, X. W. Zheng, T. M. Dame, Y. Xu, Y. Wu, B. Zhang, A. Sanna, M. Sato, K. Hachisuka, Y. K. Choi, K. Immer, L. Moscadelli, K. L. J. Rygl, & A. Bartkiewicz'
title: |
TRIGONOMETRIC PARALLAXES OF HIGH MASS STAR FORMING REGIONS:\
THE STRUCTURE AND KINEMATICS OF THE MILKY WAY
---
Introduction
============
Two major projects to map the spiral structure of the Milky Way are providing parallaxes and proper motions for water and methanol masers associated with high-mass star forming regions (HMSFRs) across large portions of the Milky Way. The Bar and Spiral Structure Legacy (BeSSeL) Survey [^1] and the Japanese VLBI Exploration of Radio Astrometry (VERA) [^2] have yielded over 100 parallax measurements with accuracies typically about $\pm20$ , and some as good as $\pm5$ . This accuracy exceeds the target of the European astrometric satellite mission Gaia, launched in December 2013 and scheduled for final results in 2021-2022 [@Eyer:13]. While Gaia aims to measure $\sim10^9$ stars, far more than practical by Very Long Baseline Interferometry (VLBI), Gaia will be limited by extinction at optical wavelengths and will not be able to freely probe the Galactic plane. In contrast, VLBI at radio wavelengths is not affected by dust extinction and can yield parallaxes for massive young stars that best trace spiral structure in other galaxies, and current parallax accuracy allows measurements for stars across most of the Milky Way.
Given parallax and proper motion measurements (coupled with source coordinates and line-of-sight velocities from Doppler shifts of spectral lines), one has complete phase-space information. This provides direct and powerful constraints on the fundamental parameters of the Galaxy, including the distance to the Galactic center, , and the circular orbital speed at the Sun, . Preliminary models of the structure and dynamics of the Galaxy based on VLBI parallax and proper motions of star forming regions have been published. @Reid:09b fitted results from 16 HMSFRs and determined $\Ro=8.4\pm0.6$ kpc and $\To=254\pm16$ , assuming the solar motion in the direction of Galactic rotation, $\V$, is 5 [@Dehnen:98]. More recently @Honma:12 analyzed results from a larger sample of 52 sources, including both low-mass star forming regions and HMSFRs, and concluded that $\Ro=8.05\pm0.45$ kpc and $\To=238\pm14$ , assuming $\V=12$ [@Schoenrich:10]. Several groups have re-modeled maser parallax and proper motion data [@Bovy:09; @McMillan:10; @Bobylev:10] using different approaches and focusing on effects of parameter correlations and prior assumptions, most notably the values adopted for the solar motion (see §\[sect:priors\] and §\[sect:solar\_motion\]).
With the much larger number and wider distribution of parallaxes and proper motions of HMSFRs now available, we can provide more robust estimates of the fundamental Galactic parameters. In Section \[sect:parallaxes\], we present the combined parallax data sets from the BeSSeL and VERA groups and comment on aspects of spiral structure in Section \[sect:spiral\_structure\]. We model the combined data set to obtain better estimates of and in Section \[sect:modeling\], including discussion of priors, different forms of rotation curves, and parameter correlations. Finally, in Section \[sect:discussion\], we discuss the solar motion, best values for and , and some astrophysical implications.
Parallaxes and Proper Motions {#sect:parallaxes}
=============================
Table \[table:parallaxes\] lists the parallaxes and proper motions of 103 regions of high-mass star formation measured with VLBI techniques, using the National Radio Astronomy Observatory’s Very Long Baseline Array (VLBA), the Japanese VERA project, and the European VLBI Network (EVN). We have included three red supergiants (NML Cyg, S Per, VY CMa) as indicative of HMSFRs, since they are high-mass stars that have short lifetimes ($<10^7$ yr) and therefore cannot have migrated far from their birth locations. The locations of these star forming regions in the Galaxy are shown in Figure \[fig:parallaxes\], superposed on a schematic diagram of the Milky Way. Distance errors are indicated with error bars ($1\sigma$), but for many sources the error bars are smaller than the symbols.
[llrrrrrrll]{} G348.70$-$01.04 & &17:20:04.04 &$-$38:58:30.9 & 0.296$\pm$ 0.026 & $-$0.73$\pm$ 0.19 & $-$2.83$\pm$ 0.54 & $-$7$\pm$ 6 &... &1\
G351.44$+$00.65 &NGC 6334 &17:20:54.60 &$-$35:45:08.6 & 0.744$\pm$ 0.074 & 0.40$\pm$ 0.51 & $-$2.24$\pm$ 0.64 & $-$8$\pm$ 3 &Sgr &2\
G000.67$-$00.03 &Sgr B2 &17:47:20.00 &$-$28:22:40.0 & 0.129$\pm$ 0.012 & $-$0.78$\pm$ 0.40 & $-$4.26$\pm$ 0.40 & 62$\pm$ 5 &... &3\
G005.88$-$00.39 & &18:00:30.31 &$-$24:04:04.5 & 0.334$\pm$ 0.020 & 0.18$\pm$ 0.34 & $-$2.26$\pm$ 0.34 & 9$\pm$ 3 &Sct &4\
G009.62$+$00.19 & &18:06:14.66 &$-$20:31:31.7 & 0.194$\pm$ 0.023 & $-$0.58$\pm$ 0.13 & $-$2.49$\pm$ 0.29 & 2$\pm$ 3 &4$-$k &5\
G010.47$+$00.02 & &18:08:38.23 &$-$19:51:50.3 & 0.117$\pm$ 0.008 & $-$3.86$\pm$ 0.19 & $-$6.40$\pm$ 0.14 & 69$\pm$ 5 &Con &7\
G010.62$-$00.38 &W 31 &18:10:28.55 &$-$19:55:48.6 & 0.202$\pm$ 0.019 & $-$0.37$\pm$ 0.50 & $-$0.60$\pm$ 0.25 & $-$3$\pm$ 5 &3$-$k &7\
G011.49$-$01.48 & &18:16:22.13 &$-$19:41:27.2 & 0.800$\pm$ 0.033 & 1.42$\pm$ 0.52 & $-$0.60$\pm$ 0.65 & 11$\pm$ 3 &Sgr &2\
G011.91$-$00.61 & &18:13:58.12 &$-$18:54:20.3 & 0.297$\pm$ 0.031 & 0.66$\pm$ 0.28 & $-$1.36$\pm$ 0.41 & 37$\pm$ 5 &Sct &4\
G012.02$-$00.03 & &18:12:01.84 &$-$18:31:55.8 & 0.106$\pm$ 0.008 & $-$4.11$\pm$ 0.07 & $-$7.76$\pm$ 0.27 & 108$\pm$ 5 &3$-$k &7\
G012.68$-$00.18 & &18:13:54.75 &$-$18:01:46.6 & 0.416$\pm$ 0.028 & $-$1.00$\pm$ 0.95 & $-$2.85$\pm$ 0.95 & 58$\pm$ 10 &Sct &8\
G012.80$-$00.20 & &18:14:14.23 &$-$17:55:40.5 & 0.343$\pm$ 0.037 & $-$0.60$\pm$ 0.70 & $-$0.99$\pm$ 0.70 & 34$\pm$ 5 &Sct &8\
G012.88$+$00.48 &IRAS 18089$-$1732&18:11:51.42 &$-$17:31:29.0 & 0.400$\pm$ 0.040 & 0.15$\pm$ 0.25 & $-$2.30$\pm$ 0.39 & 31$\pm$ 7 &Sct &8,10\
G012.90$-$00.24 & &18:14:34.42 &$-$17:51:51.9 & 0.408$\pm$ 0.025 & 0.19$\pm$ 0.80 & $-$2.52$\pm$ 0.80 & 36$\pm$ 10 &Sct &8\
G012.90$-$00.26 & &18:14:39.57 &$-$17:52:00.4 & 0.396$\pm$ 0.032 & $-$0.36$\pm$ 0.80 & $-$2.22$\pm$ 0.80 & 39$\pm$ 10 &Sct &8\
G013.87$+$00.28 & &18:14:35.83 &$-$16:45:35.9 & 0.254$\pm$ 0.024 & $-$0.25$\pm$ 2.00 & $-$2.49$\pm$ 2.00 & 48$\pm$ 10 &Sct &4\
G014.33$-$00.64 & &18:18:54.67 &$-$16:47:50.3 & 0.893$\pm$ 0.101 & 0.95$\pm$ 1.50 & $-$2.40$\pm$ 1.30 & 22$\pm$ 5 &Sgr &9\
G014.63$-$00.57 & &18:19:15.54 &$-$16:29:45.8 & 0.546$\pm$ 0.022 & 0.22$\pm$ 1.20 & $-$2.07$\pm$ 1.20 & 19$\pm$ 5 &Sgr &2\
G015.03$-$00.67 &M 17 &18:20:24.81 &$-$16:11:35.3 & 0.505$\pm$ 0.033 & 0.68$\pm$ 0.32 & $-$1.42$\pm$ 0.33 & 22$\pm$ 3 &Sgr &10\
G016.58$-$00.05 & &18:21:09.08 &$-$14:31:48.8 & 0.279$\pm$ 0.023 & $-$2.52$\pm$ 0.37 & $-$2.33$\pm$ 0.35 & 60$\pm$ 5 &Sct &4\
G023.00$-$00.41 & &18:34:40.20 &$-$09:00:37.0 & 0.218$\pm$ 0.017 & $-$1.72$\pm$ 0.14 & $-$4.12$\pm$ 0.33 & 80$\pm$ 3 &4$-$k &11\
G023.44$-$00.18 & &18:34:39.19 &$-$08:31:25.4 & 0.170$\pm$ 0.032 & $-$1.93$\pm$ 0.15 & $-$4.11$\pm$ 0.13 & 97$\pm$ 3 &4$-$k &11\
G023.65$-$00.12 & &18:34:51.59 &$-$08:18:21.4 & 0.313$\pm$ 0.039 & $-$1.32$\pm$ 0.20 & $-$2.96$\pm$ 0.20 & 83$\pm$ 3 &... &12\
G023.70$-$00.19 & &18:35:12.36 &$-$08:17:39.5 & 0.161$\pm$ 0.024 & $-$3.17$\pm$ 0.12 & $-$6.38$\pm$ 0.16 & 73$\pm$ 5 &4$-$k &7\
G025.70$+$00.04 & &18:38:03.14 &$-$06:24:15.5 & 0.098$\pm$ 0.029 & $-$2.89$\pm$ 0.07 & $-$6.20$\pm$ 0.36 & 93$\pm$ 5 &Sct &4\
G027.36$-$00.16 & &18:41:51.06 &$-$05:01:43.4 & 0.125$\pm$ 0.042 & $-$1.81$\pm$ 0.11 & $-$4.11$\pm$ 0.27 & 92$\pm$ 3 &Sct &10\
G028.86$+$00.06 & &18:43:46.22 &$-$03:35:29.6 & 0.135$\pm$ 0.018 & $-$4.80$\pm$ 0.30 & $-$5.90$\pm$ 0.30 & 100$\pm$ 10 &Sct &4\
G029.86$-$00.04 & &18:45:59.57 &$-$02:45:06.7 & 0.161$\pm$ 0.020 & $-$2.32$\pm$ 0.11 & $-$5.29$\pm$ 0.16 & 100$\pm$ 3 &Sct &6\
G029.95$-$00.01 &W 43S &18:46:03.74 &$-$02:39:22.3 & 0.190$\pm$ 0.019 & $-$2.30$\pm$ 0.13 & $-$5.34$\pm$ 0.13 & 98$\pm$ 3 &Sct &6\
G031.28$+$00.06 & &18:48:12.39 &$-$01:26:30.7 & 0.234$\pm$ 0.039 & $-$2.09$\pm$ 0.16 & $-$4.37$\pm$ 0.21 & 109$\pm$ 3 &Sct &6\
G031.58$+$00.07 &W 43Main &18:48:41.68 &$-$01:09:59.0 & 0.204$\pm$ 0.030 & $-$1.88$\pm$ 0.40 & $-$4.84$\pm$ 0.40 & 96$\pm$ 5 &Sct &6\
G032.04$+$00.05 & &18:49:36.58 &$-$00:45:46.9 & 0.193$\pm$ 0.008 & $-$2.21$\pm$ 0.40 & $-$4.80$\pm$ 0.40 & 97$\pm$ 5 &Sct &4\
G033.64$-$00.22 & &18:53:32.56 &$+$00:31:39.1 & 0.153$\pm$ 0.017 & $-$3.18$\pm$ 0.10 & $-$6.10$\pm$ 0.10 & 60$\pm$ 3 &... &1\
G034.39$+$00.22 & &18:53:18.77 &$+$01:24:08.8 & 0.643$\pm$ 0.049 & $-$0.90$\pm$ 1.00 & $-$2.75$\pm$ 2.00 & 57$\pm$ 5 &Sgr &13\
G035.02$+$00.34 & &18:54:00.67 &$+$02:01:19.2 & 0.430$\pm$ 0.040 & $-$0.92$\pm$ 0.90 & $-$3.61$\pm$ 0.90 & 52$\pm$ 5 &Sgr &2\
G035.19$-$00.74 & &18:58:13.05 &$+$01:40:35.7 & 0.456$\pm$ 0.045 & $-$0.18$\pm$ 0.50 & $-$3.63$\pm$ 0.50 & 30$\pm$ 7 &Sgr &14\
G035.20$-$01.73 & &19:01:45.54 &$+$01:13:32.5 & 0.306$\pm$ 0.045 & $-$0.71$\pm$ 0.21 & $-$3.61$\pm$ 0.26 & 42$\pm$ 3 &Sgr &14\
G037.43$+$01.51 & &18:54:14.35 &$+$04:41:41.7 & 0.532$\pm$ 0.021 & $-$0.45$\pm$ 0.35 & $-$3.69$\pm$ 0.39 & 41$\pm$ 3 &Sgr &2\
G043.16$+$00.01 &W 49N &19:10:13.41 &$+$09:06:12.8 & 0.090$\pm$ 0.007 & $-$2.88$\pm$ 0.20 & $-$5.41$\pm$ 0.20 & 10$\pm$ 5 &Per &15\
G043.79$-$00.12 &OH 43.8$-$0.1 &19:11:53.99 &$+$09:35:50.3 & 0.166$\pm$ 0.005 & $-$3.02$\pm$ 0.36 & $-$6.20$\pm$ 0.36 & 44$\pm$ 10 &Sgr &2\
G043.89$-$00.78 & &19:14:26.39 &$+$09:22:36.5 & 0.121$\pm$ 0.020 & $-$2.75$\pm$ 0.30 & $-$6.43$\pm$ 0.30 & 54$\pm$ 5 &Sgr &2\
G045.07$+$00.13 & &19:13:22.04 &$+$10:50:53.3 & 0.125$\pm$ 0.005 & $-$2.98$\pm$ 0.45 & $-$6.26$\pm$ 0.45 & 59$\pm$ 5 &Sgr &2\
G045.45$+$00.05 & &19:14:21.27 &$+$11:09:15.9 & 0.119$\pm$ 0.017 & $-$2.34$\pm$ 0.38 & $-$6.00$\pm$ 0.54 & 55$\pm$ 7 &Sgr &2\
G048.60$+$00.02 & &19:20:31.18 &$+$13:55:25.2 & 0.093$\pm$ 0.005 & $-$2.89$\pm$ 0.13 & $-$5.50$\pm$ 0.13 & 18$\pm$ 5 &Per &15\
G049.19$-$00.33 & &19:22:57.77 &$+$14:16:10.0 & 0.189$\pm$ 0.007 & $-$2.99$\pm$ 0.40 & $-$5.71$\pm$ 0.40 & 67$\pm$ 5 &Sgr &2\
G049.48$-$00.36 &W 51 IRS2 &19:23:39.82 &$+$14:31:05.0 & 0.195$\pm$ 0.071 & $-$2.49$\pm$ 0.14 & $-$5.51$\pm$ 0.16 & 56$\pm$ 3 &Sgr &16\
G049.48$-$00.38 &W 51M &19:23:43.87 &$+$14:30:29.5 & 0.185$\pm$ 0.010 & $-$2.64$\pm$ 0.20 & $-$5.11$\pm$ 0.20 & 58$\pm$ 4 &Sgr &17\
G052.10$+$01.04 &IRAS 19213+1723&19:23:37.32 &$+$17:29:10.5 & 0.251$\pm$ 0.060 & $-$2.60$\pm$ 2.00 & $-$6.10$\pm$ 2.00 & 42$\pm$ 5 &Sgr &18\
G059.78$+$00.06 & &19:43:11.25 &$+$23:44:03.3 & 0.463$\pm$ 0.020 & $-$1.65$\pm$ 0.30 & $-$5.12$\pm$ 0.30 & 25$\pm$ 3 &Loc &16\
G069.54$-$00.97 &ON 1 &20:10:09.07 &$+$31:31:36.0 & 0.406$\pm$ 0.013 & $-$3.19$\pm$ 0.40 & $-$5.22$\pm$ 0.40 & 12$\pm$ 5 &Loc &19,20,21\
G074.03$-$01.71 & &20:25:07.11 &$+$34:49:57.6 & 0.629$\pm$ 0.017 & $-$3.79$\pm$ 1.30 & $-$4.88$\pm$ 1.50 & 5$\pm$ 5 &Loc &21\
G075.29$+$01.32 & &20:16:16.01 &$+$37:35:45.8 & 0.108$\pm$ 0.005 & $-$2.37$\pm$ 0.11 & $-$4.48$\pm$ 0.17 & $-$58$\pm$ 5 &Out &22\
G075.76$+$00.33 & &20:21:41.09 &$+$37:25:29.3 & 0.285$\pm$ 0.022 & $-$3.08$\pm$ 0.60 & $-$4.56$\pm$ 0.60 & $-$9$\pm$ 9 &Loc &21\
G075.78$+$00.34 &ON 2N &20:21:44.01 &$+$37:26:37.5 & 0.261$\pm$ 0.030 & $-$2.79$\pm$ 0.55 & $-$4.66$\pm$ 0.55 & 1$\pm$ 5 &Loc &23\
G076.38$-$00.61 & &20:27:25.48 &$+$37:22:48.5 & 0.770$\pm$ 0.053 & $-$3.73$\pm$ 3.00 & $-$3.84$\pm$ 3.00 & $-$2$\pm$ 5 &Loc &21\
G078.12$+$03.63 &IRAS 20126+4104&20:14:26.07 &$+$41:13:32.7 & 0.610$\pm$ 0.030 & $-$2.06$\pm$ 0.50 & 0.98$\pm$ 0.50 & $-$4$\pm$ 5 &Loc &24\
G078.88$+$00.70 &AFGL 2591 &20:29:24.82 &$+$40:11:19.6 & 0.300$\pm$ 0.024 & $-$1.20$\pm$ 0.72 & $-$4.80$\pm$ 0.66 & $-$6$\pm$ 7 &Loc &25\
G079.73$+$00.99 &IRAS 20290+4052&20:30:50.67 &$+$41:02:27.5 & 0.737$\pm$ 0.062 & $-$2.84$\pm$ 0.50 & $-$4.14$\pm$ 0.70 & $-$3$\pm$ 5 &Loc &25\
G079.87$+$01.17 & &20:30:29.14 &$+$41:15:53.6 & 0.620$\pm$ 0.027 & $-$3.23$\pm$ 1.30 & $-$5.19$\pm$ 1.30 & $-$5$\pm$ 10 &Loc &21\
G080.79$-$01.92 &NML Cyg &20:46:25.54 &$+$40:06:59.4 & 0.620$\pm$ 0.047 & $-$1.55$\pm$ 0.57 & $-$4.59$\pm$ 0.57 & $-$3$\pm$ 3 &Loc &26\
G080.86$+$00.38 &DR 20 &20:37:00.96 &$+$41:34:55.7 & 0.687$\pm$ 0.038 & $-$3.29$\pm$ 0.45 & $-$4.83$\pm$ 0.50 & $-$3$\pm$ 5 &Loc &25\
G081.75$+$00.59 &DR 21 &20:39:01.99 &$+$42:24:59.3 & 0.666$\pm$ 0.035 & $-$2.84$\pm$ 0.45 & $-$3.80$\pm$ 0.47 & $-$3$\pm$ 3 &Loc &25\
G081.87$+$00.78 &W 75N &20:38:36.43 &$+$42:37:34.8 & 0.772$\pm$ 0.042 & $-$1.97$\pm$ 0.50 & $-$4.16$\pm$ 0.51 & 7$\pm$ 3 &Loc &25\
G090.21$+$02.32 & &21:02:22.70 &$+$50:03:08.3 & 1.483$\pm$ 0.038 & $-$0.67$\pm$ 1.56 & $-$0.90$\pm$ 1.67 & $-$3$\pm$ 5 &Loc &21\
G092.67$+$03.07 & &21:09:21.73 &$+$52:22:37.1 & 0.613$\pm$ 0.020 & $-$0.69$\pm$ 0.60 & $-$2.25$\pm$ 0.60 & $-$5$\pm$ 10 &Loc &21\
G094.60$-$01.79 &AFGL 2789 &21:39:58.27 &$+$50:14:21.0 & 0.280$\pm$ 0.030 & $-$2.30$\pm$ 0.60 & $-$3.80$\pm$ 0.60 & $-$46$\pm$ 5 &Per &18,28\
G095.29$-$00.93 & &21:39:40.51 &$+$51:20:32.8 & 0.205$\pm$ 0.015 & $-$2.75$\pm$ 0.20 & $-$2.75$\pm$ 0.25 & $-$38$\pm$ 5 &Per &28\
G097.53$+$03.18 & &21:32:12.43 &$+$55:53:49.7 & 0.133$\pm$ 0.017 & $-$2.94$\pm$ 0.29 & $-$2.48$\pm$ 0.29 & $-$73$\pm$ 5 &Out &27\
G100.37$-$03.57 & &22:16:10.37 &$+$52:21:34.1 & 0.291$\pm$ 0.010 & $-$3.77$\pm$ 0.60 & $-$3.12$\pm$ 0.60 & $-$37$\pm$ 10 &Per &28\
G105.41$+$09.87 & &21:43:06.48 &$+$66:06:55.3 & 1.129$\pm$ 0.063 & $-$0.21$\pm$ 1.20 & $-$5.49$\pm$ 1.20 & $-$10$\pm$ 5 &Loc &21\
G107.29$+$05.63 &IRAS 22198+6336&22:21:26.73 &$+$63:51:37.9 & 1.288$\pm$ 0.107 & $-$2.47$\pm$ 1.40 & 0.26$\pm$ 1.40 & $-$11$\pm$ 5 &Loc &29\
G108.18$+$05.51 &L 1206 &22:28:51.41 &$+$64:13:41.3 & 1.289$\pm$ 0.153 & 0.27$\pm$ 0.50 & $-$1.40$\pm$ 1.95 & $-$11$\pm$ 3 &Loc &19\
G108.20$+$00.58 & &22:49:31.48 &$+$59:55:42.0 & 0.229$\pm$ 0.028 & $-$2.25$\pm$ 0.50 & $-$1.00$\pm$ 0.50 & $-$49$\pm$ 5 &Per &28\
G108.47$-$02.81 & &23:02:32.08 &$+$56:57:51.4 & 0.309$\pm$ 0.010 & $-$2.45$\pm$ 1.00 & $-$3.00$\pm$ 0.70 & $-$54$\pm$ 5 &Per &28\
G108.59$+$00.49 & &22:52:38.30 &$+$60:00:52.0 & 0.398$\pm$ 0.031 & $-$5.55$\pm$ 0.40 & $-$3.38$\pm$ 0.40 & $-$52$\pm$ 5 &Per &28\
G109.87$+$02.11 &Cep A &22:56:18.10 &$+$62:01:49.5 & 1.430$\pm$ 0.080 & 0.50$\pm$ 1.50 & $-$3.70$\pm$ 1.00 & $-$7$\pm$ 5 &Loc &30\
G111.23$-$01.23 & &23:17:20.79 &$+$59:28:47.0 & 0.288$\pm$ 0.044 & $-$4.28$\pm$ 0.60 & $-$2.33$\pm$ 0.60 & $-$53$\pm$ 10 &Per &28\
G111.25$-$00.76 & &23:16:10.36 &$+$59:55:28.5 & 0.294$\pm$ 0.016 & $-$2.45$\pm$ 0.60 & $-$2.10$\pm$ 0.60 & $-$43$\pm$ 5 &Per &28\
G111.54$+$00.77 &NGC 7538 &23:13:45.36 &$+$61:28:10.6 & 0.378$\pm$ 0.017 & $-$2.45$\pm$ 0.24 & $-$2.44$\pm$ 0.25 & $-$57$\pm$ 5 &Per &30\
G121.29$+$00.65 &L 1287 &00:36:47.35 &$+$63:29:02.2 & 1.077$\pm$ 0.039 & $-$0.86$\pm$ 0.76 & $-$2.29$\pm$ 0.82 & $-$23$\pm$ 5 &Loc &19\
G122.01$-$07.08 &IRAS 00420+5530&00:44:58.40 &$+$55:46:47.6 & 0.460$\pm$ 0.020 & $-$3.70$\pm$ 0.50 & $-$1.25$\pm$ 0.50 & $-$50$\pm$ 5 &Per &31\
G123.06$-$06.30 &NGC 281 &00:52:24.70 &$+$56:33:50.5 & 0.355$\pm$ 0.030 & $-$2.79$\pm$ 0.62 & $-$2.14$\pm$ 0.70 & $-$30$\pm$ 5 &Per &32\
G123.06$-$06.30 &NGC 281W &00:52:24.20 &$+$56:33:43.2 & 0.421$\pm$ 0.022 & $-$2.69$\pm$ 0.31 & $-$1.77$\pm$ 0.29 & $-$29$\pm$ 3 &Per &19\
G133.94$+$01.06 &W 3OH &02:27:03.82 &$+$61:52:25.2 & 0.512$\pm$ 0.010 & $-$1.20$\pm$ 0.32 & $-$0.15$\pm$ 0.32 & $-$47$\pm$ 3 &Per &33,34\
G134.62$-$02.19 &S Per &02:22:51.71 &$+$58:35:11.4 & 0.413$\pm$ 0.017 & $-$0.49$\pm$ 0.35 & $-$1.19$\pm$ 0.33 & $-$39$\pm$ 5 &Per &35\
G135.27$+$02.79 &WB 89$-$437 &02:43:28.57 &$+$62:57:08.4 & 0.167$\pm$ 0.011 & $-$1.22$\pm$ 0.30 & 0.46$\pm$ 0.36 & $-$72$\pm$ 3 &Out &36\
G160.14$+$03.15 & &05:01:40.24 &$+$47:07:19.0 & 0.244$\pm$ 0.006 & 0.87$\pm$ 0.35 & $-$1.32$\pm$ 0.29 & $-$18$\pm$ 5 &... &1\
G168.06$+$00.82 &IRAS 05137+3919&05:17:13.74 &$+$39:22:19.9 & 0.130$\pm$ 0.040 & 0.50$\pm$ 0.24 & $-$0.85$\pm$ 0.17 & $-$27$\pm$ 5 &Out &37,38\
G176.51$+$00.20 & &05:37:52.14 &$+$32:00:03.9 & 1.038$\pm$ 0.021 & 1.84$\pm$ 1.00 & $-$5.86$\pm$ 1.00 & $-$17$\pm$ 5 &Loc &21\
G182.67$-$03.26 & &05:39:28.42 &$+$24:56:32.1 & 0.149$\pm$ 0.011 & 0.16$\pm$ 0.32 & $-$0.17$\pm$ 0.32 & $-$7$\pm$ 10 &Out &37\
G183.72$-$03.66 & &05:40:24.23 &$+$23:50:54.7 & 0.570$\pm$ 0.013 & 0.13$\pm$ 1.20 & $-$1.40$\pm$ 1.20 & 3$\pm$ 5 &Per &28\
G188.79$+$01.03 &IRAS 06061+2151&06:09:06.97 &$+$21:50:41.4 & 0.496$\pm$ 0.103 & $-$0.10$\pm$ 0.50 & $-$3.91$\pm$ 0.50 & $-$5$\pm$ 5 &Per &39\
G188.94$+$00.88 &S 252 &06:08:53.35 &$+$21:38:28.7 & 0.476$\pm$ 0.006 & 0.02$\pm$ 0.30 & $-$2.02$\pm$ 0.30 & 8$\pm$ 5 &Per &18,40\
G192.16$-$03.81 & &05:58:13.53 &$+$16:31:58.9 & 0.660$\pm$ 0.040 & 0.70$\pm$ 0.78 & $-$1.80$\pm$ 0.86 & 5$\pm$ 5 &Per &41\
G192.60$-$00.04 &S 255 &06:12:54.02 &$+$17:59:23.3 & 0.628$\pm$ 0.027 & $-$0.14$\pm$ 0.67 & $-$0.84$\pm$ 1.80 & 6$\pm$ 5 &Per &19\
G196.45$-$01.67 &S 269 &06:14:37.08 &$+$13:49:36.7 & 0.189$\pm$ 0.012 & $-$0.42$\pm$ 0.20 & $-$0.12$\pm$ 0.20 & 19$\pm$ 5 &Out &42\
G209.00$-$19.38 &Orion Nebula &05:35:15.80 &$-$05:23:14.1 & 2.410$\pm$ 0.030 & 3.30$\pm$ 1.50 & 0.10$\pm$ 1.50 & 3$\pm$ 5 &Loc &43,44,45\
G211.59$+$01.05 & &06:52:45.32 &$+$01:40:23.1 & 0.228$\pm$ 0.007 & $-$0.93$\pm$ 0.24 & 0.71$\pm$ 0.26 & 45$\pm$ 5 &... &1\
G229.57$+$00.15 & &07:23:01.84 &$-$14:41:32.8 & 0.221$\pm$ 0.014 & $-$1.34$\pm$ 0.70 & 0.81$\pm$ 0.70 & 47$\pm$ 10 &Per &28\
G232.62$+$00.99 & &07:32:09.78 &$-$16:58:12.8 & 0.596$\pm$ 0.035 & $-$2.17$\pm$ 0.38 & 2.09$\pm$ 0.60 & 21$\pm$ 3 &Loc &40\
G236.81$+$01.98 & &07:44:28.24 &$-$20:08:30.2 & 0.298$\pm$ 0.018 & $-$3.10$\pm$ 0.63 & 2.12$\pm$ 0.63 & 43$\pm$ 7 &Per &28\
G239.35$-$05.06 &VY CMa &07:22:58.33 &$-$25:46:03.1 & 0.855$\pm$ 0.057 & $-$2.80$\pm$ 0.58 & 2.60$\pm$ 0.58 & 20$\pm$ 3 &Loc &46,47\
G240.31$+$00.07 & &07:44:51.92 &$-$24:07:41.5 & 0.212$\pm$ 0.021 & $-$2.36$\pm$ 0.23 & 2.45$\pm$ 0.30 & 67$\pm$ 5 &Per &28\
\[table:parallaxes\]
Both the proper motion, $\mu_x$ and $\mu_y$, and Local Standard of Rest (LSR) velocity, , values and their uncertainties are meant to apply to the central star (or stars) that excite the masers. (Note that “LSR velocities” are [*defined*]{} based on the Standard Solar Motion values of 20 toward $18^h$ Right Ascension and $30^\circ$ Declination in 1900 coordinates, which translate to Galactic cartesian components of $\Uo=10$, $\Vo=15$ and $\Wo=7$ .) For the values we adopted methanol maser values, when available, or CO emission values from associated giant molecular clouds. Since some of the references reporting parallax and proper motion present only measurement uncertainty, for these we estimated an additional error term associated with the uncertainty in transferring the maser motions to that of the central star. These were added in quadrature with the measurement uncertainties. For methanol masers, which typically have modest motions of $\lax10$ with respect to the central star, we estimated the additional error term to be $\pm5$ for and a corresponding value for the proper motion components at the measured distance. While some water masers have expansion motions comparable to methanol masers, others display much faster outflow motions. High velocity outflows are usually associated with water masers that have spectra rich in features, spread over many tens of . We, therefore, evaluated the richness and spread of the water spectra (with respect to the systemic velocity as indicated by CO emission) and assigned the additional error term for $\mu_x$ and $\mu_y$ values between 5 and 20 .
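The LSR definition above can be made concrete with a short sketch (not from the paper; the function name is mine). Converting a heliocentric radial velocity to the LSR adds the line-of-sight projection of the Standard Solar Motion for a source at Galactic coordinates $(\ell, b)$:

```python
import math

# Standard Solar Motion components (km/s) that *define* LSR velocities,
# as quoted in the text: toward the Galactic center, along Galactic
# rotation, and toward the north Galactic pole.
U0, V0, W0 = 10.0, 15.0, 7.0

def helio_to_lsr(v_helio, l_deg, b_deg):
    """Convert a heliocentric radial velocity (km/s) to an LSR value by
    adding the line-of-sight projection of the Standard Solar Motion
    for a source at Galactic longitude l and latitude b (degrees)."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    proj = (U0 * math.cos(l) * math.cos(b)
            + V0 * math.sin(l) * math.cos(b)
            + W0 * math.sin(b))
    return v_helio + proj

# Toward the Galactic center (l = b = 0) the full U component is added:
print(round(helio_to_lsr(50.0, 0.0, 0.0), 1))  # 60.0
```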
Spiral Structure {#sect:spiral_structure}
================
Spiral arms in the Milky Way have long been recognized as presenting coherent arcs and loops in Galactic longitude–velocity ($\ell-V$) plots of atomic and molecular emissions. However, transforming velocity to distance (kinematic distances) has been problematic, owing to near-far distance ambiguities in the first and fourth Galactic quadrants and significant distance errors owing to large peculiar motions for some arm material (see, @Xu:06 [@Reid:09b]). While one cannot accurately place spiral arms on a plan view of the Milky Way from $\ell-V$ plots, one can in most cases unambiguously assign HMSFRs to spiral arms by association with CO and emission features. We have done this for the vast majority of the HMSFRs for which parallax and proper motions have been measured [@Hachi:13; @Choi:13; @Zhang:13; @Xu:13; @Wu:13; @Sato:13; @Sanna:13], as indicated in Table \[table:parallaxes\] and Figure \[fig:parallaxes\]. This avoids using the measured distances (parallaxes) and subjective judgment based on spatial location for arm assignments.
There are two avenues for checking that the arm assignments are reliable. Firstly, and most straightforwardly, looking at a plan view of the Milky Way (see Fig. \[fig:parallaxes\]) on which star forming regions with parallax distances are located, one can see that the pattern of sources for any given arm traces a continuous arc that resembles a spiral arm in external galaxies. Also, there are clear inter-arm regions with few, if any, HMSFRs between the Outer, Perseus, Local, Sagittarius, and Scutum arms. However, as one looks to the inner Galaxy, the current parallax data are not adequate to clearly separate arms, presuming significant separations even exist.
Secondly, once sources are assigned to arms based on $\ell-V$ information, one can then attempt to fit their radial and azimuthal locations to log-periodic spiral forms using measured distances. In the papers cited above, we fitted spiral patterns to arm segments, adopting a log-periodic spiral defined by $$\ln{(R/R_{ref})} = -(\beta - \beta_{ref}) \tan{\pa}~~,$$ where $R$ is the Galactocentric radius at a Galactocentric azimuth $\beta$ (defined as 0 toward the Sun and increasing with Galactic longitude) for an arm with a radius $R_{ref}$ at reference azimuth $\beta_{ref}$ and pitch angle $\pa$. We fitted a straight line to ($x,y$)=($\beta,\ln{(R/R_{ref})}$) using a Bayesian Markov chain Monte Carlo (McMC) procedure to estimate the parameters $R_{ref}$ and $\pa$. (The reference azimuth, $\beta_{ref}$, was arbitrarily set near the midpoint of the azimuth values for the sources in an arm). We minimized the “distance” perpendicular to the fitted straight line by rotating ($x,y$) through the angle $\pa$ to ($x_r,y_r$), $$x_r = x~\cos{\pa} + y~\sin{\pa};~~~y_r = y~\cos{\pa} - x~\sin{\pa}~~,$$ such that the best-fitting line lay in the $x_r$ axis.
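The log-periodic spiral and the rotation above can be sketched numerically (a minimal illustration, not the fitting code; function names are mine, and the pitch-angle value is the Scutum figure from Table \[table:pitchangles\]):

```python
import math

def spiral_radius(beta, beta_ref, R_ref, pa):
    """Log-periodic spiral of the first equation above:
    ln(R/R_ref) = -(beta - beta_ref) * tan(pa); angles in radians,
    radii in kpc."""
    return R_ref * math.exp(-(beta - beta_ref) * math.tan(pa))

def rotate(x, y, pa):
    """Rotation of the second equation, applied so that residuals are
    measured perpendicular to the fitted straight line."""
    xr = x * math.cos(pa) + y * math.sin(pa)
    yr = y * math.cos(pa) - x * math.sin(pa)
    return xr, yr

pa = math.radians(19.8)          # Scutum pitch angle
R_ref, beta_ref = 5.0, 0.0
# Points on the spiral fall on a line of slope -tan(pa) in the
# (beta, ln(R/R_ref)) plane:
for beta in (0.2, 0.5, 1.0):
    R = spiral_radius(beta, beta_ref, R_ref, pa)
    assert abs(math.log(R / R_ref) / beta + math.tan(pa)) < 1e-12
# The rotation preserves lengths, so perpendicular residuals keep
# their original units:
xr, yr = rotate(0.3, -0.1, pa)
assert abs(math.hypot(xr, yr) - math.hypot(0.3, -0.1)) < 1e-12
```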
Uncertainties in the source parallax “map” into both coordinates and were estimated numerically by randomly drawing trial parallax values (consistent with the measured values and uncertainties) and calculating the root-mean-squares for trial $\ln{(R/R_{ref})}$ and $\beta$ values. The locations of the HMSFRs deviated from fitted spirals by more than could be explained by parallax uncertainties. This is expected for spiral arms with intrinsic widths of several hundred parsecs. In order to allow for (and estimate) the scatter in location expected from the width of the spiral arm, before calculating trial $\ln{(R/R_{ref})}$ values, we added random scatter to the trial $R$ values via $R \leftarrow R + g a_w \cos{\pa}$, where $g$ is a random number drawn from a Gaussian distribution with zero mean and unity standard deviation and $a_w$ is an arm-width parameter, adjusted to give a post-fit $\chi^2_\nu$ near unity. The uncertainties in ($\beta,\ln{(R/R_{ref})}$) were then rotated by angle $\pa$ to match the data.
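The arm-width scatter step can be sketched as follows (illustrative only; the function name and seed are mine, while the width and pitch-angle values are the Local-arm figures from Table \[table:pitchangles\]):

```python
import math
import random

def scatter_radius(R, a_w, pa, rng):
    """One Monte Carlo draw of the arm-width scatter described in the
    text: R <- R + g * a_w * cos(pa), where g is a unit Gaussian
    deviate.  The cos(pa) factor converts a displacement across the
    arm into a radial displacement."""
    return R + rng.gauss(0.0, 1.0) * a_w * math.cos(pa)

rng = random.Random(42)          # fixed seed, illustrative
pa = math.radians(12.8)          # Local-arm pitch angle
a_w = 0.33                       # Local-arm width parameter (kpc)
draws = [scatter_radius(8.4, a_w, pa, rng) for _ in range(200_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# The induced radial standard deviation approaches a_w * cos(pa):
print(abs(math.sqrt(var) - a_w * math.cos(pa)) < 0.01)  # True
```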
The sum of the squares of the residuals divided by their uncertainties in the $y_r$ direction were minimized. Since preliminary estimates of $\pa$ affect these quantities, we iterated the fitting to convergence. Final parameter values were estimated from marginalized posteriori probability density distribution functions (PDFs) for each parameter based on McMC trials that were accepted or rejected following the Metropolis-Hastings algorithm; the values reported in Table \[table:pitchangles\] assume $\Ro=8.34$ kpc (see §\[sect:modeling\]). Based on the fitted parameter values, we plot the trace of the centers and $1\sigma$ widths of each arm on Fig. \[fig:parallaxes\].
The intrinsic widths of the spiral arms, estimated from the $a_w$ parameters, show an interesting pattern in Fig. \[fig:armwidths\]. The estimated arm widths increase nearly linearly with Galactocentric radius at a rate of 42 pc kpc$^{-1}$ between radii of 5 to 13 kpc. Spiral pitch angles vary between $7^\circ$ and $20^\circ$ as listed in Table \[table:pitchangles\]. The significant range of pitch angles among arms suggests that no single value applies to all arms and, possibly, cannot be applied to the full length of an arm as it winds around the Galaxy [@Savchenko:13]. However, these pitch angles are characteristic of spiral galaxies of Sb to Sc class [@Kennicutt:81], further supporting the identification of $\ell-V$ tracks as spiral arms for the Milky Way.
[lrlrrr]{} Scutum &17 &27.6 ($+3\rightarrow101$) &$5.0\pm0.1$ & $0.17\pm0.02$ &$19.8\pm2.6$\
Sagittarius &18 &25.6 ($-2\rightarrow68$) &$6.6\pm0.1$ & $0.26\pm0.02$ &$6.9\pm1.6$\
Local &25 &8.9 ($-8\rightarrow27$) &$8.4\pm0.1$ & $0.33\pm0.01$ &$12.8\pm2.7$\
Perseus &24 &14.2 ($-21\rightarrow88$) &$9.9\pm0.1$ & $0.38\pm0.01$ &$9.4\pm1.4$\
Outer &6 &18.6 ($-6\rightarrow56$) &$13.0\pm0.3$ & $0.63\pm0.18$ &$13.8\pm3.3$\
\[table:pitchangles\]
The HMSFRs with measured parallaxes are clearly tracing the major spiral arms of the Milky Way (see Fig. \[fig:parallaxes\]), and details of the locations and properties of the individual arms can be found in the primary references [@Hachi:13; @Choi:13; @Zhang:13; @Xu:13; @Wu:13; @Sato:13; @Sanna:13]. Interestingly, some surprising results are already evident. We are finding that the Perseus arm, thought to be one of the major spiral arms of the Milky Way, has little massive star formation over a 6 kpc-long arc between Galactic longitudes of $50^\circ$ and $80^\circ$ [@Choi:13; @Zhang:13]. On the other hand, the Local (Orion) arm, often called a “spur” and considered a minor structure [@Blaauw:85], has comparable massive star formation to its adjacent Sagittarius and Perseus arms [@Xu:13].
Modeling the Galaxy {#sect:modeling}
===================
Given measurements of position, parallax, proper motion and Doppler shift, one has complete three-dimensional location and velocity vectors relative to the Sun. One can then construct a model of the Milky Way and adjust the model parameters to best match the data. As in @Reid:09b, we model the Milky Way as a disk rotating with speed $\Theta(R)=\To+\Tdot~(R-\Ro)$, where is the distance from the Sun to the Galactic center and is the circular rotation speed at this distance. We then evaluate the effects of different forms for the rotation curve. Since all measured motions are relative to the Sun, we need to model the peculiar (non-circular) motion of the Sun, parameterized by toward the Galactic center, in the direction of Galactic rotation, and towards the north Galactic pole (NGP). Table \[table:model\] summarizes these and other parameters.
[ll]{} & Distance of Sun from GC\
& Rotation Speed of Galaxy at\
& Derivative of $\Theta$ with $R$: $\Theta(R)=\To+\Tdot~(R-\Ro)$\
\
& Solar motion toward GC\
& Solar motion in direction of Galactic rotation\
& Solar motion toward NGP\
\
& Average source peculiar motion toward GC\
& Average source peculiar motion in direction of Galactic rotation\
& Average source peculiar motion toward NGP\
\[table:model\]
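The linear rotation-curve parameterization above is simple to evaluate; this sketch (the function name is mine) uses the fitted values quoted in the abstract:

```python
def theta(R, R0=8.34, theta0=240.0, dtheta_dR=-0.2):
    """Linear rotation curve Theta(R) = Theta0 + dTheta/dR * (R - R0),
    with R, R0 in kpc, Theta0 in km/s and dTheta/dR in km/s/kpc
    (the fitted values quoted in the abstract)."""
    return theta0 + dtheta_dR * (R - R0)

# Nearly flat: the circular speed changes by only ~2 km/s over the
# fitted range of Galactocentric radii (about 5 to 16 kpc):
print(round(theta(5.0), 1), round(theta(16.0), 1))  # 240.7 238.5
```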
For each source, we treated the 3-dimensional velocity components (two components of proper motion, $\mu_x$ and $\mu_y$, and the heliocentric Doppler velocity, ) as data to be compared to a model. The source coordinates ($\ell,b$) and parallax distance ($1/\pars$) were treated as independent variables. This approach is slightly different from that in @Reid:09b, where the parallaxes were also treated as data in the least-squares fitting. While that approach adds some extra information ( for sources near the Galactic tangent points, distance is very sensitive to Doppler velocity, but not vice versa), it brings correlated data into the fitting, which will lead to slightly underestimated parameter uncertainties. We tested the inclusion versus exclusion of parallax with simulated data sets and found little difference and no bias between the methods. However, in order to avoid the need to adjust formal parameter uncertainties, as well as subtle issues associated with resolving the near/far distance ambiguities for sources in the first and fourth Galactic quadrants, we used the more conservative “velocity-only” fitting as done, for example, by others [@Bovy:09; @McMillan:10; @Bobylev:10; @Honma:12].
Bayesian fitting {#sect:Bayesian}
----------------
We adjusted the Galactic parameters so as to best match the data to the spatial-kinematic model using a Bayesian fitting approach. The posterior PDFs of the parameters were estimated with Markov chain Monte Carlo (McMC) trials that were accepted or rejected by the Metropolis–Hastings algorithm. While a simple axi-symmetric model for the Galaxy may be a reasonable approximation for the majority of sources, a significant minority of outliers are expected for a variety of well known reasons. For example, the gravitational potential of the Galactic bar (or bars), which extends 3 to 4 kpc from the Galactic center [@Liszt:80; @Blitz:91; @Hammersley:00; @Benjamin:05], is expected to induce large non-circular motions for sources in its vicinity. Indeed, some of these sources show large peculiar motions when evaluated with a nearly flat rotation curve extrapolated inward from measurements outside this region [@Sanna:13]. Therefore, we removed the eight sources within 4 kpc of the Galactic center (G000.67$-$00.03, G009.62$+$00.19, G010.47$+$00.02, G010.62$-$00.38, G012.02$-$00.03, G023.43$-$00.18, G023.70$-$00.19, G027.36$-$00.16) before model fitting.
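As a schematic of the McMC/Metropolis–Hastings machinery described above, the toy Python sampler below draws from a one-dimensional posterior; the Gaussian target, step size, and burn-in choice are illustrative stand-ins for the real multi-parameter Galactic model:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=20000, step=1.0, seed=0):
    """Sample a 1-D posterior using a symmetric Gaussian proposal and the
    Metropolis-Hastings acceptance rule."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        # Accept with probability min(1, posterior ratio).
        if lp_new >= lp or rng.random() < math.exp(lp_new - lp):
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Toy target: Gaussian posterior centered on 3.0 with unit width.
chain = metropolis_hastings(lambda x: -0.5 * (x - 3.0) ** 2, x0=0.0)
mean = sum(chain[2000:]) / len(chain[2000:])  # discard burn-in
```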
In the Galaxy’s spiral arms, super-bubbles created by multiple supernovae can accelerate molecular clouds to $\approx20$ km s$^{-1}$ [@Sato:08]. It is probably not possible, prior to fitting, to determine which sources have been thus affected and are likely kinematically anomalous. Therefore, we initially used an “outlier-tolerant” Bayesian fitting scheme described by @Sivia:06 as a “conservative formulation,” which minimizes the effects of deviant points on estimates of the fitted parameters. For this approach, one maximizes $$\sum_{i=1}^N~\sum_{j=1}^3~{\ln\bigl(~(1-e^{-R_{i,j}^2/2})/R_{i,j}^2~\bigr)}~~,$$ where the weighted residual $R_{i,j} = (v_{i,j}~-~m_{i,j})/w_{i,j}$ (i.e., the data ($v$) minus the model ($m$) divided by the uncertainty ($w$) for the $i^{th}$ of $N$ sources and the $j^{th}$ velocity component). For large residuals, this formulation assigns a $1/R^2$ probability, compared to a Gaussian probability of $e^{-R^2/2}$, which vanishes rapidly. Thus, for example, a $5\sigma$ outlier has a reasonable (4%) probability with the outlier-tolerant approach, compared to $\approx10^{-6}$ probability for Gaussian errors in the least-squares method, and will not be given excessive weight when adjusting parameters. Once the outliers were identified and removed, we assumed Gaussian data uncertainties and fitted the data by maximizing $$\sum_{i=1}^N~\sum_{j=1}^3~{-R_{i,j}^2/2}~~,$$ essentially least-squares fitting.
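The contrast between the outlier-tolerant and Gaussian likelihoods can be verified numerically; a short Python sketch:

```python
import math

def outlier_tolerant_prob(R):
    """Sivia's 'conservative formulation': p(R) proportional to
    (1 - exp(-R^2/2)) / R^2, which falls off only as 1/R^2."""
    return (1.0 - math.exp(-R * R / 2.0)) / (R * R)

def gaussian_prob(R):
    """Unnormalized Gaussian probability exp(-R^2/2)."""
    return math.exp(-R * R / 2.0)

# A 5-sigma residual keeps ~4% probability instead of ~4e-6.
p_tol, p_gauss = outlier_tolerant_prob(5.0), gaussian_prob(5.0)
```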
Our choice of weights ($w$) for the data in the model fitting process was discussed in detail in @Reid:09b. We include both measurement uncertainty and the effects of random (Virial) motions of a massive young star (with maser emission) with respect to the average motion of the much larger and more massive HMSFR when weighting the differences between observed and modeled components of motion. Specifically, the proper motion and Doppler velocity weights were given by $w(\mu) = \sqrt{\sigma^2_\mu + \sigma^2_{Vir}/d^2_s}$ and $w(\vhelio) = \sqrt{\sigma^2_v + \sigma^2_{Vir}}$, where $\sigma^2_{Vir}$ is the expected (1-dimensional) Virial dispersion for stars in a high mass star forming region (HMSFR). We adopted $\sigma_{Vir}=5$ km s$^{-1}$, appropriate for HMSFRs with $\sim10^4$ M$_\odot$ within a radius of $\sim1$ pc, and did not adjust this value. As will be seen in §\[sect:Bayesian\], the vast majority of the velocity data can be fit with a $\chi^2_\nu$ near unity with these weights. Note that we were fairly conservative when assigning motion uncertainties for individual stars based on the maser data (see §\[sect:parallaxes\]), and this may result in a slightly low $\sigma_{Vir}$ value in order to achieve unity $\chi^2_\nu$ fits.
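A sketch of this weighting in Python; the factor of 4.74 km s$^{-1}$ per mas yr$^{-1}$ at 1 kpc is the standard astrometric conversion, an assumption added here since the text leaves the proper-motion unit conversion implicit:

```python
import math

# Standard astrometric factor: 1 mas/yr at 1 kpc corresponds to ~4.74 km/s.
KMS_PER_MASYR_KPC = 4.74

def pm_weight(sigma_mu_masyr, d_kpc, sigma_vir_kms=5.0):
    """Proper-motion weight: measurement error combined in quadrature with
    the Virial dispersion expressed as an angular motion at distance d."""
    sigma_vir_masyr = sigma_vir_kms / (KMS_PER_MASYR_KPC * d_kpc)
    return math.sqrt(sigma_mu_masyr**2 + sigma_vir_masyr**2)

def vlsr_weight(sigma_v_kms, sigma_vir_kms=5.0):
    """Doppler-velocity weight: measurement error plus Virial dispersion."""
    return math.sqrt(sigma_v_kms**2 + sigma_vir_kms**2)
```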
Priors {#sect:priors}
------
In order to model the observations, one needs prior constraints on the non-circular motion of our measurement “platform” (i.e., the solar motion parameterized by $\U$, $\V$, $\W$) and/or the average peculiar motion of the sources being measured (parameterized by $\Usbar$, $\Vsbar$, $\Wsbar$). Allowing for a non-zero average source peculiar motion can be thought of as a first approximation of the kinematic effects of spiral structure. In @Reid:09b, we assumed the solar motion determined by @Dehnen:98 based on Hipparcos measurements and concluded that HMSFRs lagged circular orbital speeds by 15 km s$^{-1}$ ($\Vsbar=-15$ km s$^{-1}$). The observed orbital lag ($\Vsbar<0$) is insensitive to the value adopted for $\Ro$, but it is strongly correlated with the adopted solar motion component, $\V$ [@Reid:09b; @Honma:12]. Recently, the value of the solar motion component in the direction of Galactic rotation ($\V$) has become controversial. Motivated in part by the large lag in @Reid:09b, @Schoenrich:10 re-evaluated the standard “asymmetric-drift” approach used by @Dehnen:98 and concluded that it was biased by coupled metallicity/orbital-eccentricity effects. They suggested new solar motion values; specifically they argued for a substantial increase for $\V$ from 5 to 12 km s$^{-1}$. This change would decrease the average orbital lag of HMSFRs ($\Vsbar$) by $\approx7$ km s$^{-1}$ to a more theoretically appealing value near 8 km s$^{-1}$.
Based on the first year of data from the Apache Point Observatory Galactic Evolution Experiment (APOGEE), @Bovy:12 argue that the Sun’s motion relative to a circular orbit in the Galaxy (i.e., a “rotational standard of rest”) is 26 km s$^{-1}$ in the direction of Galactic rotation, suggesting that the entire Solar Neighborhood, which defines the local standard of rest (LSR), leads a circular orbit by 14 km s$^{-1}$. Taking into account these developments, we considered a conservative prior of $\V = 15\pm10$ km s$^{-1}$, which encompasses values of $\V$ from 5 to 26 km s$^{-1}$ within approximately the $\pm1\sigma$ range.
One could argue on theoretical grounds that HMSFRs should, on average, lag circular orbits by only a few km s$^{-1}$ [@McMillan:10]. We observe masers in HMSFRs that are very young, and the gas out of which their exciting stars formed could have responded to magnetic shocks when entering spiral arms, leading to departures from circular speeds by $\lax10$ km s$^{-1}$ [@Roberts:70], apportioned between components counter to rotation and toward the Galactic Center. In addition, radial pressure gradients can also reduce orbital speeds of gas slightly [@Burkert:10], contributing to a small lag of $\approx1$ km s$^{-1}$. Allowing for such effects, we consider priors for $\Usbar$ of $3\pm10$ km s$^{-1}$ and $\Vsbar$ of $-3\pm10$ km s$^{-1}$ as reasonable and conservative.
Given the current uncertainty in a) the value for the circular ($\V$) component of solar motion and b) the magnitude of the average peculiar motions of HMSFRs, we tried four sets of priors when fitting the data:
- Set-A: adopting a loose prior for the $\V$ component of solar motion, $\U = 11.1\pm1.2$, $\V = 15\pm10$, $\W = 7.2\pm1.1$ km s$^{-1}$, and priors for the average peculiar motions of HMSFRs of $\Usbar = 3\pm10$ and $\Vsbar =-3\pm10$ km s$^{-1}$.
- Set-B: using no priors for the average peculiar motions of HMSFRs, but tighter priors for the solar motion of $\U = 11.1\pm1.2$, $\V = 12.2\pm2.1$, $\W = 7.2\pm1.1$ km s$^{-1}$ from @Schoenrich:10.
- Set-C: using no priors for the solar motion, but tighter priors on the average peculiar motions of HMSFRs of $\Usbar = 3\pm5$ and $\Vsbar =-3\pm5$ km s$^{-1}$.
- Set-D: using essentially no priors for either the solar or average peculiar motions of HMSFRs, but bounding the $\V$ and $\Vsbar$ parameters with equal probability within $\pm20$ km s$^{-1}$ of the Set-A initial values and zero probability outside that range.
Models A1–A4
------------
Using the 95 sources with Galactocentric radii greater than 4 kpc[^3], the outlier-tolerant Bayesian fitting approach, and the Set-A priors as described above, we obtained the parameter estimates listed in Table \[table:fits\] under fit A1. As expected for a sample with some outliers (see discussion in §\[sect:Bayesian\]), we found $\chisq=562.6$, greatly exceeding the 277 degrees of freedom, owing to a number of sources with large residuals.
We iteratively removed the sources with the largest residuals. Using the outlier-tolerant Bayesian fitting approach (see §\[sect:Bayesian\]) minimizes potential bias, based on assumed “correct” parameter values, when editing data. However, to further guard against any residual bias, we first removed sources with $>6\sigma$ residuals, followed by re-fitting and removal of those with $>4\sigma$ residuals, and finally re-fitting and removal of those with $>3\sigma$ residuals (fits A2, A3 & A4, not listed here). In total, 15 sources[^4] were removed.
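The staged outlier editing can be sketched as follows; here the “re-fit” between passes is a simple weighted mean rather than the full Galactic model, so this illustrates only the clipping logic:

```python
def clip_outliers(values, weights, thresholds=(6.0, 4.0, 3.0)):
    """Iteratively drop points whose weighted residual from the current fit
    exceeds each threshold, re-fitting (here: weighted mean) between passes."""
    data = list(zip(values, weights))
    for t in thresholds:
        wsum = sum(1.0 / w**2 for _, w in data)
        mean = sum(v / w**2 for v, w in data) / wsum
        data = [(v, w) for v, w in data if abs(v - mean) / w <= t]
    return data

# Example: one gross outlier among unit-weight points is removed.
kept = clip_outliers([10.0, 10.2, 9.9, 10.1, 25.0], [1.0] * 5)
```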
[lccccc]{} & A1 & A5 & B1 & C1 & D1\
Parameter Estimates\
$\Ro$ (kpc) &$8.15\pm0.25$ &$8.34\pm0.16$ &$8.33\pm0.16$ &$8.30\pm0.19$ & $8.29\pm0.21$\
$\To$ (km s$^{-1}$) & $238\pm11\q$ &$240\pm8\q$ &$243\pm6\q$ &$239\pm8\q$ &$238\pm15$\
$\Tdot$ (km s$^{-1}$ kpc$^{-1}$)&$-0.1\pm0.7\q$&$-0.2\pm0.4\q$ &$-0.2\pm0.4\q$ &$-0.1\pm0.4\q$ &$-0.1\pm0.4$\
\
$\U$ (km s$^{-1}$) &$10.4\pm1.8$ &$10.7\pm1.8$ &$10.7\pm1.8$ &$ 9.9\pm3.0$ &$ 9.6\pm3.9$\
$\V$ (km s$^{-1}$) &$15.1\pm7.3$ &$15.6\pm6.8$ &$12.2\pm2.0$ &$14.6\pm5.0$ &$16.1\pm13.5$\
$\W$ (km s$^{-1}$) &$8.2\pm1.2$ &$\p8.9\pm0.9$ &$\p8.7\pm0.9$ &$\p9.3\pm1.0$ &$\p9.3\pm1.0$\
\
$\Usbar$ (km s$^{-1}$)&$\p3.7\pm2.4$ &$\p2.9\pm2.1$ &$\p2.9\pm2.1$ &$\p2.2\pm3.0$ &$\p1.6\pm3.9$\
$\Vsbar$ (km s$^{-1}$)&$-2.4\pm7.4$ &$-1.6\pm6.8$ &$-5.0\pm2.1 $ &$-2.4\pm5.0$ &$-1.2\pm13.6$\
\
Fit Statistics\
$\chisq$ & 562.6 & 224.9 & 225.1 & 224.7 & 224.1\
$N_{dof}$ & 277 & 232 & 232 & 232 & 232\
$N_{sources}$ & 95 & 80 & 80 & 80 & 80\
$r_{\Ro,\To}$ &0.61 &0.46 &0.74 &0.66 &0.44\
\[table:fits\]
Model A5
--------
With the resulting “clean” data set of 80 sources, we performed a least-squares fit (assuming a Gaussian PDF for the data uncertainties). We used the same loose priors (Set-A) as for model A1, namely solar motion components $\U = 11.1\pm1.2$, $\V = 15\pm10$, $\W = 7.2\pm1.1$ km s$^{-1}$ and average peculiar motions for HMSFRs of $\Usbar = 3\pm10$ and $\Vsbar =-3\pm10$ km s$^{-1}$. This resulted in the parameter estimates listed under fit A5 in Table \[table:fits\]. This model produced a good $\chisq=224.9$ for 232 degrees of freedom and estimates of $\Ro=8.34\pm0.16$ kpc and $\To=240\pm8$ km s$^{-1}$. We find $\Tdot=-0.2\pm0.4$ km s$^{-1}$ kpc$^{-1}$, indicating a very flat rotation curve for the Milky Way between radii of $\approx5$ and 16 kpc from the Galactic center.
Compared to the preliminary results of @Reid:09b based on 16 sources, where the Pearson product-moment correlation coefficient for $\Ro$ and $\To$ was high, $r_{\Ro,\To}=0.87$, with the larger number of sources and a better distribution across the Galaxy, these parameters are significantly less correlated, $r_{\Ro,\To}=0.46$. The joint and marginalized PDFs for these fundamental Galactic parameters are displayed in Figure \[fig:RoToPDF\].
The circular velocity parameters are still correlated (see §\[sect:correlations\]), but linear combinations of these parameters are well determined: $\To+\V=255.2\pm5.1$ km s$^{-1}$ and $\V-\Vsbar=17.1\pm1.0$ km s$^{-1}$. Also, the angular rotation rate for the Sun’s orbit about the Galactic center is constrained to $\pm1.4$% accuracy: $(\To+\V)/\Ro=30.57\pm0.43$ km s$^{-1}$ kpc$^{-1}$. This value is consistent with the reflex of the [*apparent*]{} motion of Sgr A\*, the assumed motionless supermassive black hole at the center of the Galaxy, which gives $30.26\pm0.12$ km s$^{-1}$ kpc$^{-1}$ [@Reid:04].
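The angular rotation rate and its uncertainty follow from simple error propagation on the fitted quantities; the Python sketch below treats $\To+\V$ and $\Ro$ as independent, which the fit's parameter correlations only approximately justify, so the propagated error comes out somewhat larger than the quoted $\pm0.43$ from the full fit:

```python
import math

def angular_rate(v_sum, sig_v, R0, sig_R0):
    """Omega_sun = (Theta0 + V_sun)/R0 in km/s/kpc, with first-order error
    propagation assuming independent (uncorrelated) uncertainties."""
    omega = v_sum / R0
    sig = omega * math.hypot(sig_v / v_sum, sig_R0 / R0)
    return omega, sig

omega, sig = angular_rate(255.2, 5.1, 8.34, 0.16)
# Central value ~30.6 km/s/kpc; the full fit's correlations shrink the error.
```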
The component of solar motion in the direction of Galactic rotation, $\V$, estimated to be $15.6\pm6.8$ km s$^{-1}$, is better constrained than the prior of $15\pm10$ km s$^{-1}$. It is consistent with the [*local*]{} estimate (relative to Solar Neighborhood stars) of 12 km s$^{-1}$ [@Schoenrich:10] and the [*global*]{} estimate of @Bovy:12 of $26\pm3$ km s$^{-1}$ (relative to stars across the Milky Way).
Model B1
--------
In order to explore the sensitivity of the modeling to our priors, we fit the clean data set using the Set-B priors: adopting the latest Hipparcos measurement of the solar motion of $\U = 11.1\pm1.2$, $\V = 12.2\pm2.1$, $\W = 7.2\pm1.1$ km s$^{-1}$ [@Schoenrich:10] and no prior information on the average peculiar motion of the HMSFRs. This resulted in parameter estimates similar to those of model A5, $\Ro=8.33\pm0.16$ kpc and $\To=243\pm6$ km s$^{-1}$. The quality of fit, as measured by $\chisq=225.1$ for 232 degrees of freedom, was comparably good as for model A5. The average velocity lag of the HMSFRs relative to circular orbits, which was not constrained by priors, was $\Vsbar = -5.0 \pm 2.1$ km s$^{-1}$. This is comparable to that found by @Reid:09b, after correcting for the 7 km s$^{-1}$ difference in the adopted solar motion values.
Model C1
--------
Given the current uncertainty in the $\V$ component of solar motion, we fit the data with the Set-C priors, assuming no prior information for the solar motion, but using a stronger prior than for model A5 for the average peculiar motion of the HMSFRs: $\Usbar = 3\pm5$ and $\Vsbar =-3\pm5$ km s$^{-1}$. As for model B1, we found most parameter estimates to be similar to model A5, e.g., $\Ro=8.30\pm0.19$ kpc and $\To=239\pm8$ km s$^{-1}$. For the solar motion, we find $\U = 9.9\pm2.0$, $\V =14.6\pm5.0$, and $\W = 9.3\pm1.0$ km s$^{-1}$. The $\V$ value is consistent with the revised @Schoenrich:10 [12 km s$^{-1}$] solar motion, but differs by $2\sigma$ from the @Bovy:12 estimate.
Model D1
--------
In order to facilitate the use of the results presented here with other Galactic parameter estimates, we performed a fit with essentially no informative priors. We did this by taking the A5 (Set-A) initial parameter values and assuming flat priors for all parameters except for $\V$ and $\Vsbar$. For these parameters we assumed equal probability for values within $\pm20$ km s$^{-1}$ of the initial values and zero probability outside this range in order to exclude unreasonable parameter values. The parameters that remain well determined include $\Ro=8.29\pm0.21$ kpc, $\To=238\pm15$ km s$^{-1}$, $\Tdot=-0.1\pm0.4$ km s$^{-1}$ kpc$^{-1}$, $\U=9.6\pm3.9$ km s$^{-1}$, $\W=9.3\pm1.0$ km s$^{-1}$, and $\Usbar=1.6\pm3.9$ km s$^{-1}$. The correlated velocity terms, $\V$ and $\Vsbar$, displayed nearly flat posterior PDFs over their allowed ranges. However, linear combinations involving these parameters are very well constrained, $\To+\V = 253.8\pm6.4$ km s$^{-1}$ and $\V-\Vsbar = 17.2\pm1.2$ km s$^{-1}$, as well as the angular rotation rate of the Sun about the Galactic center, $(\To+\V)/\Ro=30.64\pm0.41$ km s$^{-1}$ kpc$^{-1}$.
Rotation Curves {#sect:rotationcurves}
---------------
Next, we investigated the sensitivity of the fundamental Galactic parameters, $\Ro$ and $\To$, to alternative rotation curves. When fitting, we replaced the simple linear form, $\Theta(R)=\To+\Tdot~(R-\Ro)$, with the empirically determined functions of $\Theta(R)$ of @Clemens:85, the power-law parameterization of @Brand:93, a polynomial, and the “universal” rotation curve of @Persic:96. We adopted the Set-A priors in order to facilitate comparisons with the A5 fit. Table \[table:rotationcurves\] presents the fitting results for these rotation curves.
[lccccc]{} & C-10 & C-8.5 & BB & Poly & Univ\
Parameter Estimates\
$\Ro$ (kpc) &$8.36\pm0.16$ &$8.12\pm0.14$ &$8.34\pm0.16$ &$8.34\pm0.17$ &$8.31\pm0.16$\
$\To$ (km s$^{-1}$) &$237\pm8\q$ &$221\pm8\q$ &$240\pm9\q$ &$241\pm9\q$ &$241\pm8\q$\
$\U$ (km s$^{-1}$) &$10.1\pm1.8~$ &$10.5\pm1.8\p$&$10.5\pm1.8\p$&$10.7\pm1.7\p$ &$10.5\pm1.7\p$\
$\V$ (km s$^{-1}$) &$19.4\pm6.8~$ &$25.0\pm6.8~$ &$15.5\pm6.8~$ &$14.7\pm6.8~$ &$14.4\pm6.8~$\
$\W$ (km s$^{-1}$) &$8.9\pm1.0$ &$8.9\pm1.0$ &$8.8\pm1.0$ &$8.8\pm0.9$ &$8.9\pm0.9$\
$\Usbar$ (km s$^{-1}$)&$2.4\pm2.1$ &$2.6\pm2.0$ &$2.8\pm2.0$ &$2.8\pm2.0$ &$2.6\pm2.1$\
$\Vsbar$ (km s$^{-1}$)&$+3.4\pm6.8\p$ &$+8.5\pm6.8\p$&$-1.5\pm6.8$ &$-1.4\pm6.8\p$ &$-1.4\pm6.8\p$\
$\aone$ (km s$^{-1}$) &... &... &$240\pm9~$ &$241\pm9~$ &$241\pm8~$\
$\atwo$ &... &... &$~0.00\pm0.02$&$~~0.5\pm3.7$ &$~0.90\pm0.06$\
$\athr$ &... &... &... &$-15.1\pm8.4~$ &$~1.46\pm0.16$\
\
Fit Statistics\
$\chisq$ &229.7 & 248.1 & 225.2 & 221.9 &214.5\
$N_{dof}$ &233 & 233 & 231 & 230 &230\
$N_{sources}$&80 & 80 & 80 & 80 & 80\
$r_{\Ro,\To}$&0.46 &0.36 &0.48 &0.47 &0.47\
\[table:rotationcurves\]
@Clemens:85 supplied two curves with different shapes: one assuming the old IAU constants (C-10) of $\Ro=10$ kpc and $\To=250$ km s$^{-1}$ and the other assuming the revised constants (C-8.5) of $\Ro=8.5$ kpc and $\To=220$ km s$^{-1}$ currently in widespread use. The C-10 model has rotational speeds that rise faster with radius than the C-8.5 model. For either model, we fitted for different values of $\Ro$ (which we used to scale model radii) and $\To$ (which we used to scale rotation speeds).
@Brand:93 parameterize their rotation curve (BB) as a power law in Galactocentric radius, $R$, with potentially three adjustable parameters: $\Theta(R) = \aone (R/\Ro)^\atwo + \athr$. For a flat rotation curve ($\atwo=0$), the parameters $\aone$ and $\athr$ become degenerate. Since the Galaxy’s rotation curve is nearly flat over the range of radii we sample (see, e.g., model A5 above), we held $\athr$ at zero, solving only for $\aone$ and $\atwo$. Indeed, we find the power law exponent, $\atwo=-0.01\pm0.01$, essentially flat. For this formulation, $\aone=\To$, and in Table \[table:rotationcurves\] we copy $\aone$ to the $\To$ row to facilitate comparison with other models.
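A quick Python check of this parameterization (values illustrative, with $\athr$ held at zero as in the fit):

```python
def theta_power_law(R, a1=241.0, a2=-0.01, a3=0.0, R0=8.34):
    """Brand & Blitz style curve: Theta(R) = a1 * (R/R0)**a2 + a3.

    With a3 = 0 (held fixed here, as in the fit), a1 equals the
    rotation speed at R = R0.
    """
    return a1 * (R / R0) ** a2 + a3

# At R = R0 the curve returns a1 regardless of the exponent; a slightly
# negative exponent makes the curve decline gently at large radii.
```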
As an alternative to a power law rotation curve, we fitted a second-order polynomial (Poly) in $\rho=(R/\Ro)-1$: $\Theta(R) = \aone + \atwo \rho + \athr \rho^2$. The model fit parameters for this form of a rotation curve are similar to those from models C-10, BB and Univ.
The universal (Univ) rotation curve of @Persic:96 includes terms for an exponential disk and a halo. It can have three adjustable parameters: $\aone$, the circular rotation speed at the radius enclosing 83% of the optical light ($R_{opt}$); $\atwo = R_{opt} / \Ro$; and $\athr$, a core-radius parameter for the halo contribution, nominally 1.5 for an $L^*$ galaxy. With flat priors for the three rotation curve parameters, the posterior PDF for $\atwo$ was bimodal, with the dominant peak at $\atwo=0.9$ and a second peak with 50% of the primary’s amplitude at $\atwo=0.1$. Since the secondary peak seems unlikely, we refit the data using a prior for $\atwo$ of $1.2\pm0.5$. We then obtained similar parameter values as other models (see Table \[table:rotationcurves\]), with the three adjustable rotation curve parameters of $\aone = 241\pm8$ km s$^{-1}$, $\atwo = 0.90\pm0.06$, and $\athr = 1.46\pm0.16$.
All but one of the rotation curve models lead to similar values for the fundamental Galactic parameters $\Ro$ and $\To$ as our A5 fit. Only the Clemens “$\Ro=8.5$ kpc; $\To=220$ km s$^{-1}$” (C-8.5) rotation curve results in a marginally significant change in estimates of $\Ro$ and $\To$. However, this fit has a significantly poorer quality ($\chi^2=248.1$ for 233 degrees of freedom) than, for example, the A5 fit ($\chi^2=224.9$ for 232 degrees of freedom), and we do not consider this model further. We conclude that the fundamental Galactic parameters $\Ro$ and $\To$ are reasonably insensitive to a wide variety of rotation curve shapes.
With full 3-dimensional location and velocity information, we can transform our heliocentric velocities to a Galactocentric reference frame and calculate the tangential (circular) speed for each HMSFR. Figure \[fig:rotationcurve\] plots these speeds for [*all*]{} sources in Table \[table:parallaxes\]. Most published rotation curves for the Milky Way have come from only one component of velocity (radial), often using kinematic distances and [*assuming*]{} a value for $\To$. As such, the data in Fig \[fig:rotationcurve\] represent a considerable advance. See also the analysis of this data set by @Xin:13.
It is important to remember that the transformation from heliocentric to Galactocentric frames requires accurate values of $\Ro$, $\U$, and, most importantly, $\To+\V$, since the motion of the Sun has (by definition) been subtracted in the heliocentric frame. For most sources, increasing or decreasing the assumed value of $\To+\V$ would, correspondingly, move each data point up or down by about the same amount. Thus, the level of this, and essentially all published, rotation curves is determined mostly by $\To+\V$. Our results are the first to use fully 3-dimensional data to strongly constrain all three parameters: $\Ro$, $\To$, and $\To+\V$.
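The in-plane part of this transformation can be illustrated with planar geometry; the Python sketch below assumes $b=0$ and a source heliocentric velocity already expressed along the Galactic Cartesian axes at the Sun (toward the GC and toward rotation), with A5 values as illustrative defaults:

```python
import math

def circular_speed(l_deg, d_kpc, U_src, V_src,
                   R0=8.34, theta0=240.0, U_sun=10.7, V_sun=15.6):
    """Tangential (circular) speed of a source in the Galactic plane (b = 0).

    l_deg: Galactic longitude; d_kpc: distance from the Sun.
    U_src, V_src: heliocentric source velocity (km/s) along axes at the
    Sun pointing toward the GC and toward Galactic rotation.
    """
    l = math.radians(l_deg)
    # GC at the origin, Sun at (R0, 0); rotation at the Sun points toward +y.
    x = R0 - d_kpc * math.cos(l)
    y = d_kpc * math.sin(l)
    R = math.hypot(x, y)
    # Galactocentric velocity: heliocentric + solar motion + LSR rotation.
    vx = -(U_src + U_sun)
    vy = V_src + V_sun + theta0
    # Project onto the local direction of Galactic rotation, (-y, x)/R.
    return (-vx * y + vy * x) / R

# A source constructed to lie on a circular orbit returns theta0.
```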
The dashed line in Fig \[fig:rotationcurve\] represents the linear rotation curve from the A5 fit, based only on sources with $R>4$ kpc; it indicates the expected rotation for sources on circular Galactic orbits (i.e., $\Usbar=\Vsbar=0$). Sources used in the fit are plotted with filled symbols and those not used with open symbols. There are now sufficient data to clearly indicate that the rotation curve drops at Galactocentric radii $\lax4$ kpc. However, given the likelihood of a significant non-axisymmetric gravitational potential within $\approx4$ kpc of the center, more measurements are needed before extending a rotation curve to this region, as azimuthal terms may be needed.
Peculiar Motions of HMSFRs
--------------------------
Figure \[fig:peculiarmotions\] shows the peculiar (non-circular) motions of all sources in Table \[table:parallaxes\] with motion uncertainties less than 20 km s$^{-1}$. Similar results were described in the primary papers presenting the parallaxes and proper motions for each arm [@Sato:13; @Wu:13; @Xu:13; @Choi:13; @Zhang:13; @Hachi:13]. For uniformity, here the motions were calculated using the A5 fit parameters (see Table \[table:fits\]), but with zero correction for the average source peculiar motions. Typical peculiar motions are $\approx10$ km s$^{-1}$, but some sources have much larger values. For example, many sources in the Perseus arm in the Galactic longitude range $\approx100^\circ$ to $\approx135^\circ$ display peculiar motions $\gax20$ km s$^{-1}$. Many sources within $\approx4$ kpc of the Galactic Center display even larger peculiar motions, probably indicating that the rotation curve used here is inadequate to describe their Galactic orbits, especially in the presence of the Galactic bar(s).
Parameter Correlations {#sect:correlations}
----------------------
[lrrrrrrrr]{} & $\Ro$ & $\To$ & $\Tdot$ & $\U$ & $\V$ & $\W$ & $\Usbar$ & $\Vsbar$\
$\Ro$ &1.000 &0.465 &0.103 &0.452 &0.023 &$-$0.003&0.517 &$-$0.002\
$\To$ &0.465 &1.000 &0.136 &0.243 &$-$0.796&$-$0.009&0.171 &$-$0.809\
$\Tdot$ &0.103 &0.136 &1.000 &$-$0.124&$-$0.009&0.025 &$-$0.094 &$-$0.018\
$\U$ &0.452 &0.243 &$-$0.124&1.000 &$-$0.014&$-$0.017&0.839 &0.025\
$\V$ &0.023 &$-$0.796&$-$0.009&$-$0.014&1.000 &0.011 &$-$0.006 &0.990\
$\W$ &$-$0.003&$-$0.009&0.025 &$-$0.017&0.011 &1.000 &$-$0.002 &0.010\
$\Usbar$ &0.517 &0.171 &$-$0.094&0.839 &$-$0.006&$-$0.002&1.000 &0.028\
$\Vsbar$ &$-$0.002&$-$0.809&$-$0.018&0.025 &0.990 &0.010 &0.028 &1.000\
\[table:correlations\]
The Pearson product-moment correlation coefficients, $r$, for all parameters from fit A5 are listed in Table \[table:correlations\]. In the preliminary analysis of 16 HMSFRs with parallaxes and proper motions by @Reid:09b, the estimates of $\Ro$ and $\To$ were strongly correlated ($\rPpm=0.87$). However, with the much larger set of HMSFRs that covers a larger portion of the Galaxy, the correlation between the $\Ro$ and $\To$ estimates is now moderate: $\rPpm=0.465$ for our reference A5 fit. However, there remains a significant anti-correlation between $\To$ and $\Vsbar$ ($r_{\To,\Vsbar}=-0.809$), as well as a strong correlation between $\V$ and $\Vsbar$ ($r_{\V,\Vsbar}=0.990$). As suggested by the fitted parameter values in Table \[table:fits\], our data strongly constrain the following combinations of these correlated parameters: $\To + \V=255.2\pm5.1$ km s$^{-1}$ and $\V - \Vsbar=17.1\pm1.0$ km s$^{-1}$. Also, the combination of parameters that yields the angular orbital speed of the Sun about the Galactic center, $(\To+\V)/\Ro = 30.57 \pm 0.43$ km s$^{-1}$ kpc$^{-1}$, is more tightly constrained than the individual parameters. Figure \[fig:3PDFs\] shows the marginalized PDFs for these combinations of parameters.
Comparison with Other Modeling Approaches
-----------------------------------------
Other groups have analyzed parallax and proper motion data sets from the BeSSeL Survey and the VERA project, focusing on different assumptions and results. @Bovy:09 confirmed the counter rotation of HMSFRs (assuming $\V=5$ km s$^{-1}$) noted by @Reid:09b and argued for a comparable value for $\To$ ($246\pm30$ km s$^{-1}$), but with considerably lower significance. Alternatively, @McMillan:10 found that the $\V$-component of solar motion of 5 km s$^{-1}$, provided by @Dehnen:98, should be raised to $\approx12$ km s$^{-1}$, thereby reducing the estimated counter-rotation of HMSFRs. @Bobylev:10, using 28 parallaxes available at that time and a Fourier analysis technique, estimated $\To=248\pm14$ km s$^{-1}$ and $\V=11.0\pm1.7$ km s$^{-1}$, assuming $\Ro\equiv8.0$ kpc. Finally, @Honma:12, using 52 parallaxes, including some low-mass star forming regions, estimated $\Ro=8.05\pm0.45$ kpc and $\To=238\pm14$ km s$^{-1}$ for $\V\equiv12$ km s$^{-1}$.
The @Bovy:09 re-analysis of our preliminary data employed a different approach than that of @Reid:09b. They treat the elements of the velocity dispersion tensor of the HMSFRs as free parameters. These parameters give the expected deviations (variances and covariances) of the velocity data from a smooth, axi-symmetric model of Galactic rotation and are used to adjust the weights applied to the different velocity components when fitting the data. However, while Bovy found a significant trace for the tensor, the velocity dispersion parameters were only marginally constrained; formally none of the diagonal components had $>2.8\sigma$ formal significance. Also, their values for the radial and tangential components were nearly identical, suggesting that little is gained by making these free parameters versus adopting a single physically motivated value ($\sigma_{Vir}$) as we have done. Note that our value for $\sigma_{Vir}$ is comparable to the dispersion parameter ($\Delta_v$) values found by @McMillan:10, which range from about 6 to 10 km s$^{-1}$, but is considerably smaller than those of @Bovy:09 of $\approx20$ km s$^{-1}$. The reason for this difference is unclear, but might reflect different treatments of outlying data and/or increased parameter correlations associated with the 6 extra parameters used in solving for the tensor elements.
Discussion {#sect:discussion}
===========
Solar Motion {#sect:solar_motion}
------------
If one adopts the theoretically motivated prior that HMSFRs have small peculiar motions (Set-C with no prior on the solar motion), then model fit C1 indicates $\V=14.6\pm5.0$ km s$^{-1}$. This is a [*global*]{} measure of the peculiar motion of the Sun and, as such, is relative to a “rotational standard of rest,” as opposed to a Local Standard of Rest (LSR) defined relative to stellar motions in the Solar Neighborhood. If Solar Neighborhood stars (extrapolated to a zero-dispersion sample) are, on average, stationary with respect to a circular orbit, then these two solar motion systems will be the same. Our estimate of $\V$ is consistent with the $12$ km s$^{-1}$ value of @Schoenrich:10, measured with respect to Solar Neighborhood stars, but there is some tension between our global estimate of $\V$ and that of @Bovy:12 of $26\pm3$ km s$^{-1}$, as these two estimates differ by about $2\sigma$. However, if one drops the prior that HMSFRs have small peculiar motions, then our result loses significance.
The large counter-rotation of HMSFRs, originally suggested by @Reid:09b, was based on the initial Hipparcos result of @Dehnen:98 that $\V=5$ km s$^{-1}$. As the outcome of the @Schoenrich:10 re-analysis of Hipparcos data, which gives $\V=12$ km s$^{-1}$, supersedes that lower $\V$ value, it now appears that any average counter-rotation of HMSFRs is $\lax5$ km s$^{-1}$. Given that we strongly constrain $\V-\Vsbar=17.1\pm1.0$ km s$^{-1}$, were one to independently constrain $\V$ with $\pm2$ km s$^{-1}$ accuracy, the issue of HMSFR counter rotation could be clarified.
While our estimate of $\V$ has a large uncertainty (owing to correlations with $\To$ and $\Vsbar$), we find $\U$ and $\W$ are well constrained. In fit D1, in which no informative prior was used for the components of motion either toward the Galactic center or perpendicular to the Galactic plane, we find that $\U=9.6\pm3.9$ km s$^{-1}$ and $\W=9.3\pm1.0$ km s$^{-1}$, respectively. Our estimate of the Sun’s motion toward the Galactic center is in agreement with most other estimates, e.g., $11.1\pm1.2$ km s$^{-1}$ by @Schoenrich:10 and $10\pm1$ km s$^{-1}$ by @Bovy:12; see also the compilation of estimates by @Coskunoglu:11.
The solar motion component perpendicular to the Galactic plane, $\W$, is generally considered to be straightforwardly determined, and recent estimates typically range between 7.2 km s$^{-1}$ [@Schoenrich:10] (relative to local stars within $\sim0.2$ kpc) and 7.6 km s$^{-1}$ [@Feast:97] (relative to stars within $\sim3$ kpc), with uncertainties of about $\pm0.5$ km s$^{-1}$. We find a slightly larger value of $\W=9.3\pm1.0$ km s$^{-1}$ (for model D1, which used no informative priors for the solar motion), which may be significant; the difference between the locally and our globally measured value (i.e., relative to stars across the Galaxy) is $2.1\pm1.1$ km s$^{-1}$. Note that one might expect a small difference between measurements with respect to a local and a global distribution of stars were the disk of the Galaxy to precess owing to Local Group torques. Simulations of galaxy interactions in a group suggest that a disk galaxy can complete one precession cycle over a Hubble time. Were the Milky Way to do this, one would expect a vertical precessional motion at the Galactocentric radius of the solar neighborhood of order $\Ro\Ho~\sim0.6$ km s$^{-1}$. It is possible that the differences in the local and global estimates of $\W$ can, in part, be explained in this manner.
Galactic Rotation Curve and Disk Scale Length
---------------------------------------------
Among the various forms of rotation curves that we fit to the data, the universal curve advocated by @Persic:96 to apply to most spiral galaxies yielded the best fit (see discussion in §\[sect:rotationcurves\], Table \[table:rotationcurves\] and Fig. \[fig:rotationcurve\]). This rotation curve matches the flat to slightly declining run of velocity with Galactocentric radius from $R\approx5\rightarrow16$ kpc, as well as reasonably tracing the decline in orbital velocity for $R\lax5$ kpc. However, many of the sources near the Galactic bar(s) cannot be well modeled with any axi-symmetric rotation curve.
The best fit value for our $\atwo$ parameter ($R_{opt}/\Ro$), coupled with our estimate of $\Ro=8.34\pm0.16$ kpc, locates $R_{opt}$ at $7.5\pm0.52$ kpc. The $\atwo$ parameter is sensitive to the slope of the rotation curve (near $R_{opt}$) and the radius at which it turns down toward the Galactic center. For example, setting $\atwo=0.7$ steepens the rotation curve at large radii and moves the turn down radius to $\approx3.5$ kpc, while setting $\atwo=1.1$ flattens the rotation curve and increases the turn down radius to $\approx6.5$ kpc. Given that the (thin) disk scale length is $R_D = R_{opt} / 3.2$ [@Persic:96], we estimate $R_D=2.44\pm0.16$ kpc. Estimates of $R_D$ in the literature range from $\approx 1 \rightarrow 6$ kpc [@Kent:91; @Chang:11; @McMillan:11], with most consistent with a value between $2\rightarrow3$ kpc. Our estimate is also consistent with that of @Porcel:98, who modeled the positions and magnitudes of 700,000 stars in the Two Micron Galactic Survey database and found $R_D=2.3\pm0.3$ kpc and, more recently, @Bovy:13, who modeled the dynamics of $\approx16,000$ stars from the SEGUE survey and concluded that $R_D=2.14\pm0.14$ kpc.
The Distance to the Galactic Center:
-------------------------------------
Models A5, B1 and C1, which used different combinations of solar motion and/or average source peculiar motion priors, have comparable $\chi^2$ values and all parameter estimates are statistically consistent. Because the priors for Model A5 are the least restrictive in keeping with current knowledge, we adopt those parameters as representative. Specifically, we find $\Ro=8.34\pm0.16$ kpc, $\To=240\pm8$ km s$^{-1}$ and $\Tdot=-0.2\pm0.4$ km s$^{-1}$ kpc$^{-1}$. As noted in §\[sect:correlations\] and §\[sect:rotationcurves\], with the much larger data set now available, estimates of $\Ro$ and $\To$ are no longer strongly correlated and appear fairly insensitive to the assumed nature of the rotation curve. These parameter estimates are consistent with, but significantly better than, the preliminary values of $\Ro=8.4\pm0.6$ kpc, $\To=254\pm16$ km s$^{-1}$ and a nearly flat rotation curve reported in @Reid:09b, based on parallaxes and proper motions of 16 HMSFRs and assuming $\V=5$ km s$^{-1}$, and $\Ro=8.05\pm0.45$ kpc and $\To=238\pm14$ km s$^{-1}$ from @Honma:12, based on a sample of 52 sources and assuming $\V=12$ km s$^{-1}$.
While there are numerous estimates of the distance to the Galactic center in the literature (e.g., @Reid:93), here we only compare those based on direct distance measurements. A parallax for the water masers in Sgr B2, a star forming region projected less than 0.1 kpc from the Galactic center, indicates $\Ro=7.9\pm0.8$ kpc [@Reid:09c], consistent with, but considerably less accurate than, our current result. More competitive estimates of $\Ro$ come from the orbits of “S-stars” about the supermassive black hole Sgr A\*. Combining the nearly two decades of data from the ESO NTT/VLT [@Gillessen:09a] and Keck [@Ghez:08] telescopes that trace more than one full orbit for the star S2 (a.k.a. S0-2), @Gillessen:09b conclude that $\Ro=8.28\pm0.33$ kpc. Recently the Keck group, extending their time sequence of observations by only a few years, announced a value of $\Ro=7.7\pm0.4$ kpc [@Morris:12], in mild tension both with the @Gillessen:09b analysis and our parallax-based result. However, in the latest publication of the Keck group, @Do:13 combined modeling of the distribution and space velocities of stars within the central 0.5 pc of the Galactic center with the stellar orbital result for star S0-2 [@Ghez:08] and conclude that $\Ro=8.46^{+0.42}_{-0.38}$ kpc, removing any tension with our estimate and that of the ESO group. We conclude that our estimate of $\Ro=8.34\pm0.16$ kpc is consistent with that from the Galactic center stellar orbits and is likely the most accurate to date.
The Circular Rotation Speed at the Sun
--------------------------------------
Over the last four decades there have been many estimates of $\To$, ranging from $\sim$170 to 270 km s$^{-1}$ [@Kerr:86; @Olling:98]. Focusing the discussion on the more direct measurements, two recent studies favor a lower, and one a higher, value of $\To$ than our estimate of $\To=240\pm8$ km s$^{-1}$. @Koposov:10 model the orbit of the GD-1 stream from a tidally disrupted stellar cluster in the Milky Way halo and estimate $\To+\V=221\pm18$ km s$^{-1}$, where the @Dehnen:98 solar motion component of $\V=5$ km s$^{-1}$ was adopted. Recently, @Bovy:12 modeled line-of-sight velocities of 3365 stars from APOGEE and found $\To=218\pm6$ km s$^{-1}$, but with a large value for the solar motion component in the direction of Galactic rotation, $\V=26\pm3$ km s$^{-1}$. Their full tangential speed, $\To+\V=242^{+10}_{-3}$ km s$^{-1}$, is consistent with our value of $252.2\pm4.8$ km s$^{-1}$, suggesting that the discrepancy between the Bovy and our results is probably caused by differences in the solar motion. However, another recent study by @Carlin:12, modeling the Sagittarius tidal stream, yields $\To$ estimates ranging from 232 to 264 km s$^{-1}$.
Our data also strongly constrain the angular rotation of the Sun about the Galactic center, $(\To+\V)/\Ro=30.57\pm0.43$ km s$^{-1}$ kpc$^{-1}$. This value can be compared with an independent and direct estimate based on the proper motion of Sgr A\*, interpreted as the reflex motion from the Sun’s Galactic orbit, of $30.24\pm0.12$ km s$^{-1}$ kpc$^{-1}$ [@Reid:04]. For $\Ro=8.34\pm0.16$ kpc, the proper motion of Sgr A\* translates to $\To+\V=252.2\pm4.8$ km s$^{-1}$, in good agreement with the parallax results. We conclude that $\To$ exceeds the IAU recommended value of 220 km s$^{-1}$ with $>95$% probability, provided that $\V\lax23$ km s$^{-1}$. Clearly, independent [*global*]{} measures of $\V$ are critical to establish $\To$ and $\Vsbar$ with high accuracy.
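The comparison above is a simple consistency check that can be reproduced directly; the following minimal sketch (not part of the original analysis) converts the angular rotation rate inferred from the Sgr A\* proper motion into a full tangential speed of the Sun:

```python
# Minimal sketch: consistency check between the angular rotation rate from the
# Sgr A* proper motion and the parallax-based fit.
R0 = 8.34            # kpc, from the A5 fit
omega_sgrA = 30.24   # km/s/kpc, from the Sgr A* proper motion (Reid & Brunthaler 2004)

# Full tangential speed of the Sun implied by the Sgr A* reflex motion:
v_sun_total = omega_sgrA * R0
print(round(v_sun_total, 1))  # ~252.2 km/s, matching Theta_0 + V_sun quoted above
```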
Changing the value of $\To$ would have widespread impact in astrophysics. For example, increasing $\To$ by 20 km s$^{-1}$ with respect to the IAU recommended value of 220 km s$^{-1}$ reduces kinematic distances by about 10%, leading to a decrease of 20% in estimated young star luminosities, a corresponding decrease in estimated cloud masses, and a change in young stellar object ages. Estimates of the total mass of the dark matter halo of the Milky Way scale as $V_{max}^2~R_{Vir}$. Since the maximum of the rotation curve ($V_{max}$) and the Virial radius ($R_{Vir}$) scale linearly with $\To$, the mass of the halo scales as $\Theta_0^3$, leading to a 30% increase in the estimate of the Milky Way’s (dark-matter dominated) mass. This, in turn, affects the expected dark-matter annihilation signal [@Finkbeiner:09], increases the “missing satellite” problem [@Wang:12], and increases the likelihood that the Magellanic Clouds are bound to the Milky Way [@Shattow:09].
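The quoted ~30% halo-mass increase follows directly from the cubic scaling; a one-line numerical check (illustrative, not from the original analysis):

```python
# M_halo scales as Theta_0^3, since V_max and R_Vir each scale linearly with Theta_0.
theta_iau = 220.0   # km/s, IAU recommended value
theta_new = 240.0   # km/s, value adopted in the text
mass_ratio = (theta_new / theta_iau) ** 3
print(round(mass_ratio, 2))  # ~1.3, i.e. a ~30% increase in the halo mass estimate
```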
The Hulse-Taylor Binary Pulsar and Gravitational Radiation
--------------------------------------------------------
An interesting example of the effects of Galactic parameters on fundamental physics comes from the Hulse-Taylor binary pulsar. The dominant uncertainty in measuring the gravitational radiation damping of the binary’s orbit comes from the need to correct for the effects of the Galactic accelerations of the Sun and the binary [@Damour:91; @Weisberg:10]. These accelerations contribute $\approx1$% to the [*apparent*]{} orbital period decay. In 1993, when the Nobel Prize was awarded in part for this work, the IAU recommended values were $\Ro=8.5\pm1.1$ kpc and $\To=220\pm20$ km s$^{-1}$ [@Kerr:86]. Using these Galactic parameters, the formalism of Damour & Taylor, the improved pulsar timing data of Weisberg, Nice, & Taylor, and a pulsar distance of 9.9 kpc, one finds that the binary’s orbital period decays at a rate of $0.9994\pm0.0023$ times the prediction of general relativity (GR). Using the improved Galactic parameters from the A5 fit ($\Ro=8.34\pm0.16$ kpc and $\To=240\pm8$ km s$^{-1}$) gives a GR test value of $0.9976\pm0.0008$, a three-fold improvement in accuracy. Both of these examples assumed a distance to the binary pulsar of 9.9 kpc [@Weisberg:08]. Given the improvement in the Galactic parameter values, the dominant uncertainty in the GR test is now the uncertain pulsar distance. A pulsar distance of 7.2 kpc would bring the GR test value to 1.0000, and a trigonometric parallax accurate to $\pm8$%, which is possible with in-beam calibration with the VLBA, would bring the contribution of the distance uncertainty down to that of the current Galactic parameter uncertainty. Alternatively, if one assumes GR is correct, the current improvement in Galactic parameters suggests that the Hulse-Taylor binary pulsar’s distance is $7.2\pm0.5$ kpc.
This work was partially funded by the ERC Advanced Investigator Grant GLOSTAR (247078). The work was supported in part by the National Science Foundation of China (under grants 10921063, 11073046, 11073054 and 11133008) and the Key Laboratory for Radio Astronomy, Chinese Academy of Sciences. AB acknowledges support by the National Science Centre Poland through grant 2011/03/B/ST9/00627.
[*Facilities:*]{} , ,
Ando, K., Nagayama, T., Omodaka, T., 2011, , 63, 45 Asaki, Y., Deguchi, S., Imai, H., Hachisuka, K., Miyoshi, M. & Honma, M. 2010, , 721, 267 Bartkiewicz, A., Brunthaler, A., Szymczak, M. van Langevelde, H. J & Reid, M. J. 2008, , 490, 787 Benjamin, R. A. 2008, in “Massive Star Formation: Observations Confront Theory,” ASP Conference Series, Vol. 387, eds. H. Beuther, H. Linz & Th. Henning, p. 375 Benjamin, R. A. 2005, , 630, L149 Blaauw, A. 1985, in “The Milky Way Galaxy: Proceedings of IAU Symp. 106” H. van Woerden et al. eds., (Dordrecht, D. Reidel Pub. Co.), p. 335 Blitz, L. & Spergel,, D. N. 1991, , 379, 631 Bobylev, V. V. & Bajkova, A. T. 2010, , 408, 1788 Bovy, J., Hogg, D. W. & Rix, H.-W. 2009, , 704, 1704 Bovy, J., Prieto, C. A., Beers, T. C., 2012, , 759, 131 Bovy, J. & Rix, H.-W. 2013, , 779, 115 Brand, J. & Blitz, L. 1993, , 275, 67 Brunthaler, A., Reid, M. J., Menten, K. M., Zheng, X. W., Moscadelli, L. & Xu, Y. 2009, , 693, 424 Burkert, A., Genzel, R., Bouché, N. 2010, , 725, 2324 Carlin, J. L., Majewski, S. R., Casetti-Dinescu, D. I., Law, D. R., Girard, T. M. & Patterson, R. J. 2012, , 744, 25 Chang, C.-K., Ko, C.-M. & Peng, T.-H. 2011, , 740, 34 Choi, Y. K. , 2008, , 60, 1007 Choi, Y. K., Hachisuka, K., Reid, M. J., 2014, submitted to Clemens, D. P. 1985, , 295, 422 Coskunoglu, B, Ak, S., Bilir, S. 2011, , 412, 1237 Damour, T. & Taylor, J. H. 1991, , 366, 501 Dehnen, W., & Binney, J. J., , 1998, 298,387 Do, T., Martinez,, G. D., Yelda, S., 2013, , 779, 6 Eyer, L., Holl, B., Pourbaix, D. 2013, CEAB, 37, 115 Feast, M. & Whitelock, P. 1997, , 291, 683 Finkbeiner, D. P., Slatyer, T. R., Weiner, N., & Yavin, I. 2009, [*JCAP*]{}, 9, 37 Ghez, A. M., Salim, S., Weinberg, N. N., 2008, , 689, 1044 Gillessen, S., Eisenhauer, F. Trippe, S., 2009a, , 692, 1075 Gillessen, S., Eisenhauer, F., Fritz, T. K., 2009b, , 707, L114 Hachisuka, K. 2006, , 645, 337 Hachisuka, K. Brunthaler, A., Menten, K. M., 2009, , 696, 1981 Hachisuka, K., Choi, Y. K., Reid, M. 
J., 2014, submitted to Hammersley, P. L., Garzón, F., Mahoney, T. J., López-Corredoira, M., Torres, M. A. P. 2000, , 317, 45 Hirota, T, Ando, K., Bushimata, T., 2008, , 60, 961 Honma, M., Bushimata, T., Choi, Y. K., 2007, , 59, 889 Honma, M., Hirota, T., Kan-Ya, Y., 2007, , 63, 17 Honma,M. 2012, , 64, 136 Immer, K., Reid, M. J., Menten, K. M., Brunthaler, A., & Dame, T. M. 2013, , 553, 117 Kennicutt, R. C. Jr. 1981, , 86, 1847 Kent, S. M., Dame, T. M. & Fazio, G. 1991, , 378, 131 Kerr, F. J. & Lynden-Bell, D. 1986, , 221, 1023 Kim, M. K., Hirota, T., Honma, M., 2008, , 60, 991 Koposov, S. E., Rix, H.-W. & Hogg, D. W. 2010, , 712, 260 Kurayama, T., Nakagawa, A., Sawada-Satoh, S, 2011, , 63, 513 Liszt, H. S. & Burton, W. B. 1980, , 236, 779 McMillan, P. J. 2011, , 414, 2446 McMillan, P. J. & Binney J. J. 2010, , 402, 934 Menten, K. M., Reid, M. J., Forbrich J. & Brunthaler, A. 2007, , 474, 515 Moellenbrock, G. A., Claussen, M. J. & Goss, W. M. 2009, , 694, 192 Morris, M. R., Meyer, L & Ghez, A. 2012, [*RAA*]{}, 12, 995 Moscadelli, L., Reid, M. J., Menten, K. M., Brunthaler, A., Zheng, X. W. & Xu, Y. 2009, , 693, 406 Moscadelli, L., Cesaroni, R., Rioja, M. J., Dodson, R., Reid, M. J., 2011, , 526, 66 Nagayama, T., Omodaka, T., Nakagawa, A, 2011, , 63, 23 Niinuma, K. Nagayama, T., Hirota, T., 2011, , 63, 9 Oh, C. S., Kobayashi, H., Honma, M., Hirota, T., Sato, K., Ueno, Y., 2010, , 62, 101 Olling, R. P. & Merrifield, M. R. 1998, , 297,943 Persic, M., Salucci, P. & Stel, F. 1996, , 281, 27 Porcel, C., Garzon, F., Jimenez-Vicente, J. & Battaner, E. 1998, , 330, 136 Reid, M. J. 1993, , 31, 345 Reid, M. J. & Brunthaler, A., 2004, , 616, 872 Reid, M. J., Menten, K. M., Brunthaler, A., Zheng, X. W., Moscadelli, L. & Xu, Y. 2009, , 693, 397 Reid, M. J., Menten, K. M., Zheng, X. W. 2009, , 700, 137 Reid, M. J., Menten, K. M., Zheng, X. W., Brunthaler, A., & Xu, Y. 2009, , 705, 1548 Roberts, W. W. & Yuan, C. 1970, , 161, 877 Rygl, K. L. J., Brunthaler, A., Reid, M. 
J., Menten, K. M., van Langevelde, H. J., Xu, Y., 2010, , 511, 2 Rygl, K. L. J., Brunthaler, A., Sanna, A., 2012, , 539, 79 Sandstrom, K. M., Peek, J. E. G., Bower, G. C., Bolatto, A. D. & Plambeck, R. L., 2007, , 667, 1161 Sanna, A., Reid, M. J., Moscadelli, L., 2009, , 706, 464 Sanna, A., Reid, M. J., Dame, T., 2012, , 745, 82 Sanna, A., Reid, M. J., Menten, K. M., 2014, , 781, 108 Sato, M. 2008, , 60, 975 Sato, M., Hirota, T., Reid, M., 2010a, , 62, 287 Sato, M., Reid, M. J., Brunthaler, A. & Menten, K. M. 2010b, , 720, 1055 Sato, M., Wu, Y. W., Immer, K., 2014, submitted to Savchenko, S. S. & Reshetnikov, V. P. 2013, , 436, 1074 Schoenrich, R., Binney, J. & Dehnen, W. 2010, , 403, 1829 Shattow, G. & Loeb, A. 2009, , 392, 21 Shiozaki, S., Imai, H., Tafoya, D., 2011, , 63, 1219 Sivia, D. & Skilling, J. 2006, Data Analysis: A Bayesian Tutorial (2nd ed.; New York, Oxford Univ. Press), 168 Wang, J., Frenk, C. S., Navarro, J. F., Gao, L., Sawala, T. 2012, , 424, 2715 Weisberg, J. M., Stanimirović, S., Xilouris, K., 2008, , 674, 286 Weisberg, J. M., Nice, D. J., & Taylor, J. H. 2010, , 722, 1030 Wu, Y. W., Sato, M., Reid, M. J., 2014, submitted to Xin, X.-S. & Zheng, X.-W. 2013, [*Res. Astron. Astroph.*]{}, 13, 849 Xu, Y., Reid, M. J., Zheng, W. W. & Menten, K. M. 2006, [*Science*]{}, 311, 54 Xu, Y., Reid, M. J., Menten, K. M., Brunthaler, A., Zheng, X. W. & Moscadelli, L. 2009, , 693, 413 Xu, Y., Moscadelli, L., Reid, M. J., 2011, , 733, 25 Xu, Y., Li, J. J., Reid, M. J., 2013, , 769,15 Zhang, B., Zheng, X. W., Reid, M. J., 2009, , 693, 419 Zhang, B., Reid, M. J., Menten, K. M., & Zheng, X. W., 2012a, , 744, 23 Zhang, B., Reid, M. J., Menten, K. M., Zheng, X. W., Brunthaler, A., 2012b, , 544, 42 Zhang, B., Reid, M. J., Menten, K. M., 2013a, , 775, 79 Zhang, B., Moscadelli, L., Sato, M., 2014, , 781, 89
[^1]: http://bessel.vlbi-astrometry.org
[^2]: http://veraserver.mtk.nao.ac.jp
[^3]: Removing sources for which $R<4$ kpc: G$000.67-00.03$, G$009.62+00.19$, G$010.47+00.02$, G$010.62-00.38$, G$012.02-00.03$, G$023.43-00.18$, G$023.70-00.19$, G$027.36-00.16$
[^4]: Removing outlying sources: G012.68$-$00.18, G016.58$-$00.05, G023.65$-$00.12, G025.70$+$00.04, G028.86$+$00.06, G029.95$-$00.01, G031.28$+$00.06, G033.64$-$00.22, G034.39$+$00.22, G078.12$+$03.63, G108.59$+$00.49, G111.54+0.77, G122.01$-$07.08, G133.94+01.06, G176.51$+$00.20
---
author:
- 'Alexey Mints[^1]'
- Saskia Hekker
bibliography:
- 'sage\_gap.bib'
date: 'XXX/YYY'
title: 'A Unified tool to estimate Distances, Ages, and Masses (UniDAM) from spectrophotometric data.[^2]'
---
Introduction {#sec:intro}
============
The Milky Way Galaxy is a unique object to test our understanding of stellar evolution, galaxy formation, and cosmology. For this test a detailed map of our Galaxy, including bulge, disk, halo, spiral structure, and streams formed by recent mergers, is required. Through an analysis of the Galaxy, we can learn how our Galaxy has formed, evolved, and how it interacts with its surroundings. To build such a map we need to find the distribution of stars in their positions, velocities, chemical compositions, and ages throughout the Galaxy. These parameters can be measured with different kinds of observations, such as astrometry, photometry, spectroscopy, and asteroseismology.
Astrometric observations provide stellar positions, proper motions and, through parallaxes, distances. This kind of data has been available for decades . In the near future Gaia [@2016arXiv160904172G] will vastly increase the precision and amount of such information. The first Gaia data release [@2016arXiv160904303L] already provides proper motions and parallaxes for about two million stars, although the precision of the parallaxes in this sample limits their application [see e.g. @2016arXiv160905390S]. In the next data releases Gaia will provide high-precision parallaxes and proper motions for hundreds of millions of stars, vastly increasing our knowledge of the Galaxy.
Another rich source of data is spectroscopy, which can provide radial velocities, chemical compositions as well as the effective temperature ${\ensuremath{T_{\rm{eff}}}}$ and the surface gravity $\log g$. These data can be used to derive stellar ages and distances (see below). A growing number of large spectroscopic surveys, such as RAdial Velocity Experiment [RAVE; @2016arXiv160903210K], Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) surveys [@2015RAA....15.1095L], Apache Point Observatory Galactic Evolution Experiment [APOGEE; @2014ApJS..211...17A], Sloan Extension for Galactic Understanding and Exploration [SEGUE; @2009AJ....137.4377Y], and Gaia-ESO [@GAIA_ESO] provide rich spectroscopic information for millions of stars.
Photometric surveys can be used in two ways. Strömgren or Washington-DDO51 photometry can be used to estimate stellar parameters such as the effective temperature ${\ensuremath{T_{\rm{eff}}}}$, surface gravity $\log g$, and metallicity ${[\rm{Fe/H}]}$ [see @2011AA...530A.138C], or to separate giants from dwarfs for spectroscopic follow-up [@2000AJ....120.2550M]. Alternatively, broadband photometry is commonly used as a supplement to spectroscopic data to infer stellar distances.
Asteroseismology is a relatively young and very promising method of exploring stars. For low-mass dwarfs, subgiants, and red giant stars, asteroseismology can provide a direct measure of the mean density and surface gravity. Surface gravities measured with asteroseismic methods have much higher precision than those from spectroscopic methods. If the effective temperature ${\ensuremath{T_{\rm{eff}}}}$ or luminosity $L$ is also measured, it is possible to obtain the stellar mass and radius from the asteroseismic observables. When compared with models, stellar ages can also be determined from asteroseismology. COnvection ROtation and planetary Transits [CoRoT; @2006ESASP1306...33B], *Kepler* [@2010Sci...327..977B], and *K2* [@2014PASP..126..398H] observations provide such asteroseismic data. These datasets provide high-precision data on small patches of the sky and have also proved to be a perfect sample for the calibration of large spectroscopic surveys . The Transiting Exoplanet Survey Satellite [TESS; @2014SPIE.9143E..20R] and PLAnetary Transits and Oscillations of stars [PLATO; @2014ExA....38..249R] space missions are scheduled to be launched in 2018 and 2024, respectively, and will vastly increase the number of stars with asteroseismic data in the coming years.
Stellar ages and distances remain among the most challenging parameters to measure. A comprehensive list of age determination methods is given in . For a number of stars ages can be derived from asteroseismic observations [@2016AN....337..823S] or from carbon and nitrogen abundances [@2016MNRAS.456.3655M]. When these data are not available, a typical approach is to compare the parameters directly derived from spectroscopic measurements, which we designate as observed parameters, such as the effective temperature ${\ensuremath{T_{\rm{eff}}}}$, surface gravity $\log g$, and metallicity ${[\rm{Fe/H}]}$, to a grid of stellar models. A model or a set of models that have their parameters close to observed parameters give estimates of ages, masses, and absolute magnitudes $M_\lambda$ of a star. Then by comparing the absolute magnitudes to visible magnitudes $m_\lambda$ from photometric surveys we can estimate distances to stars. An overview of this approach is given by and .
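The final distance step described above is the classical distance-modulus relation; as a minimal sketch (extinction is neglected here, and the function name is ours):

```python
# Distance from the distance modulus mu = m - M, neglecting interstellar extinction.
def distance_pc(m_apparent, M_absolute):
    """Distance in parsecs: mu = m - M = 5 log10(d/10 pc)."""
    mu = m_apparent - M_absolute
    return 10.0 ** (mu / 5.0 + 1.0)

print(round(distance_pc(10.0, 0.0)))  # mu = 10 mag -> 1000 pc
```

In practice extinction $A_\lambda$ must be added to the model magnitude before forming $\mu$, which is the complication discussed in Section \[sec:bayes\].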
Proper application of this approach requires some care. First, the transformation from observed ([${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{}) to stellar parameters (age and mass) and distance is often degenerate, with the same observables giving two or more possible combinations of stellar parameters. This degeneracy can in some rare cases be resolved when additional observables are available, for example from asteroseismology. Second, the interstellar extinction needs to be accounted for in distance estimations. Extinction values can be taken from external sources or can be derived from observables. Both ways have their advantages and disadvantages. We discuss this in Section \[sec:bayes\]. Third, observed parameters have their uncertainties and correlations that have to be propagated to uncertainties in stellar parameters.
In the literature, a number of methods based on the comparison of observed parameters from spectroscopic and photometric surveys with models have recently been proposed and used to estimate distances and other stellar parameters. Here we briefly discuss some of them and how they deal with the issues stated above.
#### GCS. {#gcs. .unnumbered}
The Geneva-Copenhagen Survey (GCS) [@2011AA...530A.138C] team exploited the advantage of having *HIPPARCOS* parallaxes for the majority of their objects; this facilitated the calculation of absolute magnitudes for each star. @2011AA...530A.138C used a Bag of Stellar Tracks and Isochrones (BASTI) [@2009ApJ...697..275P and references therein] and PAdova and TRieste Stellar Evolution Code (PARSEC) [@PARSEC] isochrones to select models that have ${\ensuremath{T_{\rm{eff}}}}$, absolute Johnson $V$ magnitude and metallicity close to the observed ones for each star. Applying a Bayesian scheme described in to selected models, Casagrande et al. derived masses and ages of stars. They used a flat prior on ages and a Salpeter initial mass function (IMF) as a prior for masses.
#### RAVE. {#rave. .unnumbered}
There is a series of papers on distance estimations for stars in the RAVE survey [DR5 is described in @2016arXiv160903210K]. proposed a method for distance estimation for RAVE stars based on a comparison of observed ${\ensuremath{T_{\rm{eff}}}}, \log g$, metal abundance ${[\rm{M/H}]},$ and colour $(J-K_s)$ with $Y^2$ models [@2004ApJS..155..667D]. For each star 5000 realisations of observed parameters were sampled from a Gaussian distribution with dispersions equal to the measured uncertainties and for each realisation a closest model was selected. These authors took an average of the model parameters measured in all realisations to derive an absolute $J$ magnitude of the star $M_J$. The difference between the derived absolute magnitude and visible $J$ magnitude from Two Micron All Sky Survey (2MASS) gives a distance. Extinction was ignored in this work. This approach is limited by the fact that it does not take into account the inhomogeneity of models in the ${\ensuremath{T_{\rm{eff}}}}- \log g$ plane, effectively increasing the weight for short evolutionary stages and decreasing it for longer ones. This issue was solved in by weighting models with a weight proportional to age and mass range represented by each model. A likelihood depending on the difference between observed and model ${\ensuremath{T_{\rm{eff}}}}$ and $\log g$ was also added. Other important changes were applied, including a change from $Y^2$ to PARSEC [@PARSEC] isochrones, the addition of a prior on mass (assuming [@2003PASP..115..763C] IMF), and the application of a volume correction. Zwitter et al. calculated an absolute $J$ magnitude as a weighted mean of the absolute magnitudes derived from luminosities of the models. The difference between the visible $J$ magnitude from 2MASS and the absolute magnitude gives the distance modulus for each star. As in , extinction was ignored.
[@2014MNRAS.437..351B] further developed the above method by adding priors from the Galactic structure; they provide priors on age, metallicity, and positions from halo, thin, and thick disk models. A kinematic correction [see @2012MNRAS.420.1281S] was also applied. Extinction was included in the distance calculations. An exponential prior on the value of $\ln(A_V)$ was imposed, with the extinction value at infinity $A_{V\infty}(b, l)$ taken from [@1998ApJ...500..525S]. The extinction at a given distance was calculated as $A_{Vprior}(b, l, s) = A_{V\infty}(b, l) \int_0^s \rho(s) ds / \int_0^\infty \rho(s) ds$, where $\rho(s)$ is the density of extincting material along the line of sight, taken from the model of the Galaxy [see Equation 10 in @2014MNRAS.437..351B]. This is so far the most advanced method, and it was applied with minor modifications to LAMOST data as well (see below). Distance moduli (but not ages and masses) were recalculated with the same method for RAVE DR5. [@2014MNRAS.437..351B] solved the problem of multimodal probability distribution functions (PDFs) for the distance modulus by fitting a Gaussian mixture model with up to three Gaussians. This approach works fine in most cases. However, as we illustrate below, it cannot be applied to mass and log(age) PDFs because they can be skewed or truncated, shapes that are hard to fit with a small set of Gaussians. Truncated shapes of the PDF arise from the limited range of allowed masses and log(age)s. For [@2014MNRAS.437..351B] these limits are imposed by an age prior; see their Equations 3, 4, and 5.
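The extinction prior above scales the total line-of-sight extinction by the fraction of extincting material within distance $s$. A numerical sketch of this idea (not the authors' code; the exponential density and its scale length are our illustrative assumptions, chosen so the partial integral has a closed form):

```python
import math

def av_prior(s_kpc, av_inf, scale_length=2.5):
    """Toy extinction prior: A_V(s) = A_Vinf * int_0^s rho ds' / int_0^inf rho ds'
    with an assumed exponential density rho ~ exp(-s/L) of extincting material."""
    frac = 1.0 - math.exp(-s_kpc / scale_length)  # closed-form ratio of the integrals
    return av_inf * frac

print(round(av_prior(2.5, 1.0), 3))  # ~0.632 of the total extinction at one scale length
```

[@2014MNRAS.437..351B] instead take $\rho(s)$ from their full Galactic model, so the profile along each sight line depends on $(l, b)$.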
#### APOKASC. {#apokasc. .unnumbered}
[@2014MNRAS.445.2758R] applied Bayesian methods to estimate distances and extinctions for approximately 2000 red giant stars from the joint APOGEE and Kepler Asteroseismic Science Consortium (APOKASC) sample [@2014ApJS..215...19P], which is a part of APOGEE [@2014ApJS..211...17A], covering the *Kepler* field of view. They supplemented spectroscopic parameters ${\ensuremath{T_{\rm{eff}}}}$ and ${[\rm{M/H}]}$ with asteroseismic data from *Kepler*. As alluded to before, from asteroseismic values $\Delta \nu$ and $\nu_{\rm{max}}$ and knowing ${\ensuremath{T_{\rm{eff}}}}$, it is possible to derive an estimate of stellar radius $R$ and mass $M$, using scaling relations from , i.e. $$\begin{aligned}
\Delta \nu &\propto& M^{1/2} R^{-3/2} \\
\nu_{\rm{max}} &\propto & M R^{-2} {\ensuremath{T_{\rm{eff}}}}^{-1/2}.\end{aligned}$$ This puts more constraints on stellar models, thus increasing the precision of stellar parameters and distance determinations. Using PARSEC isochrones, [@2014MNRAS.445.2758R] built PDFs for stellar parameters (mass, radius, and surface gravity) and stellar absolute magnitudes. The latter were then combined with photometric data from Sloan Digital Sky Survey (SDSS), 2MASS, and Wide-field Infrared Survey Explorer (WISE) to be converted to the PDFs of distance modulus $\mu_d$ and extinction $A_K$. The mode and 68% confidence intervals of the PDFs were calculated for both distance and extinction. [@2014MNRAS.445.2758R] noted that over one-third of stars in their sample have bimodal PDFs. Bimodal PDFs were treated in the same way as single-peaked PDFs. Using the mode allows one to select the highest peak of the PDF and other peaks only show themselves by broadening of confidence intervals. Only distance estimates were published by [@2014MNRAS.445.2758R].
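The two scaling relations above can be inverted for mass and radius; a minimal sketch in solar-normalised form (the solar reference values below are standard assumptions, not taken from this paper):

```python
# Assumed solar reference values (commonly used in the asteroseismology literature).
DNU_SUN = 135.1     # muHz, large frequency separation of the Sun
NUMAX_SUN = 3090.0  # muHz, frequency of maximum oscillation power of the Sun
TEFF_SUN = 5777.0   # K

def seismic_mass_radius(dnu, numax, teff):
    """Invert dnu ~ M^(1/2) R^(-3/2) and numax ~ M R^-2 Teff^(-1/2)
    for (M, R) in solar units."""
    t = teff / TEFF_SUN
    radius = (numax / NUMAX_SUN) * (dnu / DNU_SUN) ** -2 * t ** 0.5
    mass = (numax / NUMAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * t ** 1.5
    return mass, radius

m, r = seismic_mass_radius(DNU_SUN, NUMAX_SUN, TEFF_SUN)
print(round(m, 3), round(r, 3))  # solar inputs recover (1.0, 1.0)
```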
#### LAMOST. {#lamost. .unnumbered}
The LAMOST team is also working on estimating distances to stars from spectroscopic data. First and second public data releases of the project include spectral properties for about one and two million stars, respectively [@2015RAA....15.1095L].
[@2015AJ....150....4C] used a Bayesian approach to derive distances from LAMOST DR1 data combined with 2MASS photometry. They used the Dartmouth Stellar Evolution Database [@2008ApJS..178...89D] and a Bayesian technique similar to that by to derive the PDF of the absolute magnitude for each star. This was then converted to distances using 2MASS photometry. Interstellar extinction was ignored in this work. [@2015AJ....150....4C] performed a comparison with RAVE distances to test their method. The derived distances are systematically smaller, by 12% with a 16% spread, than those derived by . Given the precision of the LAMOST data, they derived distance uncertainties on the order of 40%.
[@2016MNRAS.456..672W] applied the Bayesian approach from [@2014MNRAS.437..351B] to derive parallaxes and extinctions for LAMOST data. Again, 2MASS photometry was used. The reported uncertainty in parallax is about 20% for dwarf stars and 40% for giants. Kinematic correction [see @2012MNRAS.420.1281S] was applied using PPMXL [@2010AJ....139.2440R] and UCAC4 [@2013AJ....145...44Z] data. Data from [@2015AJ....150....4C] and [@2016MNRAS.456..672W] are not yet publicly available.
Distances are provided in the LAMOST Galactic Anti-Centre project data release [@2015MNRAS.448..855Y]. These data include spectroscopic measurements of [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{} and photometry from 2MASS and Xuyi Schmidt Telescope Photometric Survey [@2014RAA....14..456Z]. In their work, [@2015MNRAS.448..855Y] applied two different methods to get distances. In the first method, which they call “empirical”, stars are divided into four groups (OB stars, giants, and two groups of dwarfs). For each group absolute magnitudes were calculated using a third-order polynomial of [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{}. Polynomials were derived by fitting data from the parts of the medium resolution INT Library of Empirical Spectra (MILES) library [@2006MNRAS.371..703S] corresponding to each group. The precision of the obtained distance modulus is about $0.^m65$ for GKM giants and $0.^m3$ for other groups. A second, “isochrone” distance estimate was derived using the isochrones of Dartmouth Stellar Evolution Database [@2008ApJS..178...89D]. For each star a model with closest values of [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{} was selected from the database. A difference between the visible magnitudes of the star and absolute magnitudes for the closest model provides distance. For both methods extinction values were derived from the LAMOST data using the star-pairs method, which is described in [@2014IAUS..298..240Y]. The “isochrone” method provides distances that are about 5 percent lower than those derived by the “empirical” method.
#### {#section .unnumbered}
Studies listed above use similar methods, but the implementation can vary, leading to different results even for the same input data. Moreover, while distances are typically calculated, mass and, most importantly, age estimates are less common. The amount of complementary spectroscopic data available in different surveys calls for a more unified approach. In this paper we present a Unified tool to estimate Distances, Ages, and Masses from spectrophotometric data (UniDAM). There are two major points in which we differ from studies listed above:
First, whereas most of the previously published studies were dedicated to data from a single survey, we processed data from several large surveys with one tool. For some surveys no data on distances, masses, and ages are publicly available to date. For others our results are consistent with previously published studies with the advantage that our catalogue was produced with the same method, isochrones, and priors on parameters for all surveys. Thus all differences in results for different surveys can be attributed to systematic differences in parameters determined in the spectroscopic surveys. We provide more details on spectroscopic surveys used in Section \[sec:catalog\]. Another advantage of using many surveys simultaneously comes from the fact that different surveys probe different parts of the Galaxy because of different observing strategies and locations of telescopes. Therefore we do not simply increase the statistics, but have a more complete coverage of the Galaxy.
Second, we try to lift the degeneracy of the transformation from observed to stellar parameters by representing PDFs as sums of unimodal functions (unimodal sub-PDFs or USPDF) for each evolutionary stage. Thus we separate out physically different solutions. This allows us to increase the precision of stellar parameters for each solution.
Data samples used {#sec:catalog}
=================
We used observable parameters from a set of publicly available spectroscopic surveys in our work. All surveys were cross-matched with 2MASS [@2006AJ....131.1163S] and AllWISE [@2014yCat.2328....0C] to obtain infrared photometry. We used only “clean” photometry, that is, only bands that are not affected by low photometric quality, contamination, or confusion. This was achieved by taking only 2MASS bands with the quality flag (`Qfl`) set to `'A'` and AllWISE bands with the contamination and confusion flag (`ccf`) set to zero and the photometric quality flag (`qph`) set to `'A'`. We also required that the reported uncertainty in magnitude be positive. summarises properties of the spectroscopic surveys from which we extracted our input data. We discuss some of them below, focusing on parameters for each survey, which we added or modified for our purposes.
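The photometric quality cuts above can be summarised as a simple predicate; the sketch below is illustrative (the record keys `qfl`, `ccf`, `qph`, and `e_mag` are our shorthand, not the actual catalogue column names):

```python
def is_clean(band):
    """A band passes the cuts if 2MASS Qfl == 'A', AllWISE ccf == '0' and
    qph == 'A', and the reported magnitude uncertainty is positive."""
    ok_2mass = band.get("qfl", "A") == "A"
    ok_wise = band.get("ccf", "0") == "0" and band.get("qph", "A") == "A"
    return ok_2mass and ok_wise and band.get("e_mag", 0.0) > 0.0

bands = [
    {"name": "J", "qfl": "A", "e_mag": 0.02},
    {"name": "W1", "ccf": "0", "qph": "A", "e_mag": -1.0},  # non-positive uncertainty
]
print([b["name"] for b in bands if is_clean(b)])  # ['J']
```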
[lrrd[3.1]{}ccc]{} Survey & N sources & Resolution & [ ]{} & [ ]{} & [ ]{} & Reference\
APOGEE (DR12) & $88\,000$ & $22\,500$ & 91.5 & 0.11 & 0.03 &\
APOGEE (DR13)\* & $89\,000$ & $22\,500$ & 91.5 & 0.11 & 0.03 &\
APOKASC & $2\,000$ & $22\,500$ & 91.5 & 0.11 & 0.03 &\
LAMOST-GAC (Main sample)\* & $368\,000$ & $1\,800$ & 115 & 0.19 & 0.15 &\
LAMOST-GAC (Bright sample)\* & $1\,075\,000$ & $1\,800$ & 100 & 0.15 & 0.13 &\
LAMOST-CANNON\* & $450\,000$ & $1\,800$ & 96.3 & 0.13 & 0.05 &\
RAVE (DR5) & $450\,000$ & $7\,500$ & 92 & 0.20 & 0.10 &\
RAVE-on\* & $450\,000$ & $7\,500$ & 85 & 0.14 & 0.07 &\
GCS\* & $13\,800$ & $20\,000$ & 80 & 0.10 & 0.10 &\
SEGUE\* & $277\,500$ & $2\,000$ & 145 & 0.26 & 0.13 &\
Gaia-ESO (DR2)\* & $7\,000$ & $16\,000$ & 50 & 0.10 & 0.07 &\
AMBRE\* & $3\,400$ & $16\,000$ & 120 & 0.20 & 0.10 &\
GALAH (DR1)\* & $28\,000$ & $10\,700$ & 108 & 0.30 & 0.11 &\
Mock & $4 \times 8\,000$ & - & 100 & 0.10 & 0.10 & -\
\[tbl:catalog\]
APOGEE and APOKASC {#sec:apogee}
------------------
We used APOGEE data from SDSS DR12 [@2015ApJS..219...12A] and DR13 [@2016arXiv160802013S]. We kept only those stars that belong to the Main Survey Targets[^3] and have their temperatures, gravities, and metallicities measured. Both DR12 and DR13 were used, as they differ mainly in spectroscopic calibration, and it is interesting to test how that influences the estimates of age, mass, and distance. We include our results for DR13 data in our final catalogue, whereas results for DR12 data are provided as a separate table.
We use as a separate input survey the APOKASC sample [@2014ApJS..215...19P], although it is in this context just a subset of APOGEE. Therefore the result for this sample is not included in our final catalogue. These data were used to compare the results of [@2014MNRAS.445.2758R] with the prospect of the inclusion of asteroseismic data (see Section \[sec:compare\_apokasc\]).
LAMOST
------
The second public data release of the LAMOST project [@2015RAA....15.1095L] contains spectral parameters for over 2 million stars. However, the reported uncertainties in the stellar parameters, i.e. $170\,$K in ${\ensuremath{T_{\rm{eff}}}}$, $0.5\,$dex in $\log g$, and $0.2\,$dex in ${[\rm{Fe/H}]}$, are too high for these data to be used reliably for model fitting. Therefore we decided not to use the main LAMOST dataset. We focused instead on the second data release of the LAMOST Galactic Anti-Center (LAMOST-GAC) project [@2017arXiv170105409X]. This data release contains spectral parameters for about one-third of a million stars in the direction of the Galactic anti-center in its main sample. The bright sample contains over a million stars from a larger area. A different processing pipeline was used by the LAMOST-GAC team, which resulted in substantially lower parameter uncertainties of $115\,$K in ${\ensuremath{T_{\rm{eff}}}}$, $0.2\,$dex in $\log g$, and $0.13\,$dex in ${[\rm{Fe/H}]}$.
An additional dataset derived from LAMOST DR2 data was prepared with *The Cannon* tool [@2015ApJ...808...16N; @2016arXiv160200303H]. This tool allows the transfer of parameters from high-resolution APOGEE spectra to LAMOST data, using stars observed by both surveys for the calibration. This method transfers APOGEE uncertainties in the measured parameters to the LAMOST data, improving the precision of the obtained parameters. To account for calibration uncertainties we added the median APOGEE absolute uncertainties in quadrature to the formal uncertainties reported by *The Cannon*. Another benefit of *The Cannon* tool is that it measures the value of $[\alpha/\rm{Fe}]$, which is not provided by LAMOST. *The Cannon* was calibrated only on giant stars, which are available from the APOGEE-LAMOST overlap, and therefore the LAMOST-CANNON sample contains only giant stars.
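The quadrature combination used here (and again for the SEGUE and GALAH samples below) can be sketched as follows; the function name and example numbers are illustrative, not actual survey values.

```python
import numpy as np

def combine_in_quadrature(formal_err, systematic_err):
    """Total uncertainty when formal and systematic errors are independent."""
    return np.sqrt(np.asarray(formal_err) ** 2 + np.asarray(systematic_err) ** 2)

# Illustrative values only (not the actual survey numbers):
total_teff_err = combine_in_quadrature([50.0, 30.0], 90.0)
```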
RAVE surveys
------------
The fifth data release of the RAVE project [@2016arXiv160903210K] contains spectral parameters for almost half a million stars. This release contains a flag indicating whether the fitting algorithm converged, but it turns out that even for stars with this flag set to zero (indicating that the fit converged) there are clear concentrations of effective temperatures and gravities towards grid points. This feature is known to the community [see @2014MNRAS.437..351B]. We added a flag to the output catalogue that indicates whether $\log {\ensuremath{T_{\rm{eff}}}}$ is within $0.01\,$dex of a grid point or whether $\log g$ or ${[\rm{Fe/H}]}$ are on a grid point. About one-third of the stars are affected by this clustering around grid points.
RAVE-on [@2016arXiv160902914C] is a product of processing the original RAVE spectra with *The Cannon* tool. The calibration set was constructed from the overlap of RAVE with APOGEE giants and the K2/EPIC survey [@2016ApJS..224....2H]. In addition to [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{}, the output of *The Cannon* tool also contains the value of $[\alpha/\rm{Fe}]$ and abundances for several chemical elements. The RAVE-on data refer to exactly the same stars as the main RAVE survey, but the reported stellar parameters may be slightly different and the quoted uncertainties are smaller; we therefore chose to use RAVE-on in our catalogue, providing results for the main RAVE survey in a separate table.
Geneva-Copenhagen survey
------------------------
The Geneva-Copenhagen survey (GCS) is the only non-spectroscopic survey used in this work. GCS is a photometric survey, which contains [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{} derived using Strömgren photometry. We used GCS re-analysed data published by [@2011AA...530A.138C]. We excluded $15\%$ of the stars, for which no estimates of [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{} are provided or for which no photometry was available; the latter mainly because a number of GCS sources are too bright for 2MASS.
SEGUE
-----
We used SEGUE data from SDSS DR12 [@2009AJ....137.4377Y] with internal uncertainties from the SDSS database. We added in quadrature the internal uncertainties and the systematic uncertainties derived by [@2008AJ....136.2070A], namely $130\,$K in ${\ensuremath{T_{\rm{eff}}}}$, $0.21\,$dex in $\log g$, and $0.11\,$dex in ${[\rm{Fe/H}]}$. The SEGUE survey is based on SDSS photometry, which is deeper than 2MASS; therefore, for about one-half of the SEGUE targets no 2MASS or AllWISE photometry is available, or the photometry is very uncertain. We did not use such stars in our work.
Gaia-ESO
--------
For the Gaia-ESO survey, data release 2 [@GAIA_ESO] was used. ${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$ are available for about half of its nearly $15\,000$ spectra, so we used approximately $7\,000$ sources from this survey.
AMBRE
-----
The Atmospheric Parameters and Chemical Abundances from Stellar Spectra [AMBRE; @AMBRE] project released parameters extracted from the automatic analysis of the ESO spectral data archives for over $4\,500$ observations (over $2\,000$ sources). No photometry or positional information is provided in the project data, so we attempted to recover this information from the target names. Using the SIMBAD service we obtained positions for nearly $1\,500$ sources, corresponding to a total of $3\,400$ observations in the AMBRE survey.
GALAH
-----
[@GALAH] describe the first data release of the GALactic Archaeology with HERMES (GALAH) survey. Stellar parameters were derived for 2576 GALAH stars with the Spectroscopy Made Easy (SME) tool [@2012ascl.soft02013V]. These data were used as a training sample for *The Cannon* tool, which was then used to derive stellar parameters for the rest of the survey. @GALAH provide the typical uncertainties of *The Cannon* tool used to derive the spectral parameters and the internal precision of SME. We added them in quadrature to obtain uncertainties of $108\,$K in ${\ensuremath{T_{\rm{eff}}}}$, $0.3\,$dex in $\log g$, and $0.11\,$dex in ${[\rm{Fe/H}]}$.
Mock survey {#sec:data_mock}
-----------
In addition to real survey data we also created a mock survey to test our UniDAM tool. In this case we have full control over both the input parameters for our tool and the desired output parameters of the star. We produced mock surveys by sampling a number of models from PARSEC isochrones [@PARSEC] (see Section \[sec:iso\]).
We stress that the choice of models was aimed at covering the model parameter space: our mock survey resembles neither observed stellar surveys, which are typically magnitude-limited, nor a physical distribution of stars in mass and age. This choice is motivated by the need to study the behaviour of our tool over a large parameter range. We chose isochrones with 8 different metallicities and 20 ages, selected at random. From each isochrone we randomly selected 20 models with masses below 4 ${\ensuremath{\rm{M}_\odot}}$; high-mass stars were excluded because of their rarity. We used ${\ensuremath{T_{\rm{eff}}}}, \log g, {[\rm{Fe/H}]}$ as well as 2MASS and AllWISE magnitudes for each selected model. We took absolute magnitudes from the PARSEC models as our “observed” magnitudes, thus setting the distance to 10 pc and the extinction to zero. Parameter uncertainties were taken to be $100\,$K for ${\ensuremath{T_{\rm{eff}}}}$, $0.1\,$dex for $\log g$ and ${[\rm{Fe/H}]}$, and $0^m.03$ for each magnitude $m_\lambda$, similar to the uncertainties in real spectroscopic and photometric surveys.
We prepared four mock surveys. In the first survey we took spectral and photometric values as provided by the PARSEC models. In the second survey we perturbed photometric parameters with random Gaussian noise, while keeping original spectroscopic parameters. In the third we perturbed spectral parameters with random Gaussian noise, while keeping original photometry. In the last survey all parameters were perturbed. Perturbation spread was always taken to be equal to the chosen parameter uncertainties. This allows us to control how uncertainties in observations influence our results.
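The four mock-survey variants differ only in which parameter group is perturbed. A minimal sketch of the perturbation step, with a hypothetical star and the uncertainties quoted above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Uncertainties as quoted in the text (100 K, 0.1 dex, 0.03 mag).
SIGMA = {"teff": 100.0, "logg": 0.1, "feh": 0.1,
         "J": 0.03, "H": 0.03, "K": 0.03, "W1": 0.03, "W2": 0.03}

def perturb(model, perturb_spectra=True, perturb_photometry=True):
    """Return a mock 'observation': model values plus Gaussian noise.

    The perturbation spread equals the adopted uncertainty of each
    parameter, as in the four mock-survey variants described above.
    """
    out = dict(model)
    for key, sigma in SIGMA.items():
        is_spec = key in ("teff", "logg", "feh")
        if (is_spec and perturb_spectra) or (not is_spec and perturb_photometry):
            out[key] = model[key] + rng.normal(0.0, sigma)
    return out

# Hypothetical model values, not taken from a real isochrone:
star = {"teff": 4750.0, "logg": 2.5, "feh": -0.2,
        "J": 1.2, "H": 0.7, "K": 0.6, "W1": 0.55, "W2": 0.57}
mock = perturb(star, perturb_spectra=True, perturb_photometry=False)
```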
Isochrones {#sec:iso}
==========
We used PARSEC 1.2S isochrones [@PARSEC], which provide a large sample of models covering a wide range of stellar parameters. These data include effective temperatures, surface gravities, radii, and absolute photometric magnitudes for models covering large ranges in metallicity, age, and mass. We selected nearly three million models that cover the following ranges:
- $10^{-4} : 0.06\,$dex in metallicity ($Z$), corresponding to $-2.2 : 0.6\,$dex in ${[\rm{Fe/H}]}$
- $6.6 : 10.13$ in log(age) $\tau$, corresponding to $4\cdot 10^6 : 13.5\cdot10^9$ years
- $0.09 : 67\,{\ensuremath{\rm{M}_\odot}}$ in mass
The density of models varies within the ranges indicated above because isochrones are designed to reproduce details of stellar evolution: there are relatively many models covering some rapid stages of evolution and fewer models for stages of slow evolution. To account for this, we introduced a value $w_j$ that measures the volume of the parameter space (metallicity, age, and mass) represented by each model. Otherwise we would be biased towards rare evolutionary stages.
We calculated $w_j$ for each model as a product of width of the bin in each dimension represented by the model, $$w_j = w_{\rm{age}, j} w_{Z, j} w_{\rm{mass}, j}. \label{eq:weight}$$ The PARSEC isochrone models are calculated for the bin mid-points and they are not equal to the average model in each bin, which makes the binning somewhat arbitrary.
The PARSEC isochrones are equally spaced in log(age) $\tau$. Therefore the density of models with lower ages is higher than that of models with higher ages. This has to be compensated for to avoid a bias towards lower ages. For the age bin width $w_{\rm{age}}$ we took the time span represented by the isochrone. It is calculated as $w_{\rm{age}, j} = (10^{\tau_{j+1}} - 10^{\tau_{j-1}})/2$, so the time span is bounded by the mid-points between adjacent isochrones in age.
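The age-weight computation can be sketched as follows; the treatment of the first and last isochrones (one-sided spans) is an assumption, as the text does not specify the edge handling.

```python
import numpy as np

def age_bin_widths(log_ages):
    """Time span represented by each isochrone on an equally spaced
    log(age) grid: w_j = (10**tau_{j+1} - 10**tau_{j-1}) / 2 for
    interior isochrones.

    Edge handling (one-sided spans for the first and last isochrone)
    is an assumption; the text does not specify it.
    """
    ages = 10.0 ** np.asarray(log_ages, dtype=float)
    edges = np.empty(len(ages) + 1)
    edges[1:-1] = 0.5 * (ages[1:] + ages[:-1])  # mid-points in linear age
    edges[0], edges[-1] = ages[0], ages[-1]
    return np.diff(edges)

# Grid resembling the one in the text (step size is illustrative):
w_age = age_bin_widths(np.arange(6.6, 10.14, 0.05))
```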
Observations provide ${[\rm{M/H}]}$ or ${[\rm{Fe/H}]}$, which are proportional to the logarithm of $Z$. We created a grid in $Z$, ranging from $10^{-4}$ to $0.05$, such that the spacing between the values of ${[\rm{Fe/H}]}$ is smaller than the mean uncertainty $\sigma_{{[\rm{Fe/H}]}}$ of the iron abundance measurement at a given ${[\rm{Fe/H}]}$ in the most precise input data, i.e. APOGEE data. Typically, uncertainties in ${[\rm{Fe/H}]}$ are smaller for metal-rich stars than for metal-poor stars, with $\sigma_{{[\rm{Fe/H}]}} \propto Z^{-0.15}$. Therefore $\Delta_Z$ – the bin width in $Z$ – is roughly $\Delta_Z \propto Z \sigma_{{[\rm{Fe/H}]}} \propto Z^{1 - 0.15} = Z^{0.85}$. The width of the bin in $Z$ was used for $w_Z$, thus ensuring a flat prior in $Z$. To check the impact of this, we performed tests with $w'_Z$ proportional to the width of the bin in ${[\rm{Fe/H}]}$. These tests showed that this difference has little impact on our results, because for given values of ${\ensuremath{T_{\rm{eff}}}}$ and $\log g$, stellar parameters like age, mass, and luminosity change slowly with ${[\rm{Fe/H}]}$ and $Z$, so the variations in weights are second-order effects. Thus $w_Z$ is a second-order effect, but we kept it to retain the flat prior in the physical quantity $Z$.
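A metallicity grid with spacing $\Delta_Z \propto Z^{0.85}$ can be built iteratively; the normalisation `scale` below is a hypothetical choice, not the value used in the paper.

```python
import numpy as np

def z_grid(z_min=1e-4, z_max=0.05, scale=0.05):
    """Metallicity grid with bin width Delta_Z = scale * Z**0.85,
    mirroring the scaling of the [Fe/H] uncertainty with Z.

    The `scale` normalisation is illustrative; the paper ties the
    spacing to the APOGEE [Fe/H] uncertainties instead.
    """
    zs = [z_min]
    while zs[-1] < z_max:
        zs.append(zs[-1] + scale * zs[-1] ** 0.85)
    return np.array(zs)

grid = z_grid()
```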
Masses for models were selected by the PARSEC algorithm to track the shape of the isochrone as closely as possible. This results in more models in more curved parts of the isochrone. Such an approach produces a highly inhomogeneous coverage of the mass range, which has to be corrected for. For $w_{\rm{mass}}$ we used the width of the bin in mass.
One of nine evolutionary stages is assigned to every PARSEC model. We grouped these stages into main-sequence stars and giants ascending the red giant branch (pre-core-helium burning; stage I), core-helium burning stars (stage II), and asymptotic giant branch stars (post-core-helium burning; stage III). These stage labels were used to separate models with different internal structures.
Column Description Unit
------------- --------------------------------------------------- -------------------------------
Z Metallicity -
log(age/yr) Age log(years)
M\_ini Initial mass ${\ensuremath{\rm{M}_\odot}}$
M\_act Actual mass ${\ensuremath{\rm{M}_\odot}}$
logL/Lo Luminosity -
logTe Effective temperature -
logG Gravity -
... Set of absolute magnitudes (see text) mag
int\_IMF Value of the cumulative IMF function $F(M_{ini})$ -
stage Evolutionary stage -
: PARSEC model columns (as named in the output of <http://stev.oapd.inaf.it/cgi-bin/cmd_2.7>).[]{data-label="tbl:parsec"}
For each model we used basic physical information (see ) and 2MASS and AllWISE absolute magnitudes that were derived from the luminosities. Other magnitudes are often available, but we did not use them for the reasons discussed below.
Methodology {#sec:bayes}
===========
The method used in our tool is similar to the Bayesian method described in [@2014MNRAS.445.2758R]. We introduced the vector ${\ensuremath{\textbf{O}}}$ for input (“observed”) parameters and their uncertainties ${\ensuremath{\textbf{O}}}= ({\ensuremath{T_{\rm{eff}}}}, \log g, {[\rm{Fe/H}]}, m_\lambda, \sigma_{{\ensuremath{T_{\rm{eff}}}}}, \sigma_{\log g}, \sigma_{{[\rm{Fe/H}]}}, \sigma_{m_\lambda})$. Here, $m_\lambda$ indicates visible magnitudes in several photometric bands and $\sigma_x$ is the uncertainty of the parameter $x$. These values were taken from surveys listed in . When $\alpha$ element abundances were available, the metallicity ${[\rm{Fe/H}]}$ was corrected with the relation ${[\rm{Fe/H}]}= {[\rm{Fe/H}]}_0 + \log(1. + 0.638[\alpha/\rm{M}])$ [see @1993ApJ...414..580S]. Additional input parameters for each star can be used in ${\ensuremath{\textbf{O}}}$, for example masses and radii derived from asteroseismic data or parallaxes from Gaia.
We used two vectors for output parameters. The first vector, $\textbf{X}_m = (\tau, M, {[\rm{Fe/H}]})$ represents stellar model parameters log(age), mass, and metallicity. These parameters are taken from isochrone models and therefore have discrete values. We always refer to the actual rather than initial stellar mass because this quantity can be measured from other data, for example from asteroseismic quantities or from binary orbital solutions. The second vector, ${\ensuremath{\textbf{X}}}_p = (\mu_d, A_K),$ where subscript $p$ stands for photometry, represents distance modulus and extinction; these parameters can formally have any value, but we set a physically motivated limit $A_K \geq 0$ (see discussion in ). The full output parameter vector is then ${\ensuremath{\textbf{X}}}= {\ensuremath{\textbf{X}}}_m \cup {\ensuremath{\textbf{X}}}_p$.
The probability of having parameters ${\ensuremath{\textbf{X}}}$ with given observables ${\ensuremath{\textbf{O}}}$ can be expressed via Bayesian formula as $$P({\ensuremath{\textbf{X}}}|{\ensuremath{\textbf{O}}}) = \frac{P({\ensuremath{\textbf{X}}}) P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}})}{P({\ensuremath{\textbf{O}}})} \propto P({\ensuremath{\textbf{X}}}) P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}).\label{eq:bayes}$$
We used flat priors on age (in linear scale) and metallicity $Z$, which means a star formation rate that is constant in time and is independent of $Z$. The quantitative effect of different priors is described below in Section \[sec:compare\_rave\]. We used a mass prior based on the IMF $F_\textrm{IMF}$ from @2003ApJ...598.1076K. Therefore, $$P({\ensuremath{\textbf{X}}}) = F_\textrm{IMF}({\ensuremath{M}}).\label{eq:prior}$$
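As a sketch of the mass prior $F_\textrm{IMF}$, a broken power-law (Kroupa-type) IMF with the commonly quoted exponents can be used; the exact form adopted in the paper may differ.

```python
import numpy as np

def kroupa_imf(mass):
    """Un-normalised broken power-law IMF, dN/dM ~ M**(-alpha).

    The exponents (0.3, 1.3, 2.3) and break masses (0.08, 0.5 Msun)
    are the commonly quoted Kroupa values and are an assumption here.
    The prefactors 0.08 and 0.04 enforce continuity at the breaks.
    """
    m = np.asarray(mass, dtype=float)
    return np.where(m < 0.08, m ** -0.3,
           np.where(m < 0.5, 0.08 * m ** -1.3,
                             0.04 * m ** -2.3))
```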
Isochrones give us for each ${\ensuremath{\textbf{X}}}_m$ a new vector $\textbf{O'} = ({\ensuremath{T_{\rm{eff}}}}, \log g, {[\rm{Fe/H}]}, M_\lambda)$, where $M_\lambda$ indicates absolute magnitudes in several photometric bands. So we can define a function $\mathcal{I}$ as $\textbf{O'} = \mathcal{I}({\ensuremath{\textbf{X}}}_m)$. Note that ${[\rm{Fe/H}]}$ is contained in both ${\ensuremath{\textbf{O}}}$ and ${\ensuremath{\textbf{X}}}_m$, so $\mathcal{I}_{{[\rm{Fe/H}]}}({\ensuremath{\textbf{X}}}_m) \equiv {[\rm{Fe/H}]}$. We express $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}})$ using two log-likelihoods $$P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}) = P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_m, {\ensuremath{\textbf{X}}}_p) = e^{-L_{iso}-L_{sed}}.$$
Here, $L_{iso}$ is a measure of the separation between observed spectral parameters [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{} and those predicted by model parameters ${\ensuremath{\textbf{X}}}_m$. Assuming Gaussian uncertainties in ${\ensuremath{\textbf{O}}}$, we can write $$\begin{aligned}
L_{iso} &= \sum_{i \in ({\ensuremath{T_{\rm{eff}}}}, \log g, {[\rm{Fe/H}]})} \frac{(O'_i - O_i)^2}{2 \sigma^2_{O, i}} \nonumber \\
&= \sum_{i \in ({\ensuremath{T_{\rm{eff}}}}, \log g, {[\rm{Fe/H}]})} \frac{(\mathcal{I}_i({\ensuremath{\textbf{X}}}) - O_i)^2}{2 \sigma^2_{O, i}}\label{eq:liso}.\end{aligned}$$
We use this form for the log-likelihood because in most cases spectroscopic surveys do not provide information about the correlations between the uncertainties of the different parameters. When this information or the PDFs of the spectroscopic parameters are known, it can be included in $L_{iso}$.
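A minimal sketch of $L_{iso}$ under the independent-Gaussian assumption above (the star and model values are hypothetical):

```python
import numpy as np

def l_iso(observed, model, sigma):
    """Half chi-square distance between observed and model
    (Teff, log g, [Fe/H]), assuming independent Gaussian errors."""
    o, mdl, s = (np.asarray(x, dtype=float) for x in (observed, model, sigma))
    return float(np.sum((mdl - o) ** 2 / (2.0 * s ** 2)))

# Hypothetical star (observed) and isochrone model values:
L = l_iso([4750.0, 2.5, -0.2], [4800.0, 2.45, -0.15], [100.0, 0.1, 0.1])
```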
$L_{sed}$ is a measure of the similarity between the observed spectral energy distribution (SED) (the set of $m_\lambda$ obtained from photometric surveys) and that predicted by the isochrone model for ${\ensuremath{\textbf{X}}}$. Visible magnitudes $m_\lambda$ come from 2MASS and AllWISE. These magnitudes are related to the absolute magnitudes $M_\lambda$ in ${\ensuremath{\textbf{O}}}'$ by the following relation: $$m_\lambda = \mu_d + M_\lambda + A_\lambda = \mu_d + M_\lambda + C_\lambda A_K,$$ where the extinction in band $\lambda$ is defined as $A_\lambda = \frac{R_\lambda}{R_K} A_K = C_\lambda A_K$, with the extinction coefficients $R_\lambda$ taken from [@Extinction] and summarised in . Therefore, to compare observed visible and model absolute magnitudes we need to know the distance modulus $\mu_d$ and the extinction $A_K$ in the direction of the star. The latter can be obtained from extinction maps or by comparing magnitudes in different bands. We chose the second method and calculated the extinction value for each star using photometric infrared data. This allowed us to take into account variations of extinction that may occur on scales smaller than the typical map resolution. We were also able to calculate the extinction for any position on the sky and any distance, whereas detailed three-dimensional extinction maps exist only for the nearest kiloparsec [@2012AstL...38...87G] or for the Galactic plane [@2014MNRAS.443.2907S], which is not sufficient for our purpose. A more recent three-dimensional map by @2015ApJ...810...25G covers a large fraction of the sky, but a full-sky three-dimensional extinction map is still not available.
We use 2MASS and AllWISE data because infrared bands are much less affected by interstellar extinction. Extinction in optical bands is generally higher than in the infrared and can show stronger variations between different points on the sky [see a discussion in @2011ApJ...739...25M]. By using infrared data alone we increased the precision of our distance estimates at the cost of decreasing the precision of the extinction estimate. Since we focus on distances, this seems a fair trade.
Band Value of $R_\lambda$ Value of $C_\lambda$
------ ---------------------- ----------------------
J 0.720 2.35
H 0.460 1.5
K 0.306 1
W1 0.180 0.59
W2 0.160 0.52
: Values of the extinction coefficients $R_\lambda$ and $C_\lambda$ used for 2MASS and AllWISE photometry. Values were taken from @Extinction[]{data-label="tbl:rlambda"}
For $L_{sed}$ we use the following expression: $$L_{sed} = \sum_\lambda \frac{(m_\lambda - M_\lambda - C_\lambda A_K - \mu_d)^2}{2 \sigma_{m_\lambda}^2} - V_{corr}(\mu_d),$$ where the summation is carried out over all bands for which photometry is available for a given star, and $V_{corr}(\mu_d)$ is a volume correction. We introduce the volume correction to compensate for the fact that with a given field of view we probe a larger space volume at larger distances than at smaller distances. See a discussion of the effect of the volume correction in . Using the relation between distance modulus and distance ($d = 10^{0.2\mu_d + 1}$), we can write $$V_{corr}(\mu_d) = \log d^2 = \log 10^{2 (0.2 \mu_d + 1)} = (0.4 \mu_d + 2) \log 10.$$
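Putting the two pieces together, $L_{sed}$ for a single model can be sketched as follows (band order and input values are illustrative):

```python
import numpy as np

def l_sed(m_obs, M_abs, C, sigma_m, mu_d, A_K):
    """L_sed for one model: photometric half chi-square minus the
    volume correction V_corr = (0.4 * mu_d + 2) * ln(10)."""
    m_obs, M_abs, C, sigma_m = map(np.asarray, (m_obs, M_abs, C, sigma_m))
    chi2_half = np.sum((m_obs - M_abs - C * A_K - mu_d) ** 2
                       / (2.0 * sigma_m ** 2))
    v_corr = (0.4 * mu_d + 2.0) * np.log(10.0)
    return float(chi2_half - v_corr)
```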
We use both $L_{iso}$ and $L_{sed}$ in $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}})$. Therefore, on top of the spectroscopic parameters we utilise additional information, namely the SED of the star. The drawback is that this also brings in systematic errors, both from the stellar spectra modelling in PARSEC and from possibly large errors in photometry in the case of a mismatch between spectroscopic and photometric surveys.
Probability distribution functions {#sec:pdf}
----------------------------------
In order to get the PDF in each parameter, we need to marginalise a multi-dimensional PDF of output parameters, $P({\ensuremath{\textbf{X}}}|{\ensuremath{\textbf{O}}})$, defined in , over all other parameters. For example, for log(age) $\tau$ one has to calculate $$P(\tau) = \iiiint P({\ensuremath{\textbf{X}}}) P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}) dM d{[\rm{Fe/H}]}d{\ensuremath{\textbf{X}}}_p,$$ with the integral taken over the whole parameter space. In practice, we have a discrete sample of models from isochrones. So we can replace $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}})$ with the sum of delta functions $$
P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}) = \sum_j P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{m,j}, {\ensuremath{\textbf{X}}}_p) {\delta_{\tau,j}}{\delta_{M,j}}{\delta_{{[\rm{Fe/H}]},j}}w_j,\label{eq:pox}$$ where we write for brevity ${\delta_{\tau,j}}= \delta(\tau_j - \tau)$, ${\delta_{{[\rm{Fe/H}]},j}}= \delta({[\rm{Fe/H}]}_j - {[\rm{Fe/H}]})$, ${\delta_{M,j}}= \delta(M_j - M)$. Here we have to use volumes represented by each model $w_j$ from , which reflect the volume of the parameter space represented by the model. The summation is carried out over all models and ${\ensuremath{\textbf{X}}}_{m,j} = (\tau_j, M_j, {[\rm{Fe/H}]}_j)$ is a vector of parameters of the model $j$. Therefore we can write, using equations \[eq:prior\] and \[eq:pox\], $$\begin{aligned}
P(\tau) = \sum_j \iiiint & F_{IMF}(M) P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{m,j}, {\ensuremath{\textbf{X}}}_p) {\delta_{\tau,j}}{\delta_{M,j}}{\delta_{{[\rm{Fe/H}]},j}}\nonumber \\
& \times w_j d M\,d{[\rm{Fe/H}]}\,d{\ensuremath{\textbf{X}}}_p, \label{eq:p_tau}\end{aligned}$$ with again the summation carried out over all models. We need to keep integration over ${\ensuremath{\textbf{X}}}_p$, because both $\mu_d$ and $A_K$ are continuous values.
We can make two important simplifications here. First, there is no need to sum over all models, because for most of them $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}})$ is very small. We chose a threshold of $L_{iso} < 8$, in which case [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{} are within combined 4-sigma uncertainties of the observed values. We verified that increasing the threshold from 8 to 12.5 (going from a combined 4-sigma to a 5-sigma uncertainty threshold) leads to marginal changes in the output: parameter estimates change by more than 3% for fewer than 2% of the stars. Because models are selected from a three-dimensional space, this comes at the cost of doubling the number of models to be considered. Decreasing the likelihood threshold, however, leads to more significant changes in the resulting parameters.
Second, $L_{sed}$ is a quadratic form in $\mu_d$ and $A_K$. Therefore, for a given model $j$, $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{p,j}) = \exp(-L_{sed})$ is a bivariate Gaussian distribution. The location of the maximum of this distribution can be found by solving the system of equations as follows: $$\large{
\begin{cases}
{\ensuremath{\frac{\partial {L_{sed}}}{\partial {\mu_d}}}} &= 0 \\
{\ensuremath{\frac{\partial {L_{sed}}}{\partial {A_K}}}} &= 0
\end{cases}}.$$ This is equivalent to the following set of equations: $$\label{eq:old_system}
\large{
\begin{cases}
\sum_\lambda {\ensuremath{\frac{1}{\sigma_{m_\lambda}^2}}}\mu_d + \sum_\lambda {\ensuremath{\frac{C_\lambda}{\sigma_{m_\lambda}^2}}} A_K &= \sum_\lambda {\ensuremath{\frac{m_\lambda - M_\lambda}{\sigma_{m_\lambda}^2}}} - 0.4 \log 10 \\
\sum_\lambda {\ensuremath{\frac{C_\lambda}{\sigma_{m_\lambda}^2}}} \mu_d + \sum_\lambda {\ensuremath{\frac{C_\lambda^2}{\sigma_{m_\lambda}^2}}} A_K &= \sum_\lambda {\ensuremath{\frac{C_\lambda(m_\lambda - M_\lambda)}{\sigma_{m_\lambda}^2}}},
\end{cases}}$$ which is solved for $\mu_d$ and $A_K$. If $A_K < 0$, which is not physical, or if only one magnitude is available we set $A_K = 0$ and obtained $\mu_d$ from the first part of . By doing so we increase $L_{sed}$ for a given model, which decreases the contribution of this model to the PDFs. In some cases $A_K$ is zero for all models for a star. This indicates either that the extinction for this star is statistically indistinguishable from zero or that there is a mismatch between spectral and photometric data, which results in using visible magnitudes from a different star. In the first case we still produce a reliable distance estimate, while in the second case the obtained $L_{sed}$ is high and the quality of the result is low (see Section \[sec:p\_best\_p\_sed\]).
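The 2×2 linear system above, including the $A_K \geq 0$ fallback, can be sketched with `numpy.linalg.solve`; the function name and demo inputs are illustrative.

```python
import numpy as np

LN10 = np.log(10.0)

def fit_mu_ak(m_obs, M_abs, C, sigma_m):
    """Solve the 2x2 linear system for (mu_d, A_K) that maximises
    exp(-L_sed); fall back to A_K = 0 when the solution would give
    unphysical negative extinction."""
    m_obs = np.asarray(m_obs, dtype=float)
    M_abs = np.asarray(M_abs, dtype=float)
    C = np.asarray(C, dtype=float)
    s2 = np.asarray(sigma_m, dtype=float) ** 2
    dm = m_obs - M_abs
    a = np.array([[np.sum(1.0 / s2), np.sum(C / s2)],
                  [np.sum(C / s2), np.sum(C ** 2 / s2)]])
    b = np.array([np.sum(dm / s2) - 0.4 * LN10,
                  np.sum(C * dm / s2)])
    mu_d, a_k = np.linalg.solve(a, b)
    if a_k < 0.0:  # unphysical: fix extinction to zero, re-derive mu_d
        a_k = 0.0
        mu_d = (np.sum(dm / s2) - 0.4 * LN10) / np.sum(1.0 / s2)
    return float(mu_d), float(a_k)
```

With noiseless synthetic magnitudes the solution recovers the input distance modulus and extinction up to the small deterministic shift introduced by the volume-correction term.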
The covariance matrix of $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{p,j})$ is exactly the inverse of the Hessian matrix $H$ of $L_{sed}$, i.e. $$\large{
H =
\begin{vmatrix}
\frac{\partial^2 L_{sed}}{\partial \mu_d^2} & \frac{\partial^2 L_{sed}}{\partial \mu_d \partial A_K} \\
\frac{\partial^2 L_{sed}}{\partial \mu_d \partial A_K} & \frac{\partial^2 L_{sed}}{\partial A_K^2}
\end{vmatrix} =
\begin{vmatrix}
\sum_\lambda{\ensuremath{\frac{1}{\sigma_{m_\lambda}^2}}} & \sum_\lambda{\ensuremath{\frac{C_\lambda}{\sigma_{m_\lambda}^2}}} \\
\sum_\lambda{\ensuremath{\frac{C_\lambda}{\sigma_{m_\lambda}^2}}} & \sum_\lambda{\ensuremath{\frac{C_\lambda^2}{\sigma_{m_\lambda}^2}}}
\end{vmatrix}
}.$$ It is important to note that $H$ depends only on the $C_\lambda$, which are constants, and on the photometric uncertainties $\sigma_{m_\lambda}$; thus $H$ is constant for a given star.
The width of the $L_{sed}$ distribution in $\mu_d$ for a given model is thus $\Delta_{\mu_d} = \sqrt{H^{-1}_{0,0}}$, which is of the order of $\sigma_m$ and about an order of magnitude smaller than the typical uncertainty in $\mu_d$ that we derive. This is not true for the extinction, but we do not focus on deriving high-quality extinctions; moreover, tests show that the error this simplification introduces into the mean extinction values is small. Furthermore, there is an obvious correlation between $\mu_d$ and $A_K$, but we ignore it here. This is justified because $\Delta_{\mu_d}$ is approximately one order of magnitude smaller than the typical uncertainty in $\mu_d$ for a star, so the correlation between the derived $\mu_d$ and $A_K$ in the two-dimensional PDF $P(\mu_d, A_K)$ for a given star is dominated by the scatter in $\mu_d$ and $A_K$ among the models used to build the PDF, rather than by the correlation of $\mu_d$ and $A_K$ for each model.
Because $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_p)$ is a bivariate Gaussian function, the integral over $d{\ensuremath{\textbf{X}}}_p$ in is proportional to the value of $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_p)$ at the location of its maximum, derived in ; the proportionality factor $2\pi\sqrt{\det H^{-1}}$ is the same for all models of a given star. So we can replace the integral in with a delta function $$\begin{aligned}
P(\tau) &= \sum_j \iiiint F_{IMF}(M) P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}) {\delta_{\tau,j}}{\delta_{M,j}}{\delta_{{[\rm{Fe/H}]},j}}\nonumber \\
& \times \delta({\ensuremath{\textbf{X}}}_{p,j} - {\ensuremath{\textbf{X}}}_p) w_j d M d {[\rm{Fe/H}]}d {\ensuremath{\textbf{X}}}_p \label{eq:pdf} \\
& = \sum_j P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{m,j}, {\ensuremath{\textbf{X}}}_{p,j}) F_{IMF}(M_j) \delta(\tau_j - \tau) w_j, \nonumber \end{aligned}$$ where ${\ensuremath{\textbf{X}}}_{p,j}$ is the solution of for model $j$. A similar equation can be written for $P(M)$.
For $\mu_d$ and $A_K$ each model contributes to the PDF a Gaussian summand with a width of $\Delta_{\mu_d}$ and $\Delta_{A_K} = \sqrt{H^{-1}_{1,1}}$, respectively. We can correct for using delta function in place of a bivariate Gaussian for $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_p)$ by adding a Gaussian smoothing multiplier with the corresponding width
$$\begin{aligned}
P(\mu_d) &= \sum_j P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{m,j}, {\ensuremath{\textbf{X}}}_{p,j}) F_{IMF}(M_j) w_j e^{-\frac{(\mu_{d,j} - \mu_d)^2}{2 \Delta_{\mu_d}^2}} \\
P(A_K) &= \sum_j P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{m,j}, {\ensuremath{\textbf{X}}}_{p,j}) F_{IMF}(M_j) w_j e^{-\frac{(A_{K,j} - A_K)^2}{2 \Delta_{A_K}^2}}.\end{aligned}$$
\[eq:pdf\_mu\]
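As a sketch of the smoothed marginal PDF above: each surviving model contributes a Gaussian of width $\Delta_{\mu_d}$ centred on its best-fitting distance modulus, scaled by the model weight. `pdf_mu` and its arguments are hypothetical names, and the result is left un-normalised.

```python
import numpy as np

def pdf_mu(grid, mu_models, weights, delta_mu):
    """Un-normalised P(mu_d) on a grid: a weighted sum of Gaussians,
    one per model, each of width delta_mu (= Delta_mu_d).

    `weights` stands for the combined factor
    P(O | X_m,j, X_p,j) * F_IMF(M_j) * w_j of each model.
    """
    g = np.asarray(grid, dtype=float)[:, None]
    mu = np.asarray(mu_models, dtype=float)[None, :]
    w = np.asarray(weights, dtype=float)[None, :]
    return (w * np.exp(-(mu - g) ** 2 / (2.0 * delta_mu ** 2))).sum(axis=1)
```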
Quality of model fit to the data {#sec:p_best_p_sed}
--------------------------------
It is important to quantify how well a set of models represents the observed parameters of a star. To accomplish this, we used the $\chi^2$ distribution to obtain $p$ values from our log-likelihoods. We characterise our model quality by the $p$ value corresponding to the value of $\chi^2 = L_{iso}+L'_{sed}$ for the model with the highest $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{m,j}, {\ensuremath{\textbf{X}}}_{p,j})$. Here we use $L'_{sed} = L_{sed} + V_{corr}(\mu_d)$; adding $V_{corr}(\mu_d)$ removes the volume correction, which is necessary because $V_{corr}(\mu_d)$ adds a non-$\chi^2$ summand that depends on $\mu_d$. Thus, the $\chi^2$ value used to compute the $p$ value is not the lowest possible value, but the one corresponding to the model with the highest $P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_{m,j}, {\ensuremath{\textbf{X}}}_{p,j})$, i.e. the model closest to the observables. The number of degrees of freedom in this case equals the number of observables, which is the number of available magnitudes plus three (for the temperature, surface gravity, and metallicity dimensions). This $p$ value is designated $p_{best}$. Low values of $p_{best}$ can be caused either by observables falling outside the range covered by the models or by inconsistencies between the observed stellar parameters and the observed SED. We flagged data with $p_{best} < 0.1$ (see Section \[sec:quality\]).
In addition to $p_{best}$, we quantify how well our models represent the observed SED. We use the same model as for $p_{best}$ and report as $p_{sed}$ the $p$ value corresponding to the chi-square value $\chi^2 = L'_{sed}$. The number of degrees of freedom in this case is equal to the number of available magnitudes. Low values of $p_{sed}$ might be caused, for example, by a mismatch between spectral and photometric data, which results in using visible magnitudes from a different star. Another possible reason is a problematic spectral parameter estimation, which makes ${\ensuremath{T_{\rm{eff}}}}$ inconsistent with the SED from photometry. We flagged data with $p_{sed} < 0.1$ (see Section \[sec:quality\]).
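Both quality measures reduce to chi-square survival-function evaluations. A sketch with `scipy.stats.chi2`, following the text's prescription for the statistics and degrees of freedom (the function name is illustrative):

```python
from scipy.stats import chi2

def quality_p_values(l_iso_best, l_sed_prime_best, n_mags):
    """p_best and p_sed for the best-fitting model.

    The chi-square statistics and degrees of freedom follow the text:
    chi2 = L_iso + L'_sed with dof = n_mags + 3 for p_best, and
    chi2 = L'_sed with dof = n_mags for p_sed.
    """
    p_best = chi2.sf(l_iso_best + l_sed_prime_best, df=n_mags + 3)
    p_sed = chi2.sf(l_sed_prime_best, df=n_mags)
    return float(p_best), float(p_sed)
```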
Calculating final values {#sec:final_values}
------------------------
From we can define a weight for each model $j$ as $$\label{eq:model_weight}
W_j = P({\ensuremath{\textbf{O}}}|{\ensuremath{\textbf{X}}}_j)F_{IMF}(M_j) w_j = e^{-L_{iso}-L_{sed}} w_j F_{IMF}(M_j).$$
The PDF in each parameter can thus be calculated as a distribution of model parameters with weights $W_j$. For the PDFs in $\mu_d$ and $A_K$ we smooth the histograms with a Gaussian kernel of width $\Delta_{\mu_d}$ or $\Delta_{A_K}$, respectively (see ).
### Determination of unimodal sub-PDFs
For each combination of stellar mass, age, and metallicity, PARSEC models provide a single combination of effective temperature and surface gravity. The transition from effective temperature, surface gravity, and metallicity to stellar mass and age is, however, non-unique. For a given combination of [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{} with their uncertainties it is possible to find more than one corresponding model with different combinations of age and mass. For example, red clump stars and red giant stars can have similar spectral parameters [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{}, but different ages, masses, and internal structures. Therefore, the distributions in age, mass, and absolute magnitude (and thus distance) are in some cases different from Gaussian. This is illustrated in , where we show typical PDFs in log(age), mass, and distance modulus for two stars. Some of the distributions shown are multimodal, with two or more peaks. Reporting mean values and standard deviations does not capture this properly. Mode values, as used by [@2014MNRAS.445.2758R], give the value of the highest peak only. Full distributions can be provided, but they are often considered too complex for further analysis. We suggest an intermediate solution: splitting the PDF into several unimodal sub-PDFs (USPDFs), each described by a unimodal function, under the assumption that each USPDF represents a group of models with similar stellar structure.
We split all models into three evolutionary stages, described in Section \[sec:iso\], i.e. pre-core-helium-burning, core-helium-burning, and post-core-helium-burning stars (plotted in with red, blue, and yellow, respectively). Splitting our results this way is beneficial when isochrones from different evolutionary stages overlap for a given ${\ensuremath{T_{\rm{eff}}}}, \log g, {[\rm{Fe/H}]}$ combination. Without this split, we would combine values from substantially different evolutionary stages, which is not physical.
Splitting in evolutionary stages is not enough because, due to the curvature of isochrones, there can be sub-groups of physically different models within one stage. A good example is stage I, which contains both main-sequence and giant stars. This results in multimodal distributions of models in the space of stellar parameters. An example of such a situation is given in (see their Figures 1 and 2). To split a multimodal distribution into several unimodal distributions we applied an additional empirically derived routine to our PDFs, which is described in . This routine works in the vast majority of cases. Those cases in which our splitting of the PDFs breaks down typically have too few models to produce a histogram (such cases are given the quality flag “N”, see Section \[sec:quality\]).
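The splitting routine itself is described elsewhere, but its core idea can be illustrated with a toy version that cuts a one-dimensional histogram at its local minima (a simplified stand-in, not the actual empirical routine):

```python
def split_unimodal(hist):
    """Split a histogram (list of counts) at its local minima, yielding
    index ranges that each contain a single peak -- a toy stand-in for
    the empirical splitting routine referenced in the text."""
    cuts = [0]
    for i in range(1, len(hist) - 1):
        if hist[i] < hist[i - 1] and hist[i] <= hist[i + 1]:
            cuts.append(i)  # local minimum: start a new unimodal part
    cuts.append(len(hist))
    return [(cuts[k], cuts[k + 1]) for k in range(len(cuts) - 1)]

# A bimodal toy histogram: peaks at indices 2 and 7, valley at index 4
hist = [1, 4, 9, 5, 2, 3, 6, 10, 7, 2]
print(split_unimodal(hist))  # -> [(0, 4), (4, 10)]
```

Each returned index range would then be treated as one USPDF and fitted separately.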
The overall weight $V_m$ for a USPDF $m$ is defined as a ratio of the sum of weights of models within the USPDF and the sum of weights of all models $$V_m = \frac{\sum_{j \in m}W_j}{\sum W_j}.$$
![image](PDF_example.png){width="90.00000%"}
### Output values {#sec:outvalues}
We provide output for each USPDF such that it is possible to reproduce the PDF in each parameter we are interested in: mass ${\ensuremath{M}}$, logarithm of age $\tau$, distance modulus $\mu_d$, distance $d$, parallax $\pi$, and extinction $A_K$. Even for a unimodal distribution the mean, median, or mode might be a poor estimate of the value of interest if the distribution is non-symmetric. Mode and median values might be less biased, but they should be used with care, as they are not proper moments of the PDF and many statistical methods rely on moments. To provide a simple representation of each USPDF (for each parameter), we fit them with a Gaussian, a skewed Gaussian, a truncated Gaussian, a modified truncated exponential distribution (MTED), and a truncated [Student’s t-distribution]{} (see definitions of these functions in Appendix \[app:functions\]). For the truncated functions the upper and lower truncation limits were not fitted, but were set to the upper and lower limits of the considered USPDF. We selected the function that gives the lowest symmetric Kullback–Leibler divergence value, which is a measure of the information gain $$D_{KL} = \sum_i H_i \log \frac{H_i}{F_i},$$ where $H_i$ are histogram counts and $F_i$ are fitted function values.
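The selection step can be sketched numerically; the histogram and the two candidate fitted curves below are invented, and the divergence is evaluated in the one-directional form written above:

```python
import math

def kl_divergence(H, F):
    """D_KL = sum_i H_i log(H_i / F_i) for normalised histogram counts H
    and fitted-function values F (bins with H_i = 0 contribute nothing)."""
    return sum(h * math.log(h / f) for h, f in zip(H, F) if h > 0)

def best_fit(H, candidates):
    """Pick the candidate fit (name -> values on the same bins) with the
    lowest divergence from the histogram -- a sketch of the selection."""
    return min(candidates, key=lambda name: kl_divergence(H, candidates[name]))

# Toy normalised histogram and two hypothetical fitted curves
H = [0.1, 0.2, 0.4, 0.2, 0.1]
fits = {
    "G": [0.1, 0.25, 0.3, 0.25, 0.1],  # broader Gaussian
    "T": [0.1, 0.2, 0.38, 0.22, 0.1],  # closer truncated Gaussian
}
print(best_fit(H, fits))  # -> T
```

In the catalogue the winning function's single-letter code is what ends up in the `_fit` column.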
Truncated functions were included because there is a natural upper limit on the age of a star, namely the age of the Universe, and therefore a lower limit on the mass of a star that has left the main sequence, which is approximately $0.7 {\ensuremath{\rm{M}_\odot}}$ if we consider the full range of metallicities. These limits produce sharp cut-offs in the histograms, making truncated functions a natural choice.
A modified truncated exponential distribution in log-age is equivalent to a flat distribution in ages, thus such a fit indicates that age is poorly constrained. Such PDFs are typical for main-sequence stars, where, as expected, it is hard to constrain age from spectrophotometric data.
In rare cases (less than $1\%$) the fit did not converge for any of the five fitting functions. This is primarily caused by long tails in the distributions or insufficient data for a proper fit. In such cases we report only the mean and standard deviation of the data as fit parameters.
An important property of our result is that the values of distance, mass, and log(age) for models are strongly correlated. We used the fact that distance modulus, log(age), and logarithm of mass have a nearly linear correlation within every USPDF in most cases. We report the coefficients (slope $a$ and intercept $b$) of a weighted linear fit and the scatter around it for three relations for each USPDF: $\tau = a_1 \mu_d + b_1$ (distance modulus versus logarithm of age; red lines on left panels of ), $\mu_d = a_2 \tau + b_2$ (logarithm of age versus distance modulus; blue lines on left panels of ), and $M = 10^{a_3 \mu_d + b_3}$ (distance modulus versus logarithm of mass; red lines on right panels of ). An illustration of the correlations is given in , where we show the two-dimensional PDFs for several stars and our fits to them. From the right-hand side panels it is clear that the relation between distance modulus and logarithm of mass is close to linear; because the mass is plotted on a linear scale, our fits are not straight lines. The relation between distance modulus and log(age) is weak for main-sequence stars and lower giant branch stars. The shape of the two-dimensional PDF can be quite complex, as in panels a and c of . For these cases the scatter is large and our relations do not work. For giant stars these correlations are much more pronounced, as can be seen in panels e and g of . Correlations between distance modulus, log(age), and log(mass) can be used, for example, if a new distance estimate $\mu'_d$ is obtained from some external source (like Gaia) for a star in our catalogue. We verified that our estimates for mass and log(age) can then be corrected by the value of the slope times the difference between the externally determined distance modulus and our estimate as follows: $$\begin{aligned}
\tau' &=& \tau + a_1 (\mu'_d - \mu_d) \\
M' &=& M \times 10^{a_3 (\mu'_d - \mu_d)}.\end{aligned}$$ Applications of these relations are beyond the scope of this work and will be presented in a future publication.
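Applying these corrections to a catalogue entry is straightforward; in this sketch the catalogue values and the slopes $a_1$, $a_3$ are invented for illustration:

```python
def correct_for_distance(tau, mass, mu_d, mu_d_new, a1, a3):
    """Shift log(age) and mass along the reported linear relations when an
    external distance modulus mu_d_new replaces the catalogue value mu_d."""
    tau_new = tau + a1 * (mu_d_new - mu_d)
    mass_new = mass * 10 ** (a3 * (mu_d_new - mu_d))
    return tau_new, mass_new

# Hypothetical catalogue row: tau = 9.8, M = 1.2 Msun, mu_d = 10.5 mag,
# slopes a1 = -0.3, a3 = 0.15; an external source gives mu_d' = 10.3
tau_new, mass_new = correct_for_distance(9.8, 1.2, 10.5, 10.3, -0.3, 0.15)
print(round(tau_new, 2), round(mass_new, 2))  # log(age) up, mass down
```

The correction is exact only to the degree that the USPDF is well described by the linear fits, i.e. when the reported scatter around the relations is small.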
![image](TwoD_PDF.png){width="80.00000%"}
Summing up, for each USPDF $m$ and each stellar parameter $Y_i \in ({\ensuremath{M}}, \tau, \mu_d, d, \pi, A_K)$, designated $Y_{i, m}$, we chose to provide the following quantities:
- A weighted mean (catalogue column suffix `_mean`) of model values $Y_{i,j}$ for all models, $$Y_{i,m} = \frac{\sum_{j \in m} W_j Y_{i,j}}{\sum_{j \in m}W_j},$$ where the summation is carried out over all models within USPDF $m.$
- A weighted standard deviation (suffix `_err`), $$\sigma_{Y_{i, m}} = \sqrt{\frac{\sum_{j \in m} W_j (Y_{i,j} - Y_{i, m})^2}{\sum_{j \in m}W_j}}
.$$
- A mode of the USPDF (suffix `_mode`).
- A weighted median value (suffix `_median`).
- A character indicating which fitting function was chosen: “G” for Gaussian, “S” for skewed Gaussian, “T” for truncated Gaussian, “L” for MTED, “P” for truncated [Student’s t-distribution]{}, “E” if the fit failed for all five functions, and “N” if there was not enough data for a fit (suffix `_fit`).
- Parameters for a chosen fit (suffix `_par`). The first two values are location and shape for the chosen best fitting function. For a Gaussian function, by definition, location parameter is equal to the mean value and shape parameter to the variance. If the chosen function is a skewed Gaussian then the third value is the skew value. If the chosen function is a truncated Gaussian or MTED, then third and fourth values are lower and upper limits. If the chosen function is [Student’s t-distribution]{} then the third value is the number of degrees of freedom and the fourth and fifth values are lower and upper limits.
- One- and three-sigma confidence intervals (suffixes `_low_1sigma, _up_1sigma, _low_3sigma, _up_3sigma`). These are defined as the region containing 68.27% (for one-sigma uncertainties) or 99.73% (for three-sigma uncertainties) of the USPDF, positioned to minimise its span. By construction, such a confidence interval always includes the mode value. For a Gaussian distribution this is equivalent to a range centred on the mean value with a half-width of one or three standard deviations.
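A discrete, weighted version of the minimal-span interval can be sketched as follows (a brute-force illustration over model samples, not the catalogue code):

```python
def min_span_interval(samples, weights, fraction=0.6827):
    """Shortest interval containing `fraction` of the total weight,
    scanned over sorted samples -- a discrete approximation of the
    minimal-span confidence intervals described above."""
    order = sorted(range(len(samples)), key=lambda i: samples[i])
    xs = [samples[i] for i in order]
    ws = [weights[i] for i in order]
    target = fraction * sum(ws)
    best = (float("inf"), None)
    for lo in range(len(xs)):
        acc = 0.0
        for hi in range(lo, len(xs)):
            acc += ws[hi]
            if acc >= target:  # enough weight: record span, try next start
                best = min(best, (xs[hi] - xs[lo], (xs[lo], xs[hi])))
                break
    return best[1]

# Toy sample: a tight cluster plus an outlier tail
xs = [1.0, 1.1, 1.2, 1.3, 5.0]
ws = [1.0, 1.0, 1.0, 1.0, 1.0]
print(min_span_interval(xs, ws, fraction=0.6))  # -> (1.0, 1.2)
```

Because the interval is anchored to the densest part of the distribution, it naturally contains the mode, as stated above.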
We report all USPDFs with weights $V_m$ higher than $0.03$. Integer priority values starting from 0 were assigned to each USPDF in order of decreasing weights $V_m$. We list all measures provided for our catalogue in .
In we show examples of different USPDFs and fits to them. For the first star (left column) three different evolutionary stages are possible. For evolutionary stage I a truncated Gaussian is required to fit the log(age) distribution, and for evolutionary stage II a skewed Gaussian is needed to fit the distributions in mass and distance modulus. For the second star (right column) mainly stage II is possible, but the distribution for this stage can be split into two parts. We need a truncated [Student’s t-distribution]{} to fit the histogram for the higher-age solution. The small USPDF visible for masses around $1 {\ensuremath{\rm{M}_\odot}}$ and log(age) of $9.3$ for stage I was excluded because its weight $V_m$ is below the accepted $0.03$ threshold.
[lcp[4cm]{}]{}\
Column name & Units & Description\
id & & Unique ID of the star from the input data\
stage & & Stage number (I, II or III)\
uspdf\_priority & & Priority order of a given USPDF (starting from 0)\
uspdf\_weight & & Weight $V_m$ of a given USPDF\
total\_uspdfs & & Number of USPDFs with $V_m > 0.03$\
p\_best & & Probability for a best-fitting model (see Section \[sec:p\_best\_p\_sed\])\
p\_sed & & $p$-value from $\chi^2$ SED fit (see Section \[sec:p\_best\_p\_sed\])\
quality & & Quality flag (see Section \[sec:quality\])\
distance\_modulus$\dagger$ & mag & Distance modulus $\mu_d$\
distance$\dagger$ & kpc & Distance $d$\
parallax$\dagger$ & mas & Parallax $\pi$\
extinction$\dagger$ & mag & Extinction $A_K$ in 2MASS K-band\
mass$\dagger$ & ${\ensuremath{\rm{M}_\odot}}$ & Mass\
age$\dagger$ & log(yr) & Logarithm of age $\tau$\
\
dm\_age\_slope & & Slope of the relation\
dm\_age\_intercept & & Intercept of the relation\
dm\_age\_scatter & & Scatter of the relation\
\
age\_dm\_slope & & Slope of the relation\
age\_dm\_intercept & & Intercept of the relation\
age\_dm\_scatter & & Scatter of the relation\
\
dm\_mass\_slope & & Slope of the relation\
dm\_mass\_intercept & & Intercept of the relation\
dm\_mass\_scatter & & Scatter of the relation\
Role of age and extinction cuts {#sec:cuts}
-------------------------------
We chose to impose hard cuts on log(age) $\tau \leq 10.13$ and extinction $A_K \geq 0$. This is not an obvious choice, so we justify it here.
We first consider a star for which two equally good solutions are possible: one with $\tau = 8$ and one with $\tau = 10.7$, where all other parameters are equal (see ). If we do not use the hard cut on ages, both solutions are reported with good quality flags and equal weights $W_{1,2} = 0.5$ (blue lines in ). The unphysical age value for the second solution might be used as a sign of a problem either with the data or with the models. It is therefore likely that this solution, or even both solutions for this star, will be dropped from further study. If we, on the other hand, use the cut in log(age) $\tau \leq 10.13$, we keep both solutions, but their weights change. Only the tail of the USPDF for the second solution is retained. Because the sum of the USPDF weights still has to be unity, the weight of the first solution increases. For this example we get $W_1 = 0.89$ and $W_2 = 0.11$ (red lines in ). As a result, we get a “realistic” first solution with $\tau = 8$ and retain a part of the second solution. For the second solution, and, in general, for all cases when part of the USPDF is cut away, the mean of the USPDF is a poor measure of log(age), as it is biased towards lower values. But in such cases the USPDF is fitted with either a MTED, a truncated Gaussian, or a truncated [Student’s t-distribution]{}. So instead of a solution with high weight and a correct, but unphysical, mean log(age), we get a solution with lower weight and a biased mean value, but with a proper fitting function.
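The reweighting mechanism can be reproduced numerically. The widths of the two age peaks are not stated in the text, so the value of $\sigma$ below is an assumption; with an assumed $\sigma = 0.5$ dex the truncated tail happens to give weights close to the quoted ones:

```python
import math

def mass_below(mu, sigma, upper):
    """Fraction of a Gaussian N(mu, sigma^2) lying below `upper`."""
    return 0.5 * (1.0 + math.erf((upper - mu) / (sigma * math.sqrt(2.0))))

upper, sigma = 10.13, 0.5               # age cut and an assumed peak width
raw = [mass_below(8.0, sigma, upper),   # solution 1: tau = 8
       mass_below(10.7, sigma, upper)]  # solution 2: tau = 10.7
total = sum(raw)                        # renormalise so weights sum to 1
W1, W2 = raw[0] / total, raw[1] / total
print(round(W1, 2), round(W2, 2))  # -> 0.89 0.11
```

The first peak lies entirely below the cut and keeps all its probability mass; the second keeps only its low-age tail, so its weight shrinks accordingly.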
![Sketch of PDF in log(age) for a star with two possible ages without (blue lines) and with (red lines) hard upper limit on log(age). See text for details.[]{data-label="fig:cut"}](age_cut_example.png){width="48.00000%"}
We now consider the case when only one solution, with $\tau = 10.7$, is available for a given star. We can use such values of $\tau$ as an indication of a problem in the spectroscopic parameters, the photometry, or the isochrones. If the cut in log(age) $\tau \leq 10.13$ is applied, there are several possibilities. In some cases the PDF in log(age) follows an exponential distribution, which means that the age is poorly constrained for the given star and extending the range of possible ages does not improve this.
Another possibility is that models with $\tau \leq 10.13$ reproduce the observables and thus have $p_{best}$ close to unity; see . This means that we still have a reliable log(age) PDF for $\tau \leq 10.13$, but the mean, mode, and median values might be biased. Without the age cut, the mean, mode, and median log(age) values would be above $\tau = 10.13$, and such a solution would likely be excluded from further analysis, despite the fact that a fraction of it is reliable.
In yet another case the value for a best-fitting model probability $p_{best}$ will be small, which will indicate potential problems in either the data or with the models. Such cases will be flagged as unreliable with and without age cut.
The same arguments as discussed above apply to the cut in extinction. Moreover, the cut in extinction has a minor influence on the result, as extinction values are typically very small, i.e. about 10 times smaller than the derived uncertainty in the distance modulus. We verified that negative extinctions typically arise for faint stars, for which photometric uncertainties are large. In the vast majority of cases for which the derived value of extinction is negative, the value is still consistent with zero within uncertainties.
Tests: Comparison with other measurements {#sec:test}
=========================================
We tested our UniDAM tool in two ways. We first applied it to mock surveys (see Section \[sec:data\_mock\]). By doing that we checked the accuracy and performance of the tool and explored the effect of random perturbations added to input values. Then we proceeded with comparing our parameter measurements for real stars with those obtained by other groups and presented in the literature. The aim of these exercises was to check the quality of our estimates compared to results obtained by the consortia of the different surveys and the sensitivity of our results to priors.
Mock survey {#mock-survey}
-----------
We ran our UniDAM tool on all four mock surveys, described in Section \[sec:data\_mock\]. Knowing the input values allowed us to evaluate which of the reported measures, that is mean, median, or mode, is the best proxy. In agreement with we find that the mode is less biased than the mean or median, but produces slightly more outliers.
Mean, mode, and median values show similar qualitative patterns, so we used only mean output values of the highest weight USPDF for comparison. We compared derived mean values $X$ with input values $X_0$. We considered several measures of interest as follows:
- Median fractional uncertainty of derived value ${\ensuremath{\mathcal{M}\!\left( \frac{\sigma_X}{X} \right)}}$ ; this is an internal precision measure.
- Median relative deviation (median bias) of derived value ${\ensuremath{\mathcal{M}\!\left( \frac{\Delta X}{X} \right)}} = {\ensuremath{\mathcal{M}\!\left( \frac{X_0-X}{X} \right)}}$; this shows whether the values that we calculate are systematically offset with respect to the input.
- Median absolute relative deviation of derived value ${\ensuremath{\rm{M.A.D.}}}= {\ensuremath{\mathcal{M}\!\left( \left|\frac{\Delta X}{X} - {\ensuremath{\mathcal{M}\!\left( \frac{\Delta X}{X} \right)}}\right| \right)}}$; this shows how scattered our derived values are with respect to input. Median absolute deviation is a much better estimate of scatter than standard deviation in the presence of outliers [@Leys2013].
- Outlier fraction $O$; this is the fraction of stars for which the input value $X_0$ lies outside the three-sigma confidence interval. We use two values: $O_{best}$ is calculated using only the highest weight USPDF, whereas $O_{all}$ is calculated using all USPDFs (i.e. $X_0$ lies outside the three-sigma confidence intervals of all reported USPDFs).
Measures for all four mock surveys are listed in . For a normally distributed random variable, median absolute deviation $\sigma_{MAD}$ relates to standard deviation $\sigma$ as $\sigma \approx 1.4826\,\sigma_{MAD}$, therefore we expected median absolute relative deviation to be approximately equal to two-thirds of the median fractional uncertainty. As can be seen from the second and fourth columns of , this is the case for our data, which means that the distribution of the offsets is close to normal.
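The relation $\sigma \approx 1.4826\,\sigma_{MAD}$ used above can be checked with a quick simulation (illustrative only; the sample is synthetic):

```python
import math
import random

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def mad(xs):
    """Median absolute deviation around the median -- the robust scatter
    estimate used in the text."""
    m = median(xs)
    return median([abs(x - m) for x in xs])

random.seed(42)
draws = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(1.4826 * mad(draws))  # close to 1.0 for a unit Gaussian
```

Unlike the standard deviation, the MAD barely moves if a few of the draws are replaced by large outliers, which is why it is preferred here [@Leys2013].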
Outliers are expected for two reasons. First, a model with the “correct” (i.e. input) values might not belong to the highest weight USPDF. This is revealed by $O_{best}$. We detected that in about one to seven percent of the cases the input parameters are better recovered with the USPDF that has second (or even third) priority. This happens primarily in the upper part of the giant branch, where isochrone overlap is highest. This is inherent to the method; we seek a model that most likely represents the data. If the mock star was taken to be in some short phase of its evolution (thus having a low model weight $w_j$), chances are high that we assign the highest weight USPDF not to this phase but to a much longer phase with similar observables. Second, because the three-sigma range includes by definition $99.7\%$ of the data, we expect, due to our random perturbations added to input values, at least a fraction of $0.003$ of stars for which no USPDF recovers the “correct” values within the three-sigma confidence intervals (i.e. $O_{all} \gtrsim 0.003$). In fact, this fraction is slightly higher, of the order of $0.01-0.02$. We checked that this is caused by a combination of the perturbations of the input values and of mock-sample models being in a very short phase of evolution. In the latter case USPDFs might be pulled away from the “correct” solution by nearby models with higher weights. Another case in which no “correct” solution might be found is when the mock star is located on the edge of the parameter space covered by the models.
In cases where perturbations were added, the values of the bias, median absolute relative deviation, and outlier fractions increase. This increase is most prominent in the outlier fractions; this is caused by the fact that sometimes even a small perturbation of input parameters might change the priorities of USPDFs, thus changing the parameters of the highest weight USPDF by a large value.
[lrrrp[0.9cm]{}p[0.9cm]{}]{} Parameter & ${\ensuremath{\mathcal{M}\!\left( \frac{\sigma_X}{X} \right)}}$ & ${\ensuremath{\mathcal{M}\!\left( \frac{\Delta X}{X} \right)}}$ & ${\ensuremath{\rm{M.A.D.}}}$ & $O_{best}$ & $O_{all}$\
\
mass & 0.17 & -0.02 & 0.10 & 0.06 & 0.01\
age & 0.03 & 0.01 & 0.02 & 0.06 & 0.00\
distance & 0.13 & -0.02 & 0.08 & 0.06 & 0.00\
\
mass & 0.17 & -0.00 & 0.10 & 0.04 & 0.01\
age & 0.03 & 0.00 & 0.02 & 0.04 & 0.00\
distance & 0.13 & 0.01 & 0.08 & 0.03 & 0.00\
\
mass & 0.17 & -0.00 & 0.12 & 0.06 & 0.02\
age & 0.03 & 0.00 & 0.02 & 0.06 & 0.01\
distance & 0.13 & 0.00 & 0.11 & 0.05 & 0.01\
\
mass & 0.17 & -0.01 & 0.12 & 0.06 & 0.02\
age & 0.03 & 0.01 & 0.02 & 0.06 & 0.01\
distance & 0.13 & 0.00 & 0.11 & 0.06 & 0.01\
We also tested whether the distance modulus or the parallax provides a better estimate of distance than the distance value itself. We find that this is not the case: all three estimates give very similar precision, with the distance itself showing a slightly smaller fraction of outliers. This contradicts the statement of [@2014MNRAS.437..351B] that “the most reliable distance indicator is the expectation of parallax”. This might be because [@2014MNRAS.437..351B] compared their derived values with parallaxes from *HIPPARCOS*, and comparing parallaxes with parallaxes is likely less biased. We nevertheless provide all three estimates for each star in the output catalogue.
Literature {#sec:compare}
----------
We compared our results with values available in the literature. The results of this comparison are shown in and in , and are discussed here. The aim of this comparison is to show that our results are consistent with previous studies. In most cases these previous studies were based on similar data and methods, so the differences that appear are primarily due to the different models used and to differences in the details of the method implementation. The exceptions are the GCS parallaxes, which come from *HIPPARCOS*, and the APOKASC distances, derived with asteroseismic values of $\log g$, which are more precise than spectroscopic values. In both cases our results are consistent with the published data.
We verified that our extinction estimates are consistent with those provided in surveys. In fact, the differences between our extinction estimates and those in surveys are comparable to differences between extinctions derived with different methods for the same survey, for example, in the LAMOST-GAC data [@2017arXiv170105409X]. We do not provide a detailed analysis here, as the derivation of precise extinctions is beyond the scope of this work.
![image](Compare_input.png){height="0.95\textheight"}
[ccd[1.2]{}d[1.2]{}d[1.2]{}c]{} Survey & Value & [ ]{} & [ ]{} & [ ]{} & $O_{best}$\
GCS &$\pi$ & 0.13 & -0.065 & 0.03 & 0.03\
&${\ensuremath{M}}$ & 0.064 & 0.024 & 0.024 & 0.02\
&$\tau$ & 0.02 & -0.000 & 0.005 & 0.05\
APOKASC &$d$ & 0.11 & -0.017 & 0.05 & 0.15\
RAVE (1) & $\mu_{d, \textrm{Z}}$ & 0.044 & -0.011 & 0.050 & 0.20\
RAVE (2) &$\mu_{d, \textrm{B}}$ & 0.044 & -0.03 & 0.055 & 0.25\
&${\ensuremath{M}}$ & 0.15 & -0.016 & 0.12 & 0.17\
&$\tau$ & 0.027 & 0.009 & 0.018 & 0.13\
LAMOST-GAC &$d_{\textrm{emp}}$ & 0.20 & 0.1 & 0.08 & 0.03\
(main sample)&$d_{\textrm{iso}}$ & 0.20 & 0.02 & 0.063 & 0.05\
### GCS parallaxes, masses, and ages
The GCS [@2011AA...530A.138C and references therein] mainly covers nearby main-sequence stars. The big advantage is that for most of them parallaxes were measured by *HIPPARCOS*. The three top rows of and show differences between parallaxes, masses, and log(age)s from GCS and our estimate. We detected a small bias in parallaxes and masses but a negligible bias in log(age)s. Median absolute deviations are three times lower than fractional uncertainties, which means that our method is consistent with GCS results.
### Distances of APOKASC red giants {#sec:compare_apokasc}
[@2014MNRAS.445.2758R] have determined distances for about 2000 red giant stars from the APOKASC sample. Our distances are less precise than those of [@2014MNRAS.445.2758R] because we do not include asteroseismic data. This test therefore helps us estimate the quality of the distance estimates we make for the whole APOGEE sample, as stellar parameters in APOGEE DR13 were calibrated with the use of asteroseismic data from APOKASC. We predicted slightly larger distances ($-0.017$ relative offset), but both the bias and the scatter are well below the mean fractional uncertainty ($0.11$) of our derived distances. The origin of the bias is likely the difference in how the distance value is calculated when the distance PDF is multimodal. This is supported by the fact that for stars with unimodal PDFs we obtained a relative distance bias of less than $0.0025$.
### RAVE stars {#sec:compare_rave}
We ran the UniDAM tool on RAVE DR4 data [@2013AJ....146..134K] and compared these findings with the results of for distance moduli and @2014MNRAS.437..351B for distance moduli, log(age)s, and masses. We used DR4 data here, as these were used by and @2014MNRAS.437..351B. The relative difference between our distance estimates and $\mu_{d, \textrm{Z}}$ by is around $-0.01$. As compared to the @2014MNRAS.437..351B results ($\mu_{d, \textrm{B}}$), our distance moduli have a relative difference of $-0.03$. The median absolute deviations are large in both cases and are comparable to or larger than the mean relative uncertainties of our values.
The reason for the larger difference for $\mu_{d, \textrm{B}}$ is that @2014MNRAS.437..351B use strong priors on distances, metallicities, and ages coming from a model of the Galaxy. These priors are decreasing functions of the distance from the Galactic centre and from the Galactic plane; therefore they decrease with distance from the Sun for the majority of directions probed by RAVE. A prior that decreases with distance results in smaller estimates for stellar distances and thus slightly smaller masses and larger ages as compared to our results. Differences in log(age) are further enhanced by the age priors used by @2014MNRAS.437..351B.
As can be seen in panels *c* and *e* of , distributions of differences between our results for log(age)s and masses and @2014MNRAS.437..351B results are bimodal, with a secondary peak at approximately $-0.75$ in relative mass difference and $0.2$ in relative log(age) difference. The same can be seen in panel *d*. In panel *f* the second peak is out of the plotted range. This second peak contains about 12% of stars and is caused by a difference in the evolutionary stages accepted in @2014MNRAS.437..351B and by our UniDAM tool. A similar pattern but with a much smaller secondary peak can be seen with data from the mock survey (black histogram in panels c and d of ).
We show in the distributions of the median difference between our results and the @2014MNRAS.437..351B results for distance modulus and log(age) on the Hertzsprung-Russell diagram. We chose RAVE because it contains estimates for both distance modulus and log(age) and because it covers both main-sequence and giant stars. There is clearly a good agreement in both distance modulus and log(age) for the main-sequence stars and for a large fraction of the giant branch, including the red clump. The disagreement for pre-main-sequence stars and for large and hot (thus most massive) giants is primarily due to differences in the models and priors used. Similar plots can be produced for other datasets, revealing similar patterns.
![Hertzsprung-Russell diagrams of RAVE data showing colour differences between our results and RAVE results [@2014MNRAS.437..351B] for distance moduli (top panel) and log(age)s (bottom panel).[]{data-label="fig:RAVE"}](RAVE_distance_modulus.png){width="48.00000%"}
![Hertzsprung-Russell diagrams of RAVE data showing colour differences between our results and RAVE results [@2014MNRAS.437..351B] for distance moduli (top panel) and log(age)s (bottom panel).[]{data-label="fig:RAVE"}](RAVE_age.png){width="48.00000%"}
### LAMOST-GAC distances
We compared our results with the two distance estimates provided in [@2015RAA....15.1095L]. Our values are systematically smaller by a fraction of $0.1$ as compared to their “empirical” estimates based on the MILES library. We have much better agreement with their estimates based on isochrones from the Dartmouth Stellar Evolution Database ($0.02$ fractional difference). [@2015RAA....15.1095L] do not provide uncertainties for their distance estimates. The relative uncertainties of our distance estimates for LAMOST-GAC are higher than those built on data from other surveys due to the higher uncertainties in the spectral parameters, which lead to fractional uncertainties of $0.2$ in our distances.
Effect of the volume correction {#sec:priors}
-------------------------------
We ran tests to see how much the use of the volume correction (see ) affects our results. The volume correction can be seen as a distance prior that ensures a constant number density. In general, if the distance prior is a decreasing (increasing) function, the resulting distance is smaller (larger) than in the case of a flat prior. The size of this effect depends on the relative variation of the prior function within the uncertainty range of the parameter. We chose two datasets for the test: GCS and APOKASC giants [@2014MNRAS.445.2758R]. The GCS dataset contains primarily main-sequence stars with distances derived from *HIPPARCOS* parallaxes. These parallaxes are in most cases more precise than our distance measurements. The distances of the APOKASC giants were derived using asteroseismic data, and therefore should also be more precise than our measurements. We ran our UniDAM tool with and without the volume correction. We selected only the USPDFs with the highest weight for analysis in each case. We then explored how parallaxes, distances, masses, and log(age)s were affected by the volume correction. For multimodal cases it is important to note that the use of the volume correction might change the relative weights of the USPDFs, so that their priorities might also change. The result of our experiment is that in $7\%$ of cases for APOGEE the assigned evolutionary stage changed when we applied the volume correction. This did not happen for GCS, as the PDFs are unimodal in most cases for the main-sequence stars in that survey. By removing the volume correction we decreased the distance estimates in both datasets by a fraction of $0.032$ if the assigned evolutionary stage did not change. This is well below the median relative distance uncertainties that we have ($\approx 0.13$). The mass estimates are correlated with distance and decreased by a fraction of $0.03$; again, this is well below the relative uncertainties in mass that we find ($\approx 0.15$).
The logarithm of age estimates, which are anti-correlated with distance, increased, but only by a fraction of $0.005$ (log(age) fractional uncertainties $\approx 0.03$). So the conclusion here is that the volume correction has a measurable and well understood effect on measured parameters, but this effect is smaller than our typical parameter uncertainties. This effect is systematic and has to be taken into account when comparing with results obtained with distance priors; see for example . However, we expect the contribution from the (unknown) systematic uncertainties of spectroscopic measurements to be at least as high as the influence of the volume correction.
We also compared how the volume correction affects the agreement between our measurements and data from the literature, as described above in Section \[sec:compare\]. The effect of the volume correction is summarised in . For the GCS sample there is a clear advantage in using the volume correction. Without the volume correction our parallax estimates are lower than those in GCS by a fraction of $0.091$. If we use the volume correction, our parallax estimates increase on average, which improves the agreement (fractional difference of $0.059$; see the first row of ). The same applies to the log(age) and mass estimates. For the APOKASC sample there seems to be an opposite result, as we seem to overestimate the distance compared to [@2014MNRAS.445.2758R]. This is likely caused by the fact that the distances provided in [@2014MNRAS.445.2758R] are in fact modes of the probability density function. If we use modes instead of means for our distance USPDFs, then we get a relative difference of less than $10^{-3}$ if the volume correction is included and a relative difference of around $0.014$ if no volume correction is used.
The effect of the volume correction increases gradually with increasing distances, where our distance uncertainty is larger. This is caused by an increase in the relative variation of the value of the volume correction within the distance uncertainty. The effect of volume correction is approximately proportional to a square of the uncertainty in distance modulus. If the assigned evolutionary stage changes, the estimates of distance, mass, and log(age) can change by a large amount, sometimes by more than $50\%$.
  Survey    Value    Without volume correction   With volume correction
  --------- -------- --------------------------- ------------------------
  GCS       $\pi$    $-0.091$                    $-0.059$
            $M$      $0.027$                     $0.015$
            $\tau$   $-0.000$                    $0.000$
  APOKASC   $d$      $-0.001$                    $-0.021$
Stellar parameters catalogue
============================
We provide a catalogue of stellar distances, masses, and log(age)s determined with the UniDAM tool described in this manuscript. Our catalogue contains over 3.8 million rows (one row for each USPDF) for over 2.5 million stars. We summarise some properties of this catalogue in the figure below. This figure shows medians of different quantities in each bin on the Hertzsprung-Russell (HR) diagram. Data from all input spectroscopic surveys have been used to produce this figure. Quantification of differences between spectroscopic data from different surveys and effects of incompleteness and selection are beyond the scope of this paper and will be addressed in future work. Here, we are interested in a qualitative description of how the quality of our estimates varies in different parts of the HR diagram. Panels *a* and *b* show uncertainties in the measured log(age)s and masses. Log(age)s are best constrained in the upper part of the main sequence, where parameters change fast as a star leaves the main sequence. In contrast, masses are much better constrained on the main sequence.
Panels *c* and *d* show median values of $p_{best}$ (the probability for the best-fitting model) and $p_{sed}$ (a measure of how well we reproduce the SED with our model). The patterns in both panels are similar, with worse results close to the edges of the region covered by the PARSEC isochrones and, additionally for $p_{sed}$, between the main sequence and the giant branch.
Panels *e* and *f* show median values of the weight $V_0$ of the highest-weight USPDF and the total number of USPDFs with $V_i > 0.03$. The patterns are nearly inverse: on the main sequence we typically have only one USPDF with weight equal to unity, whereas on the giant branch the number of USPDFs increases and the weight of the highest-weight USPDF decreases. Importantly, for the giant branch we typically have two or more USPDFs, so using just the one with the highest weight is insufficient; the best solution is to use all USPDFs with their relative weights taken into account.
![image](Catalog.png){width="90.00000%"}
Quality flags {#sec:quality}
-------------
The output catalogue contains a `quality` column, which indicates how reliable the data contained in each row are. Values have been assigned as follows:

- **1**: single PDF
- **A**: highest-weight USPDF has power of 0.9 or more
- **B**: 1st and 2nd priority USPDFs together have power of 0.9 or more
- **C**: 1st, 2nd, and 3rd priority USPDFs together have power of 0.9 or more
- **D**: 1st, 2nd, and 3rd priority USPDFs together have power of less than 0.9
- **L**: low-power USPDF (power between 0.03 and 0.1)
- **E**: USPDF has $p_{sed} < 0.1$ (possibly bad photometry)
- **X**: highest-weight USPDF has $p_{best} < 0.1$ (likely off the model grid)
- **N**: USPDF has less than 10 models (unreliable result)
Although the `quality` value provides some information on the quality of the parameter estimation, it is not recommended to select stars based on that value alone (apart from removing unreliable results with values **E, N,** or **X**), because the quality value varies heavily over the HR diagram: for main-sequence stars the quality is in most cases **1** or **A**, whereas for giants qualities **B, C,** or even **D** are much more common. This is illustrated by the distribution of the number of USPDFs in panel *f* of . In $2\%$ of cases the highest-weight USPDF has quality **E**, in $4.2\%$ quality **X**, and in less than $0.01\%$ quality **N**.
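The flag assignment above can be sketched in Python. This is an illustrative reconstruction of ours, not the actual UniDAM code; in particular, the precedence of the **N**, **X**, and **E** checks over the weight-based flags is an assumption, as the text does not specify it.

```python
def quality_flag(weights, p_sed, p_best, n_models):
    """Assign a quality flag to the highest-priority USPDF of a star.

    weights  : USPDF weights V_i, sorted by priority (highest first)
    p_sed    : p_sed of the highest-weight USPDF
    p_best   : p_best of the highest-weight USPDF
    n_models : number of models in the highest-weight USPDF
    """
    if n_models < 10:
        return 'N'                    # too few models: unreliable
    if p_best < 0.1:
        return 'X'                    # likely off the model grid
    if p_sed < 0.1:
        return 'E'                    # possibly bad photometry
    if weights[0] < 0.1:
        return 'L'                    # low-power USPDF (0.03-0.1)
    if len(weights) == 1:
        return '1'                    # single PDF
    if weights[0] >= 0.9:
        return 'A'
    if sum(weights[:2]) >= 0.9:
        return 'B'
    if sum(weights[:3]) >= 0.9:
        return 'C'
    return 'D'
```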
Discussion and conclusions {#sec:discussion}
==========================
We provide a catalogue of distances, log(age)s, and masses for over 2.5 million stars. This number will increase as new data is made available, for example new data releases for surveys already included, or data from new surveys. Gaia data will be of high value and can be used as an independent test of our distances or as a parallax prior. In the latter case it should improve our extinction, mass, and log(age) estimates considerably.
In the current version of our UniDAM tool we use infrared magnitudes, [${\ensuremath{T_{\rm{eff}}}}, \log g$, and ${[\rm{Fe/H}]}$]{} as inputs to derive distances, log(age)s, and masses of stars. The tool was also successfully used to derive temperatures for the APOKASC sample, using as inputs surface gravities and masses derived from seismic information, together with spectroscopic metallicities (Tayar et al. 2017, accepted).
An advantage of our approach is that we represent multi-peaked PDFs for parameters with a sum of unimodal distributions. Additionally, we provide parameters of fits representing each distribution and the correlations between distance modulus, log(age), and mass. Therefore our catalogue contains not only mean values and uncertainties, but also detailed information on the PDFs. This allows us to apply more sophisticated analyses to the dataset to reveal both global and local structures in the Galaxy.
The next step will be to add proper motion data, thus obtaining all six dimensions of stellar positions and velocities. Combination of positions and velocities with ages, metallicities, and (where available) chemical abundances will open up new possibilities to study Galactic structure. Furthermore, it is important to get a correct estimate of the selection function, as this might affect results not only quantitatively, but also qualitatively, as was shown by @2012ApJ...751..131B. We intend to produce a selection function for our catalogue and then proceed to study Galactic structure on large and small scales.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank the anonymous referee for a detailed report with many useful suggestions, which helped us to improve the manuscript substantially.
The research leading to the presented results has received funding from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007- 2013)/ERC grant agreement (No 338251, StellarAges).
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration.
Funding for RAVE has been provided by the Australian Astronomical Observatory; the Leibniz-Institut fuer Astrophysik Potsdam (AIP); the Australian National University; the Australian Research Council; the French National Research Agency; the German Research Foundation (SPP 1177 and SFB 881); the European Research Council (ERC-StG 240271 Galactica); the Istituto Nazionale di Astrofisica at Padova; The Johns Hopkins University; the National Science Foundation of the USA (AST-0908326); the W. M. Keck foundation; the Macquarie University; the Netherlands Research School for Astronomy; the Natural Sciences and Engineering Research Council of Canada; the Slovenian Research Agency; the Swiss National Science Foundation; the Science and Technology Facilities Council of the UK; Opticon; Strasbourg Observatory; and the Universities of Groningen, Heidelberg and Sydney. The RAVE website is at <https://www.rave-survey.org>.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III website is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
Splitting the PDF into unimodal sub-PDFs {#sec:split_uspdf}
========================================
Here we describe a method used to split complex PDFs into a set of unimodal sub-PDFs.
First, we produced a histogram of the logarithm of stellar masses of models of the same evolutionary stage (weighted by $W_j$, see ). Then, we detected local minima and maxima of this histogram. Local minima (or maxima) are defined as locations of bins that have a lower (higher) value $h_i$ than all other bins within the window: i.e. $h_i = \min\{h_j, i-n \leq j \leq i+n\}$ for a local minimum and $h_i = \max\{h_j, i-n \leq j \leq i+n\}$ for a local maximum. The window size $n$ was taken to be 3 for maxima and 2 for minima. The different window sizes reflect the need to locate minima with high precision while avoiding too many maxima in noisy data. Formally, it is possible to have more than one local minimum between two local maxima; in this case we split only at the lowest of them. We split the sample at positions of local minima that are lower than 0.75 times the value of the smaller of the two enclosing maxima. We thus can have one or several USPDFs for each evolutionary stage.
We chose the histogram in the logarithm of mass to split the multimodal PDFs because the logarithmic scale is close to the linear one around $1 {\ensuremath{\rm{M}_\odot}}$, but gives a smoother histogram for high-mass stars. Mass is a better choice than the other parameters to split the PDF because log(age) values are quantised by construction of the isochrones, and distances are much less sensitive to the evolutionary stage.
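The splitting procedure above can be sketched as follows; this is a simplified reconstruction of ours (function names and bookkeeping are illustrative, not the UniDAM implementation):

```python
def split_points(h, n_max=3, n_min=2, ratio=0.75):
    """Return bin indices at which to split histogram h into unimodal
    parts: local minima (window n_min) that lie below `ratio` times
    the smaller of the two enclosing local maxima (window n_max)."""
    def is_extremum(i, n, cmp):
        lo, hi = max(0, i - n), min(len(h), i + n + 1)
        return all(cmp(h[i], h[j]) for j in range(lo, hi) if j != i)

    maxima = [i for i in range(len(h))
              if is_extremum(i, n_max, lambda a, b: a >= b)]
    minima = [i for i in range(len(h))
              if is_extremum(i, n_min, lambda a, b: a <= b)]

    splits = []
    for a, b in zip(maxima, maxima[1:]):
        between = [i for i in minima if a < i < b]
        if not between:
            continue
        m = min(between, key=lambda i: h[i])  # lowest minimum in between
        if h[m] < ratio * min(h[a], h[b]):
            splits.append(m)
    return splits
```

A clearly bimodal histogram is split at its central dip, while a unimodal one is left whole.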
Distributions used in the paper {#app:functions}
===============================
Here we give definitions for some functions used in the paper.
We define $\phi(x)=\frac{1}{\sqrt{2 \pi}}\exp\left(-\frac{1}{2}x^2\right)$ as the PDF of the standard normal distribution and $\Phi(x)$ as its cumulative distribution function. This PDF is referred to as a Gaussian in the text.
Introducing $\xi=\frac{x-\alpha}{\sigma}$ and $Z=\Phi(\frac{b-\alpha}{\sigma})-\Phi(\frac{a-\alpha}{\sigma})$, we have for a truncated Gaussian $$f(x,\alpha,\sigma, a,b) = \frac{\phi(\xi)}{\sigma Z},$$ if $a < x < b$ and $f(x,\alpha,\sigma, a,b) = 0$ otherwise. Here $\alpha$ is the location, $\sigma$ the scale, and $a$ and $b$ the lower and upper limits.
For a skewed Gaussian with shape parameter $s$ we have $$f(x,\alpha,\sigma, s) = \frac{2}{\sigma} \phi(\xi) \Phi(s \xi).$$
We use the standard definition of the [Student’s t-distribution]{} $$f_t(x, \nu) = \frac{\Gamma \left(\frac{\nu+1}{2} \right)} {\sqrt{\nu\pi}\,\Gamma \left(\frac{\nu}{2} \right)} \left(1+\frac{x^2}{\nu} \right)^{-\frac{\nu+1}{2}},$$ where $\nu$ is the “number of degrees of freedom” (which can be an arbitrary real number here). Again, $\Phi_t(x, \nu)$ is the corresponding cumulative distribution function.
A modified truncated exponential distribution with lower and upper limits $a$ and $b$, respectively is defined as $$f_{\textrm{exp}}(x, \alpha, \sigma, a, b) =
\begin{cases}
C e^{-\frac{|x-\alpha|}{\sigma}}, & \textrm{if}\ a < x < b \\
0,& \textrm{otherwise.}
\end{cases}$$ Here, $C$ is the normalisation constant, chosen so that $\int_a^b f_{\textrm{exp}}(x, \alpha, \sigma, a, b)\,\mathrm{d}x = 1$.
For a truncated Student’s t-distribution with lower and upper limits $a$ and $b$, respectively, we define $\xi=\frac{x-\alpha}{\sigma}$ and $Z_t=\Phi_t(\frac{b-\alpha}{\sigma}, \nu)-\Phi_t(\frac{a-\alpha}{\sigma}, \nu)$. Then for the PDF we have $$f_{\textrm{truncated-t}}(x, \alpha, \sigma, \nu, a, b) = \frac{f_t(\xi, \nu)}{Z_t}$$ for $a < x < b$, and zero otherwise.
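As a sanity check, the truncated and skewed Gaussians defined above can be implemented with the standard library alone and verified to integrate to unity (an illustrative sketch of ours; function names are not from the paper):

```python
import math

def phi(x):
    """Standard normal PDF."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_gaussian(x, alpha, sigma, a, b):
    """Gaussian with location alpha and scale sigma, truncated to (a, b)
    and renormalised by Z = Phi((b-alpha)/sigma) - Phi((a-alpha)/sigma)."""
    if not a < x < b:
        return 0.0
    Z = Phi((b - alpha) / sigma) - Phi((a - alpha) / sigma)
    return phi((x - alpha) / sigma) / (sigma * Z)

def skewed_gaussian(x, alpha, sigma, s):
    """Skewed Gaussian with shape parameter s."""
    xi = (x - alpha) / sigma
    return 2.0 / sigma * phi(xi) * Phi(s * xi)
```

Midpoint-rule integration of either PDF over its support returns 1 to numerical accuracy.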
[^1]: email: mints@mps.mpg.de
[^2]: The unified tool source code is available at <https://github.com/minzastro/unidam>, tables with results are available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via <http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/>
[^3]: see APOGEE target selection description at <http://www.sdss.org/dr12/irspec/targets/>
---
abstract: 'Catalan words are particular growth-restricted words over the set of non-negative integers, and they represent yet another combinatorial class counted by the Catalan numbers. We study the distribution of descents on the sets of Catalan words avoiding a pattern of length at most three: for each such pattern $p$ we provide a bivariate generating function where the coefficient of $x^ny^k$ in its series expansion is the number of length $n$ Catalan words with $k$ descents and avoiding $p$. As a byproduct, we enumerate the set of Catalan words avoiding $p$, and we provide the popularity of descents on this set. Some of the obtained enumerating sequences are not yet recorded in the On-line Encyclopedia of Integer Sequences.'
author:
- |
Jean-Luc [Baril]{}, Sergey Kirgizov and Vincent Vajnovszki\
LE2I, Université de Bourgogne Franche-Comté\
B.P. 47 870, 21078 DIJON-Cedex France\
[e-mail:{barjl,sergey.kirgizov,vvajnov}@u-bourgogne.fr]{}
title: Descent distribution on Catalan words avoiding a pattern of length at most three
---
[**Keywords:**]{} Enumeration, Catalan word, pattern avoidance, descent, popularity.
Introduction and notation
=========================
Combinatorial objects counted by the Catalan numbers are very classical in combinatorics, with a variety of applications in, among others, Biology, Chemistry, and Physics. A length $n$ [*Catalan word*]{} is a word $w_1w_2\ldots w_n$ over the set of non-negative integers with $w_1=0$, and $$0\leq w_i\leq w_{i-1}+1,$$ for $i=2,3,\ldots,n$. We denote by $\mathcal{C}_n$ the set of length $n$ Catalan words, and $\mathcal{C}=\cup_{n\geq 0}\mathcal{C}_n$. For example, $\mathcal{C}_2=\{00,01\}$ and $\mathcal{C}_3=\{000,001,010,011,012\}$. It is well known that the cardinality of $\mathcal{C}_n$ is given by the $n$th Catalan number $\frac{1}{n+1} {{2n}\choose{n}}$, see for instance [@Stanley99 exercise 6.19.$u$, p. 222], which is the general term of the sequence [A000108](https://oeis.org/A000108) in the On-line Encyclopedia of Integer Sequences (OEIS) [@Sloa]. See also [@Mansour-Vajnovszki] where Catalan words are considered in the context of the exhaustive generation of Gray codes for growth-restricted words.
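The growth restriction above makes exhaustive generation straightforward; the following brute-force sketch (ours, for illustration) confirms the Catalan counts for small $n$:

```python
from math import comb

def catalan_words(n):
    """All length-n Catalan words, as tuples: w_1 = 0 and
    0 <= w_i <= w_{i-1} + 1 for i >= 2."""
    if n == 0:
        return [()]
    words = [(0,)]
    for _ in range(n - 1):
        # extend each word by every admissible letter 0 .. last + 1
        words = [w + (v,) for w in words for v in range(w[-1] + 2)]
    return words

def catalan(n):
    """n-th Catalan number C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)
```

For $n=3$ this reproduces exactly the five words listed above.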
A [*pattern*]{} $p$ is a word satisfying the property that if $x$ appears in $p$, then all integers in the interval $[0,x-1]$ also appear in $p$. We say that a word $w_1w_2\ldots w_n$ contains the pattern $p=p_1\ldots p_k$ if there is a subsequence $w_{i_1}w_{i_2}\ldots w_{i_k}$ of $w$, $i_1<i_2< \cdots < i_k$, which is order-isomorphic to $p$. For example, the Catalan word $01012312301$ contains seven occurrences of the pattern $110$ and four occurrences of the pattern $210$. A word [*avoids*]{} the pattern $p$ whenever it does not contain any occurrence of $p$. We denote by $\mathcal{C}_n(p)$ the set of length $n$ Catalan words avoiding the pattern $p$, and $\mathcal{C}(p)=\cup_{n\geq 0}\mathcal{C}_n(p)$. For instance, $\mathcal{C}_4(012)=\{0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111\}$, and $\mathcal{C}_4(101)=\{0000, 0001, 0010, 0011, 0012, 0100, 0110, 0111, 0112, 0120, 0121, 0122, 0123\}$. For a set of words, the [*popularity*]{} of a pattern $p$ is the overall number of occurrences of $p$ within all words of the set, see [@Bona2012] where this notion was introduced, and [@AHP2015; @Homberger; @Rudo; @Bkv2017] for some related results.
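Order-isomorphic containment as defined above can be tested by brute force over all subsequences; the following sketch of ours (exponential in the pattern length, fine for small examples) reproduces the cardinalities of $\mathcal{C}_4(012)$ and $\mathcal{C}_4(101)$:

```python
from itertools import combinations

def contains(w, p):
    """True iff w has a subsequence order-isomorphic to the pattern p
    (same equalities and strict inequalities between entries)."""
    k = len(p)
    for idx in combinations(range(len(w)), k):
        sub = [w[i] for i in idx]
        if all((sub[i] < sub[j]) == (p[i] < p[j]) and
               (sub[i] == sub[j]) == (p[i] == p[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

def avoiders(words, p):
    """Subset of `words` avoiding the pattern p."""
    return [w for w in words if not contains(w, p)]
```

Filtering the fourteen words of $\mathcal{C}_4$ gives $|\mathcal{C}_4(012)|=8$ and $|\mathcal{C}_4(101)|=13$, as listed above.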
A [*descent*]{} in a word $w=w_1w_2\ldots w_n$ is an occurrence $w_iw_{i+1}$ such that $w_i>w_{i+1}$. Alternatively, a descent is an occurrence of the [*consecutive*]{} pattern $10$ ([*i.e.,*]{} the entries corresponding to an occurrence of $10$ are required to be adjacent). We denote by $d(w)$ the number of descents of $w$; thus the popularity of descents on a set $S$ of words is $\sum_{w\in S}d(w)$. The distribution of the number of descents has been widely studied on several classes of combinatorial objects such as permutations and words, since descents have particular interpretations in fields such as Coxeter groups or the theory of lattice paths [@Ber; @Ges].
The main goal of this paper is to study the descent distribution on Catalan words (see Table \[tab1\] for some numerical values). More specifically, for each pattern $p$ of length at most three, we give the distribution of descents on the sets $\mathcal{C}_n(p)$ of length $n$ Catalan words avoiding $p$. We denote by $C_p(x,y)=\sum_{n,k\geq 0} c_{n,k}x^ny^k$ the bivariate generating function for the cardinality of words in $\mathcal{C}_n(p)$ with $k$ descents. Plugging $y=1$
- into $C_p(x,y)$, we deduce the generating function $C_p(x)$ for the set $\mathcal{C}_n(p)$, and
- into $\frac{\partial C_p(x,y)}{\partial y }$, we deduce the generating function for the popularity of descents in $\mathcal{C}_n(p)$.
From the definition at the beginning of this section it follows that a Catalan word is either the empty word, or it can uniquely be written as $0 (w'+1)w''$, where $w'$ and $w''$ are in turn Catalan words, and $w'+1$ is obtained from $w'$ by adding one to each of its entries. We call this recursive decomposition [*first return decomposition*]{} of a Catalan word, and it will be crucial in our further study. It follows that $C(x)$, the generating function for the cardinality of $\mathcal C_n$, satisfies: $$C(x)=1+x\cdot C^2(x),$$ which corresponds precisely to the sequence of Catalan numbers.
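The first return decomposition translates the equation $C(x)=1+x\cdot C^2(x)$ directly into a convolution recurrence for the coefficients $c_n$; a quick check of ours:

```python
from math import comb

def first_return_counts(n_max):
    """Coefficients c_0 .. c_{n_max} of C(x) = 1 + x*C(x)^2, i.e. the
    recurrence c_n = sum_k c_k * c_{n-1-k} coming from the first return
    decomposition w = 0(w'+1)w''."""
    c = [1]                                        # c_0: the empty word
    for n in range(1, n_max + 1):
        c.append(sum(c[k] * c[n - 1 - k] for k in range(n)))
    return c
```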
We conclude this section by explaining how Catalan words are naturally related to two classical combinatorial classes counted by the Catalan numbers.
### Catalan words vs. Dyck words {#catalan-words-vs.-dyck-words .unnumbered}
A [*Dyck word*]{} is a word over $\{u,d\}$ with the same number of $u$’s and $d$’s, and with the property that all of its prefixes contain no more $d$’s than $u$’s. Alternatively, a Dyck word can be represented as a lattice path starting at $(0,0)$, ending at $(2n,0)$, and never going below the $x$-axis, consisting of up steps $u=(1,1)$ and down steps $d=(1,-1)$. There is a direct bijection $\delta\mapsto w$ between the set of Dyck words of semilength $n$ and $\mathcal{C}_n$: the Catalan word $w$ is the sequence of the lowest (i.e. starting) ordinates of the up steps $u$ in the Dyck word $\delta$, in its lattice path representation. For instance, the image through this bijection of the Dyck word $uduuduudduuddd$ of semilength $7$ is $0011212\in \mathcal{C}_7$. Note that the above bijection gives a one-to-one correspondence between occurrences of the consecutive pattern $ddu$ in Dyck words and descents in Catalan words.
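The bijection is immediate to implement: walk the lattice path and record the starting ordinate of each up step (a sketch of ours):

```python
def dyck_to_catalan(delta):
    """Map a Dyck word over {'u', 'd'} to a Catalan word by recording
    the starting ordinate (height) of every up step."""
    word, height = [], 0
    for step in delta:
        if step == 'u':
            word.append(height)
            height += 1
        else:
            height -= 1
            if height < 0:
                raise ValueError("prefix with more d's than u's")
    if height != 0:
        raise ValueError("unequal numbers of u's and d's")
    return word
```

Applied to the example above, $uduuduudduuddd \mapsto 0011212$.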
### Catalan words vs. binary trees {#catalan-words-vs.-binary-trees .unnumbered}
In [@Makinen] the author introduced an integer sequence representation for binary trees, called the [*left-distance sequence*]{}. For a binary tree $T$, let us consider the following labeling of its nodes: the root is labeled by $0$, a left child by the label of its parent, and a right child by the label of its parent plus one. The left-distance sequence of $T$ is obtained by traversing $T$ in inorder ([*i.e.*]{}, visiting recursively the left subtree, then the root, and then the right subtree of $T$) and collecting the labels of the nodes. In [@Makinen] it is shown that, for a given length, the set of left-distance sequences is precisely the set of Catalan words of the same length. Moreover, the induced bijection between Catalan words and binary trees gives a one-to-one correspondence between descents in Catalan words and particular nodes (left-child nodes having a right child) in binary trees.
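A minimal sketch of ours for the left-distance sequence, with trees encoded as nested `(left, right)` tuples and `None` for the empty tree; for three nodes it recovers exactly $\mathcal{C}_3$:

```python
def left_distance(tree, label=0):
    """Left-distance sequence: the root carries `label`, a left child
    inherits its parent's label, a right child gets the parent's label
    plus one; labels are collected in inorder."""
    if tree is None:
        return []
    left, right = tree
    return (left_distance(left, label)
            + [label]
            + left_distance(right, label + 1))

def all_trees(n):
    """All binary trees with n nodes (k nodes left, n-1-k right)."""
    if n == 0:
        return [None]
    return [(l, r)
            for k in range(n)
            for l in all_trees(k)
            for r in all_trees(n - 1 - k)]
```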
The remainder of the paper is organized as follows. In Section 2, we study the distribution of descents on the set $\mathcal{C}$ of Catalan words. As a byproduct, we deduce the popularity of descents in $\mathcal{C}$. We consider also similar results for the obvious cases of Catalan words avoiding a pattern of length two. In Section 3, we study the distribution and the popularity of descents on Catalan words avoiding each pattern of length three.
The sets $\mathcal{C}$ and $\mathcal{C}(p)$ for $p\in\{00,01,10\}$
==================================================================
Here we consider both unrestricted Catalan words and those avoiding a length two pattern. We denote by $C(x,y)$ the bivariate generating function where the coefficient of $x^ny^k$ of its series expansion is the number of length $n$ Catalan words with $k$ descents. When we restrict to Catalan words avoiding the pattern $p$, the corresponding generating function is denoted by $C_p(x,y)$.
\[th\] We have $$C(x,y)=\frac {1-2x+2xy-\sqrt {1-4x+4x^2-4x^2y}}{2xy}.$$
[Let $w=0(w'+1)w''$ be the first return decomposition of a non-empty Catalan word $w$ with $w',w''\in\mathcal{C}$. If $w'$ (resp. $w''$) is empty then the number $d(w)$ of descents in $w$ is the same as that of $w''$ (resp. $w'$); otherwise, we have $d(w)=d(w')+d(w'')+1$ since there is a descent between $w'+1$ and $w''$. So, we obtain the functional equation $C(x,y) = 1+xC(x,y) + x(C(x,y)-1)+xy(C(x,y)-1)^2$ which gives the desired result. ]{} As expected, $C(x)=C(x,1)=\frac {1-\sqrt {1-4x}}{2x}$ is the generating function for the Catalan numbers, and $\frac{\partial C(x,y)}{\partial y }|_{y=1}$ is the generating function for the descent popularity on $\mathcal{C}$, and we have the next corollary.
\[cor\] The popularity of descents on the set $\mathcal{C}_n$ is $\binom{2n-2}{n-3}$, and its generating function is $\frac {1-4x+2x^2-(1-2x)\sqrt {1-4x}}{2x\sqrt {1-4x}}$ (sequence [A002694](https://oeis.org/A002694) in [@Sloa]).
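The corollary can be verified by brute force for small $n$ (illustrative code of ours):

```python
from math import comb

def catalan_words(n, prefix=(0,)):
    """Yield all Catalan words of length n >= 1 as tuples."""
    if len(prefix) == n:
        yield prefix
        return
    for v in range(prefix[-1] + 2):          # letters 0 .. last + 1
        yield from catalan_words(n, prefix + (v,))

def descent_popularity(n):
    """Total number of descents over all Catalan words of length n."""
    return sum(a > b
               for w in catalan_words(n)
               for a, b in zip(w, w[1:]))
```

For example, $\mathcal{C}_4$ holds six descents in total, matching $\binom{6}{1}$.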
Catalan words of odd lengths encompass a smaller size Catalan structure. This result is stated in the next corollary, see the bold entries in Table \[tab1\].
\[corr\] Catalan words of length $2n+1$ with $n$ descents are enumerated by the $n$[th]{} Catalan number $\frac{1}{n+1}{2n\choose n}$.
[Clearly, the maximal number of descents in a Catalan word of length $m$ is $\lfloor\frac{m-1}{2}\rfloor$. Let $w$ be a Catalan word of length $2n+1$ with $n$ descents. We necessarily have $w=0(w'+1)w''$ with $w',w''\neq \epsilon$, $d(w')=\lfloor\frac{|w'|-1}{2}\rfloor$, $d(w'')=\lfloor\frac{|w''|-1}{2}\rfloor$ and $d(w)=d(w')+d(w'')+1$. Since the length of $w$ is odd, $|w'|$ and $|w''|$ have the same parity, and $|w'|+|w''|=2n$. If $|w'|$ and $|w''|$ were both even, then $d(w)=\frac{|w'|-2}{2}+\frac{|w''|-2}{2}+1=\frac{|w'|+|w''|}{2}-1=n-1<n$, which gives a contradiction. So, $|w'|$ and $|w''|$ are both odd, and we have $d(w)=\frac{|w'|-1}{2}+\frac{|w''|-1}{2}+1=\frac{|w'|+|w''|}{2}=n$. Thus the generating function $A(x)$, where the coefficient of $x^n$ is the number of Catalan words of length $2n+1$ with $n$ descents, satisfies $A(x)=1+xA(x)^2$, which is the generating function for the Catalan numbers. ]{}
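Corollary \[corr\] is easy to confirm by exhaustive enumeration for small $n$ (a brute-force sketch of ours):

```python
from math import comb

def catalan_words(n):
    """All Catalan words of length n >= 1, as tuples."""
    words = [(0,)]
    for _ in range(n - 1):
        words = [w + (v,) for w in words for v in range(w[-1] + 2)]
    return words

def descents(w):
    """Number of positions i with w_i > w_{i+1}."""
    return sum(a > b for a, b in zip(w, w[1:]))

def diagonal_count(n):
    """Number of Catalan words of length 2n+1 with exactly n descents."""
    return sum(descents(w) == n for w in catalan_words(2 * n + 1))
```

The counts $1, 1, 2, 5, 14, \ldots$ are indeed the Catalan numbers.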
There are three patterns of length two, namely $00$, $01$ and $10$, and Catalan words avoiding such a pattern do not have descents, thus the corresponding bivariate generating functions collapse into one variable ones.
\[th00\] For $p\in\{00,01\}$, we have $C_{p}(x,y)=\frac{1}{1-x}$.
[If $p=00$ (resp. $p=01$) then $012\ldots n-1$ (resp. $0\ldots 0$) is the unique non-empty Catalan word of length $n$ avoiding $p$, and the statement follows. ]{}
\[th10\] We have $C_{10}(x,y)=\frac{1-x}{1-2x}$, which is the generating function for the sequence $2^{n-1}$ (sequence [A011782](https://oeis.org/A011782) in [@Sloa]).
[A non-empty Catalan word avoiding the pattern $10$ is of the form $0^k(w'+1)$ for $k\geq 1$, and with $w'\in \mathcal{C}(10)$. So, we have the functional equation $C_{10}(x)=1+\frac{x}{1-x}C_{10}(x)$, which gives $C_{10}(x)=\frac{1-x}{1-2x}$. ]{}
The sets $\mathcal{C}(p)$ for a length three pattern $p$
========================================================
Here we turn our attention to patterns of length three. There are thirteen such patterns, and we give the distribution and the popularity of descents on Catalan words avoiding each of them. Some of the obtained results are summarized in Tables \[Tab1\] and \[Tab2\].
\[th012\] For $p\in\{012, 001\}$, we have $$C_p(x,y)=\frac {1-x+x^2-x^2y}{1-2x+x^2-x^2y}.$$
[ A non-empty word $w\in\mathcal{C}(012)$ has its first return decomposition $w=01^kw''$ where $k\geq 0$ and $w''\in \mathcal{C}(012)$. If $k=0$ or $w''=\epsilon$, then the number of descents in $w$ is the same as that of $w''$; otherwise, we have $d(w)=d(w'')+1$ (there is a descent between $1^k$ and $w''$). So, we obtain the functional equation $C_{012}(x,y) = 1+xC_{012}(x,y)+\frac{x^2}{1-x}+\frac{x^2}{1-x}y(C_{012}(x,y)-1)$ which gives the desired result.\
A non-empty word $w\in\mathcal{C}(001)$ has the form $w=0(w'+1)0^k$ where $w'\in \mathcal{C}(001)$ and $k\geq 0$. If $k=0$ or $w'=\epsilon$, then the number of descents in $w$ is the same as that of $w'$; otherwise, we have $d(w)=d(w')+1$. So, we obtain the functional equation $C_{001}(x,y) = 1+x(C_{001}(x,y)-1)+\frac{x}{1-x}+\frac{x^2}{1-x}y(C_{001}(x,y)-1)$ which gives the desired result. ]{}
Considering the previous theorem and the coefficient of $x^n$ in $C_{p}(x,1)=\frac{1-x}{1-2x}$ and in $\frac{\partial C_p(x,y)}{\partial y }|_{y=1}=\frac {x^3}{(1-2x)^2}$, we obtain the next corollary.
\[cor012\] For $p\in\{012, 001\}$, we have $|\mathcal{C}_n(p)|=2^{n-1}$, and the popularity of descents on the set $\mathcal{C}_n(p)$ is $(n-2)\cdot 2^{n-3}$ (sequence [A001787](https://oeis.org/A001787) in [@Sloa]).
As in the case of length two patterns, a Catalan word avoiding $010$ does not have descents, and we have the next theorem.
\[th010\] If $p=010$, then $C_p(x,y)=\frac{1-x}{1-2x}$ which is the generating function for the sequence $2^{n-1}$ (sequence [A011782](https://oeis.org/A011782) in [@Sloa]).
[ A non-empty word $w\in\mathcal{C}(010)$ can be written either as $w=0w'$ with $w'\in \mathcal{C}(10)$, or as $w=0(w'+1)$ with $w'\in \mathcal{C}(010)\setminus\{\epsilon\}$. So, we deduce $C_{010}(x)=1+xC_{10}(x)+x(C_{010}(x)-1)$, and the statement holds. ]{}
\[th021\] For $p=021$, we have $$C_p(x,y)=\frac {1-4x+6x^2-x^2y-4x^3+3x^3y+x^4-x^4y}{(1-x)(1-2x)(1-2x+x^2-x^2y) }.$$
Let $w$ be a non-empty word in $\mathcal{C}(021)$, and let $0(w'+1)w''$ be its first return decomposition with $w',w''\in \mathcal{C}(021)$. Note that $w'$ belongs to $\mathcal{C}(10)$. We distinguish two cases: (1) $w'$ does not contain $1$, and (2) otherwise.\
In the case (1), $w'\in \mathcal{C}(01)$ ([*i.e.*]{}, $w'=0^k$ for some $k\geq 0$), and $w''\in \mathcal{C}(021)$. If $w'=\epsilon$ (resp. $w''=\epsilon$), then the number of descents in $w$ is the same as that of $w''$ (resp. $w'$); otherwise, we have $d(w)=d(w')+d(w'')+1$. So, this case contributes to $C_p(x,y)$ with $xC_{01}(x,y)+x(C_{021}(x,y)-1)+xy(C_{01}(x,y)-1)(C_{021}(x,y)-1)$.
In the case (2), $w'\in \mathcal{C}(10)\setminus \mathcal{C}(01)$ and $w''\in \mathcal{C}(01)$. If $w''=\epsilon$ then $w$ and $w'$ have the same number of descents; otherwise, we have $d(w)=d(w')+d(w'')+1$. So, this case contributes to $C_p(x,y)$ with $x(C_{10}(x,y)-C_{01}(x,y))+xy(C_{10}(x,y)-C_{01}(x,y))(C_{01}(x,y)-1)$.
Taking into account these two disjoint cases, and adding the empty word, we deduce the functional equation $C_{021}(x,y)=1+xC_{01}(x,y)+x(C_{021}(x,y)-1)+xy(C_{01}(x,y)-1)(C_{021}(x,y)-1)+x(C_{10}(x,y)-C_{01}(x,y))+xy(C_{10}(x,y)-C_{01}(x,y))(C_{01}(x,y)-1)$, which after calculation gives the result.
\[cor021\] For $p=021$, we have $C_p(x)=\frac {1-4x+5x^2-x^3}{(1-2x)^2(1-x)}$ which is the generating function for the sequence $(n-1)\cdot 2^{n-2}+1$ (sequence [A005183](https://oeis.org/A005183) in [@Sloa]). The popularity of descents on the set $\mathcal{C}_n(p)$ is $(n+1)(n-2)\cdot 2^{n-5}$ with the generating function $\frac {x^3(1-x)}{(1-2x)^3}$ (sequence [A001793](https://oeis.org/A001793) in [@Sloa]).
\[th102\] For $p\in\{102,201\}$, we have $$C_p(x,y)=\frac {1-3x+3x^2-2x^2y-x^3+x^3y}{ \left( 1-x \right) \left( 1-3x+2x^2-2x^2y \right) }.$$
Let $w$ be a non-empty word in $\mathcal{C}(102)$, and let $0(w'+1)w''$ be its first return decomposition with $w',w''\in \mathcal{C}(102)$. If $w'$ is empty, then $w=0w''$ for some $w''\in \mathcal{C}(102)$ and we have $d(w)=d(w'')$. If $w''$ is empty, then $w=0(w'+1)$ for some $w'\in \mathcal{C}(102)$ and we have $d(w)=d(w')$. If $w'$ and $w''$ are both non-empty, then $w'\in \mathcal{C}(102)\setminus\{\epsilon\}$, $w''\in \mathcal{C}(012)\setminus\{\epsilon\}$, and $d(w)=d(w')+d(w'')+1$. We deduce the functional equation $C_{102}(x,y)=1+xC_{102}(x,y)+x(C_{102}(x,y)-1)+xy(C_{102}(x,y)-1)(C_{012}(x,y)-1)$. Finally, by Theorem \[th012\] we obtain the desired result.
Let $w$ be a non-empty word in $\mathcal{C}(201)$, and let $0(w'+1)w''$ be its first return decomposition with $w',w''\in \mathcal{C}(201)$. If $w'$ is empty, then $w=0w''$ for some $w''\in \mathcal{C}(201)$ and we have $d(w)=d(w'')$. If $w''$ is empty, then $w=0(w'+1)$ for some $w'\in \mathcal{C}(201)$ and we have $d(w)=d(w')$. If $w'$ and $w''$ are both non-empty, then $d(w)=d(w')+d(w'')+1$ and we distinguish two cases: (1) $w'$ does not contain $1$, and (2) otherwise. In the case (1), we have $w'\in \mathcal{C}(01)\setminus\{\epsilon\}$ and $w''\in \mathcal{C}(201)\setminus\{\epsilon\}$; in the case (2), $w'$ contains 1 and $w'\in \mathcal{C}(201)\setminus \mathcal{C}(01)$ and $w''\in \mathcal{C}(01)\setminus\{\epsilon\}$. Combining the previous cases, the functional equation becomes $C_{201}(x,y)=1+xC_{201}(x,y)+x(C_{201}(x,y)-1)+xy(C_{01}(x,y)-1)(C_{201}(x,y)-1)+xy(C_{201}(x,y)-C_{01}(x,y))(C_{01}(x,y)-1)$, which gives the desired result.
\[cor102\] For $p\in\{102,201\}$, we have $C_p(x)=\frac {1-3x+x^2}{(1-x)(1-3x)}$ which is the generating function of the sequence $\frac{3^{n-1}+1}{2}$ (sequence [A007051](https://oeis.org/A007051) in [@Sloa]). The popularity of descents on the set $\mathcal{C}_n(p)$ is $(n-2)\cdot 3^{n-3}$ with the generating function $\frac{x^3}{(1-3x)^2}$ (sequence [A027471](https://oeis.org/A027471) in [@Sloa]).
\[th120\] For $p\in\{120,101\}$, we have $$C_p(x,y)=\frac {1-2x+x^2-x^2y}{1-3x+2x^2-x^2y}.$$
Let $w$ be a non-empty word in $\mathcal{C}(120)$, and let $0(w'+1)w''$ be its first return decomposition, where $w',w''\in \mathcal{C}(120)$. If $w''$ is empty, then $w=0(w'+1)$ for some $w'\in \mathcal{C}(120)$ and we have $d(w)=d(w')$; if $w'$ is empty, then $w=0w''$ for some $w''\in \mathcal{C}(120)$ and we have $d(w)=d(w'')$; if $w'$ and $w''$ are both non-empty, then $w'\in \mathcal{C}(01)\setminus\{\epsilon\}$, $w''\in \mathcal{C}(120)\setminus\{\epsilon\}$ and $d(w)=d(w')+d(w'')+1$. We deduce the functional equation $C_{120}(x,y)=1+xC_{120}(x,y)+x(C_{120}(x,y)-1)+xy(C_{01}(x,y)-1)(C_{120}(x,y)-1)$ which gives the result.
Let $w$ be a non-empty word in $\mathcal{C}(101)$, and let $0(w'+1)w''$ be its first return decomposition, where $w',w''\in \mathcal{C}(101)$. If $w'$ is empty, then $w=0w''$ for some $w''\in \mathcal{C}(101)$ and $d(w)=d(w'')$; if $w''$ is empty, then $w=0(w'+1)$ for some $w'\in \mathcal{C}(101)$ and $d(w)=d(w')$; if $w'$ and $w''$ are both non-empty, then $w'\in \mathcal{C}(101)\setminus\{\epsilon\}$, $w''\in \mathcal{C}(01)\setminus\{\epsilon\}$ and $d(w)=d(w')+d(w'')+1$. We deduce the functional equation $C_{101}(x,y)=1+xC_{101}(x,y)+x(C_{101}(x,y)-1)+ xy(C_{101}(x,y)-1)(C_{01}(x,y)-1)$ which gives the result.
\[cor120\] For $p\in\{120,101\}$, we have $C_p(x)=\frac {1-2x}{1-3x+x^2}$ and the coefficient of $x^n$ in its series expansion is the $(2n-1)$th term of the Fibonacci sequence (see [A001519](https://oeis.org/A001519) in [@Sloa]). The popularity of descents on the set $\mathcal{C}_n(p)$ is given by $\sum_{k=1}^{n-2} k\cdot {{n+k-2}\choose{2k}}$ which is the coefficient of $x^n$ in the series expansion of $\frac {x^3 \left( 1-x \right) }{ \left(1-3x+x^2\right)^2}$ (sequence [A001870](https://oeis.org/A001870) in [@Sloa]).
\[th011\] For $p=011$, we have $$C_p(x,y)=\frac {1-2x+2x^2-x^3+x^3y}{ \left( 1-x \right)^3}.$$
[Let $w$ be a non-empty word in $\mathcal{C}(011)$, and let $0(w'+1)w''$ be its first return decomposition where $w',w''\in \mathcal{C}(011)$. If $w'$ (resp. $w''$) is empty, then we have $d(w)=d(w'')$ (resp. $d(w)=d(w')$); if $w'$ and $w''$ are non-empty, then $w'\in \mathcal{C}(00)\setminus \{\epsilon\}$ and $w''\in \mathcal{C}(01)\setminus\{\epsilon\}$. We deduce the functional equation $C_{011}(x,y)=1+xC_{011}(x,y)+x(C_{00}(x,y)-1)+xy(C_{00}(x,y)-1)(C_{01}(x,y)-1)$ which gives the result. ]{}
\[cor011\] For $p=011$, we have $C_p(x)=\frac {1-2x+2x^2}{(1-x)^3}$ and the coefficient of $x^n$ in its series expansion is $1+{{n}\choose {2}}$ (sequence [A000124](https://oeis.org/A000124) in [@Sloa]). The popularity of descents on the set $\mathcal{C}_n(p)$ is given by $\frac{(n-1)(n-2)}{2}$ which is the coefficient of $x^n$ in the series expansion of $\frac {x^3}{(1-x)^3}$ (sequence [A000217](https://oeis.org/A000217) in [@Sloa]).
\[th000\] For $p=000$, we have $$C_p(x,y)=\frac {1-x^2-x^2y}{1-x-2x^2-x^2y+x^3+x^4-x^4y}.$$
Let $w$ be a non-empty word in $\mathcal{C}(000)$, and let $0(w'+1)w''$ be its first return decomposition where $w',w''\in \mathcal{C}(000)$. We distinguish two cases: (1) $w''$ is empty, and (2) otherwise.
In the case (1), we have $w=0(w'+1)$ for some $w'\in \mathcal{C}(000)$ and $d(w)=d(w')$. So, the generating function $A(x,y)$ for the Catalan words in this case is $A(x,y)=x C_{000}(x,y)$.
In the case (2), we set $w''=0(w'''+1)$ for some $w'''\in \mathcal{C}(000)$ and we have $w=0(w'+1)0(w'''+1)$. We distinguish three sub-cases: (2.a) $w'$ is empty, (2.b) $w'$ is non-empty and $w'''$ is empty, and (2.c) $w'$ and $w'''$ are both non-empty.
In the case (2.a), we have $w=00(w'''+1)$ with $w'''\in \mathcal{C}(000)$. So, the generating function for the Catalan words belonging to this case is $B_a(x,y)=x^2 C_{000}(x,y)$.
In the case (2.b), we have $w=0 (w'+1)0$ with $w'\in \mathcal{C}(000)\setminus\{\epsilon\}$. So, the generating function for the corresponding Catalan words is $B_b(x,y)=x^2y (C_{000}(x,y)-1)$.
In the case (2.c), we have $w=0(w'+1)0(w'''+1)$ where $w'$ and $w'''$ are non-empty Catalan words such that $w'w'''$ is a Catalan word lying in the case (2). If $w'=0$, then $d(w'w''')=d(w''')=d(w)-1$; if $w'\neq 0$, then $d(w'w''')=d(w')+d(w''')+1=d(w)$. So, the generating function for the corresponding Catalan words is $B_c(x,y)=x^2yB_a(x,y)+ x^2 (B_b(x,y)+ B_c(x,y))$.
Considering $C_{000}(x,y)=1+A(x,y)+B_a(x,y)+B_b(x,y)+B_c(x,y)$, the obtained functional equations give the result.
\[cor000\] For $p=000$, we have $C_p(x)={\frac{1-2x^2}{1-x-3x^2+x^3}}$ and the generating function for the popularity of descents in the sets $\mathcal{C}_n(p)$, $n\geq 0$, is $$\frac {x^3(1-x)(1+2x)(1+x)}{( 1-x-3x^2+x^3)^2}.$$
Note that the sequences defined by the two generating functions in Corollary \[cor000\] do not appear in [@Sloa].
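Since these sequences do not appear in the OEIS, their first terms are easy to tabulate directly. The sketch below is our own code: the counting coefficients come from the linear recurrence read off the denominator $1-x-3x^2+x^3$, and they are cross-checked by direct enumeration, using the fact that a Catalan word avoids $000$ exactly when no letter occurs three times.

```python
from collections import Counter

def catalan_words(n):
    """All Catalan words of length n: w[0] = 0 and w[i+1] <= w[i] + 1."""
    words = [(0,)] if n >= 1 else [()]
    for _ in range(n - 1):
        words = [w + (a,) for w in words for a in range(w[-1] + 2)]
    return words

def avoids_000(w):
    # An occurrence of the pattern 000 is just a letter repeated 3 times.
    return all(c <= 2 for c in Counter(w).values())

# Coefficients of (1-2x^2)/(1-x-3x^2+x^3): c_0 = 1, c_1 = 1, c_2 = 2 and
# c_n = c_{n-1} + 3 c_{n-2} - c_{n-3} for n >= 3.
c = [1, 1, 2]
for n in range(3, 10):
    c.append(c[-1] + 3 * c[-2] - c[-3])
# first terms: 1, 1, 2, 4, 9, 19, 42, 90, 197, 425

for n in range(1, 8):
    assert sum(avoids_000(w) for w in catalan_words(n)) == c[n]
```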
\[th100\] For $p=100$, we have $$C_p(x,y)=\frac {1-2x-x^2y+x^3}{1-3x+x^2-x^2y+2x^3}.$$
For $k\geq 1$, we define $\mathcal{A}_k\subset \mathcal{C}(100)$ as the set of Catalan words avoiding $100$ with exactly $k$ zeros, and let $A_k(x,y)$ be the generating function for $\mathcal{A}_k$.
A Catalan word $w\in\mathcal{A}_1$ is of the form $w=0(w'+1)$ with $w'\in \mathcal{C}(100)$. Since we have $d(w)=d(w')$, the generating function $A_1(x,y)$ for these words satisfies $A_1(x,y)=xC_{100}(x,y)$.
A Catalan word $w\in\mathcal{A}_k$, $k\geq 3$, is of the form $w=0^{k-2}w'$ with $w'\in \mathcal{A}_2$. Since we have $d(w)=d(w')$, the generating function $A_k(x,y)$ for these words satisfies $A_k(x,y)=x^{k-2}A_2(x,y)$.
A Catalan word $w\in\mathcal{A}_2$ has one of the three following forms:
\(1) $w=00(w'+1)$ with $w'\in \mathcal{C}(100)$; we have $d(w)=d(w')$, and the generating function for these Catalan words is $x^2C_{100}(x,y)$.
\(2) $w=0(w'+1)0$ with $w'\in \mathcal{C}(100)\setminus\{\epsilon\}$; we have $d(w)=d(w')+1$, and the generating function for these Catalan words is $x^2y(C_{100}(x,y)-1)$.
\(3) $w=0(w'+1)0(w''+1)$ where $w'$ and $w''$ are non-empty and $w'w''\in \mathcal{A}_k$ for some $k\geq 2$ ([*i.e.*]{}, $w'w''=0^{k-2}0(u+1)0(v+1)$ with $0(u+1)0(v+1)\in \mathcal{A}_2$). So, there are $(k-1)$ possible choices for $w'$, namely $0, 0^2, \ldots, 0^{k-2},$ and $0^{k-2}0(u+1)$. If $w'=0, 0^2, \ldots, 0^{k-2}$, then $d(w)=d(0(u+1)0(v+1))+1$; if $w'=0^{k-2}0(u+1)$ and $u\neq \epsilon$, then $d(w)=d(0(u+1)0(v+1))$; if $w'=0^{k-2}0(u+1)$ and $u= \epsilon$, then $d(w)=d(0(u+1)0(v+1))+1$. So, the generating function for these words is $x^2yA_2(x,y)\sum_{k\geq 2} (k-2)x^{k-2} +x^2(A_2(x,y)-x^2C_{100}(x,y))\sum_{k\geq 2} x^{k-2}+x^4yC_{100}(x,y)\sum_{k\geq 2} x^{k-2}$, which is $\frac{x^3y}{(1-x)^2}A_2(x,y)+\frac{x^2}{1-x}A_2(x,y)+\frac{x^4y-x^2}{1-x}C_{100}(x,y)$.
Taking into account all previous cases, we obtain the following functional equations:
1. $A_1(x,y)=x C_{100}(x,y),$
2. $A_2(x,y)= x^2C_{100}(x,y)+ x^2y(C_{100}(x,y)-1)+\frac{x^3y}{(1-x)^2}A_2(x,y)+\frac{x^2}{1-x}A_2(x,y)+\frac{x^4y-x^2}{1-x}C_{100}(x,y),$
3. $A_k(x,y)=x^{k-2}A_2(x,y) \mbox{ for } k\geq 3,$
4. $C_{100}(x,y)=1+\sum_{k\geq 1}A_k(x,y).$
A simple calculation gives the desired result.
\[cor100\] For $p=100$, we have $C_p(x)=\frac {1-2x-x^2+x^3}{1-3x+2x^3}$, which is the generating function for the sequence $\lceil\frac{(1+\sqrt{3})^{n+1}}{12}\rceil$ (see [A057960](https://oeis.org/A057960) in [@Sloa]), and the generating function for the popularity of descents in the sets $\mathcal{C}_n(p)$, $n\geq 0$, is $$\frac {x^3(1-x-x^2)}{(1-3x+2x^3)^2}.$$
\[th110\] For $p=110$, we have $$C_p(x,y)={\frac {1-3x+2x^2+x^3-x^4+x^4y}
{ \left( 1-x \right) \left(1-3x+x^2+2x^3-x^3y\right) }}.$$
Let $w$ be a non-empty word in $\mathcal{C}(110)$, and let $0(w'+1)w''$ be its first return decomposition where $w',w''\in \mathcal{C}(110)$.
Then, $w$ has one of the following forms:
- $w=0(w'+1)$ where $w'\in \mathcal{C}(110)$; the generating function for these words is $xC_{110}(x,y)$.
- $w=0w'$ where $w'\in \mathcal{C}(110)\setminus\{\epsilon\}$; the generating function for these words is $x(C_{110}(x,y)-1)$.
- $w=0(w'+1)w''$ with $w'\in \mathcal{C}(00)\setminus\{\epsilon\}$ and $w''\in \mathcal{C}(10)\setminus\{\epsilon\}$; the generating function for these words is $xy(C_{00}(x,y)-1)(C_{10}(x,y)-1)$.
- The last form is $w=0(w'+1)w''$ where $w'\in \mathcal{C}(00)\setminus\{\epsilon\}$ and $w''\notin\mathcal{C}(10)$. So, we have $w=012\ldots k0^{a_0}1^{a_1}\ldots (k-1)^{a_{k-1}} (w'''+k-1)$ where $k\geq 1$, $a_i\geq 1$ for $0\leq i\leq k-1$, and $w'''\in\mathcal{C}(110)\setminus\mathcal{C}(10)$; the generating function for these words is $y\sum_{k\geq 1}\frac{x^{2k+1}}{(1-x)^k}(C_{110}(x,y)-C_{10}(x,y))$.
Combining these different cases, we deduce the functional equation:
$$\begin{array}{ll}C_{110}(x,y)=&1+xC_{110}(x,y)+x(C_{110}(x,y)-1)+xy(C_{00}(x,y)-1)(C_{10}(x,y)-1)+\\
&y\sum_{k\geq 1} \frac{x^{2k+1}}{(1-x)^k}(C_{110}(x,y)-C_{10}(x,y)).\end{array}$$
Considering Theorems \[th10\] and \[th00\], the result follows.
\[cor110\] For $p=110$, we have $C_p(x)=\frac{1-3x+2x^2+x^3}{(1-x)^2(1-2x-x^2)}$ and the generating function for the popularity of descents in the sets $\mathcal{C}_n(p)$, $n\geq 0$, is $$\frac {x^3(1-x-x^2)^2}
{(1-x)^3(1-2x-x^2)^2}.$$
\[th210\] For $p=210$, we have $$C_p(x,y)=\frac {1-5x+8x^2-x^2y-4x^3+3x^3y-x^4y}
{(1-2x)(1-4x+4x^2-x^2y+x^3y)}.$$
Let $w$ be a non-empty word in $\mathcal{C}(210)$, and let $0(w'+1)w''$ be its first return decomposition where $w',w''\in \mathcal{C}(210)$.
Then, $w$ has one of the following forms:
- $w=0(w'+1)$ where $w'\in \mathcal{C}(210)$; the generating function for these words is $xC_{210}(x,y)$.
- $w=0w''$ where $w''\in\mathcal{C}(210)\setminus\{\epsilon\}$; the generating function for these words is $x(C_{210}(x,y)-1)$.
- $w=0(w'+1)w''$ where $w'\in \mathcal{C}(01)\setminus\{\epsilon\}$ and $w''\in \mathcal{C}(210)\setminus\{\epsilon\}$; the generating function for these words is $xy(C_{01}(x,y)-1)(C_{210}(x,y)-1)$.
- $w=01^{a_1}2^{a_2} \ldots k^{a_k}w ''$ where $k\geq 2$, $a_i\geq 1$ for $1\leq i\leq k$, and $w''\in \mathcal{C}(10)\setminus\{\epsilon\}$; the generating function for these words is $y(C_{10}(x,y)-1)\sum_{k\geq 2} \frac{x^{k+1}}{(1-x)^k}$.
- $w=01^{a_1}2^{a_2} \ldots k^{a_k}0^{b_0}1^{b_1} \ldots (k-2)^{b_{k-2}}(w''+k-2)$ where $k\geq 2$, $a_i\geq 1$ for $1\leq i\leq k$, $b_i\geq 1$ for $0\leq i\leq k-2$, and $w''\in \mathcal{C}(210)\setminus\mathcal{C}(10)$; the generating function for these words is $y(C_{210}(x,y)-C_{10}(x,y))\sum_{k\geq 2} \frac{x^{k+1}}{(1-x)^k}\frac{x^{k-1}}{(1-x)^{k-1}}$.
Combining these different cases, we deduce the functional equation:
$$\begin{array}{ll}C_{210}(x,y)=&1+xC_{210}(x,y)+x(C_{210}(x,y)-1)+ xy(C_{01}(x,y)-1)(C_{210}(x,y)-1)+\\&y(C_{10}(x,y)-1)\sum_{k\geq 2} \frac{x^{k+1}}{(1-x)^k}+
y(C_{210}(x,y)-C_{10}(x,y))\sum_{k\geq 2} \frac{x^{k+1}}{(1-x)^k}\frac{x^{k-1}}{(1-x)^{k-1}}.\end{array}$$
Finally, considering Theorem \[th10\], the desired result follows.
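The two geometric sums and the resulting functional equation can be verified symbolically. The sketch below is our own code; it uses the elementary generating functions $C_{01}(x,y)=1/(1-x)$ (only the words $0^n$ avoid $01$) and $C_{10}(x,y)=(1-x)/(1-2x)$ (the $2^{n-1}$ weakly increasing Catalan words avoid $10$), neither class having descents.

```python
import sympy as sp

x, y, C = sp.symbols('x y C')  # C stands for C_{210}(x, y)

C01 = 1 / (1 - x)
C10 = (1 - x) / (1 - 2 * x)

# Closed forms of the two geometric sums, checked against partial sums
# (the neglected tails start at order x^13 and x^24 respectively):
S1 = x**3 / ((1 - x) * (1 - 2 * x))   # sum_{k>=2} x^(k+1)/(1-x)^k
S2 = x**4 / ((1 - x) * (1 - 2 * x))   # sum_{k>=2} x^(2k)/(1-x)^(2k-1)
p1 = sum(x**(k + 1) / (1 - x)**k for k in range(2, 12))
p2 = sum(x**(2 * k) / (1 - x)**(2 * k - 1) for k in range(2, 12))
assert sp.series(S1 - p1, x, 0, 12).removeO() == 0
assert sp.series(S2 - p2, x, 0, 12).removeO() == 0

# The functional equation of the proof, solved for C:
eq = sp.Eq(C, 1 + x * C + x * (C - 1) + x * y * (C01 - 1) * (C - 1)
              + y * (C10 - 1) * S1 + y * (C - C10) * S2)
sol = sp.solve(eq, C)[0]
target = ((1 - 5*x + 8*x**2 - x**2*y - 4*x**3 + 3*x**3*y - x**4*y)
          / ((1 - 2*x) * (1 - 4*x + 4*x**2 - x**2*y + x**3*y)))
assert sp.cancel(sol - target) == 0
```

Setting $y=1$ in the solution recovers the univariate generating function of Corollary \[cor210\].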
\[cor210\] For $p=210$, we have $C_p(x)=\frac {1-5x+7x^2-x^3-x^4}{(1-2x)
(1-4x+3x^2+x^3)}$ and the generating function for the popularity of descents in the set $\mathcal{C}_n(p)$, $n\geq 0$, is $$\frac{x^3(1-2x)}{(1-4x+3x^2+x^3)^2}.$$
  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Pattern $p$           Sequence $|\mathcal{C}_n(p)|$                                                          Generating function                                   OEIS [@Sloa]
  --------------------- -------------------------------------------------------------------------------------- ----------------------------------------------------- -------------------------------------
  $012$, $001$, $010$   $2^{n-1}$                                                                              $\frac{1-x}{1-2x}$                                    [A011782](https://oeis.org/A011782)
  $021$                 $(n-1)\cdot 2^{n-2}+1$                                                                 $\frac{1-4x+5x^2-x^3}{(1-x)(1-2x)^2}$                 [A005183](https://oeis.org/A005183)
  $102$, $201$          $\frac{3^{n-1}+1}{2}$                                                                  $\frac{1-3x+x^2}{(1-x)(1-3x)}$                        [A007051](https://oeis.org/A007051)
  $120$, $101$          $F_{2n-1}$                                                                             $\frac{1-2x}{1-3x+x^2}$                               [A001519](https://oeis.org/A001519)
  $011$                 $\frac{n(n-1)}{2}+1$                                                                   $\frac{1-2x+2x^2}{(1-x)^3}$                           [A000124](https://oeis.org/A000124)
  $000$                                                                                                        $\frac{1-2x^2}{1-x-3x^2+x^3}$                         
  $100$                 $\lceil\frac{(1+\sqrt{3})^{n+1}}{12}\rceil$                                            $\frac{1-2x-x^2+x^3}{1-3x+2x^3}$                      [A057960](https://oeis.org/A057960)
  $110$                 $\frac{1}{2}\,\sum_{k=0}^{\lfloor \frac{n}{2}\rfloor}{n+1 \choose 2k+1}2^{k}-\frac{n-1}{2}$   $\frac{1-3x+2x^2+x^3}{(1-x)^2(1-2x-x^2)}$      
  $210$                                                                                                        $\frac{1-5x+7x^2-x^3-x^4}{(1-2x)(1-4x+3x^2+x^3)}$     
  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
: \[Tab1\]Catalan words avoiding a pattern of length three.
  -------------- ------------------------------------------------ ------------------------------------------------ -------------------------------------
  Pattern $p$    Popularity of descents on $\mathcal{C}_n(p)$     Generating function                              OEIS [@Sloa]
  -------------- ------------------------------------------------ ------------------------------------------------ -------------------------------------
  $012$, $001$   $(n-2)\cdot 2^{n-3}$                             $\frac{x^3}{(1-2x)^2}$                           [A001787](https://oeis.org/A001787)
  $010$          $0$                                              $0$                                              
  $021$          $(n+1)(n-2)\cdot 2^{n-5}$                        $\frac{x^3(1-x)}{(1-2x)^3}$                      [A001793](https://oeis.org/A001793)
  $102$, $201$   $(n-2)\cdot 3^{n-3}$                             $\frac{x^3}{(1-3x)^2}$                           [A027471](https://oeis.org/A027471)
  $120$, $101$   $\sum_{k=1}^{n-2} k\cdot {{n+k-2}\choose{2k}}$   $\frac{x^3(1-x)}{(1-3x+x^2)^2}$                  [A001870](https://oeis.org/A001870)
  $011$          $\frac{(n-1)(n-2)}{2}$                           $\frac{x^3}{(1-x)^3}$                            [A000217](https://oeis.org/A000217)
  $000$                                                           $\frac{x^3(1-x)(1+2x)(1+x)}{(1-x-3x^2+x^3)^2}$   
  $100$                                                           $\frac{x^3(1-x-x^2)}{(1-3x+2x^3)^2}$             
  $110$                                                           $\frac{x^3(1-x-x^2)^2}{(1-x)^3(1-2x-x^2)^2}$     
  $210$                                                           $\frac{x^3(1-2x)}{(1-4x+3x^2+x^3)^2}$            
  -------------- ------------------------------------------------ ------------------------------------------------ -------------------------------------
: \[Tab2\]Popularity of descents on Catalan words avoiding a pattern of length three.
Final remarks
=============
At the time of writing this paper, the enumerating sequences $(|\mathcal{C}_n(p)|)_{n\geq 0}$, for $p\in\{000,110,210\}$, are not recorded in [@Sloa], and it would be interesting to explore potential connections of these sequences with other known ones.
According to Theorem \[th012\], for any $k\geq 0$, the set of fixed length Catalan words with $k$ descents avoiding $p=001$ is equinumerous with the set of those avoiding $q=012$, and a natural question that arises is to find a constructive bijection between the two sets; and similarly for $(p,q)=(102,201)$, see Theorem \[th102\], and for $(p,q)=(101, 120)$, see Theorem \[th120\]. In the same vein, some of the enumerating sequences obtained in this paper count classical combinatorial classes (see Tables \[Tab1\] and \[Tab2\]) and these results deserve bijective proofs.
Finally, our initial study of pattern avoidance in Catalan words can naturally be extended to patterns of length greater than three, vincular patterns and/or multiple pattern avoidance. For example, some of the patterns we considered here always occur inside longer patterns (for instance, an occurrence of $210$ in a Catalan word is part of an occurrence of $01210$), and some of our results can be restated in this light.
[10]{}
M. Albert, C. Homberger, and J. Pantone. Equipopularity classes in the separable permutations. , 22(2):P2.2, 2015. (electronic).
J.-L. Baril, S. Kirgizov, and V. Vajnovszki. Patterns in treeshelves. , 340(12):2946–2954, 2017.
F. Bergeron, N. Bergeron, R.B. Howlett, and D.E. Taylor. A decomposition of the descent algebra of a finite [C]{}oxeter group. , 1:23–44, 1992.
M. B[ó]{}na. Surprising symmetries in objects counted by [C]{}atalan numbers. , 19(1):P62, 2012. (electronic).
E. Deutsch. Dyck path enumeration. , 204:167–202, 1999.
I. Gessel and G. Viennot. Binomial determinants, paths, and hook length formulae. , 58:300–321, 1985.
C. Homberger. Expected patterns in permutation classes. , 19(3):P43, 2012. (electronic).
E. Mäkinen. Left distance binary tree representations. , 27(2):163–169, 1987.
T. Mansour and V. Vajnovszki. Efficient generation of restricted growth words. , 113:613–616, 2013.
K. Rudolph. Pattern popularity in $132$-avoiding permutations. , 20(1):P8, 2013. (electronic).
N.J.A. Sloane. The on-line encyclopedia of integer sequences. Available electronically at [http://oeis.org]{}.
R.P. Stanley. , volume 2. Cambridge University Press, 1999.
---
abstract: 'We trace the evolution of the theory of stochastic partial differential equations from the foundation to its development, until the recent solution of long-standing problems on well-posedness of the KPZ equation and the stochastic quantization in dimension three.'
address: |
Laboratoire de Probabilités, Statistique et Modélisation\
Sorbonne Université, Université de Paris, CNRS\
4 Place Jussieu, 75005 Paris, France
author:
- Lorenzo Zambotti
bibliography:
- 'DCDS.bib'
title: A brief and personal history of stochastic partial differential equations
---
[*Keywords:* Stochastic partial differential equations]{}\
[*MSC classification:* 60H15]{}
Introduction
============
In September 2017 I attended a meeting in Trento in honor of Luciano Tubaro, who was retiring. Mimmo Iannelli gave a humorous and affectionate talk whose title was [*Abstract stochastic equations: when we used to study in Rome’s traffic jams*]{}. He talked about the ’70s, when he and Luciano were the first students of Giuseppe Da Prato, who around 1975 proposed that they work on a brand-new topic: stochastic partial differential equations. Since I was myself a PhD student of Da Prato’s in the late ’90s, on that day in Trento I was being told the story of the beginning of our scientific family.
Then, a month later, I was at the Fields Institute in Toronto for a conference in honor of Martin Hairer, who had been awarded in 2014 a Fields medal [*“for his outstanding contributions to the theory of stochastic partial differential equations, and in particular for the creation of a theory of regularity structures for such equations”*]{} (the official citation of the International Mathematical Union).
Within a few weeks I was therefore confronted with a vivid representation of the beginning of SPDEs and with a celebration of their culminating point so far. I realised that, because of Hairer’s Fields medal, the mathematical community was suddenly aware of the existence of SPDEs, although very little was commonly known about them.
For example, during his laudatio which introduced Hairer’s talk at the 2014 International Congress of Mathematicians in Korea, Ofer Zeitouni felt the need to say to the audience [*“I guess that many of you had never heard about stochastic partial differential equations”*]{}. The other three Fields medals in 2014 were awarded for work in, respectively, dynamical systems, Riemann surfaces and number theory. Certainly there was no need to introduce these topics to the mathematicians attending the ICM. However, after forty years of work, with thousands of published papers and hundreds of contributors, SPDEs were still unknown to a large portion of the mathematical world.
I decided to dedicate my talk in Toronto not just to Hairer’s achievements, but to the whole community that had formed and nurtured him. In the last two years I have given several times this talk in different occasions. This special issue of DCDS gives me the opportunity to write down the few thoughts I have to share about this topic, in the hope that someone else may continue this work and enrich this tale with other points of view. I will make no claim to exhaustivity: the topic is vast and I know only a fraction of the literature. I wish to explain the origin and the development of SPDEs from my personal point of view, and I apologise in advance for the aspects of this story that I will fail to explain properly or even mention. I encourage anyone wishing to see this tale completed or told differently and better to do so and continue the work I am starting.
The beginnings
==============
In principle, a Stochastic Partial Differential Equation (SPDE) is a Partial Differential Equation (PDE) which is perturbed by some random external force. This definition is however too general: if we have a PDE with some random coefficients, where the randomness appears as a parameter and the equation can be set and solved with classical analytic arguments, then one speaks rather of a *random PDE*; this is the case for example of a (deterministic) PDE with a random initial condition.
A SPDE is, more precisely, a PDE which contains some stochastic process (or field) and cannot be defined with standard analytic techniques; typically such equations require some form of stochastic integration. In most of the cases, the equation is a classical PDE perturbed by adding a random external forcing. One of the first examples is the following *stochastic heat equation with additive noise* $$\label{she}
\frac{\partial u}{\partial t} = \Delta u +\xi$$ where $u=(u(t,x))_{t\geq 0,x\in{\mathbb{R}}^d}$ is the unknown solution and $\xi=(\xi(t,x))$ is the random external force. Then one can add non-linearities and, in some cases, multiply the external force by a coefficient which depends on the unknown solution, for example $$\label{she2}
\frac{\partial u}{\partial t} = \Delta u +f(u)+\sigma(u)\,\xi$$ where $f,\sigma:{\mathbb{R}}\to{\mathbb{R}}$ are smooth. The product $\sigma(u)\, \xi$ is not always well-defined, since in many cases of interest $\xi$ is a *generalised function* and $u$ is not expected to be smooth; in this case one writes the equation in an integral form and uses Itô integration to give a sense to the stochastic term.
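These equations can be made concrete numerically with a standard explicit finite-difference Euler scheme. In the discretization of space-time white noise, each grid cell of size $\Delta t\times\Delta x$ carries an independent centred Gaussian of variance $\Delta t\,\Delta x$; averaging in space then yields a per-step noise increment $\sqrt{\Delta t/\Delta x}\,N(0,1)$ at each node. The sketch below is our own minimal code, not taken from any of the works discussed here; the parameter choices are illustrative.

```python
import numpy as np

def simulate_she(f, sigma, n_space=64, n_steps=2000, T=0.1, seed=0):
    """Explicit Euler--Maruyama scheme for du = (u_xx + f(u)) dt + sigma(u) dW
    on [0,1], zero Dirichlet boundary conditions, zero initial datum.
    The white-noise mass on a cell of size dt*dx has standard deviation
    sqrt(dt*dx); dividing by dx (spatial averaging) gives the per-step noise
    term sqrt(dt/dx) * N(0,1) at each interior node."""
    rng = np.random.default_rng(seed)
    dx, dt = 1.0 / n_space, T / n_steps
    assert dt <= dx * dx / 2, "explicit scheme needs dt <= dx^2 / 2"
    u = np.zeros(n_space + 1)
    for _ in range(n_steps):
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        xi = rng.standard_normal(n_space - 1)
        u[1:-1] += dt * (lap + f(u[1:-1])) + sigma(u[1:-1]) * np.sqrt(dt / dx) * xi
    return u

# Additive noise (sigma = 1) with a damping non-linearity f(u) = -u^3:
u = simulate_she(f=lambda v: -v**3, sigma=lambda v: np.ones_like(v))
```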
The idea of associating PDEs and randomness was already present in the physics literature in the ’50s and ’60s, see for example [@spiegel52; @lyon60; @chen64; @gibson67]. In the mathematical literature, several authors extended Itô’s theory of stochastic differential equations (SDE) to a Hilbert space setting, see for example Daleckiĭ [@dalecki66] and Gross [@gross67]. In a paper published in 1969 [@zakai69], Zakai wrote that the unnormalised conditional density in a filtering problem satisfies a linear SPDE.
However, to my knowledge, the first papers which studied explicitly a SPDE as a problem in its own right appeared in the ’70s. In 1970 Cabaña [@cabana70] considered a linear wave equation $$\frac{\partial^2 u}{\partial t^2} + 2b\, \frac{\partial u}{\partial t}= \Delta u +\xi$$ with a *space-time white noise* $\xi$ and one-dimensional space variable $x$. This is a very important particular choice for the random external force: it is given by a random generalised function $\xi$ which is *Gaussian* and has very strong independence properties, namely the “values” at different points in space-time are independent.
In 1972 three papers were published on the topic: two in France (Bensoussan-Temam [@bt72] and Pardoux [@pardoux72]) and one in Canada (Dawson [@dawson72]). The French school was strongly influenced by the PDE methods of the time, championed by Jacques-Louis Lions and his collaborators. Bensoussan and Temam [@bt72] considered an evolution equation driven by a monotone non-linear operator $A_t$ $$\frac{{\rm d}y}{{\rm d}t} + A_t(y)=\xi$$ and with an external forcing $\xi$ which we can call now *white in time and coloured in space*; this means that values of the noise on points with different time-coordinate are independent, but there is a non-trivial correlation in space. In [@pardoux72] Pardoux considered a similar problem with multiplicative noise $$\frac{{\rm d}y}{{\rm d}t} + A_t(y)=B_t(y)\,\xi$$ where $B_t$ is a non-linear operator and the stochastic term is treated with the Itô integration theory. In 1975 Pardoux defended his PhD thesis written under the supervision of Bensoussan and Temam, which is considered the first extended work on the topic.
Dawson’s paper [@dawson72] has a more probabilistic flavour. It treats the stochastic heat equations \[she\] and \[she2\] with one-dimensional space variable $x$ and space-time white noise $\xi$; it shows that the solution $u$ to the linear equation is almost-surely continuous in $(t,x)$ (this is false in higher dimension, as we are going to see below); moreover, it introduces the non-linear equation with the coefficient $\sigma(u)=\sqrt{u}$, which will soon become famous as the equation of the Super-Brownian motion (for $f=0$).
In the following years more and more researchers got interested in SPDEs. In particular, the Italian and Russian schools were founded, respectively, in 1976 with Da Prato’s first paper [@dpit76] on the topic (together with his students Iannelli and Tubaro) and between 1974 and 1977 with Rozovskiĭ’s papers [@rozovski74; @rozovski75] and Krylov-Rozovskiĭ’s [@kr77].
The physical models
===================
In the ’80s some theoretical physicists published a few very influential papers based on applications of SPDEs to several important physical problems: Parisi-Wu’s [@pw81] and Jona Lasinio-Mitter’s [@jlm85] on the *stochastic quantization*, and the Kardar-Parisi-Zhang model for the *dynamical scaling of a growing interface* [@kpz86]. All these papers would be, thirty years later, an important motivation for the theory of regularity structures, see below.
The Stochastic Quantization
---------------------------
The 1981 paper [@pw81] by Parisi and Wu proposed a dynamical approach to the construction of probability measures which arise in Euclidean Quantum Field Theory. The difficulty with such measures is that they are supposed to be supported by spaces of *distributions* (generalised functions) on ${\mathbb{R}}^d$, which makes the definition of *non-linear* densities problematic. For example one would like to consider a measure on the space of distributions ${\mathcal D}'([0,1]^d)$ of the form $$\mu({\rm d}\phi)=\frac1Z \exp\left(-\int_{[0,1]^d} V(\phi(x))\,{\rm d}x\right) {\mathcal N}(0,(1-\Delta)^{-1})({\rm d}\phi)$$ where ${\mathcal N}(0,(1-\Delta)^{-1})$ is a Gaussian measure with covariance operator $(1-\Delta)^{-1}$, with $\Delta$ the Laplace operator on $[0,1]^d$ with suitable boundary conditions, and $V:{\mathbb{R}}\to{\mathbb{R}}$ is some potential. If $d>1$ then ${\mathcal N}(0,(1-\Delta)^{-1})$-a.s. $\phi$ is a distribution and not a function, and the non-linearity $V(\phi)$ is therefore ill-defined. Parisi-Wu introduce a stochastic partial differential equation $$\label{eq:pw}
\frac{\partial u}{\partial t}= \Delta u -u-\frac12\,V'(u) + \xi, \qquad x\in[0,1]^d$$ which has $\mu$ as invariant measure, namely if $u(0,\cdot)$ has law $\mu$, then so has $u(t,\cdot)$ for all $t\geq 0$. This is an infinite-dimensional analog of the classical *Langevin dynamics*. By the ergodic theorem, for a generic initial condition $u(0,\cdot)$, the distribution of $u(t,\cdot)$ converges to $\mu$ as $t\to+\infty$. Therefore one can use the stochastic dynamical system $(u(t,\cdot))_{t\geq 0}$ in order to obtain useful information on $\mu$.
We note however that, for $d>1$, the solution to \[eq:pw\] is expected to be again a distribution on space-time; at least this is the case for the linear equation with $V'\equiv 0$. Therefore a rigorous study of this equation is also problematic, since $V'(u)$ is again ill-defined.
The first rigorous paper on the Parisi-Wu programme was by Jona Lasinio-Mitter [@jlm85], where the authors chose the non-linearity $V(\phi)=\phi^4$ and the space dimension $d=2$, in order to construct the continuum $\phi^4_2$ model of Euclidean Quantum Field Theory [@simon74; @gj87], and called this equation the *stochastic quantization*. Jona Lasinio-Mitter studied a modified version of equation \[eq:pw\] and obtained probabilistically weak solutions via a Girsanov transformation; strong solutions to \[eq:pw\] were obtained in a later paper by Da Prato-Debussche [@dpd03], see below. The case of space dimension $d=3$ remained however open until the inception of regularity structures.
The KPZ equation
----------------
The Kardar-Parisi-Zhang (KPZ) equation [@kpz86] is the following SPDE $$\label{eq:kpz}
\frac{\partial h}{\partial t}= \nu\Delta h +\lambda|\nabla h|^2 + \xi, \qquad x\in{\mathbb{R}}^d$$ and describes the fluctuations around a deterministic profile of a randomly growing interface, where $\nabla$ is the gradient with respect to the space variable $x$.
From an analytic point of view, even if $d=1$ the KPZ equation is very problematic: if we consider the case $\lambda=0$ then we are back to the stochastic heat equation with additive white noise \[she\], for which it is known that the solution $u$ is not better than Hölder-continuous in $(t,x)$ and certainly not differentiable; we expect $h$ in \[eq:kpz\] to have at best the same regularity as $u$. In particular the gradient in space $\nabla h$ is defined only as a distribution and the term $(\nabla h)^2$ is ill-defined. We restrict ourselves for simplicity to the case $\nu=\lambda=1/2$.
In the original KPZ paper [@kpz86] it was noticed that one can *linearize* \[eq:kpz\] by means of the *Cole-Hopf transformation*: if we define $\psi=(\psi(t,x))_{t\geq 0,x\in{\mathbb{R}}}$ as the unique solution to the equation $$\label{eq:colehopf}
\frac{\partial \psi}{\partial t}= \frac12\frac{\partial^2 \psi}{\partial x^2} +\psi \, \xi, \qquad x\in{\mathbb{R}},$$ which is called the *stochastic heat equation with multiplicative noise*, then $h:=\log\psi$ (formally) solves \[eq:kpz\].
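The algebra behind the Cole-Hopf transformation is elementary and can even be checked symbolically. The sketch below is our own code; it works with smooth functions and the ordinary chain rule, so it deliberately ignores the Itô correction, which is precisely the source of the infinite constant discussed next.

```python
import sympy as sp

t, x = sp.symbols('t x')
psi = sp.Function('psi')(t, x)
h = sp.log(psi)

# KPZ operator applied to h = log(psi), with nu = lambda = 1/2:
kpz = (sp.diff(h, t) - sp.Rational(1, 2) * sp.diff(h, x, 2)
       - sp.Rational(1, 2) * sp.diff(h, x) ** 2)
# Heat operator applied to psi, divided by psi (the multiplicative forcing):
heat = (sp.diff(psi, t) - sp.Rational(1, 2) * sp.diff(psi, x, 2)) / psi

# The chain rule makes the quadratic terms cancel exactly, so if psi solves
# the multiplicative stochastic heat equation, h formally solves KPZ:
assert sp.simplify(kpz - heat) == 0
```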
In the first mathematical paper on KPZ, Bertini-Cancrini [@bc95] studied in 1995 the stochastic heat equation \[eq:colehopf\] in the Itô sense for $d=1$. Since Mueller [@mueller91] had proved that a.s. $\psi(t,x)>0$ for all $t>0$ and $x\in{\mathbb{R}}$, then the Cole-Hopf solution $h=\log\psi$ is indeed well-defined. Bertini-Giacomin [@bg97] proved in 1997 that the stationary Cole-Hopf solution is the scaling limit of a particle system, the weakly-asymmetric simple exclusion process (WASEP); this celebrated result was the first example of the *KPZ universality class*, see below.
Since \[eq:colehopf\] is to be interpreted in the Itô sense, one can apply the Itô formula to $h=\log\psi$ and the result is, at least formally, that $h$ solves $$\label{eq:kpz2}
\frac{\partial h}{\partial t}= \frac12\,\frac{\partial^2 h}{\partial x^2} +\frac12\left[(\partial_x h)^2-\infty\right] + \xi, \qquad x\in{\mathbb{R}},$$ which is almost \[eq:kpz\], apart from the appearance of the famous infinite constant which is supposed to *renormalize* the ill-defined term $(\partial_x h)^2$. Making sense of this renormalization and constructing a well-posedness theory for such equations remained however open problems for over 15 years until Hairer’s breakthrough [@hairer13], see below.
We note that the KPZ equation, and in particular its *universality class*, has been one of the most fertile topics in probability theory of the last decade, with connections to particle systems, random matrices, integrable probability, random polymers and much else. See the surveys by Quastel [@quastel12] and Corwin [@corwin16] for more details.
Superprocesses
--------------
SPDEs have also been applied to *biological systems*, in particular in the context of the so-called *superprocesses* introduced by Watanabe and Dawson in the ’70s. Superprocesses are limits of discrete population models of the following type: particles evolve in the space ${\mathbb{R}}^d$ following some Markovian dynamics, typically Brownian motion, independently of each other; at random exponential times each particle dies and is replaced by a random number of identical particles, which become new elements of the population and behave as all other particles. We refer to the Saint-Flour lecture notes by Dawson [@dawson93] and Perkins [@perkins02] for pedagogical introductions to this topic.
The total number of members of the population which are alive at time $t\geq 0$ follows a standard branching process and is independent of the motion of the particles. Therefore there are three situations, depending on the value $m$ of the average number of descendants that a particle has when it dies: if $m>1$ the population grows at an exponential rate, if $m<1$ it dies after a finite and integrable time, if $m=1$ it dies after a finite but non-integrable time. The three situations are called, respectively, *supercritical, subcritical* and *critical*.
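The critical case is easy to illustrate with a toy simulation (our own code; we take the offspring law giving $0$ or $2$ children with probability $1/2$ each, so that $m=1$): extinction occurs almost surely, but the extinction time has a heavy tail, the survival probability up to generation $n$ decaying like a constant over $n$.

```python
import random

def extinction_time(offspring, max_gen=1000, seed=0):
    """Generations until a branching population started from one particle
    dies out; None if it is still alive after max_gen generations."""
    rng = random.Random(seed)
    pop = 1
    for gen in range(1, max_gen + 1):
        pop = sum(offspring(rng) for _ in range(pop))
        if pop == 0:
            return gen
    return None

# Critical offspring law: 0 or 2 children, each with probability 1/2 (m = 1).
critical = lambda rng: 2 * rng.randrange(2)

times = [extinction_time(critical, seed=s) for s in range(100)]
died = [t for t in times if t is not None]
# Essentially every run dies out, about half of them in the very first
# generation, but a few runs survive for a comparatively long time.
```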
The critical case, with Brownian spatial motion, has a scaling limit which is a Markov process with values in the space of measures on the state space ${\mathbb{R}}^d$; this process is called the *super-Brownian motion*. If $d=1$, then Konno-Shiga [@ks88] proved in 1988 that a.s. this random measure has a continuous density $X_t(x)$ with respect to the Lebesgue measure ${\rm d}x$ on ${\mathbb{R}}$, and $(X_t(x))_{t\geq 0,x\in{\mathbb{R}}}$ solves the SPDE $$\label{eq:sbm}
\frac{\partial X}{\partial t}= \frac12\,\frac{\partial^2 X}{\partial x^2} + \sqrt{X}\,\xi.$$ The diffusion coefficient of this equation, already introduced by Dawson in [@dawson72], does not satisfy the usual Lipschitz condition and, indeed, *pathwise uniqueness* for \[eq:sbm\] is still an open problem, see the papers by Mytnik-Perkins [@mp11] and Mueller-Mytnik-Perkins [@mmp14]. More precisely, the situation is the following: we consider the SPDE $$\label{eq:sigmaholder}
\frac{\partial X}{\partial t}= \frac12\,\frac{\partial^2 X}{\partial x^2} + \sigma(X)\,\xi,$$ with $\sigma:{\mathbb{R}}\to{\mathbb{R}}$ a Hölder function with exponent $\gamma\in\,]0,1[$, namely $|\sigma(x)-\sigma(y)|\leq C|x-y|^\gamma$, and one looks in general for solutions with values in ${\mathbb{R}}$, rather than in ${\mathbb{R}}_+$; in particular, for equation \[eq:sbm\] one would have $\sigma(u)=\sqrt{|u|}$. Then:
- if $\gamma>3/4$ we have pathwise uniqueness, namely if we have two solutions $(X^1,\xi)$ and $(X^2,\xi)$ to driven by the same noise $\xi$ with $X^1(0,\cdot)=X^2(0,\cdot)$ a.s., then $X^1\equiv X^2$ almost surely
- if $\gamma<3/4$ then pathwise uniqueness fails in general and there are counterexamples
- if $\sigma(0)=0$ and one is interested only in the class of *non-negative* solutions, then it is not known whether pathwise uniqueness holds or fails in this class for $\gamma<3/4$. This leaves in particular the hope that the equation for super-Brownian motion may satisfy pathwise uniqueness. However, for the related equation of super-Brownian motion *with immigration*, pathwise non-uniqueness was proved by Chen in [@chen15].
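The open uniqueness questions do not prevent numerical experimentation with the equation with $\sigma(u)=\sqrt{|u|}$. Below is a crude explicit finite-difference sketch of my own (grid sizes, time horizon and initial condition are arbitrary choices); it is purely heuristic and no claim is made that the scheme converges to a solution in any of the senses discussed above.

```python
import numpy as np

def simulate_sqrt_she(n=64, steps=500, T=0.05, seed=0):
    """Explicit finite-difference sketch for dX = (1/2) X_xx dt + sqrt(|X|) dW
    on [0,1] with Dirichlet boundary conditions (heuristic, no convergence claim)."""
    rng = np.random.default_rng(seed)
    dx, dt = 1.0 / n, T / steps
    assert dt <= 0.5 * dx**2                  # stability condition for the explicit scheme
    x = np.linspace(0.0, 1.0, n + 1)
    X = np.sin(np.pi * x)                     # smooth nonnegative initial condition
    for _ in range(steps):
        lap = np.zeros_like(X)
        lap[1:-1] = (X[2:] - 2.0 * X[1:-1] + X[:-2]) / dx**2
        # discrete space-time white noise: independent N(0, dt/dx) per grid cell
        noise = rng.standard_normal(X.shape) * np.sqrt(dt / dx)
        X = X + 0.5 * lap * dt + np.sqrt(np.abs(X)) * noise
        X[0] = X[-1] = 0.0
    return X
```

The absolute value inside the square root mirrors the convention $\sigma(u)=\sqrt{|u|}$ above, since the discretized solution need not stay nonnegative.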
If the state space ${\mathbb{R}}^d$ has dimension greater than or equal to 2, then a.s. the measure $X_t({\rm d}x)$ is singular with respect to the Lebesgue measure (see [@dh79]), but the equation is still well-defined as a *martingale problem*, since the diffusion coefficient $\sigma(x)=\sqrt{x}$ has the special property that $\sigma^2(x)=x$ is linear. Remarkably, this martingale problem is well-posed and one can prove uniqueness in law of these superprocesses using a technique called *duality* due to Watanabe [@watanabe68], see the cited paper by Konno-Shiga [@ks88]; duality can also be applied to prove uniqueness for other processes, see the works of Shiga [@shiga81; @shiga87] and Mytnik [@mytnik96].
Finally, we mention that superprocesses are related to Le Gall’s Brownian snake, see [@legall99], which also plays a crucial role in the context of planar random maps, see e.g. Miermont’s lecture notes [@miermont].
The theory
==========
During the ’80s and the ’90s, several monographs were published with the aim of presenting a systematic theory of SPDEs.
The first major monograph was Walsh’s Saint-Flour lecture notes [@walsh86], which were published in 1986. In this course Walsh proposed a general approach to SPDEs which has been very influential; his point of view has a very probabilistic flavour, since it consists in regarding the solution $u=u(t,x)$ of a (parabolic or hyperbolic) SPDE as a *multi-parameter process*, or more generally a *multi-parameter random field*. The stochastic integration with respect to space-time white noise is developed according to this point of view, considering $t\mapsto \xi(t,\cdot)$ as a so-called *martingale measure*, thus generalizing the Itô theory. We have used Walsh’s notations for the equations numbered from to above, and for others below.
In 1992 the first book by Da Prato-Zabczyk [@dpz1] was published. This monograph, also known as the *red book* among Da Prato’s students, is still the reference text for the so-called *semigroup approach* to SPDEs. Da Prato-Zabczyk’s point of view is to treat an SPDE as an infinite-dimensional SDE, and the solution $u=u(t,\cdot)$ as a function-valued process with a single parameter, the time $t$. The notations are different from those of Walsh; for example the stochastic heat equation with additive space-time white noise is written as $$\d X=AX\d t+\d W$$ where $X_t=u(t,\cdot)\in L^2({\mathbb{R}})= H$, $A:D(A)\subset H\to H$ is the realization of $\partial^2_x$ in $H$, $(W_t)_{t\geq 0}$ is a *cylindrical Wiener process*. The solution to this equation is called the *stochastic convolution* and is written explicitly as $$X_t=e^{tA}X_0+\int_0^t e^{(t-s)A}\d W_s, \qquad t\geq 0.$$ The general SPDE with non-linear coefficients is written as $$\d X=(AX+F(X))\d t+\Sigma(X)\d W$$ where $F:D(F)\subseteq H\to H$ is some non-linear function and $\Sigma$ is a map from $H$ to the linear operators in $H$. This approach has a more functional-analytical flavour, and is based mainly on the study of the properties of the semigroup $(e^{tA})_{t\geq 0}$ generated by $A$ in $H$, and their interplay with the properties of the cylindrical Wiener process $W$. This non-linear equation is usually written in its *mild formulation* $$X_t=e^{tA}X_0+\int_0^t e^{(t-s)A}\,F(X_s)\d s+\int_0^t e^{(t-s)A}\,\Sigma(X_s)\d W_s.$$
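To make the semigroup picture concrete, here is a small sketch of mine (not code from the monograph): taking $A$ to be the Dirichlet Laplacian on $(0,\pi)$, the stochastic convolution at a fixed time can be sampled exactly mode by mode, since in the eigenbasis $e_k(x)=\sqrt{2/\pi}\sin(kx)$ each coefficient is an independent Ornstein-Uhlenbeck integral with an explicitly known Gaussian law; truncating the series to finitely many modes is the only approximation.

```python
import numpy as np

def stochastic_convolution(t=1.0, n_modes=200, n_x=128, seed=0):
    """Sample X_t = int_0^t e^{(t-s)A} dW_s for A = d^2/dx^2 with Dirichlet
    boundary conditions on (0, pi): in the eigenbasis e_k = sqrt(2/pi) sin(kx),
    the k-th coefficient is Gaussian with variance (1 - e^{-2 k^2 t}) / (2 k^2)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, np.pi, n_x)
    k = np.arange(1, n_modes + 1)
    var = (1.0 - np.exp(-2.0 * k**2 * t)) / (2.0 * k**2)   # exact OU variance per mode
    coeffs = rng.standard_normal(n_modes) * np.sqrt(var)
    basis = np.sqrt(2.0 / np.pi) * np.sin(np.outer(k, x))  # orthonormal basis of L^2(0, pi)
    return x, coeffs @ basis
```

The sampled field has the exact law of the truncated stochastic convolution started from $X_0=0$; the slow decay $\mathrm{var}_k \sim 1/(2k^2)$ is what makes $X_t$ only Brownian-like in space.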
During the ’90s there was also an important activity on *infinite-dimensional analysis*, namely on elliptic and parabolic PDEs where the space-variable belongs to a Hilbert space. The connection with SPDEs is given by the notion of *infinitesimal generator* which is associated with a Markov process with continuous paths. As for finite-dimensional diffusions, the transition semigroup of the solution to a SPDE solves a parabolic equation, known as *Kolmogorov equation*. One can find a systematic theory of these operators in the third book by Da Prato-Zabczyk [@dpz3]. Much work was dedicated to existence and uniqueness of invariant measures, see the next section; the second Da Prato-Zabczyk book was entirely dedicated to this topic [@dpz2].
Recall that Itô introduced his notion of stochastic differential equation in order to give a probabilistic representation of the solution to Kolmogorov equations. Vice versa, if the Kolmogorov equation is well-posed, then it is possible to construct the law of the associated Markov process. This makes it possible to construct *weak* (in the probabilistic sense) solutions, especially in the form of *martingale solutions*; see the 1979 monograph by Stroock-Varadhan [@sv79] on the theory for finite-dimensional diffusions.
The construction of the transition semigroup of a Markov process on a locally compact space can also be done with another analytical tool, a *Dirichlet form*, for which a theory was developed in particular by Fukushima, see the monographs [@fukushima80; @fot11]. The state space of a SPDE is however always a function space, and therefore infinite-dimensional. The extension of Fukushima’s theory to non locally compact spaces was a project of Albeverio-H[ø]{}egh-Krohn [@ah77] since the ’70s and was finally achieved by Ma-Röckner [@mr92]. Although Dirichlet forms allow one to construct only weak solutions, they are a powerful tool in very singular situations, where pathwise methods are often ineffective.
Another approach to SPDEs is given by Krylov’s $L^p$-theory, see for example [@krylov94].
Ergodicity of Navier-Stokes
===========================
The Navier-Stokes equation for the flow of an incompressible fluid is one of the most prominent PDEs and it is therefore not surprising that its stochastic version was among the first SPDEs to be studied, starting from the 1973 paper [@bt73] by Bensoussan-Temam. The equation has the form (in Walsh’s notation) $$\frac{\partial u}{\partial t}+(\nabla u)\cdot u= \nu\Delta u -\nabla p + \xi, \qquad {\rm div}\ u=0,$$ where $u(t,x)\in{\mathbb{R}}^d$ denotes the value of the velocity of the fluid at time $t\geq 0$ and position $x\in{\mathbb{R}}^d$, $p(t,x)$ is the pressure, $\nu>0$ and $\xi$ is an external noise whose structure will be made precise below.
The statistical approach to hydrodynamics is based on the assumption that the fluid has a stationary state (invariant measure) on the phase space; by the ergodic theorem, the time average of an observable computed along the dynamics converges for large times to the average of the observable with respect to the invariant measure. This ergodicity property must however be proved, and in the case of the stochastic Navier-Stokes equation in 2D this has been a very active area of research, at least between the 1995 paper by Flandoli-Maslowski [@fm95] and the 2006 paper by Hairer-Mattingly [@hm06].
Ellipticity versus hypoellipticity
----------------------------------
For stochastic differential equations in general, the choice of the external noise plays a very important role. In most of the literature on SPDEs, the space-time noise $\xi$ is realised as the following series $$\xi(t,x) = \sum_{k=1}^\infty \lambda_k\, e_k(x)\, \dot{B}_k(t), \qquad t\geq 0, \ x\in {\mathcal O}\subseteq {\mathbb{R}}^d,$$ where $(\lambda_k)_k$ is a sequence of real numbers, $(e_k)_k$ an orthonormal basis of $L^2({\mathcal O},{\rm d}x)$ and $(B_k)_k$ an independent family of standard Brownian motions. If $\lambda_k=1$ for all $k$ then we have space-time white noise, which has the property that for all $\varphi\in L^2({\mathcal O},{\rm d}x)$ the random variable $$\int_{[0,T]\times{\mathcal O}} \varphi(t,x)\, \xi(t,x)\d t\d x:=
\sum_{k=1}^\infty \langle \varphi,e_k\rangle_{L^2({\mathcal O},{\rm d}x)}\, {B}_k(T)$$ has normal law ${\mathcal N}\left(0,T\,\|\varphi\|^2_{L^2({\mathcal O},{\rm d}x)}\right)$.
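This Gaussian law is easy to check by Monte Carlo once the series is truncated to finitely many modes. The sketch below is my own illustration (the choice of the coefficients $\langle\varphi,e_k\rangle$ is an arbitrary square-summable sequence): the empirical variance of the pairing matches $T\,\|\varphi\|^2_{L^2}$.

```python
import numpy as np

def white_noise_pairing(phi_coeffs, T, rng):
    """Sample the pairing of space-time white noise with a phi depending only
    on x, truncated to len(phi_coeffs) modes: sum_k <phi, e_k> B_k(T)."""
    B_T = rng.standard_normal(len(phi_coeffs)) * np.sqrt(T)
    return phi_coeffs @ B_T

rng = np.random.default_rng(1)
T = 2.0
phi_coeffs = 1.0 / np.arange(1, 51)      # the coefficients <phi, e_k>, an l^2 sequence
norm_sq = np.sum(phi_coeffs**2)          # ||phi||^2 in L^2
samples = np.array([white_noise_pairing(phi_coeffs, T, rng) for _ in range(20000)])
print("empirical variance:", samples.var(), " T * ||phi||^2:", T * norm_sq)
```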
In analogy with the finite-dimensional case, if $\lambda^2_k\geq {\varepsilon}>0$ for all $k$, then we are in the *elliptic* case. In finite dimension, we are in a degenerate case as soon as $\lambda_k=0$ for some $k$; in infinite dimension, however, we can have $\lambda_k>0$ for all $k$ but $\lambda_k\to 0$ as $k\to+\infty$. This situation is neither degenerate nor elliptic.
The paper by Flandoli-Maslowski proved for the first time ergodicity for a stochastic Navier-Stokes equation in 2D, under the assumption that $\lambda_k>0$ for all $k$ but $\lambda_k\to 0$ as $k\to+\infty$, with two (different) power-law controls from above and from below. This article sparked intense activity and a heated debate revolving around the following question: what is the most relevant choice of the noise structure that still allows one to prove ergodicity?
If, as in Flandoli-Maslowski [@fm95], the noise is sufficiently non-degenerate, namely if $\lambda_k>0$ and $\lambda_k\to 0$ not too fast as $k\to+\infty$, then it is often possible to prove ergodicity using an argument due to Doob and based on two ingredients: the *Strong-Feller property* and *irreducibility*; the former means that the transition semigroup of the dynamics maps bounded Borel functions on the state space into continuous functions, the latter that all non-empty open sets of the state space are visited with positive probability at any positive time. The Strong-Feller property is proved with ideas coming from Malliavin calculus, in particular an integration by parts on the path space which is now known as the Bismut-Elworthy-Li formula, see the paper by Elworthy-Li [@el94] and the monograph by Cerrai [@cerrai01]; irreducibility is based on control theory for PDEs. These techniques were explored and applied to a number of examples in the second Da Prato-Zabczyk book [@dpz2] of 1996.
However, it soon became clear that it was possible to consider a degenerate noise and still obtain uniqueness of the invariant measure; here by degenerate we mean that $\lambda_k=0$ for all $k>N$, where $N$ is a deterministic integer. The main idea behind this line of research was that, if the noise acts on a sufficiently large but finite number of *modes* (i.e. the functions $e_k$), then it is elliptic on the modes which determine the long-time behavior of the dynamics: we can call this the *essentially elliptic* case. These results, together with exponential convergence to equilibrium, were proved independently (for Gaussian or for discrete noise) by three groups of authors during the same years: Mattingly [@matt99] and E-Mattingly-Sinai [@ems01], Kuksin-Shirikyan [@ks00; @ks01], Bricmont-Kupiainen-Lefevere [@bkl01; @bkl02].
However, in these works the number $N$ of randomly forced modes is not universal but depends on the parameters $\nu$ and $\sum_k \lambda^2_k$ of the equation. This was dramatically improved in the paper by Hairer-Mattingly [@hm06], published in 2006 in the Annals of Mathematics, which proved that it is enough to inject randomness into only *four* well-chosen modes: the non-linearity then propagates the randomness to the whole system for any $\nu>0$. This is the so-called *hypoelliptic* case, for which it is still possible to derive uniqueness of the invariant measure for the 2D stochastic Navier-Stokes equation. One of the main novelties in this paper was the notion of the *asymptotic Strong-Feller property*, which can be proved in the hypoelliptic case, while the standard Strong-Feller property requires much stronger non-degeneracy properties of the noise.
Let us mention here that the Malliavin Calculus, see e.g. Nualart’s monograph [@nualart], has played an important role for Navier-Stokes like for many other SPDEs.
My SPDEs
========
The results on the ergodicity of the stochastic Navier-Stokes equation seemed at the time to make SPDEs with degenerate noise particularly prominent. Now that singular SPDEs with space-time white noise and regularity structures have become so famous, this may even seem strange. In fact, since the very first papers that I have mentioned, see Cabaña [@cabana70] and Dawson [@dawson72], the research activity on SPDEs with genuinely infinite-dimensional noise has always been intense, and most of the problems I have mentioned above concern space-time white noise.
The case of degenerate noise is certainly more difficult if one wants to prove ergodicity, as we have seen. However, if the noise is spatially finite-dimensional, then the solutions to the SPDE are typically smooth in space, although still Brownian-like in time. In the case of space-time white noise, on the contrary, the solutions are rather Brownian-like *in space* if the space dimension is $d=1$, and even less regular in time; if $d>1$, as we have already seen, solutions are rather distributions.
Therefore, SPDEs driven by space-time white noise are particularly strange objects: even the solutions to the simplest equations, such as the stochastic heat equation with additive space-time white noise, are far too irregular for the derivatives appearing in the equation to make sense as functions. The KPZ equation has an almost explicit solution given by the Cole-Hopf transform $h=\log\psi$, with $\psi$ the solution to the stochastic heat equation with multiplicative space-time white noise; however the KPZ equation itself makes no sense as written!
It is in this topic that I made my first steps as a researcher. I did my PhD at the Scuola Normale in Pisa under the supervision of Giuseppe Da Prato (also known as Beppe) from 1997 to 2001. Like Da Prato himself and many of his students, I started as an analyst but felt increasingly attracted by probability theory, in particular stochastic calculus and SDEs. On the shelves of Beppe’s office I found the Revuz-Yor monograph, which became one of my favourite mathematics books. I started to dream of unifying two worlds: the classical Itô theory of stochastic calculus based on martingales, and SPDEs.
Chapter 5 in the book by Revuz-Yor on local time and reflecting Brownian motion was one of the topics which most intrigued me. At that time Da Prato was studying equations of the form $$\label{eq:sdi}
\d X\in (AX-\partial U(X))\d t+{\rm d} W$$ with $U:H\to{\mathbb{R}}$ a *convex* lower semi-continuous but not necessarily differentiable function. In the deterministic setting, this is a classical problem and the set $\partial U(x)$ is the *subdifferential* of $U$ at a point $x\in H$, namely the set of all $h\in H$ such that $U(y)\geq U(x)+\langle h,y-x\rangle$ for all $y\in H$, i.e. such that the affine function $y\mapsto U(x)+\langle h,y-x\rangle$ lies below the graph of $U$. For a simple example, think of the function ${\mathbb{R}}\ni x\mapsto|x|\in{\mathbb{R}}_+$, which is convex and has as subdifferential the set $\{1\}$ for all $x>0$, the set $\{-1\}$ for all $x<0$ and the set $[-1,1]$ for $x=0$. If $U$ is differentiable at $x$ then $\partial U(x)=\{\nabla U(x)\}$; in general, equation is rather a *stochastic differential inclusion*. There is an extensive literature on this problem in the finite-dimensional case, see e.g. Cépa [@cepa], much less so in infinite dimension, where many problems remain open.
The case of $U$ being equal to $0$ on a closed convex set $K\subseteq H$ and to $+\infty$ on $H\setminus K$ seemed to be outside the scope of Da Prato’s techniques. I convinced myself that this case had to be related with reflection on the boundary of $K$, but I was unable to make this precise. Then Samy Tindel pointed out to me a 1992 paper by Nualart and Pardoux [@nupa] on the following SPDE with reflection at 0 $$\label{eq:nupa}
\frac{\partial u}{\partial t} = \frac12\frac{\partial^2 u}{\partial x^2} +\xi +\eta, \qquad t\geq 0, \ x\in[0,1],$$ where $\eta$ is a Radon measure on $]0,+\infty[\,\times\,]0,1[$, $u$ is *continuous* and non-negative, and the support of $\eta$ is included in the zero set $\{(t,x): u(t,x)=0\}$ of $u$, or equivalently $$\label{eq:nupa2}
u\geq 0, \qquad \eta\geq 0, \qquad \int_{]0,+\infty[\,\times\,]0,1[} u\d\eta=0.$$ This is a *stochastic obstacle problem*, the obstacle being the constant function equal to $0$, which can be formulated in the abstract setting of the stochastic differential inclusion . Continuity of $(t,x)\mapsto u(t,x)$ here is essential in order to make sense of the condition ; in this setting the Walsh approach is clearly necessary, since continuity of $t\mapsto u(t,\cdot)$ in $L^2(0,1)$ would not be sufficient. In higher space dimension, $u$ is not expected to be continuous and indeed it remains an open problem to define in this case a notion of solution to -. We note also that this equation arises as the scaling limit of interesting microscopic models of random interfaces: see Funaki-Olla [@fo01] and Etheridge-Labbé [@el15].
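A heuristic way to play with the Nualart-Pardoux equation numerically is a projection scheme: perform an explicit Euler step for the unconstrained stochastic heat equation, then project onto the constraint $u\geq 0$; the mass added by the projection is a crude discrete analogue of the reflection measure $\eta$, and by construction it satisfies the sign conditions above. This sketch is mine, it is not the construction of [@nupa], and no convergence claim is intended.

```python
import numpy as np

def reflected_she(n=64, steps=500, T=0.05, seed=0):
    """Projection scheme for the reflected stochastic heat equation: explicit
    Euler step for (1/2) u_xx + noise, then project onto u >= 0; the projected
    mass plays the role of a discrete reflection measure eta."""
    rng = np.random.default_rng(seed)
    dx, dt = 1.0 / n, T / steps
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.maximum(np.sin(2.0 * np.pi * x), 0.0)   # nonnegative initial condition
    eta_mass = 0.0
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        noise = rng.standard_normal(u.shape) * np.sqrt(dt / dx)
        v = u + 0.5 * lap * dt + noise
        v[0] = v[-1] = 0.0
        u = np.maximum(v, 0.0)
        eta_mass += np.sum(u - v) * dx             # nonnegative: mass added by reflection
    return u, eta_mass
```

Note that the discrete $\eta$ only charges grid points where the projection is active, mimicking the support condition $\int u\,{\rm d}\eta=0$.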
The Nualart-Pardoux paper was motivated by stochastic analysis but it was an entirely deterministic work, which pushed the PDE techniques to cover a situation of minimal regularity for the solution; a probabilistic interpretation of this result remained elusive. This is what I tried to give with the results of my PhD thesis. First I identified in [@lz01] the unique invariant measure of - as the 3-d Bessel bridge (also known as the normalized Brownian excursion), an important process which plays a key role in the study of Brownian motion and its excursion theory, see [@reyo]. Then I proved in [@lz02] an infinite-dimensional integration by parts with respect to the law of the 3-d Bessel bridge, which gave a powerful probabilistic tool to study the reflection measure $\eta$ (it provides its *Revuz measure*). Then I set out to study the fine properties of the solution, in particular of the contact set $\{(t,x): u(t,x)=0\}$ between the solution $u$ and the obstacle $0$, see [@lz04] and the paper [@dmz06] in collaboration with Dalang and Mueller.
In these papers I tried to realize my dream, by showing that solutions to SPDEs display very rich and new phenomena with respect to finite-dimensional SDEs, and that it was possible to go much beyond results on existence and uniqueness. I found some interesting link between classical stochastic processes arising in the study of Brownian motion and SPDEs. For a more recent account, see my Saint-Flour lecture notes [@lz15].
However it does not seem that this point of view has been followed by many others. As we are going to see, the SPDE community would soon be heading in a very different direction.
Rough paths and regularity structures
=====================================
In 1998 T. Lyons published a paper [@lyons98] on a new approach to stochastic integration. Lyons was an accomplished probabilist and an expert of stochastic analysis. Therefore it may seem puzzling that the aim of his most famous contribution to mathematics, the invention of *rough paths*, is to give a deterministic theory of stochastic differential equations!
The classical Itô theory of stochastic calculus, see again [@reyo], is a wonderful tool for studying stochastic processes (more precisely, continuous semimartingales). Not only does it allow one to prove existence and uniqueness of solutions to stochastic differential equations, but it also allows one to compute the law of a great variety of random variables and stochastic processes. The key tool is that of martingales, which allow explicit computations of expectations and probabilities, with often deep and surprising results.
In particular one obtains well-posedness of SDEs in ${\mathbb{R}}^d$ of the form $$\label{eq:SDE1}
\d X_t=b(X_t)\d t+\sigma(X_t)\d W_t,$$ with $b:{\mathbb{R}}^d\to{\mathbb{R}}^d$ and $\sigma:{\mathbb{R}}^d\to {\mathbb{R}}^d\otimes{\mathbb{R}}^d$ smooth coefficients and $(W_t)_{t\geq 0}$ a Brownian motion in ${\mathbb{R}}^d$. However, in general $X$ is not better than a *measurable* function of $W$. This fact is rarely mentioned in courses on stochastic calculus, and probabilists seem used to it. Nevertheless, a physicist may point out that Brownian motion, or its derivative white noise, is an approximation of a real noise, not the other way round; an analyst may find this lack of continuity disturbing. Therefore a theory which is too sensitive to the structure of the noise is not so satisfactory after all. A *robust* theory would be more convincing from this point of view. In the late ’70s, the works of Doss [@doss77] and Sussmann [@sussmann78] gave sufficient conditions on the coefficient $\sigma$ for continuity of the map $W\mapsto X$ in the sup-norm topology on $C([0,T];{\mathbb{R}}^d)$. These conditions were however very restrictive for $d>1$.
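As a concrete illustration of the solution map $W\mapsto X$: in the Euler-Maruyama scheme the approximate solution is an explicit function of the increments of the driving path, so one can probe this map numerically. The sketch below is my own (the coefficients are chosen to make sanity checks easy, not taken from the text).

```python
import numpy as np

def euler_maruyama(b, sigma, x0, dW, dt):
    """Euler-Maruyama scheme for dX = b(X) dt + sigma(X) dW in R^d:
    the output path is an explicit function of the increments dW."""
    X = np.array(x0, dtype=float)
    path = [X.copy()]
    for inc in dW:
        X = X + b(X) * dt + sigma(X) @ inc
        path.append(X.copy())
    return np.array(path)

dt, n = 1e-4, 10000
# Deterministic sanity check (zero noise): dX = X dt over [0,1] gives X_1 ~ e
X = euler_maruyama(lambda x: x, lambda x: np.diag(x), [1.0], np.zeros((n, 1)), dt)
print(X[-1, 0])   # close to np.e

# With Brownian increments (d = 1, multiplicative noise) the same map applies
rng = np.random.default_rng(0)
dW = rng.standard_normal((n, 1)) * np.sqrt(dt)
Y = euler_maruyama(lambda x: np.zeros_like(x), lambda x: np.diag(x), [1.0], dW, dt)
```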
Following an early intuition by Föllmer [@follmer81], Lyons constructed a deterministic (*pathwise*) approach to stochastic integration. The main result is the construction of a topology that makes the map $W\mapsto X$ continuous. However, there is a very important twist: the topology is not just on $W$ or $X$, but on a richer object which contains more information. If for example $W:[0,T]\to{\mathbb{R}}^d$ is a deterministic smooth path, then one needs to consider a finite number of *iterated integrals* of $W$, which take the form $${\bf W}^{n}_{s,t}=\int_{s<u_1<\cdots<u_n<t} \d{W}_{u_1}\otimes\cdots\otimes\d{W}_{u_n}, \qquad n\in{\mathbb{N}}, \ 0\leq s\leq t\leq T,$$ where $\d W_u=\dot{W}_{u}\d u$. For a fixed $\gamma\in\,]0,1[$, one takes $N\in{\mathbb{N}}$ such that $N\gamma\leq 1<(N+1)\gamma$ and for every smooth $W:[0,T]\to{\mathbb{R}}^d$ $${\bf W}^{(N)}_{s,t}:=1+\sum_{n=1}^N {\bf W}^{n}_{s,t},\qquad 0\leq s\leq t\leq T,$$ which belongs to the truncated tensor algebra $T^{(N)}=\oplus_{n=0}^N ({\mathbb{R}}^d)^{\otimes n}$. We note that ${\bf W}^{1}_{s,t}=W_t-W_s$, so that ${\bf W}^{(N)}_{s,t}$ *contains* the increments of the original process, plus additional information. We can now define a distance between two such objects ${\bf W}^{(N)}_{s,t}$ and ${\bf V}^{(N)}_{s,t}$, for smooth $W,V:[0,T]\to{\mathbb{R}}^d$ $$\d_\gamma\left({\bf W}^{(N)},{\bf V}^{(N)}\right):=\sup_{n=1,\ldots,N} \sup_{s\ne t} \frac{\left|{\bf W}^{n}_{s,t}-{\bf V}^{n}_{s,t}\right|}{|t-s|^{n\gamma}}.$$ Then Lyons’ result was that the map ${\bf W}^{(N)}\mapsto {\bf X}^{(N)}$, where $W,X:[0,T]\to{\mathbb{R}}^d$ are smooth processes which satisfy , is *continuous* with respect to the metric ${\rm d}_\gamma$.
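To get a concrete feel for the objects ${\bf W}^{n}_{s,t}$, the first two levels of the iterated integrals of a piecewise-linear path can be computed exactly using Chen's relation. The sketch below is my own, with an arbitrary test path; it verifies the closed form ${\bf W}^{2}_{0,1}=\frac12\,v\otimes v$ for the straight line $W_t=tv$.

```python
import numpy as np

def level2_signature(path):
    """First two iterated integrals W^1_{0,T}, W^2_{0,T} of a piecewise-linear
    path in R^d, built increment by increment via Chen's relation."""
    d = path.shape[1]
    W1 = np.zeros(d)
    W2 = np.zeros((d, d))
    for i in range(len(path) - 1):
        inc = path[i + 1] - path[i]
        # Chen: W^2 over [0, t_{i+1}] = W^2 over [0, t_i] + W^1_{0,t_i} (x) inc
        # plus the level-2 integral of the segment, which is (1/2) inc (x) inc.
        W2 += np.outer(W1, inc) + 0.5 * np.outer(inc, inc)
        W1 += inc
    return W1, W2

# Sanity check on the straight line W_t = t v: W^1 = v and W^2 = (1/2) v (x) v
t = np.linspace(0.0, 1.0, 1001)
v = np.array([2.0, -1.0])
W1, W2 = level2_signature(np.outer(t, v))
print(np.allclose(W1, v), np.allclose(W2, 0.5 * np.outer(v, v)))
```

For a generic path the antisymmetric part of ${\bf W}^2$ is the Lévy area, which is precisely the extra information not contained in the increments.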
Lyons’ paper [@lyons98] was astounding for its novelty: it introduced in stochastic analysis a number of concepts which were unknown to many probabilists, in particular the algebraic language based on the work of Chen [@chen57] on iterated integrals. Moreover it presented a radically different approach to the pillar of modern probability theory, the Itô stochastic calculus. For these reasons, it seems that Lyons’ ideas took some time before being widely accepted by the community and became really famous only fifteen years later, when Hairer proved their power in the context of SPDEs. See the book of Friz-Hairer [@fh14] for a pedagogical introduction.
Singular SPDEs and regularity structures
----------------------------------------
As we have seen above, several interesting physical models were described in the ’80s with SPDEs such as the *dynamical $\phi^4_d$ model*, recall the stochastic quantization , $$\label{phi4d}
\frac{\partial \phi}{\partial t}= \Delta \phi -\phi^3 + \xi, \qquad x\in{\mathbb{R}}^d,$$ for $d=2,3$ and the KPZ equation . In both equations there are ill-defined non-linear functionals of some distribution. Equations of this kind are now commonly known as *singular SPDEs*.
In 2003 Da Prato-Debussche [@dpd03] solved the stochastic quantization in $d=2$ with the following idea: they wrote $\phi=z+v$, where $z$ is the solution to the linear stochastic heat equation with additive white noise $$\frac{\partial z}{\partial t}= \Delta z + \xi, \qquad x\in{\mathbb{R}}^2,$$ and they wrote an equation for $v=\phi-z$ $$\frac{\partial v}{\partial t}= \Delta v -z^3-3z^2v-3zv^2-v^3,$$ which is now random only through the explicit Gaussian process $z$. We note that $z$ is still a distribution, so that the terms $z^2$ and $z^3$ are still ill-defined; however it turns out that it is possible to give a meaning to these terms as distributions with the classical *Wick renormalization*. Then, the products $z^2v$ and $zv^2$ are defined using *Besov spaces*. This allows one to use a fixed point argument for $v$ and to obtain existence and uniqueness for the original (renormalized) equation. However this technique does not work for $d=3$, since in this case the products $z^2v$ and $zv^2$ are still ill-defined.
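The Wick renormalization mentioned here can be illustrated in the simplest, zero-dimensional setting: for a centered Gaussian $z$ with variance $c$, the Wick powers are the Hermite polynomials $:\!z^2\!: \,=z^2-c$ and $:\!z^3\!: \,=z^3-3cz$, which remain centered however large $c$ is; in the field setting $c$ plays the role of a diverging renormalization constant as the regularization is removed. A toy Monte Carlo check (my own illustration, with an arbitrary value of $c$):

```python
import numpy as np

# Wick powers of a centered Gaussian z with variance c: subtracting the
# Hermite corrections makes them mean-zero whatever the size of c.
rng = np.random.default_rng(0)
c = 2.5                                # stand-in for a (diverging) renormalization constant
z = rng.standard_normal(200_000) * np.sqrt(c)
wick2 = z**2 - c                       # :z^2:
wick3 = z**3 - 3.0 * c * z             # :z^3:
print(wick2.mean(), wick3.mean())      # both close to 0, while z**2 has mean ~ c
```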
Since Lyons’ foundational paper of 1998, rough paths have been based on *generalised Taylor expansions*, with standard monomials replaced by iterated integrals of the driving noise. In 2004 Gubinelli built on this idea a new approach to rough integration based on the notion of *controlled paths* [@gubi04] and started to work on the project of a rough approach to SPDEs, see for example the 2010 paper [@gt10] with Tindel.
In 2011 Hairer [@hairer11] considered the equation $$\frac{\partial u}{\partial t} = \frac12\frac{\partial^2 u}{\partial x^2} +g(u)\,\frac{\partial u}{\partial x}+ \xi , \qquad t\geq 0, \ x\in{\mathbb{R}}$$ with $u$ and $\xi$ taking values in ${\mathbb{R}}^d$ with $d>1$, and $g$ taking values in ${\mathbb{R}}^{d\times d}$. Although this is less frightening than KPZ, the product $g(u)\frac{\partial u}{\partial x}$ is ill-defined for the usual reason: the partial derivative of $u$ is a distribution, the function $g(u)$ is not smooth, and therefore the product cannot be defined by an integration by parts or other classical tools (the fact that $u$ is vector valued prevents in general this product from being written as $\frac{\partial}{\partial x}G(u)$). The idea was to treat the solution $u(t,x)$ as a rough path *in space*.
In 2013 Hairer managed to apply the same techniques to KPZ [@hairer13], thus giving a well-posedness theory for this equation first introduced in 1986. The importance of this result was amplified by the explosion of activity around the KPZ universality class following the 2011 papers by Balázs-Quastel-Seppäläinen [@bqs11] and Amir-Corwin-Quastel [@acq11], which proved that the Cole-Hopf solution proposed by Bertini-Cancrini indeed has the scaling computed in the original KPZ paper [@kpz86] with non-rigorous renormalization group techniques.
In order to solve the stochastic quantization in $d=3$, and many other equations, Hairer [@hairer14] expanded the theory of rough paths to cover functions of space-time. Da Prato-Debussche [@dpd03] had solved the case $d=2$ with the *global* expansion $\phi=z+v$ of the solution, in terms of an explicit term $z$ and a *remainder* $v$. Hairer’s idea was to use rather *local* expansions at each point $(t,x)$ in space-time, with a far-reaching generalization of the classical notion of Taylor expansion. The theory has been developed and expanded in three subsequent papers: Bruned-Hairer-Zambotti [@bhz], Chandra-Hairer [@ch16], Bruned-Chandra-Chevyrev-Hairer [@BCCH].
In the meantime, Gubinelli-Imkeller-Perkowski [@gip] constructed a different approach to singular SPDEs based on *paracontrolled distributions*, combining the *paradifferential calculus* coming from harmonic analysis with the ideas of rough paths. This approach is effective in many situations, like KPZ and the stochastic quantization, see also the paper [@mw17] by Mourrat-Weber on the convergence of the two-dimensional dynamic Ising-Kac model to the dynamical $\phi^4_2$, see , but not in all cases which are covered by regularity structures. In my personal opinion it is Hairer’s theory which transposes in the most faithful way Gubinelli’s ideas on rough paths from SDEs to SPDEs.
Another interesting approach to the KPZ equation is that of energy solutions by Gonçalves-Jara [@gj14] and Gubinelli-Jara [@gj13], which is particularly effective for proving convergence under rescaling of a large class of particle systems to a martingale problem formulation of KPZ. Uniqueness for such a martingale problem was proved in [@gp18] by Gubinelli-Perkowski. Other constructions of the $\phi^4_3$ dynamical model are due to Kupiainen [@kupiainen], using renormalization group methods, and to Albeverio-Kusuoka [@alku], using finite-dimensional approximations.
Conclusions
===========
In this brief and personal history of SPDEs I have left aside many topics that would deserve more attention, for example
- *regularization by noise*, see Flandoli-Gubinelli-Priola [@fgp10]
- the stochastic FKPP equation, see Mueller-Mytnik-Quastel [@mmq11]
- stochastic dispersive equations, stochastic conservation laws and viscosity solutions for fully non-linear SPDEs
- numerical analysis of SPDEs.
I hope that I have at least managed to express my enthusiasm for this topic. The last seven years have been particularly exciting: Gubinelli and Hairer have clearly influenced each other on a number of occasions, and their work has spurred exceptional activity in this area. Rough paths and regularity structures tend to make relatively little use of classical probability theory, and my project of combining stochastic calculus and SPDEs went exactly in the opposite direction. However, in the years before 2013 I felt somewhat discouraged by the lack of progress of this project, and Hairer’s paper on KPZ came as a revelation to me. What came afterwards was one of those rare situations when reality surpasses our own dreams.
The message that I wished to convey is that the ground for the success of today was prepared by a considerable amount of work by a whole community, in particular on equations driven by space-time white noise. I am convinced that this activity has produced many ideas which could and should be of interest for other communities and there are already encouraging signs in this direction.
---
abstract: 'Suppose that all primes are colored with $k$ colors. Then there exist monochromatic primes $p_1,p_2,p_3$ such that $p_1+p_2=p_3+1$.'
address: ' Department of Mathematics, Shanghai Jiaotong University, Shanghai 200240, People’s Republic of China'
author:
- Hongze Li
- Hao Pan
title: 'A Schur-type addition theorem for primes'
---
[^1]
Introduction
============
In [@GreenTao], Green and Tao proved the celebrated result that the primes contain arbitrarily long non-trivial arithmetic progressions. In fact, they proved a Szemerédi-type [@Szemeredi75] result for primes:
Thus if all primes are colored with $k$ colors, then there exist arbitrarily long monochromatic arithmetic progressions. This is a van der Waerden-type [@vanderWaerden27] theorem for primes. (The well-known van der Waerden theorem states that for any $k$-coloring of all positive integers, there exist arbitrarily long monochromatic arithmetic progressions.)
On the other hand, Schur’s theorem [@Schur16] is another famous result in the Ramsey theory for integers. Schur’s theorem asserts that for any $k$-coloring of all positive integers, there exist monochromatic $x,y,z$ such that $x+y=z$. In this paper, we shall prove a Schur-type theorem for primes.
\[t1\] Suppose that all primes are arbitrarily colored with $k$ colors. Then there exist monochromatic primes $p_1,p_2,p_3$ such that $p_1+p_2=p_3+1$.
Furthermore, motivated by the Green-Tao theorem and Theorem \[t1\], we propose the following conjecture:
Suppose that all primes are colored with $k$ colors. Then for arbitrary $l\geq 3$, there exist monochromatic primes $p_0,p_1,p_2,\ldots,p_l$ such that $p_1,\ldots,p_l$ form an arithmetic progression with common difference $p_0-1$.
Theorem \[t1\] will be proved in the next section. Our proof uses a variant of Green’s method [@Green05] from his proof of Roth’s theorem for primes.
Proof of Theorem \[t1\]
=======================
\[schur\] Suppose that the set $\{1,2,\ldots,n\}$ is split into $A_1\cup A_2\cup\cdots\cup A_k$. Then there exists a constant $C_1(k)>0$ such that $$\sum_{1\leq i\leq k}|\{(x,y,z):\, x,y,z\in A_i, x+y=z\}|\geq
C_1(k)n^2$$ if $n$ is sufficiently large.
This result is not new. In fact, Robertson and Zeilberger [@RobertsonZeilberger98] and Schoen [@Schoen99] showed that if the integers from 1 to $n$ are colored with two colors, then there exist at least $(1/22-\epsilon)n^2$ monochromatic Schur triples $\{x,y,x+y\}$. Furthermore, Robertson and Zeilberger [@RobertsonZeilberger98] also claimed that for any $k$-coloring of $\{1,\ldots,n\}$, the number of monochromatic Schur triples is greater than $$\big(\frac{1}{2^{2k-3}11}-\epsilon\big)n^2.$$
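For small $n$ these counts can be verified directly. The sketch below is our own illustration; it counts ordered triples $(x,y,z)$ with $x+y=z$, as in Lemma \[schur\], rather than unordered Schur triples $\{x,y,x+y\}$:

```python
def mono_ordered_schur(n, color):
    """Count ordered triples (x, y, z) in {1,...,n} with x + y = z
    and color(x) == color(y) == color(z)."""
    count = 0
    for x in range(1, n + 1):
        for y in range(1, n - x + 1):  # guarantees z = x + y <= n
            if color(x) == color(y) == color(x + y):
                count += 1
    return count

# Parity 2-coloring: only all-even triples can be monochromatic
# (odd + odd is even), giving roughly n^2/8 ordered triples.
print(mono_ordered_schur(100, lambda x: x % 2))  # → 1225
```

This is consistent with the quadratic lower bound $C_1(k)n^2$ of Lemma \[schur\].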
However, for the sake of completeness, here we give a proof of Lemma \[schur\]. Suppose that $1,2,\ldots,n$ are colored with $k$ colors. Let $G$ be a complete graph with the vertex set $V=\{v_0,v_1,\ldots,v_n\}$. Then we $k$-color all edges of $G$ by giving the edge $v_sv_t$ the color of $t-s$ for every $0\leq s<t\leq n$. Clearly, for $0\leq r<s<t\leq n$, the three vertices $v_r,v_s,v_t$ form a monochromatic triangle if and only if $\{s-r,t-s,t-r\}$ is a monochromatic Schur triple. Moreover, it is easy to see that one monochromatic Schur triple corresponds to at most $n$ monochromatic triangles. Hence Lemma \[schur\] immediately follows from the next lemma:
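The correspondence used in this proof can be made concrete. In the sketch below (our own illustration; the function name is ours), the edge $v_sv_t$ receives the color of $t-s$, and each monochromatic triangle on $v_r,v_s,v_t$ is translated back into the monochromatic Schur triple $\{s-r,t-s,t-r\}$:

```python
from itertools import combinations

def schur_triples_via_triangles(n, color):
    """Color edge {v_s, v_t} (s < t) of the complete graph on v_0,...,v_n
    with color(t - s), then read Schur triples off monochromatic triangles."""
    triples = set()
    for r, s, t in combinations(range(n + 1), 3):
        if color(s - r) == color(t - s) == color(t - r):
            x, y, z = s - r, t - s, t - r  # x + y = z by construction
            triples.add((min(x, y), max(x, y), z))
    return triples

trips = schur_triples_via_triangles(10, lambda x: x % 2)
# every recovered triple is monochromatic and satisfies x + y = z
```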
Let $G$ be a complete graph with $n$ vertices. If all edges of $G$ are colored with $k$ colors, then there exist at least $C_1'(k)n^3$ monochromatic triangles provided that $n$ is sufficiently large, where $C_1'(k)>0$ is a constant only depending on $k$.
Since $G$ is a complete graph, $G$ contains $\binom{n}{3}$ triangles. We use induction on $k$. There is nothing to prove when $k=1$. Assume that $k\geq 2$ and that our assertion holds for any smaller value of $k$. Suppose that the vertex set $V$ of $G$ is $\{v_1,\ldots,v_n\}$. Then for every $1\leq s\leq n$, by the pigeonhole principle, there exist vertices $v_{t_{s,1}},\ldots,v_{t_{s,\ceil{n/k}}}$ and $1\leq c_s\leq k$ such that the edges $v_sv_{t_{s,1}},\ldots,v_sv_{t_{s,\ceil{n/k}}}$ are all colored with the $c_s$-th color, where $\ceil{x}$ denotes the smallest integer not less than $x$. Let us consider the $\binom{\ceil{n/k}}{2}$ edges between $v_{t_{s,1}},\ldots,v_{t_{s,\ceil{n/k}}}$. Suppose that at most $(C_1'(k-1)/2k^3)n^2$ of these edges are colored with the $c_s$-th color. Then, by the induction hypothesis applied with $k-1$ colors, the remaining edges form at least $$C_1'(k-1)(n/k)^3-\frac{C_1'(k-1)}{2k^3}n^3=\frac{C_1'(k-1)}{2k^3}n^3$$ monochromatic triangles, since one edge belongs to at most $n$ triangles.
Then we may assume that, for each $1\leq s\leq n$, there are at least $(C_1'(k-1)/2k^3)n^2$ edges between $v_{t_{s,1}},\ldots,v_{t_{s,\ceil{n/k}}}$ that are colored with the $c_s$-th color. Thus we get at least $(C_1'(k-1)/2k^3)n^2$ monochromatic triangles containing the vertex $v_s$. In total, there are at least $$\frac{C_1'(k-1)}{6k^3}n^3$$ monochromatic triangles, by noting that every triangle is counted at most three times.
Let $A$ be a subset of $\{1,2,\ldots,n\}$ with $|A|\geq
(1-C_1(k)/6)n$. Suppose that $A$ is split into $A_1\cup
A_2\cup\cdots\cup A_k$. Then $$\sum_{1\leq i\leq k}|\{(x,y,z):\, x,y,z\in A_i, x+y=z\}|\geq
\frac{C_1(k)}{2}n^2$$ provided that $n$ is sufficiently large.
Let $\bar{A}=\{1,\ldots,n\}\setminus A$. Then $$\begin{aligned}
&|\{(x,y,z):\, x,y,z\in A_1\cup\bar{A}, x+y=z\}|\\
\leq&|\{(x,y,z):\, x,y,z\in A_1, x+y=z\}|\\
&+|\{(x,y,z):\, \text{one of
}x,y,z\text{ lies in }\bar{A}, x+y=z\}|\\
\leq&|\{(x,y,z):\, x,y,z\in A_1, x+y=z\}|+3|\bar{A}|n.\end{aligned}$$ Hence by Lemma \[schur\] we have $$\begin{aligned}
&\sum_{1\leq i\leq k}|\{(x,y,z):\, x,y,z\in A_i, x+y=z\}|\\
\geq&|\{(x,y,z):\, x,y,z\in A_1\cup\bar{A},
x+y=z\}|-3|\bar{A}|n\\
+&\sum_{2\leq i\leq k}|\{(x,y,z):\, x,y,z\in A_i,
x+y=z\}|\\
\geq&\frac{C_1(k)}{2}n^2.\end{aligned}$$
Let $\P$ denote the set of all primes. Assume that $\P=P_1\cup
P_2\cup\cdots\cup P_k$, where $P_i\cap P_j=\emptyset$ for $1\leq
i<j\leq k$. Let $w=w(n)$ be a function tending sufficiently slowly to infinity with $n$ (e.g., we may choose $w(n)=\floor{\frac{1}{4}\log\log n}$), and let $$W=\prod_{\substack{p\in\P\\ p\leqslant w(n)}}p.$$ Clearly we have $W\leqslant\log n$ for sufficiently large $n$. Let $$\kappa=\frac{C_1(k)}{10000k}.$$ In view of the well-known Siegel-Walfisz theorem, we may assume that $n$ is sufficiently large so that $$\sum_{\substack{x\in\P\cap[1,n]\\ x\equiv 1\pmod{W}}}\log
x\geq(1-\kappa)\frac{n}{\phi(W)},$$ where $\phi$ is the Euler totient function. Let $M=n/W$ and let $N$ be a prime in the interval $[(2+\kappa)M,(2+2\kappa)M]$. (Thanks to the prime number theorem, such a prime $N$ always exists whenever $M$ is sufficiently large.) Define $$\lambda_{b,W,N}(x)=\begin{cases} \phi(W)\log(Wx+b)/WN&\text{ if
}x\leq N\text{ and }Wx+b\text{ is prime},\\
0&\text{otherwise}.
\end{cases}$$ Let $$A_0=\{1\leq x\leq M:\, Wx+1\in\P\}$$ and $$A_i=\{1\leq x\leq M:\, Wx+1\in P_i\}$$ for $1\leq i\leq k$. Define $$a_i(x)=\1_{A_i}(x)\lambda_{1,W,N}(x)$$ for $0\leq i\leq k$, where we set $\1_A(x)=1$ if $x\in A$ and $0$ otherwise. Clearly we have $a_0=a_1+\cdots+a_k$ and $$\sum_{x}a_0(x)=\sum_{1\leq x\leq M}\lambda_{1,W,N}(x)\geq
(1-\kappa)\frac{M}{N}\geq \frac{1}{2}-3\kappa.$$
Below we consider $A_0,A_1,\ldots,A_k$ as the subsets of $\Z_N=\Z/N\Z$. Since $M<N/2$, if there exist $x,y,z\in A_i$ such that $x+y=z$ in $\Z_N$, then we have $p_1+p_2=p_3+1$ in $\Z$, where $p_1=Wx+1\in P_i, p_2=Wy+1\in P_i, p_3=Wz+1\in P_i$. For a complex-valued function $f$ over $\Z_N$, define $\tilde{f}$ by $$\tilde{f}(r)=\sum_{x\in\Z_N}f(x)e(-xr/N),$$ where $e(x)=e^{2\pi\sqrt{-1}x}$. And for two functions $f, g$, define $$(f*g)(x)=\sum_{y\in\Z_N}f(y)g(x-y).$$ It is easy to check that $(f*g)\,\tilde{}=\tilde{f}\tilde{g}$. Let $0<\delta, \epsilon<1/2$ be two sufficiently small real numbers which will be chosen later. Let $$R=\{r\in\Z_N:\, \max_{1\leq i\leq
k}|\tilde{a_i}(r)|\geqslant\delta\}.$$ and $$B=\{x\in\Z_N:\, x\in[-\kappa N,\kappa N],\
\|xr/N\|\leqslant2\epsilon\text{ for all }r\in R\},$$ where $ \|x\|=\min_{z\in\Z}|x-z|$. Here our definition of $B$ is slightly different from Green’s in [@Green05 Page 1629]. As we shall see later, this modification is the key to our proof.
\[bohr\] $$|B|\geq\epsilon^{|R|}\kappa N.$$
Assume that $R=\{r_1,r_2,\ldots,r_{m}\}$. Let $d$ be the greatest integer not exceeding $1/\epsilon$. Clearly we have $1/d\leq
2\epsilon$ since $\epsilon<1/2$. Let $$G_{t_1,\ldots,t_m}=\{-\kappa N/2\leq x\leq\kappa N/2:\,
t_j/d\leq\{xr_j/N\}<(t_j+1)/d\text{ for }1\leq j\leq m\},$$ where $\{\alpha\}$ denotes the fractional part of $\alpha$. Clearly $$\sum_{0\leq t_1,\ldots,t_m\leq d-1}|G_{t_1,\ldots,t_m}|=\kappa N.$$ Hence there exists a tuple $(t_1,\ldots,t_m)$ such that $$|G_{t_1,\ldots,t_m}|\geq d^{-m}\kappa N\geq \epsilon^{m}\kappa N.$$ For any given $x_0\in G_{t_1,\ldots,t_{m}}$, when $x\in
G_{t_1,\ldots,t_{m}}$, we have $x-x_0\in [-\kappa N,\kappa N]$ and $$\|(x-x_0)r_j/N\|\leq|\{xr_j/N\}-\{x_0r_j/N\}|\leq1/d\leq2\epsilon$$ for $1\leq j\leq m$. So $G_{t_1,\ldots,t_{m}}\subseteq x_0+B$. This completes the proof.
\[lambda\] $$\sup_{r\not=0}|\tilde{\lambda}_{b,W,N}(r)|\leq 2\log\log w/w$$ provided that $w$ is sufficiently large.
This is Lemma 6.2 of [@Green05].
Let $\beta=\1_{B}/|B|$ and $a_i'=a_i*\beta*\beta$ for $0\leq i\leq
k$.
\[upper\] Suppose that $\epsilon^{|R|}\geq \kappa^{-2}\log\log
w/w$. Then we have $$\sup_{x\in\Z_N}a_0'(x)\leq\frac{1+3\kappa}{N}.$$
We have $$\begin{aligned}
a_0'(x)=&a_0*\beta*\beta(x)\\
\leq&\lambda_{1,W,N}*\beta*\beta(x)\\
=&N^{-1}\sum_{r\in\Z_N}\tilde{\lambda}_{1,W,N}(r)\tilde{\beta}(r)^2e(xr/N)\\
\leq&N^{-1}\tilde{\lambda}_{1,W,N}(0)\tilde{\beta}(0)^2+N^{-1}\sup_{r\not=0}|\tilde{\lambda}_{1,W,N}(r)|\sum_{r\in\Z_N}|\tilde{\beta}(r)|^2\\
=&N^{-1}\tilde{\lambda}_{1,W,N}(0)+\sup_{r\not=0}|\tilde{\lambda}_{1,W,N}(r)|\sum_{r\in\Z_N}|\beta(r)|^2\\
\leq&\frac{1+\kappa}{N}+\frac{2\log\log w}{w|B|},\end{aligned}$$ where Lemma \[lambda\] is applied in the last step. Thus Lemma \[upper\] immediately follows from Lemma \[bohr\].
\[bourgain\] Let $\rho>2$. For any function $f:\,\Z_N\to\C$, $$\sum_{r\in\Z_N}|(f\lambda_{b,W,N})\,\tilde{}(r)|^\rho\leq
C_2(\rho)\bigg(\sum_{x=1}^N|f(x)|^2\lambda_{b,W,N}(x)\bigg)^{\frac{\rho}{2}}$$ where $C_2(\rho)$ is a constant only depending on $\rho$.
This is an immediate consequence of Theorem 2.1 and Lemma 6.5 of [@Green05].
By Lemma \[bourgain\], we have $$\sum_{r\in\Z_N}|\tilde{a}_i(r)|^\rho\leq C_2(\rho)$$ for $\rho>2$ and $1\leq i\leq k$. In particular, $$\sum_{r\in R}\delta^3\leq
\sum_{r\in\Z_N}\bigg(\sum_{i=1}^k|\tilde{a}_i(r)|^3\bigg)\leq
C_2(3)k,$$ which implies that $|R|\leq C_2(3)\delta^{-3}k$.
\[beta\] For each $r\in R$, $$|1-\tilde{\beta}(r)^4\tilde{\beta}(-r)^2|\leq 384\epsilon^2.$$
By the definition of $B$, we have $$\begin{aligned}
|1-\tilde{\beta}(r)|=\frac{1}{|B|}\bigg|\sum_{x\in
B}(1-e(-xr/N))\bigg| \leq4\pi\sup_{x\in B}\|xr/N\|^2\leq
64\epsilon^2.\end{aligned}$$ So $$\begin{aligned}
|1-\tilde{\beta}(r)^4\tilde{\beta}(-r)^2|=&\bigg|\sum_{j=0}^3\tilde{\beta}(r)^j(1-\tilde{\beta}(r))+\tilde{\beta}(r)^4\sum_{j=0}^1\tilde{\beta}(-r)^j(1-\tilde{\beta}(-r))\bigg|\\
\leq&384\epsilon^2.\end{aligned}$$ by noting that $|\tilde{\beta}(r)|\leq\tilde{\beta}(0)=1$.
\[difference\] We have $$\bigg|\sum_{i=1}^k\sum_{\substack{
x,y,z\in\Z_N\\x+y=z}}a_i(x)a_i(y)a_i(z)-\sum_{i=1}^k\sum_{\substack{
x,y,z\in\Z_N\\x+y=z}}a_i'(x)a_i'(y)a_i'(z)\bigg|\leq
\frac{C_3k^2}{N}(\epsilon^2\delta^{-3}+\delta^{\frac{1}{3}}),$$ where $C_3$ is an absolute constant.
Clearly $$\sum_{\substack{
x,y,z\in\Z_N\\x+y=z}}f_1(x)f_2(y)f_3(z)=N^{-1}\sum_{r\in\Z_N}\tilde{f}_1(r)\tilde{f}_2(r)\tilde{f}_3(-r).$$ Hence $$\begin{aligned}
&\sum_{i=1}^k\sum_{\substack{
x,y,z\in\Z_N\\x+y=z}}a_i(x)a_i(y)a_i(z)-\sum_{i=1}^k\sum_{\substack{
x,y,z\in\Z_N\\x+y=z}}a_i'(x)a_i'(y)a_i'(z)\\
=&N^{-1}\sum_{i=1}^k\sum_{r\in\Z_N}\tilde{a}_i(r)^2\tilde{a}_i(-r)(1-\tilde{\beta}(r)^4\tilde{\beta}(-r)^2).\end{aligned}$$ By Lemma \[beta\], $$\begin{aligned}
&\bigg|\sum_{i=1}^k\sum_{r\in
R}\tilde{a}_i(r)^2\tilde{a}_i(-r)(1-\tilde{\beta}(r)^4\tilde{\beta}(-r)^2)\bigg|\\
\leq&384\epsilon^2k|R|\sup_{r}\max_{1\leq i\leq
k}|\tilde{a}_i(r)|^3\\\leq& 384C_2(3)\epsilon^2\delta^{-3}k^2,\end{aligned}$$ since $|\tilde{a}_i(r)|\leq\tilde{a}_i(0)\leq 1$. On the other hand, by the Hölder inequality, we have $$\begin{aligned}
&\bigg|\sum_{i=1}^k\sum_{r\not\in
R}\tilde{a}_i(r)^2\tilde{a}_i(-r)(1-\tilde{\beta}(r)^4\tilde{\beta}(-r)^2)\bigg|\\
\leq&2\sum_{i=1}^k\sum_{r\not\in
R}|\tilde{a}_i(r)|^2|\tilde{a}_i(-r)|\\
\leq&2\sup_{r\not\in R}\max_{1\leq i\leq
k}|\tilde{a}_i(r)|^{\frac{1}{3}}\bigg(\sum_{i=1}^k\sum_{r}|\tilde{a}_i(r)|^{\frac{5}{2}}\bigg)^{\frac{2}{3}}
\bigg(\sum_{i=1}^k\sum_{r}|\tilde{a}_i(-r)|^{3}\bigg)^{\frac{1}{3}}\\
\leq&2C_2(5/2)^{\frac{2}{3}}C_2(3)^{\frac{1}{3}}\delta^{\frac13}k.\end{aligned}$$ We choose $C_3=384C_2(3)+2C_2(5/2)^{\frac{2}{3}}C_2(3)^{\frac{1}{3}}$, then the Lemma follows.
Define $$X=\{x\in\Z_N:\, a_0'(x)\geq \frac{\kappa}{N} \}.$$ Then by Lemma \[upper\], we have $$\frac{1+3\kappa}{N}|X|+\frac{\kappa}{N}(N-|X|)\geq
\sum_{x\in\Z_N}a_0'(x)=\sum_{x\in\Z_N}a_0(x)\geq\frac{1}{2}-3\kappa.$$ It follows that $$|X|\geq\big(\frac{1}{2}-6\kappa\big)N.$$ Notice that $\supp(a_i)\subseteq[1,M]$ and $\supp(\beta)\subseteq[-\kappa N,\kappa N]$, where $$\supp(f)=\{x\in\Z_N:\, f(x)\not=0\}.$$ Hence $$\supp(a_i')=\supp(a_i*\beta*\beta)\subseteq[-2\kappa
N,M+2\kappa N]$$ for $0\leq i\leq k$. Thus we have $$X\subseteq\supp(a_0')\subseteq[-2\kappa N,M+2\kappa N].$$ Let $A_0'=X\cap[1,M]$. Then $$|A_0'|\geq|X|-4\kappa N\geq(1-20\kappa)M,$$ by recalling that $(2+\kappa)M\leq N\leq (2+2\kappa)M$. Since $$a_0'=a_0*\beta*\beta=(a_1+\cdots+a_k)*\beta*\beta=a_1'+\cdots+a_k',$$ we have $$\max_{1\leq i\leq k}a_i'(x)\geq\frac{\kappa}{kN}$$ for each $x\in A_0'$. Let $$X_i=\{x\in A_0':\, a_i'(x)=\max_{1\leq j\leq k}a_j'(x)\}.$$ Clearly $A_0'=X_1\cup\cdots\cup X_k$. Let $A_1'=X_1$ and $$A_i'=X_i\setminus\bigg(\bigcup_{j=1}^{i-1}X_j\bigg)$$ for $2\leq i\leq k$. Then $A_1',\ldots,A_k'$ form a partition of $A_0'$. Furthermore, for $1\leq i\leq k$ and each $x\in A_i'$, we have $$a_i'(x)\geq\frac{\kappa}{kN}.$$ Thus by Corollary \[schur\] and Lemma \[difference\] $$\begin{aligned}
\sum_{i=1}^k\sum_{\substack{ x,y,z\in\Z_N\\x+y=z}}a_i(x)a_i(y)a_i(z)
\geq&\sum_{i=1}^k\sum_{\substack{
x,y,z\in\Z_N\\x+y=z}}a_i'(x)a_i'(y)a_i'(z)-\frac{C_3k^2}{N}(\epsilon^2\delta^{-3}+\delta^{\frac{1}{3}})\\
\geq&\sum_{i=1}^k\sum_{\substack{
x,y,z\in A_i'\\x+y=z}}\bigg(\frac{\kappa}{kN}\bigg)^3-\frac{C_3k^2}{N}(\epsilon^2\delta^{-3}+\delta^{\frac{1}{3}})\\
\geq&\bigg(\frac{\kappa}{kN}\bigg)^3\frac{C_1(k)M^{2}}{2}-\frac{C_3k^2}{N}(\epsilon^2\delta^{-3}+\delta^{\frac{1}{3}})\\\end{aligned}$$ Finally, we may choose sufficiently small $\delta$ and $\epsilon$ with $$\epsilon^{C_2(3)\delta^{-3}k}\geq \kappa^{-2}\log\log w/w$$ such that $$\epsilon^2\delta^{-3}+\delta^{\frac{1}{3}}\leq\frac{C_1(k)\kappa^3}{24C_3k^5},$$ whenever $N$ is sufficiently large. Thus $$\begin{aligned}
\sum_{i=1}^k\sum_{\substack{ x,y,z\in\Z_N\\x+y=z}}a_i(x)a_i(y)a_i(z)
\geq\frac{C_1(k)\kappa^3M^{2}}{2k^3N^3}-\frac{C_1(k)\kappa^3}{24k^3N}\geq
\frac{C_1(k)\kappa^3}{12k^3N}-\frac{C_1(k)\kappa^3}{24k^3N}>0.\end{aligned}$$ This completes the proof.
Notice that $$\sum_{i=1}^k\sum_{\substack{ x,z\in\Z_N\\
2x=z}}a_i(x)^2a_i(z)=O\bigg(\frac{k\phi(W)^3\log(WN+1)^3}{W^3N^2}\bigg)=o(N^{-1}).$$ Hence in fact there exist three distinct monochromatic primes $p_1,p_2,p_3$ satisfying $p_1+p_2=p_3+1$.
[99]{}
J. Bourgain, *On $\Lambda(p)$-subsets of squares*, Israel J. Math., **67**(1989), 291-311.
J. Bourgain, *Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. I. Schrödinger equations*, Geom. Funct. Anal., **3**(1993), 107-156.
B. Green, *Roth’s theorem in the primes*, Ann. Math., [**161**]{}(2005), 1609-1636.
B. Green and T. Tao, *The primes contain arbitrarily long arithmetic progressions*, Ann. Math., [**167**]{}(2008), 481-547.
A. Robertson and D. Zeilberger, *A 2-Coloring of \[1,n\] Can Have $(1/22)n^2 + O(n)$ Monochromatic Schur Triples, But Not Less!*, Electron. J. Combin., **5** (1998), Research Paper 19.
T. Schoen, *The Number of Monochromatic Schur Triples*, European J. Combin., **20** (1999), 855-866.
I. Schur, *Über die Kongruenz $x^m+y^m\equiv z^m\mod p$*, Jahresb. Deutsche Math. Verein., **25** (1916), 114-117.
E. Szemerédi, *On sets of integers containing no k elements in arithmetic progression*, Acta Arith., [**27**]{} (1975), 299-345.
B. L. van der Waerden, *Beweis einer Baudet’schen Vermutung*, Nieuw Arch. Wisk. (2), [**15**]{} (1927), 212-216.
[^1]: This work was supported by the National Natural Science Foundation of China (Grant No. 10771135).
---
abstract: 'In this note, we prove that the kernel of the linearized equation around a positive energy solution in $\rn$, $n\geq 3$, to $-\Delta W-\gamma|x|^{-2}W=|x|^{-s}W^{\crits-1}$ is one-dimensional when $s+\gamma>0$. Here, $s\in [0,2)$, $0\leq\gamma<(n-2)^2/4$ and $\crits=2(n-s)/(n-2)$.'
address: 'Frédéric Robert, Institut Élie Cartan, Université de Lorraine, BP 70239, F-54506 Vand[œ]{}uvre-lès-Nancy, France'
author:
- Frédéric Robert
date: December 29th 2016
title: 'Nondegeneracy of positive solutions to nonlinear Hardy-Sobolev equations'
---
We fix $n\geq 3$, $s\in [0,2)$ and $\gamma<\frac{(n-2)^2}{4}$. We define $\crits=2(n-s)/(n-2)$. We consider a nonnegative solution $W\in C^2(\rnp)\setminus\{0\}$ to $$\label{eq:V}
-\Delta W-\frac{\gamma}{|x|^{2}}W=\frac{W^{\crits-1}}{|x|^{s}}\hbox{ in }\rnp.$$ Due to the abundance of solutions to \[eq:V\], we require in addition that $W$ is an energy solution, that is, $W\in \dundeux$, where $\dundeux$ is the completion of $C^\infty_c(\rn)$ for the norm $u\mapsto \Vert\nabla u\Vert_2$. Linearizing \[eq:V\] leads to considering $$\label{def:KV}
K:=\left\{\varphi\in \dundeux/\, -\Delta\varphi-\frac{\gamma}{|x|^2}\varphi=(\crits-1)\frac{W^{\crits-2}}{|x|^s}\varphi\hbox{ in }\dundeux\right\}$$ Equation \[eq:V\] is conformally invariant in the following sense: for any $r>0$, define $$W_r(x):=r^{\frac{n-2}{2}}W(rx)\hbox{ for all }x\in\rnp,$$ then, as one checks, $W_r\in C^2(\rnp)$ is also a solution to \[eq:V\], and, differentiating with respect to $r$ at $r=1$, we get that $$-\Delta Z-\frac{\gamma}{|x|^2}Z=(\crits-1)\frac{W^{\crits-2}}{|x|^s}Z\hbox{ in }\rnp,$$ where $$Z:=\frac{d}{dr}{W_r}_{|r=1}= \sum_ix^i\partial_i W+\frac{n-2}{2}W\in \dundeux.$$ Therefore, $Z\in K$. We prove that this is essentially the only element:
\[th:main\] We assume that $\gamma\geq 0$ and that $\gamma+s>0$. Then $K=\rr Z$. In other words, $K$ is one-dimensional.
Such a result is useful when performing Liapunov-Schmidt’s finite dimensional reduction. When $\gamma=s=0$, the equation is also invariant under the translations $x\mapsto W(x-x_0)$ for any $x_0\in\rn$, and the kernel $K$ is of dimension $n+1$ (see Rey [@Rey] and also Bianchi-Egnell [@BE]). After this note was completed, we learnt that Dancer-Gladiali-Grossi [@dgg] proved Theorem \[th:main\] in the case $s=0$, and that their proof can be extended to our case, see also Gladiali-Grossi-Neves [@ggn].
This note is devoted to the proof of Theorem \[th:main\]. Since $\gamma+s>0$, it follows from Chou-Chu [@ChouChu] that there exists $r>0$ such that $W=\lambda^{\frac{1}{\crits-2}}U_r$, where $$U(x):=\left(|x|^{\frac{2-s}{n-2}\am}+|x|^{\frac{2-s}{n-2}\ap} \right)^{-\frac{n-2}{2-s}}$$ with $$\eps:=\sqrt{\frac{(n-2)^2}{4}-\gamma}\hbox{ and }\alpha_{\pm}(\gamma):=\frac{n-2}{2}\pm\sqrt{\frac{(n-2)^2}{4}-\gamma}.$$ As one checks, $U\in \dundeux\cap C^\infty(\rnp)$ and $$\label{eq:U}
-\Delta U-\frac{\gamma}{|x|^2}U=\lambda\frac{U^{\crits-1}}{|x|^s}\hbox{ in }\rnp,\hbox{ with }\lambda:=4\frac{n-s}{n-2}\eps^2.$$ Therefore, proving Theorem \[th:main\] reduces to proving that $\tilde{K}$ is one-dimensional, where $$\label{def:tK}
\tilde{K}:=\left\{\varphi\in \dundeux/\, -\Delta\varphi-\frac{\gamma}{|x|^2}\varphi=(\crits-1)\lambda\frac{U^{\crits-2}}{|x|^s}\varphi\hbox{ in }\dundeux\right\}$$
[**I. Conformal transformation.**]{}
We let $\sn:=\{x\in\rn/\, \sum x_i^2=1\}$ be the standard $(n-1)-$dimensional sphere of $\rn$. We endow it with its canonical metric $\can$. We define $$\left\{\begin{array}{cccc}
\Phi: & \rr\times\sn &\mapsto &\rnp\\
&(t,\sigma) & \mapsto & e^{-t}\sigma
\end{array}\right.$$ The map $\Phi$ is a smooth conformal diffeomorphism and $\Phi^\star\eucl=e^{-2t}(dt^2+\can)$. On any Riemannian manifold $(M,g)$, we define the conformal Laplacian as $L_g:=-\Delta_g+\frac{n-2}{4(n-1)}R_g$ where $\Delta_g:=\hbox{div}_g(\nabla)$ and $R_g$ is the scalar curvature. The conformal invariance of the Laplacian reads as follows: for a metric $g'=e^{2\omega}g$ conformal to $g$ ($\omega\in C^\infty(M)$), we have that $L_{g'}u=e^{-\frac{n+2}{2}\omega}L_g(e^{\frac{n-2}{2}\omega}u)$ for all $u\in C^\infty(M)$. It follows from this invariance that for any $u\in C^\infty_c(\rnp)$, we have that $$\label{transfo:delta}
(-\Delta u)\circ \Phi(t,\sigma)=e^{\frac{n+2}{2}t}\left(-\partial_{tt}\hat{u}-\Delta_{\can}\hat{u}+\frac{(n-2)^2}{4}\hat{u}\right)(t,\sigma)$$ for all $(t,\sigma)\in\rr\times \sn$, where $\hat{u}(t,\sigma):=e^{-\frac{n-2}{2}t}u(e^{-t}\sigma)$ for all $(t,\sigma)\in \rr\times\sn$. In addition, as one checks, for any $u,v\in C^\infty_c(\rnp)$, we have that $$\begin{aligned}
\int_{\rn}(\nabla u,\nabla v)\, dx&=& \int_{\rr\times\sn}\left(\partial_t\hat{u}\partial_t\hat{v}+\left(\nabla^\prime\hat{u},\nabla^\prime\hat{v}\right)_{\can}+\frac{(n-2)^2}{4}\hat{u}\hat{v}\right)\, dt\, d\sigma\nonumber\\
&:=&B(\hat{u},\hat{v})\label{def:B}\end{aligned}$$ where we have denoted $\nabla^\prime\hat{u}$ as the gradient on $\sn$ with respect to the $\sigma$ coordinate. We define the space $H$ as the completion of $C_c^\infty(\rr\times\sn)$ for the norm $\Vert\cdot\Vert_H:=\sqrt{B(\cdot,\cdot)}$. As one checks, $u\mapsto \hat{u}$ extends to a bijective isometry $\dundeux\to H$.
The Hardy-Sobolev inequality asserts the existence of $K(n,s,\gamma)>0$ such that $\left(\int_{\rn}\frac{|u|^{\crits}}{|x|^s}\, dx\right)^{\frac{2}{\crits}}\leq K(n,s,\gamma)\int_{\rn}\left(|\nabla u|^2-\frac{\gamma}{|x|^2}u^2\right)\, dx$ for all $u\in C^\infty_c(\rnp)$. Via the isometry $\dundeux\simeq H$, this inequality rewrites $$\left(\int_{\rr\times \sn}|v|^{\crits}\, dt d\sigma\right)^{\frac{2}{\crits}}\leq K(n,s,\gamma)\int_{\rr\times\sn}\left((\partial_t v)^2+|\nabla^\prime v|_{\can}^2+\eps^2v^2\right)\, dtd\sigma,$$ for all $v\in H$. In particular, $v\in L^{\crits}(\rr\times\sn)$ for all $v\in H$.
We define $H_1^2(\rr)$ (resp. $H_1^2(\sn)$) as the completion of $C^\infty_c(\rr)$ (resp. $C^\infty(\sn)$) for the norm $$u\mapsto \sqrt{\int_{\rr}(\dot{u}^2+u^2)\, dx}\; \left(\hbox{resp. }u\mapsto \sqrt{\int_{\sn}(|\nabla^\prime u|^2_{\can}+u^2)\, d\sigma}\right).$$ Each norm arises from a Hilbert inner product. For any $(\varphi,Y)\in C^\infty_c(\rr)\times C^\infty(\sn)$, define $\varphi\star Y\in C^\infty_c(\rr\times\sn)$ by $(\varphi\star Y)(t,\sigma):=\varphi(t)Y(\sigma)$ for all $(t,\sigma)\in\rr\times\sn$. As one checks, there exists $C>0$ such that $$\label{eq:star}
\Vert \varphi\star Y\Vert_H\leq C\Vert \varphi\Vert_{H_1^2(\rr)}\Vert Y\Vert_{H_1^2(\sn)}$$ for all $(\varphi,Y)\in C^\infty_c(\rr)\times C^\infty(\sn)$. Therefore, the operator $\star$ extends continuously from $H_1^2(\rr)\times H_1^2(\sn)$ to $H$, and \[eq:star\] holds for all $(\varphi,Y)\in H_1^2(\rr)\times H_1^2(\sn)$.
\[lem:2\] We fix $u\in C^\infty_c(\rr\times\sn)$ and $Y\in H_1^2(\sn)$. We define $$u_Y(t):=\int_{\sn}u(t,\sigma)Y(\sigma)\, d\sigma=\langle u(t,\cdot),Y\rangle_{L^2(\sn)}\hbox{ for all }t\in\rr.$$ Then $u_Y\in H_1^2(\rr)$. Moreover, this definition extends continuously to $u\in H$ and there exists $C>0$ such that $$\Vert u_Y\Vert_{H_1^2(\rr)}\leq C\Vert u\Vert_H\Vert Y\Vert_{H_1^2(\sn)}\hbox{ for all }(u,Y)\in H\times H_1^2(\sn).$$
[*Proof of Lemma \[lem:2\]:*]{} We let $u\in C^\infty_c(\rr\times\sn)$, $Y\in H_1^2(\sn)$ and $\varphi\in C^\infty_c(\rr)$. Fubini’s theorem yields: $$\int_{\rr}\left(\partial_t u_Y\partial_t\varphi+u_Y\varphi\right)\, dt=\int_{\rr\times\sn}\left(\partial_t u\partial_t(\varphi\star Y)+u\cdot (\varphi\star Y)\right)\, dtd\sigma$$ Taking $\varphi:=u_Y$, the Cauchy-Schwarz inequality yields $$\begin{aligned}
&&\Vert u_Y\Vert_{H_1^2(\rr)}^2\\
&&\leq \sqrt{\int_{\rr\times\sn}\left((\partial_t u)^2+u^2\right)dtd\sigma}
\times \sqrt{\int_{\rr\times\sn}\left((\partial_t (u_Y\star Y))^2+ (u_Y\star Y)^2\right) dtd\sigma}\\
&&\leq C\Vert u\Vert_H\Vert u_Y\star Y\Vert_H\leq C\Vert u\Vert_H\Vert u_Y\Vert_{H_1^2(\rr)}\Vert Y\Vert_{H_1^2(\sn)},\end{aligned}$$ and then $\Vert u_Y\Vert_{H_1^2(\rr)}\leq C\Vert u\Vert_H\Vert Y\Vert_{H_1^2(\sn)}$. The extension follows from density.
[**II. Transformation of the problem.**]{} We let $\varphi\in \tilde{K}$, that is $$-\Delta\varphi-\frac{\gamma}{|x|^2}\varphi=(\crits-1)\lambda\frac{U^{\crits-2}}{|x|^s}\varphi\hbox{ weakly in }\dundeux.$$ Since $U\in C^\infty(\rnp)$, elliptic regularity yields $\varphi\in C^\infty(\rnp)$. Moreover, the correspondence \[transfo:delta\] yields $$\label{eq:hphi}
-\partial_{tt}\hphi-\Delta_{\can}\hphi+\eps^2\hphi=(\crits-1)\lambda \hU^{\crits-2}\hphi$$ weakly in $H$. Note that since $\hphi,\hU\in H$ and $H$ is continuously embedded in $L^{\crits}(\rr\times\sn)$, this formulation makes sense. Since $\varphi\in C^\infty(\rnp)$, we get that $\hphi\in C^\infty(\rr\times\sn)\cap H$ and equation \[eq:hphi\] makes sense strongly in $\rr\times\sn$. As one checks, we have that $$\hU(t,\sigma)=\left(e^{\frac{2-s}{n-2}\eps t}+e^{-\frac{2-s}{n-2}\eps t}\right)^{-\frac{n-2}{2-s}}\hbox{ for all }(t,\sigma)\in \rr\times\sn.$$ In the sequel, we will write $\hU(t)$ for $\hU(t,\sigma)$ for $(t,\sigma)\in \rr\times\sn$.
The eigenvalues of $-\Delta_{\can}$ on $\sn$ are $$0=\mu_0<n-1=\mu_1<\mu_2<....$$ We let $\mu\geq 0$ be an eigenvalue for $-\Delta_{\can}$ and we let $Y=Y_\mu\in C^\infty(\sn)$ be a corresponding eigenfunction, that is $$-\Delta_{\can}Y=\mu Y\hbox{ in }\sn.$$ We fix $\psi\in C^\infty_c(\rr)$ so that $\psi\star Y\in C^\infty_c(\rr\times\sn)$. Multiplying by $\psi\star Y$, integrating by parts and using Fubini’s theorem yields $$\int_{\rr}\left(\partial_{t}\hphi_Y\partial_t\psi+(\mu+\eps^2)\hphi_Y\psi\right)\, dt=\int_{\rr}(\crits-1)\lambda \hU^{\crits-2}\hphi_Y\psi\, dt,$$ where $\hphi_Y\in H_1^2(\rr)\cap C^\infty(\rr)$. Then $$A_\mu \hphi_Y=0\hbox{ with }A_\mu:=-\partial_{tt}+(\mu+\eps^2-(\crits-1)\lambda \hU^{\crits-2})$$ where this identity holds both in the classical sense and in the weak $H_1^2(\rr)$ sense. We claim that $$\label{eq:phi:0}
\hphi_Y\equiv 0\hbox{ for every eigenfunction }Y\hbox{ associated with an eigenvalue }\mu\geq n-1.$$ We prove the claim by taking inspiration from Chang-Gustafson-Nakanishi ([@gustaf], Lemma 2.1). Differentiating \[eq:U\] with respect to $x_i$, $i=1,\ldots,n$, we get that $$-\Delta\partial_i U-\frac{\gamma}{|x|^2}\partial_i U-(\crits-1)\lambda\frac{U^{\crits-2}}{|x|^s}\partial_i U=-\left(\frac{2\gamma}{|x|^{4}}U+\frac{s\lambda}{|x|^{s+2}}U^{\crits-1}\right)x_i$$ On $\rr\times\sn$, this equation reads $$-\partial_{tt}\hat{\partial_i U}-\Delta_{\can}\hat{\partial_i U}+\left(\eps^2-(\crits-1)\lambda \hU^{\crits-2}\right)\hat{\partial_i U}=-\sigma_i e^t \left(2\gamma\hU+s\lambda \hU^{\crits-1}\right)$$ Note that $\hat{\partial_i U}=-V\star \sigma_i$, where $\sigma_i:\sn\to \rr$ is the projection on the $x_i$’s and $$V(t):=-e^{-\frac{n-2}{2}t}U^\prime(e^{-t})=e^{(1+\eps)t}\left(\ap +\am e^{2\frac{2-s}{n-2}\eps t}\right)\left(1+e^{2\frac{2-s}{n-2}\eps t}\right)^{-\frac{n-s}{2-s}}>0$$ for all $t\in\rr$. Since $-\Delta_{\can}\sigma_i=(n-1)\sigma_i$ (the $\sigma_i$’s form a basis of the second eigenspace of $-\Delta_{\can}$), we then get that $$A_\mu V\geq A_{n-1}V= e^t\left(2\gamma\hU+s\lambda \hU^{\crits-1}\right)>0\hbox{ for all }\mu\geq n-1\hbox{ and }V>0.$$ Note that for $\gamma>0$, we have that $\am>0$, and that for $\gamma=0$, we have that $\am=0$. As one checks, we have that $$\begin{aligned}
(i)\;\left\{\left(\gamma>0\hbox{ and }\eps>1\right)\hbox{ or }\left(\gamma=0\hbox{ and }s<\frac{n}{2}\right)\right\}&\Rightarrow & V\in H_1^2(\rr)\\
(ii)\; \left\{\left(\gamma>0\hbox{ and }\eps\leq1\right)\hbox{ or }\left(\gamma=0\hbox{ and }s\geq \frac{n}{2}\right)\right\}&\Rightarrow & V\notin L^2((0,+\infty))\end{aligned}$$ [*Assume that case (i) holds:*]{} in this case, $V\in H_1^2(\rr)$ is a distributional solution to $A_\mu V>0$ in $H_1^2(\rr)$. We define $m:=\inf \{\int_{\rr}\varphi A_\mu \varphi\, dt\}$, where the infimum is taken on $\varphi\in H_1^2(\rr)$ such that $\Vert\varphi\Vert_2=1$. We claim that $m>0$. Otherwise, it follows from Lemma \[lem:3\] below that the infimum is achieved, say by $\varphi_0\in H_1^2(\rr)\setminus \{0\}$ that is a weak solution to $A_\mu\varphi_0=m\varphi_0$ in $\rr$. Since $|\varphi_0|$ is also a minimizer, and due to the comparison principle, we can assume that $\varphi_0>0$. Using the self-adjointness of $A_\mu$, we get that $0\geq m\int_{\rr}\varphi_0V\, dt=\int_{\rr}(A_\mu \varphi_0)V\, dt=\int_{\rr}(A_\mu V)\varphi_0\, dt>0$, which is a contradiction. Then $m>0$. Since $A_\mu\varphi_Y=0$, we then get that $\varphi_Y\equiv 0$ as soon as $\mu\geq n-1$. This ends case (i).
[*Assume that case (ii) holds:*]{} we assume that $\hphi_Y\not\equiv 0$. It follows from Lemma \[lem:4\] that $V(t)=o(e^{-\alpha |t|})$ as $t\to -\infty$ for all $0<\alpha<\sqrt{\eps^2+n-1}$. As one checks with the explicit expression of $V$, this is a contradiction when $\eps<\frac{n-2}{2}$, that is, when $\gamma>0$. Then we have that $\gamma=0$ and $\eps=\frac{n-2}{2}$. Since $\frac{n}{2}\leq s<2$, we have that $n=3$. As one checks, $(\mu+\eps^2-(\crits-1)\lambda \hU^{\crits-2})>0$ for $\mu\geq n-1$ as soon as $n=3$ and $s\geq 3/2$. Lemma \[lem:4\] then yields $\hphi_Y\equiv 0$, a contradiction. So $\hphi_Y\equiv 0$, which ends case (ii).
The steps above prove \[eq:phi:0\]. Then, for all $t\in\rr$, $\hphi(t,\cdot)$ is orthogonal to the eigenspaces of $\mu_i$, $i\geq 1$, so it is in the eigenspace of $\mu_0=0$ spanned by $1$, and therefore $\hphi=\hphi(t)$ is independent of $\sigma\in\sn$. Then $$-\hphi^{\prime\prime}+(\eps^2-(\crits-1)\lambda \hU^{\crits-2})\hphi=0\hbox{ in }\rr\hbox{ and }\hphi\in H_1^2(\rr).$$ It follows from Lemma \[lem:5\] that the space of such functions is at most one-dimensional. Going back to $\varphi$, we get that $\tilde{K}$ is of dimension at most one, and then so is $K$. Since $Z\in K$, $K$ is one-dimensional and $K=\rr Z$. This proves Theorem \[th:main\].
[**III. Auxiliary lemmas.**]{}
\[lem:5\] Let $q\in C^0(\rr)$. Then $$\hbox{dim}_{\rr}\{\varphi\in C^2(\rr)\cap H_1^2(\rr)\hbox{ such that }-\ddot{\varphi}+q\varphi=0\}\leq 1.$$
[*Proof of Lemma \[lem:5\]:*]{} Let $F$ be this space. Fix $\varphi,\psi\in F\setminus\{0\}$: we prove that they are linearly dependent. Define the Wronskian $W:=\varphi \dot{\psi}-\dot{\varphi}\psi$. As one checks, $\dot{W}=0$, so $W$ is constant. Since $\varphi,\dot{\varphi},\psi,\dot{\psi}\in L^2(\rr)$, then $W\in L^1(\rr)$ and then $W\equiv 0$. Therefore, there exists $\lambda\in\rr$ such that $(\psi(0),\dot{\psi}(0))=\lambda (\varphi(0),\dot{\varphi}(0))$, and then, classical ODE theory yields $\psi=\lambda\varphi$. Then $F$ is of dimension at most one.
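For the reader's convenience, the computation behind "$\dot{W}=0$" in the proof above is one line: the cross terms $\dot{\varphi}\dot{\psi}$ cancel, and substituting $\ddot{\varphi}=q\varphi$ and $\ddot{\psi}=q\psi$ yields

```latex
\dot{W} = \dot{\varphi}\dot{\psi} + \varphi\ddot{\psi}
        - \ddot{\varphi}\psi - \dot{\varphi}\dot{\psi}
        = \varphi\,(q\psi) - (q\varphi)\,\psi = 0.
```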
\[lem:3\] Let $q\in C^0(\rr)$ be such that there exists $A>0$ such that $\lim_{t\to\pm\infty}q(t)=A$, and define $$m:=\inf_{\varphi\in H_1^2(\rr)\setminus\{0\}}\frac{\int_{\rr}\left(\dot{\varphi}^2+q\varphi^2\right)\, dt}{\int_{\rr}\varphi^2\, dt}.$$ Then either $m>0$, or the infimum is achieved.
Note that in the case $q(t)\equiv A$, $m=A$ and the infimum is not achieved.
[*Proof of Lemma \[lem:3\]:*]{} As one checks, $m\in\rr$ is well-defined. We let $(\varphi_i)_i\in H_1^2(\rr)$ be a minimizing sequence such that $\int_{\rr}\varphi_i^2\, dt=1$ for all $i$, that is $\int_{\rr}\left(\dot{\varphi}_i^2+q\varphi_i^2\right)\, dt=m+o(1)$ as $i\to +\infty$. Then $(\varphi_i)_i$ is bounded in $H_1^2(\rr)$, and, up to a subsequence, there exists $\varphi\in H_1^2(\rr)$ such that $\varphi_i\rightharpoonup \varphi$ weakly in $H_1^2(\rr)$ and $\varphi_i\to \varphi$ strongly in $L^2_{loc}(\rr)$ as $i\to +\infty$. We define $\theta_i:=\varphi_i-\varphi$. Since $\lim_{t\to \pm\infty}(q(t)-A)=0$ and $(\theta_i)_i$ goes to $0$ strongly in $L^2_{loc}$, we get that $\lim_{i\to +\infty}\int_{\rr}(q(t)-A)\theta_i^2\, dt=0$. Using the weak convergence to $0$ and that $(\varphi_i)_i$ is minimizing, we get that $$\int_{\rr}\left(\dot{\varphi}^2+q\varphi^2\right)\, dt+\int_{\rr}\left(\dot{\theta}_i^2+A\theta_i^2\right)\, dt=m+o(1)\hbox{ as }i\to +\infty.$$ Since $1-\Vert\varphi\Vert_2^2=\Vert\theta_i\Vert_2^2+o(1)$ as $i\to +\infty$ and $\int_{\rr}\left(\dot{\varphi}^2+q\varphi^2\right)\, dt\geq m\Vert\varphi\Vert_2^2$, we get $$m\Vert\theta_i\Vert_2^2\geq \int_{\rr}\left(\dot{\theta}_i^2+A\theta_i^2\right)\, dt+o(1)\hbox{ as }i\to +\infty.$$ If $m\leq 0$, then $\theta_i\to 0$ strongly in $H_1^2(\rr)$, and then $(\varphi_i)_i$ goes strongly to $\varphi\not\equiv 0$ in $H_1^2$, and $\varphi$ is a minimizer for $m$. This proves the lemma.
\[lem:4\] Let $q\in C^0(\rr)$ be such that there exists $A>0$ such that $\lim_{t\to\pm\infty}q(t)=A$ and $q$ is even. We let $\varphi\in C^2(\rr)$ be such that $-\ddot{\varphi}+q\varphi=0$ in $\rr$ and $\varphi\in H_1^2(\rr)$.
- If $q\geq 0$, then $\varphi\equiv 0$.
- We assume that there exists $V\in C^2(\rr)$ such that $$-\ddot{V}+qV>0\; ,\; V>0\hbox{ and }V\not\in L^2((0,+\infty)).$$ Then either $\varphi\equiv 0$ or $V(t)=o(e^{-\alpha |t|})$ as $t\to -\infty$ for all $0<\alpha<\sqrt{A}$.
[*Proof of Lemma \[lem:4\]:*]{} We assume that $\varphi\not\equiv 0$. We first assume that $q\geq 0$. By studying the monotonicity of $\varphi$ between two consecutive zeros, we get that $\varphi$ has at most one zero, and then $\ddot{\varphi}$ has constant sign around $\pm\infty$. Therefore, $\varphi$ is monotone around $\pm\infty$ and then has a limit, which is $0$ since $\varphi\in L^2(\rr)$. The contradiction follows from studying the signs of $\ddot{\varphi}$ and $\varphi$. Then $\varphi\equiv 0$ and the first part of Lemma \[lem:4\] is proved.
We now deal with the second part and we let $V\in C^2(\rr)$ be as in the statement. We define $\psi:=V^{-1}\varphi$. Then, $-\ddot{\psi}+h \dot{\psi}+Q \psi=0$ in $\rr$ with $h,Q\in C^0(\rr)$ and $Q>0$. Therefore, by studying the zeros, $\dot{\psi}$ vanishes at most once, and then $\psi(t)$ has limits as $t\to\pm\infty$. Since $\varphi=\psi V$, $\varphi\in L^2(\rr)$ and $V\not\in L^2((0,+\infty))$, we get that $\lim_{t\to +\infty}\psi(t)=0$. We claim that $\lim_{t\to-\infty}\psi(t)\neq 0$. Otherwise, both limits of $\psi$ would vanish and, since $\dot{\psi}$ vanishes at most once, $\psi$ would be of constant sign, say $\psi>0$. At an interior maximum point $t_0$ of $\psi$, we have $\dot{\psi}(t_0)=0$, so the equation yields $\ddot{\psi}(t_0)=Q(t_0)\psi(t_0)>0$, which contradicts the maximality of $t_0$. So the limit of $\psi$ at $-\infty$ is nonzero, and then $V(t)=O(\varphi(t))$ as $t\to-\infty$.
We claim that $\varphi$ is even or odd and $\varphi$ has constant sign around $+\infty$. Since $t\mapsto \varphi(-t)$ is also a solution to the ODE, it follows from Lemma \[lem:5\] that it is a multiple of $\varphi$, and then $\varphi$ is even or odd. Since $\dot{\psi}$ changes sign at most once, then $\psi$ changes sign at most twice. Therefore $\varphi=\psi V$ has constant sign around $+\infty$.
We fix $0<A'<A$ and we let $R_0>0$ be such that $q(t)>A'$ for all $t\geq R_0$. Without loss of generality, we also assume that $\varphi(t)>0$ for $t\geq R_0$. We define $b(t):=C_0e^{-\sqrt{A'}t}-\varphi(t)$ for all $t\in\rr$ with $C_0:=2\varphi(R_0)e^{\sqrt{A'}R_0}$. We claim that $b(t)\geq 0$ for all $t\geq R_0$. Otherwise $\inf_{t\geq R_0}b(t)<0$, and since $\lim_{t\to +\infty}b(t)=0$ and $b(R_0)=\varphi(R_0)>0$, there exists $t_1>R_0$ such that $\ddot{b}(t_1)\geq 0$ and $b(t_1)<0$. However, as one checks, the equation yields $\ddot{b}(t_1)<0$, which is a contradiction. Therefore $b(t)\geq 0$ for all $t\geq R_0$, and then $0<\varphi(t)\leq C_0e^{-\sqrt{A'}t}$ for all $t\geq R_0$. Lemma \[lem:4\] follows from this inequality, $\varphi$ being even or odd, and $V(t)=O(\varphi(t))$ as $t\to-\infty$.
CERN-TH/99-242\
hep-ph/9908340
**Recent Theoretical Developments in\
CP Violation in the $B$ System**
Robert Fleischer [^1]\
[*Theory Division, CERN, CH-1211 Geneva 23, Switzerland*]{}
August 1999
Setting the Stage
=================
CP violation is one of the central and fundamental phenomena in modern particle physics, providing a very fertile testing ground for the Standard Model. In this respect, the $B$-meson system plays an outstanding role, which is also reflected in the tremendous experimental effort put into the preparations to explore $B$ physics. The BaBar (SLAC) and BELLE (KEK) detectors have already seen their first events – marking the beginning of the $B$-factory era in particle physics – and CLEO-III (Cornell), HERA-B (DESY) and CDF-II (Fermilab) will start taking data in the near future. Although the physics potential of these experiments is very promising, it may well be that the “definite” answer in the search for new physics will be left for second-generation $B$-physics experiments at hadron machines, such as LHCb (CERN) or BTeV (Fermilab), which offer, among other things, very exciting ways of using $B_s$ decays.
Within the framework of the Standard Model, CP violation is closely related to the Cabibbo–Kobayashi–Maskawa (CKM) matrix [@ckm], connecting the electroweak eigenstates of the down, strange and bottom quarks with their mass eigenstates. As far as CP violation is concerned, the central feature is that – in addition to three generalized Cabibbo-type angles – also a [*complex phase*]{} is needed in the three-generation case to parametrize the CKM matrix. This complex phase is the origin of CP violation within the Standard Model. Concerning tests of the CKM picture of CP violation, the central targets are the [*unitarity triangles*]{} of the CKM matrix. The unitarity of the CKM matrix, which is described by $$\hat V_{\mbox{{\scriptsize CKM}}}^{\,\,\dagger}\cdot\hat
V_{\mbox{{\scriptsize CKM}}}=
\hat 1=\hat V_{\mbox{{\scriptsize CKM}}}\cdot\hat V_{\mbox{{\scriptsize
CKM}}}^{\,\,\dagger},$$ leads to a set of 12 equations, consisting of 6 normalization relations and 6 orthogonality relations. The latter can be represented as 6 triangles in the complex plane, all having the same area [@AKL]. However, in only two of them, all three sides are of comparable magnitude ${\cal O}(\lambda^3)$, while in the remaining ones, one side is suppressed relative to the others by ${\cal O}(\lambda^2)$ or ${\cal O}(\lambda^4)$, where $\lambda\equiv|V_{us}|=0.22$ denotes the Wolfenstein parameter [@wolf]. The orthogonality relations describing the non-squashed triangles are given as follows: $$\begin{aligned}
V_{ud}\,V_{ub}^\ast+V_{cd}\,V_{cb}^\ast+V_{td}\,V_{tb}^\ast&=&0\label{UT1}\\
V_{ud}^\ast\, V_{td}+V_{us}^\ast\, V_{ts}+V_{ub}^\ast\, V_{tb}&=&0.\label{UT2}\end{aligned}$$ The two non-squashed triangles agree at leading order in the Wolfenstein expansion (${\cal O}(\lambda^3)$), so that we actually have to deal with a single triangle at this order, which is usually referred to as “the” unitarity triangle of the CKM matrix [@ut]. However, in the era of second-generation experiments, starting around 2005, we will have to take into account the next-to-leading order terms of the Wolfenstein expansion, and will have to distinguish between the unitarity triangles described by (\[UT1\]) and (\[UT2\]), which are illustrated in Fig. \[fig:UT\]. Here, $\overline{\rho}$ and $\overline{\eta}$ are related to the Wolfenstein parameters $\rho$ and $\eta$ through [@BLO] $$\overline{\rho}\equiv\left(1-\lambda^2/2\right)\rho,\quad
\overline{\eta}\equiv\left(1-\lambda^2/2\right)\eta,$$ and the angle $\delta\gamma=\lambda^2\eta$ in Fig. \[fig:UT\](b) measures the CP-violating weak $B^0_s$–$\overline{B^0_s}$ mixing phase, as we will see in Subsection \[sec:CP-neut\].
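These relations are simple arithmetic; the following sketch (with placeholder values of $\rho$ and $\eta$, not fitted CKM values) makes the size of the $\lambda^2$ corrections and the smallness of $\delta\gamma$ explicit:

```python
# Numerical check of the O(lambda^2) relations above; rho and eta are
# illustrative placeholders, not fitted CKM values.
lam = 0.22               # Wolfenstein parameter lambda = |V_us|
rho, eta = 0.20, 0.35

rho_bar = (1 - lam**2 / 2) * rho       # ~0.195: a ~2.4% shift of rho
eta_bar = (1 - lam**2 / 2) * eta       # ~0.342
delta_gamma = lam**2 * eta             # ~0.017 rad, i.e. a mixing-phase angle of ~1 degree

print(rho_bar, eta_bar, delta_gamma)
```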
The outline of this paper is as follows: in Section \[sec:Stan-Meth\], the standard methods to extract CKM phases from CP-violating effects in non-leptonic $B$ decays are reviewed briefly in the light of recent theoretical and experimental results. In Section \[sec:New-Strat\], we then focus on new theoretical developments in this field, including extractions of $\gamma$ from $B\to\pi K$ and $B_{s(d)}\to J/\psi\, K_{\rm S}$ decays, a simultaneous determination of $\beta$ and $\gamma$, which is provided by the modes $B_d\to \pi^+\pi^-$ and $B_s\to K^+K^-$, and extractions of CKM phases and hadronic parameters from angular distributions of certain $B_{d,s}$ decays, such as $B_d\to J/\psi\,\rho^0$ and $B_s\to J/\psi\,\phi$. Finally, in Section \[sec:concl\] we summarize the conclusions and give a brief outlook.
A Brief Look at the Standard Methods to Extract CKM Phases {#sec:Stan-Meth}
==========================================================
In order to determine the angles of the unitarity triangles shown in Fig. \[fig:UT\] and to test the Standard-Model description of CP violation, the major role is played by non-leptonic $B$ decays, which can be divided into three decay classes: decays receiving both “tree” and “penguin” contributions, pure “tree” decays, and pure “penguin” decays. There are two types of penguin topologies: gluonic (QCD) and electroweak (EW) penguins, which are related to strong and electroweak interactions, respectively. Because of the large top-quark mass, EW penguins also play an important role in several processes [@rev]. An outstanding tool to extract CKM phases is provided by CP-violating effects in non-leptonic decays of neutral $B$-mesons.
CP Violation in Neutral $B$ Decays {#sec:CP-neut}
----------------------------------
A particularly simple and interesting situation arises if we restrict ourselves to decays of neutral $B_q$-mesons ($q\in\{d,s\}$) into CP self-conjugate final states $|f\rangle$, satisfying the relation $({\cal CP})|f\rangle=\pm\,|f\rangle$. In this case, the corresponding time-dependent CP asymmetry can be expressed as $$\begin{aligned}
\lefteqn{a_{\rm CP}(t)\equiv\frac{\Gamma(B^0_q(t)\to f)-
\Gamma(\overline{B^0_q}(t)\to f)}{\Gamma(B^0_q(t)\to f)+
\Gamma(\overline{B^0_q}(t)\to f)}=}\nonumber\\
&&2\,e^{-\Gamma_q t}\left[\frac{{\cal A}_{\rm CP}^{\rm dir}(B_q\to f)
\cos(\Delta M_q t)+{\cal A}_{\rm CP}^{\rm mix}(B_q\to f)\sin(\Delta M_q t)}{
e^{-\Gamma_{\rm H}^{(q)}t}+e^{-\Gamma_{\rm L}^{(q)}t}+
{\cal A}_{\rm \Delta\Gamma}(B_q\to f)\left(e^{-\Gamma_{\rm H}^{(q)}t}-
e^{-\Gamma_{\rm L}^{(q)}t}\right)} \right],\label{ee6}\end{aligned}$$ where $\Delta M_q\equiv M_{\rm H}^{(q)}-M_{\rm L}^{(q)}$ denotes the mass difference between the $B_q$ mass eigenstates, and $\Gamma_{\rm H,L}^{(q)}$ are the corresponding decay widths, with $\Gamma_q\equiv\left(\Gamma_{\rm H}^{(q)}+\Gamma_{\rm L}^{(q)}\right)/2$. In Eq. (\[ee6\]), we have separated the “direct” from the “mixing-induced” CP-violating contributions, which are described by $$\label{ee7}
{\cal A}^{\mbox{{\scriptsize dir}}}_{\mbox{{\scriptsize CP}}}(B_q\to f)\equiv
\frac{1-\bigl|\xi_f^{(q)}\bigr|^2}{1+\bigl|\xi_f^{(q)}\bigr|^2}\quad
\mbox{and}\quad
{\cal A}^{\mbox{{\scriptsize mix--ind}}}_{\mbox{{\scriptsize
CP}}}(B_q\to f)\equiv\frac{2\,\mbox{Im}\,\xi^{(q)}_f}{1+\bigl|\xi^{(q)}_f
\bigr|^2}\,,$$ respectively. Here direct CP violation refers to CP-violating effects arising directly in the corresponding decay amplitudes, whereas mixing-induced CP violation is due to interference effects between $B_q^0$–$\overline{B_q^0}$ mixing and decay processes. Whereas the width difference $\Delta\Gamma_q\equiv\Gamma_{\rm H}^{(q)}-
\Gamma_{\rm L}^{(q)}$ is negligibly small in the $B_d$ system, it may be sizeable in the $B_s$ system [@dun; @DGamma-cal], thereby providing the observable $$\label{ADGam}
{\cal A}_{\rm \Delta\Gamma}(B_q\to f)\equiv
\frac{2\,\mbox{Re}\,\xi^{(q)}_f}{1+\bigl|\xi^{(q)}_f
\bigr|^2},$$ which is not independent of ${\cal A}^{\mbox{{\scriptsize
dir}}}_{\mbox{{\scriptsize CP}}}(B_q\to f)$ and ${\cal A}^{\mbox{{\scriptsize mix}}}_{\mbox{{\scriptsize CP}}}(B_q\to f)$: $$\label{Obs-rel}
\Bigl[{\cal A}_{\rm CP}^{\rm dir}(B_s\to f)\Bigr]^2+
\Bigl[{\cal A}_{\rm CP}^{\rm mix}(B_s\to f)\Bigr]^2+
\Bigl[{\cal A}_{\Delta\Gamma}(B_s\to f)\Bigr]^2=1.$$ Essentially all the information needed to evaluate the CP asymmetry (\[ee6\]) is included in the following quantity: $$\xi_f^{(q)}=\mp\,e^{-i\phi_q}\,
\frac{A(\overline{B^0_q}\to f)}{A(B^0_q\to f)}=
\mp\,e^{-i\phi_q}\,
\frac{\sum\limits_{j=u,c}V_{jr}^\ast V_{jb}\,
{\cal M}^{jr}}{\sum\limits_{j=u,c}V_{jr}V_{jb}^\ast\,
{\cal M}^{jr}}\,,$$ where the ${\cal M}^{jr}$ denote hadronic matrix elements of certain four-quark operators, $r\in\{d,s\}$ distinguishes between $\bar b\to\bar d$ and $\bar b\to\bar s$ transitions, and $$\phi_q=\left\{\begin{array}{cr}
+2\beta&\mbox{($q=d$)}\\
-2\delta\gamma&\mbox{($q=s$)}\end{array}\right.$$ is the weak $B_q^0$–$\overline{B_q^0}$ mixing phase. In general, the observable $\xi_f^{(q)}$ suffers from hadronic uncertainties, which are due to the hadronic matrix elements ${\cal M}^{jr}$. However, if the decay $B_q\to f$ is dominated by a single CKM amplitude, the corresponding matrix elements cancel, and $\xi_f^{(q)}$ takes the simple form $$\label{ee10}
\xi_f^{(q)}=\mp\exp\left[-i\left(\phi_q-\phi_{\mbox{{\scriptsize
D}}}^{(f)}\right)\right],$$ where $\phi_{\mbox{{\scriptsize D}}}^{(f)}$ is a weak decay phase, which is given by $$\phi_{\mbox{{\scriptsize D}}}^{(f)}=\left\{\begin{array}{cc}
-2\gamma&\mbox{for dominant
$\bar b\to\bar u\,u\,\bar r$ CKM amplitudes,}\\
0&\,\mbox{for dominant $\bar b\to\bar c\,c\,\bar r\,$ CKM
amplitudes.}
\end{array}\right.$$
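The structure of the observables (\[ee7\]) and (\[ADGam\]) and the constraint (\[Obs-rel\]) can be verified in a few lines; the sketch below uses an illustrative phase and does not track the overall $\mp$ sign convention:

```python
import cmath

def cp_observables(xi):
    """Direct, mixing-induced and Delta-Gamma observables built from xi_f^(q)."""
    norm = 1 + abs(xi)**2
    return ((1 - abs(xi)**2) / norm,   # A_CP^dir
            2 * xi.imag / norm,        # A_CP^mix
            2 * xi.real / norm)        # A_DeltaGamma

# Generic xi: the three observables always lie on the unit sphere (Obs-rel).
a_dir, a_mix, a_dgam = cp_observables(0.3 + 0.4j)
assert abs(a_dir**2 + a_mix**2 + a_dgam**2 - 1) < 1e-12

# Single-CKM-amplitude case, xi = -exp(-i(phi_q - phi_D)): |xi| = 1,
# so the direct CP asymmetry vanishes identically.
phi = 0.78  # illustrative value of phi_q - phi_D^(f), in radians
a_dir, a_mix, a_dgam = cp_observables(-cmath.exp(-1j * phi))
assert abs(a_dir) < 1e-12
```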
The “Gold-Plated” Mode $B_d\to J/\psi\, K_{\rm S}$ {#sec:BdPsiKS}
--------------------------------------------------
Probably the most important application of the formalism discussed in the previous subsection is the decay $B_d\to J/\psi\,
K_{\mbox{{\scriptsize S}}}$, which is dominated by the $\bar b\to\bar c\,c\,\bar s$ CKM amplitude [@rev], implying $$\label{e12}
{\cal A}^{\mbox{{\scriptsize mix--ind}}}_{\mbox{{\scriptsize
CP}}}(B_d\to J/\psi\, K_{\mbox{{\scriptsize S}}})=+\sin[-(2\beta-0)]\,.$$ Since (\[ee10\]) applies with excellent accuracy to $B_d\to J/\psi\,
K_{\mbox{{\scriptsize S}}}$ – the point is that penguins enter essentially with the same weak phase as the leading tree contribution, as is discussed in more detail in Subsection \[sec:BsPsiKS\] – it is referred to as the “gold-plated” mode to determine the CKM angle $\beta$ [@bisa]. Strictly speaking, mixing-induced CP violation in $B_d\to J/\psi\, K_{\rm S}$ probes $\sin(2\beta+\phi_K)$, where $\phi_K$ is related to the CP-violating weak $K^0$–$\overline{K^0}$ mixing phase. Similar modifications of (\[ee10\]) and of the corresponding CP asymmetries must also be performed for other final-state configurations containing $K_{\rm S}$- or $K_{\rm L}$-mesons. However, $\phi_K$ is negligibly small in the Standard Model, and – owing to the small value of the CP-violating parameter $\varepsilon_K$ of the neutral kaon system – can only be affected by very contrived models of new physics [@nir-sil].
First attempts to measure $\sin(2\beta)$ through the CP asymmetry (\[e12\]) have recently been performed by the OPAL and CDF collaborations [@sin2b-exp]: $$\sin(2\beta)=\left\{\begin{array}{ll}
3.2^{+1.8}_{-2.0}\pm0.5&\mbox{(OPAL Collaboration),}\\
0.79^{+0.41}_{-0.44}&\,\mbox{\,(CDF\, Collaboration).}
\end{array}\right.$$ Although the experimental uncertainties are very large, it is interesting to note that these results favour the Standard-Model expectation of a [*positive*]{} value of $\sin(2\beta)$. In the $B$-factory era, an experimental uncertainty of $\left.\Delta\sin(2\beta)\right|_{\rm exp}=0.08$ seems to be achievable, whereas second-generation experiments of the LHC era aim at $\left.\Delta\sin(2\beta)\right|_{\rm exp}={\cal O}(0.01)$.
Another important implication of the Standard Model, which is interesting for the search of new physics, is the following relation: $$\label{e13}
{\cal A}^{\mbox{{\scriptsize dir}}}_{\mbox{{\scriptsize
CP}}}(B_d\to J/\psi\, K_{\mbox{{\scriptsize S}}})\approx0\approx
{\cal A}_{\mbox{{\scriptsize CP}}}(B^\pm\to J/\psi\, K^\pm).$$ In view of the tremendous accuracy that can be achieved in the LHC era, it is an important issue to investigate the theoretical accuracy of (\[e12\]) and (\[e13\]). A very interesting channel in this respect is $B_s\to J/\psi\,K_{\rm S}$ [@BsPsiK], allowing us to extract $\gamma$ and to control the – presumably very small – penguin uncertainties in the determination of $\beta$ from the CP-violating effects in $B_d\to J/\psi\,K_{\rm S}$. We shall come back to this strategy in Subsection \[sec:BsPsiKS\].
The Decay $B_d\to\pi^+\pi^-$
----------------------------
If this mode did not receive penguin contributions, its mixing-induced CP asymmetry would allow a measurement of $\sin(2\alpha)$: $${\cal A}^{\mbox{{\scriptsize
CP}}}(B_d\to\pi^+\pi^-)=-\sin[-(2\beta+2\gamma)]=-\sin(2\alpha).$$ However, this relation is strongly affected by penguin effects, which were analysed by many authors [@alpha-uncert; @charles]. There are various methods on the market to control the corresponding hadronic uncertainties. Unfortunately, these strategies are usually rather challenging from an experimental point of view.
The best known approach was proposed by Gronau and London [@gl]. It makes use of the $SU(2)$ isospin relation $$\sqrt{2}\,A(B^+\to\pi^+\pi^0)=A(B^0_d\to\pi^+\pi^-)+
\sqrt{2}\,A(B^0_d\to\pi^0\pi^0)$$ and of its CP-conjugate, which can be represented in the complex plane as two triangles. The sides of these triangles can be determined through the corresponding branching ratios, while their relative orientation can be fixed by measuring the CP-violating observable ${\cal A}^{\mbox{{\scriptsize
mix--ind}}}_{\mbox{{\scriptsize CP}}}(B_d\to\pi^+\pi^-)$ [@rev]. Following these lines, it is in principle possible to take into account the QCD penguin effects in the extraction of $\alpha$. It should be noted that EW penguins cannot be controlled with the help of this isospin strategy. However, their effect is expected to be rather small, and – as was pointed out recently [@BF; @GPY] – can be included through an additional theoretical input. Unfortunately, the Gronau–London approach suffers from an experimental problem, since the measurement of $\mbox{BR}(B_d\to\pi^0\pi^0)$, which is expected to be at most of ${\cal O}(10^{-6})$, is very difficult. However, upper bounds on the CP-averaged $B_d\to\pi^0\pi^0$ branching ratio may already be useful to put upper bounds on the QCD penguin uncertainty that affects the determination of $\alpha$ [@charles; @gq-alpha].
Alternative methods to control the penguin uncertainties in the extraction of $\alpha$ from $B_d\to\pi^+\pi^-$ are very desirable. An important one for the asymmetric $e^+$–$e^-$ $B$-factories is provided by $B\to\rho\,\pi$ modes [@Brhopi]. Here the isospin triangle relations are replaced by pentagonal relations, and the corresponding approach is rather complicated. As we will see in Subsection \[sec:BsKK\], an interesting strategy for second-generation $B$-physics experiments at hadron machines to make use of the CP-violating observables of $B_d\to\pi^+\pi^-$ is offered by the mode $B_s\to K^+K^-$, allowing a simultaneous determination of $\beta$ and $\gamma$ [*without*]{} any assumptions about penguin topologies [@BsKK].
The observation of $B_d\to\pi^+\pi^-$ has very recently been announced by the CLEO collaboration, with a branching ratio of $\left(0.47^{+0.18}_{-0.15}\pm0.13\right)\times10^{-5}$ [@cleo-Bpipi]. Other CLEO results on $B\to\pi K$ modes (see Subsection \[sec:BpiK\]) indicate that QCD penguins in fact play an important role, and that we definitely have to worry about them in the extraction of $\alpha$ from $B_d\to\pi^+\pi^-$. Needless to say, a better theoretical understanding of the hadronization dynamics of $B_d\to\pi^+\pi^-$ would also be very helpful in this respect. An interesting step towards this goal was taken in a recent paper [@BBNS].
Extracting $2\beta+\gamma$ from $B_d\to D^{(\ast)\pm}\pi^\mp$ Decays {#sec:BDpi}
--------------------------------------------------------------------
The final states of the pure “tree” decays $B_d\to D^{(\ast)\pm}\pi^\mp$ are not CP eigenstates. However, as can be seen in Fig. \[fig:BDpi\], $B^0_d$- and $\overline{B^0_d}$-mesons may both decay into the $D^{(\ast)+}\pi^-$ final state, thereby leading to interference effects between $B^0_d$–$\overline{B^0_d}$ mixing and decay processes. Consequently, the time-dependent decay rates for $B^0_d$- or $\overline{B^0_d}$-mesons present initially, i.e. at time $t=0$, decaying into the final state $f\equiv D^{(\ast)+}\pi^-$ allow us to determine the observable [@rev] $$\xi_f^{(d)}=-\,e^{-i\phi_d}
\frac{A(\overline{B^0_d}\to f)}{A(B^0_d\to f)}
=-\,e^{-i(\phi_d+\gamma)}\frac{1}{\lambda^2R_b}
\frac{\overline{M}_f}{M_{\overline f}},$$ whereas those corresponding to $\bar f\equiv D^{(\ast)-}\pi^+$ allow us to extract $$\xi_{\bar f}^{(d)}=-\,e^{-i\phi_d}
\frac{A(\overline{B^0_d}\to \bar f)}{A(B^0_d\to \bar f)}=
-\,e^{-i(\phi_d+\gamma)}\lambda^2R_b\,\frac{M_{\overline f}}{\overline{M}_f}.$$ Here $R_b\equiv|V_{ub}/(\lambda V_{cb})|=0.41\pm0.07$ is the usual CKM factor, and $$\begin{aligned}
\overline{M}_f&\equiv&\Bigl\langle f\Bigl|\overline{O}_1(\mu){\cal C}_1(\mu)+
\overline{O}_2(\mu){\cal C}_2(\mu)\Bigr|\overline{B^0_d}\Bigr\rangle\\
M_{\overline{f}}&\equiv&\Bigl\langle\overline{f}\Bigl|O_1(\mu){\cal C}_1(\mu)+
O_2(\mu){\cal C}_2(\mu)\Bigr|\overline{B^0_d}\Bigr\rangle\end{aligned}$$ are hadronic matrix elements of the following current–current operators: $$\begin{array}{rclrcl}
\overline{O}_1&=&(\bar d_\alpha u_\beta)_{\mbox{{\scriptsize
V--A}}}\left(\bar c_\beta b_\alpha\right)_{\mbox{{\scriptsize V--A}}},&
\overline{O}_2&=&(\bar d_\alpha u_\alpha)_{\mbox{{\scriptsize
V--A}}}\left(\bar c_\beta b_\beta\right)_{\mbox{{\scriptsize V--A}}},\\
O_1&=&(\bar d_\alpha c_\beta)_{\mbox{{\scriptsize V--A}}}
\left(\bar u_\beta b_\alpha\right)_{\mbox{{\scriptsize V--A}}},&
O_2&=&(\bar d_\alpha c_\alpha)_{\mbox{{\scriptsize V--A}}}
\left(\bar u_\beta b_\beta\right)_{\mbox{{\scriptsize
V--A}}}.
\end{array}$$ The observables $\xi_f^{(d)}$ and $\xi_{\bar f}^{(d)}$ allow a [*theoretically clean*]{} extraction of the weak phase $\phi_d+\gamma$ [@BDpi], as the hadronic matrix elements $\overline{M}_f$ and $M_{\overline{f}}$ cancel in $$\label{Prod}
\xi_f^{(d)}\times\xi_{\bar f}^{(d)}=e^{-2i(\phi_d+\gamma)}.$$ Since the $B^0_d$–$\overline{B^0_d}$ mixing phase $\phi_d$, i.e. $2\beta$, can be determined rather straightforwardly with the help of the “gold-plated” mode $B_d\to J/\psi\, K_{\rm S}$, we may extract the CKM angle $\gamma$ from (\[Prod\]). As the $\bar b\to\bar u$ quark-level transition in Fig. \[fig:BDpi\] is doubly Cabibbo-suppressed by $\lambda^2R_b\approx0.02$ with respect to the $b\to c$ transition, the interference effects are tiny. However, the branching ratios are large (${\cal O}(10^{-3})$), and the $D^{(\ast)\pm}\pi^\mp$ states can be reconstructed with a good efficiency and modest backgrounds. Consequently, $B_d\to D^{(\ast)\pm}\pi^\mp$ decays offer an interesting strategy to determine $\gamma$ [@BDpi-exp]. For the most optimistic scenario, an accuracy of $\gamma$ at the level of $4^\circ$ may be achievable at LHCb after 5 years of taking data.
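The cancellation of the hadronic matrix elements in (\[Prod\]) can be made explicit numerically; the weak phases, $R_b$ and the matrix-element ratio below are arbitrary illustrative inputs:

```python
import cmath, random

phi_d, gamma = 0.78, 1.22                 # weak phases in radians (illustrative)
lam, R_b = 0.22, 0.41                     # Wolfenstein lambda and |V_ub/(lam V_cb)|

# Arbitrary hadronic matrix-element ratio Mbar_f / M_fbar -- it will drop out:
M_ratio = random.uniform(0.5, 2.0) * cmath.exp(2j * cmath.pi * random.random())

xi_f    = -cmath.exp(-1j * (phi_d + gamma)) * M_ratio / (lam**2 * R_b)
xi_fbar = -cmath.exp(-1j * (phi_d + gamma)) * (lam**2 * R_b) / M_ratio

# The product depends only on the weak phase combination 2(phi_d + gamma):
assert abs(xi_f * xi_fbar - cmath.exp(-2j * (phi_d + gamma))) < 1e-12
```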
The “El Dorado” for Hadron Machines: $B_s$ System
-------------------------------------------------
Since the $e^+$–$e^-$ $B$-factories operating at the $\Upsilon(4S)$ resonance will not be in a position to explore the $B_s$ system, it is of particular interest for hadron machines. There are important differences to the $B_d$ system:
- Within the Standard Model, a large $B^0_s$–$\overline{B^0_s}$ mixing parameter $x_s\equiv \Delta M_s/\Gamma_s={\cal O}(20)$ is expected, whereas the mixing phase $\phi_s=-2\lambda^2\eta$ is expected to be very small.
- There may be a sizeable width difference $\Delta\Gamma_s\equiv
\Gamma_{\rm H}^{(s)}-\Gamma_{\rm L}^{(s)}$; the most recent theoretical analysis gives $\Delta\Gamma_s/\Gamma_s={\cal O}(10\%)$ [@DGamma-cal].
There is an interesting correlation between $\Delta\Gamma_s$ and $\Delta M_s$: $$\frac{\Delta\Gamma_s}{\Gamma_s}\approx-\,\frac{3\pi}{2S(x_t)}\,
\frac{m_b^2}{M_W^2}\,\frac{\Delta M_s}{\Gamma_s},$$ where $S(x_t)$ denotes one of the well-known Inami–Lim functions. The present experimental lower limit on $\Delta M_s$ is given by $\Delta M_s>12.4\,\mbox{ps}^{-1}$ (95% C.L.). Interestingly, this lower bound already puts constraints on the allowed region for the apex of the unitarity triangle shown in Fig. \[fig:UT\](a). A detailed discussion of this feature can be found, for instance, in [@BF-rev].
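Inserting numbers makes this correlation concrete. The following back-of-the-envelope sketch assumes the standard form of the Inami–Lim function $S_0(x_t)$ and illustrative 1999-era mass values; it is not a substitute for the full calculation of [@DGamma-cal]:

```python
import math

def S0(x):
    """Standard Inami-Lim box function S_0(x_t), x_t = (m_t/M_W)^2 (assumed form)."""
    return ((4*x - 11*x**2 + x**3) / (4 * (1 - x)**2)
            - 3 * x**3 * math.log(x) / (2 * (1 - x)**3))

m_t, M_W, m_b = 167.0, 80.4, 4.8           # GeV; illustrative values
x_t = (m_t / M_W)**2
x_s = 20.0                                  # Delta M_s / Gamma_s, Standard-Model ballpark

dG_over_G = -(3 * math.pi / (2 * S0(x_t))) * (m_b / M_W)**2 * x_s
# |Delta Gamma_s| / Gamma_s comes out around 0.14 with these inputs
```

For $x_s\approx20$ this indeed reproduces a width difference $|\Delta\Gamma_s|/\Gamma_s={\cal O}(10\%)$, as quoted above.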
It is also interesting to note that the non-vanishing width difference $\Delta\Gamma_s$ may allow studies of CP-violating effects in “untagged” $B_s$ rates [@dun; @FD]: $$\Gamma[f(t)]\equiv\Gamma(B^0_s(t)\to f)+\Gamma(\overline{B^0_s}(t)
\to f)\propto R_{\rm L}e^{-\Gamma_{\rm L}^{(s)}t}+
R_{\rm H}e^{-\Gamma_{\rm H}^{(s)}t},$$ where there are no rapid oscillatory $\Delta M_st$ terms present. Studies of such untagged rates, allowing us to extract the observable ${\cal A}_{\Delta\Gamma}$ introduced in (\[ADGam\]) through $${\cal A}_{\Delta\Gamma}=\frac{R_{\rm H}-R_{\rm L}}{R_{\rm H}+R_{\rm L}},$$ are more promising than “tagged” rates in terms of efficiency, acceptance and purity. Let us next have a brief look at the $B_s$ benchmark modes to extract CKM phases.
### $B_s\to D_s^\pm K^\mp$
These decays, which receive only contributions from tree-diagram-like topologies, are the $B_s$ counterparts of the $B_d\to D^{(\ast)\pm}\pi^\mp$ modes discussed in Subsection \[sec:BDpi\], and probe the CKM combination $\gamma-2\delta\gamma$ instead of $\gamma+2\beta$ in a [*theoretically clean*]{} way [@adk]. Since one decay path is only suppressed by $R_b\approx0.41$, and is not doubly Cabibbo-suppressed by $\lambda^2 R_b$, as in $B_d\to D^{(\ast)\pm}\pi^\mp$, the interference effects in $B_s\to D_s^\pm K^\mp$ are much larger.
### $B_s\to J/\psi\,\phi$
The decay $B_s\to J/\psi[\to l^+l^-]\,\phi[\to K^+K^-]$ is the $B_s$ counterpart of the “gold-plated” mode $B_d\to J/\psi\,K_{\rm S}$. The observables of the angular distribution of its decay products provide interesting strategies to extract the $B_s^0$–$\overline{B_s^0}$ mixing parameters $\Delta M_s$ and $\Delta\Gamma_s$, as well as the CP-violating weak mixing phase $\phi_s\equiv-2\delta\gamma$ [@ddf1]. Because of $\delta\gamma=\lambda^2\eta$, this phase would allow us to extract the Wolfenstein parameter $\eta$. However, since $\delta\gamma={\cal O}(0.02)$ is tiny within the Standard Model, its extraction from the $B_s\to J/\psi\,\phi$ angular distribution may well be sizeably affected by penguin topologies. These uncertainties, which are an important issue for second-generation $B$-physics experiments at hadron machines, can be controlled with the help of the decay $B_d\to J/\psi\,\rho^0$ [@RF-ang], as is discussed in more detail in Subsection \[sec:ang\].
Since the CP-violating effects in $B_s\to J/\psi\,\phi$ are very small in the Standard Model, they provide an interesting probe for new physics [@nir-sil]. In the case of $B_s\to J/\psi\,\phi$, the preferred mechanism for new physics to manifest itself in the corresponding observables is CP-violating new-physics contributions to $B^0_s$–$\overline{B^0_s}$ mixing. In various scenarios for new physics, for example in the left–right-symmetric model with spontaneous CP violation [@bbmr], there are in fact large contributions to the $B_s^0$–$\overline{B_s^0}$ mixing phase.
Because of its very favourable experimental signature, studies of $B_s\to J/\psi\,\phi$ are not only promising for dedicated second-generation $B$-physics experiments, such as LHCb or BTeV, but also for ATLAS and CMS [@smizanska].
CP Violation in Charged $B$ Decays
----------------------------------
Since there are no mixing effects present in the charged $B$-meson system, non-vanishing CP asymmetries of the kind $$\label{CP-charged}
{\cal A}_{\mbox{{\scriptsize CP}}}\equiv\frac{\Gamma(B^+\to\overline{f})-
\Gamma(B^-\to f)}{\Gamma(B^+\to\overline{f})+\Gamma(B^-\to f)}$$ would give us unambiguous evidence for “direct” CP violation in the $B$ system, which has recently been demonstrated in the kaon system by the new experimental results of the KTeV (Fermilab) and NA48 (CERN) collaborations for Re$(\varepsilon'/\varepsilon)$ [@calvetti].
The CP asymmetries (\[CP-charged\]) arise from the interference between decay amplitudes with both different CP-violating weak and different CP-conserving strong phases. In the Standard Model, the weak phases are related to the phases of the CKM matrix elements, whereas the strong phases are induced by final-state-interaction processes. In general, the strong phases introduce severe theoretical uncertainties into the calculation of ${\cal A}_{\mbox{{\scriptsize CP}}}$, thereby destroying the clean relation to the CP-violating weak phases. However, there is an important tool to overcome these problems, which is provided by [*amplitude relations*]{} between certain non-leptonic $B$ decays. There are two kinds of such relations:
- Exact relations: $B\to DK$ (pioneered by Gronau and Wyler [@gw]).
- Approximate relations, based on flavour-symmetry arguments and certain plausible dynamical assumptions: $B\to \pi K$, $\pi\pi$, $K\overline{K}$ (pioneered by Gronau, Hernández, London and Rosner [@GRL; @GHLR]).
Unfortunately, the $B\to DK$ approach, which allows a [*theoretically clean*]{} determination of $\gamma$, involves amplitude triangles that are expected to be very squashed. Moreover, we have to deal with additional experimental problems [@ads], so that this approach is very challenging from a practical point of view. More refined variants were proposed in [@ads]. Let us note that the colour-allowed decay $B^-\to D^0K^-$ was observed by CLEO in 1998 [@cleo-bdk].
The flavour-symmetry relations between the $B\to \pi K$, $\pi\pi$, $K\overline{K}$ decay amplitudes have received considerable attention in the literature during the last couple of years and led to interesting strategies to probe the CKM angle $\gamma$, which are the subject of the following subsection.
A Closer Look at New Strategies to\
Extract CKM Phases {#sec:New-Strat}
===================================
Extracting $\gamma$ from $B\to\pi K$ Decays {#sec:BpiK}
-------------------------------------------
In order to obtain direct information on $\gamma$ in an experimentally feasible way, $B\to\pi K$ decays seem very promising. Fortunately, experimental data on these modes are now starting to become available. In 1997, the CLEO collaboration reported the first results on the decays $B^\pm\to\pi^\pm K$ and $B_d\to\pi^\mp K^\pm$; in the following year, the first observation of $B^\pm\to\pi^0K^\pm$ was announced. So far, only results for CP-averaged branching ratios have been reported, with values at the $10^{-5}$ level and large experimental uncertainties [@berkelman]. However, already such CP-averaged branching ratios may lead to highly non-trivial constraints on $\gamma$ [@FM]. So far, the following three combinations of $B\to\pi K$ decays were considered in the literature: $B^\pm\to\pi^\pm K$ and $B_d\to\pi^\mp K^\pm$ [@FM]–[@GroRo], $B^\pm\to\pi^\pm K$ and $B^\pm\to\pi^0 K^\pm$ [@BF; @GRL; @NR], as well as the combination of the neutral decays $B_d\to\pi^0 K$ and $B_d\to\pi^\mp K^\pm$ [@BF].
### The $B^\pm\to\pi^\pm K$, $B_d\to\pi^\mp K^\pm$ Strategy
Within the framework of the Standard Model, the most important contributions to these decays originate from QCD penguin topologies. Making use of the $SU(2)$ isospin symmetry of strong interactions, we obtain $$\label{rel1}
A(B^+\to\pi^+K^0)\equiv P,\quad A(B_d^0\to\pi^-K^+)=-\,
\left[P+T+P_{\rm ew}^{\rm C}\right],$$ where $$T\equiv|T|e^{i\delta_T}e^{i\gamma} \quad\mbox{and}\quad
P_{\rm ew}^{\rm C}\equiv-\,\left|P_{\rm ew}^{\rm C}\right|
e^{i\delta_{\rm ew}^{\rm C}}$$ are due to tree-diagram-like topologies and EW penguins, respectively. The label “C” reminds us that only “colour-suppressed” EW penguin topologies contribute to $P_{\rm ew}^{\rm C}$. Making use of the unitarity of the CKM matrix and applying the Wolfenstein parametrization, generalized to include non-leading terms in $\lambda$ [@BLO], we obtain [@defan] $$P\equiv A(B^+\to\pi^+K^0)=-\left(1-\frac{\lambda^2}{2}\right)\lambda^2A\left[
1+\rho\,e^{i\theta}e^{i\gamma}\right]{\cal P}_{tc}\,,$$ where $$\rho\,e^{i\theta}=\frac{\lambda^2R_b}{1-\lambda^2/2}
\left[1-\left(\frac{{\cal P}_{uc}+{\cal A}}{{\cal P}_{tc}}\right)\right].$$ Here ${\cal P}_{tc}\equiv|{\cal P}_{tc}|e^{i\delta_{tc}}$ and ${\cal P}_{uc}$ describe differences of penguin topologies with internal top- and charm-quark and up- and charm-quark exchanges, respectively, and ${\cal A}$ is due to annihilation topologies. It is important to note that $\rho$ is strongly CKM-suppressed by $\lambda^2R_b\approx0.02$. In the parametrization of the $B^\pm\to \pi^\pm K$ and $B_d\to\pi^\mp K^\pm$ observables, it turns out to be useful to introduce $$r\equiv\frac{|T|}{\sqrt{\langle|P|^2\rangle}}\,,\quad\epsilon_{\rm C}\equiv
\frac{|P_{\rm ew}^{\rm C}|}{\sqrt{\langle|P|^2\rangle}}\,,$$ with $\langle|P|^2\rangle\equiv(|P|^2+|\overline{P}|^2)/2$, as well as the strong phase differences $$\delta\equiv\delta_T-\delta_{tc}\,,\quad\Delta_{\rm C}\equiv
\delta_{\rm ew}^{\rm C}-\delta_{tc}\,.$$ In addition to the ratio $$\label{Def-R}
R\equiv\frac{\mbox{BR}(B^0_d\to\pi^-K^+)+
\mbox{BR}(\overline{B^0_d}\to\pi^+K^-)}{\mbox{BR}(B^+\to\pi^+K^0)
+\mbox{BR}(B^-\to\pi^-\overline{K^0})}$$ of CP-averaged branching ratios, also the “pseudo-asymmetry” $$A_0\equiv\frac{\mbox{BR}(B^0_d\to\pi^-K^+)-\mbox{BR}(\overline{B^0_d}\to
\pi^+K^-)}{\mbox{BR}(B^+\to\pi^+K^0)+\mbox{BR}(B^-\to\pi^-\overline{K^0})}$$ plays an important role in the probing of $\gamma$. Explicit expressions for $R$ and $A_0$ in terms of the parameters specified above are given in [@defan].
So far, the only available result from the CLEO collaboration is for $R$: $$\label{RFM-exp}
R=1.0\pm0.4,$$ and no CP-violating effects have been reported. However, if in addition to $R$ also the pseudo-asymmetry $A_0$ can be measured, it is possible to eliminate the strong phase $\delta$ in the expression for $R$, and to fix contours in the $\gamma\,$–$\,r$ plane [@defan]. These contours, which are illustrated in Fig. \[fig:g-r-cont\], correspond to the mathematical implementation of a simple triangle construction [@PAPIII]. In order to determine $\gamma$, the quantity $r$, i.e. the magnitude of the “tree” amplitude $T$, has to be fixed. At this stage, a certain model dependence enters. Since the properly defined amplitude $T$ receives contributions not only from colour-allowed “tree” topologies, but also from penguin and annihilation processes [@defan; @bfm], it may be sizeably shifted from its “factorized” value. Consequently, estimates of the uncertainty of $r$ using the factorization hypothesis, yielding typically $\Delta r={\cal O}(10\%)$, may be too optimistic.
Interestingly, it is possible to derive bounds on $\gamma$ that do [*not*]{} depend on $r$ at all [@FM]. To this end, we eliminate again $\delta$ in $R$ through $A_0$. If we now treat $r$ as a “free” variable, we find that $R$ takes the minimal value [@defan] $$\label{Rmin}
R_{\rm min}=\kappa\,\sin^2\gamma\,+\,
\frac{1}{\kappa}\left(\frac{A_0}{2\,\sin\gamma}\right)^2\geq
\kappa\,\sin^2\gamma,$$ where $$\label{kappa-def}
\kappa=\frac{1}{w^2}\left[\,1+2\,(\epsilon_{\rm C}\,w)\cos\Delta+
(\epsilon_{\rm C}\,w)^2\,\right],$$ with $w=\sqrt{1+2\,\rho\,\cos\theta\cos\gamma+\rho^2}$. The inequality in (\[Rmin\]) arises if we keep both $r$ and $\delta$ as free parameters [@FM]. An allowed range for $\gamma$ is related to $R_{\rm min}$, since values of $\gamma$ implying $R_{\rm exp}<R_{\rm min}$ are excluded. In particular, $A_0\not=0$ would allow us to exclude a certain range of $\gamma$ around $0^\circ$ or $180^\circ$, whereas a measured value of $R<1$ would exclude a certain range around $90^\circ$, which would be of great phenomenological importance. The first results reported by CLEO in 1997 gave $R=0.65\pm0.40$, whereas the most recent update is that given in (\[RFM-exp\]). If we are willing to fix the parameter $r$, significantly stronger constraints on $\gamma$ can be obtained from $R$ [@BF; @GPY]. In particular, these constraints require only $R\not=1$ and are also effective for $R>1$.
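For illustration, the bound (\[Rmin\]) can be evaluated numerically. The short sketch below implements $\kappa$ from (\[kappa-def\]) and $R_{\rm min}$, and scans for the values of $\gamma$ excluded by a given measurement; the inputs $\kappa=1$, $A_0=0$ and the 1997 central value $R=0.65$ are purely illustrative choices, not a fit:

```python
import math

def kappa(rho, theta_deg, eps_C, Delta_deg, gamma_deg):
    """kappa of Eq. (kappa-def), with w^2 = 1 + 2 rho cos(theta) cos(gamma) + rho^2."""
    th, De, ga = (math.radians(x) for x in (theta_deg, Delta_deg, gamma_deg))
    w = math.sqrt(1 + 2 * rho * math.cos(th) * math.cos(ga) + rho**2)
    return (1 + 2 * eps_C * w * math.cos(De) + (eps_C * w)**2) / w**2

def R_min(gamma_deg, A0=0.0, k=1.0):
    """Minimal value of R for free r and delta, Eq. (Rmin); k is kappa."""
    s2 = math.sin(math.radians(gamma_deg))**2
    return k * s2 + (A0**2 / (4 * s2)) / k if A0 else k * s2

# Values of gamma with R_exp < R_min(gamma) are excluded.  For kappa = 1,
# A0 = 0 and the 1997 CLEO central value R = 0.65, a range around 90 deg goes:
excluded = [g for g in range(181) if R_min(g) > 0.65]
print(min(excluded), max(excluded))  # 54 126
```

Feeding $\rho={\cal O}(0.1)$ and $\epsilon_{\rm C}\neq0$ into `kappa` shows directly how rescattering and EW penguin effects shift the excluded range.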
The theoretical accuracy of the strategies to probe $\gamma$ with the decays $B^\pm\to\pi^\pm K$ and $B_d\to\pi^\mp K^\pm$ is limited both by rescattering processes of the kind $B^+\to\{\pi^0K^+,\pi^0K^{\ast+},\ldots\}\to\pi^+K^0$ [@FSI; @neubert], which are illustrated in Fig. \[fig:res\], and by “colour-suppressed” EW penguin contributions [@GroRo; @neubert]. In Eq. (\[Rmin\]), these effects are described by the parameter $\kappa$. If they are neglected, we have $\kappa=1$. The rescattering effects, which may lead to values of $\rho={\cal O}(0.1)$, can be controlled in the contours in the $\gamma$–$r$ plane and the constraints on $\gamma$ related to (\[Rmin\]) through experimental data on $B^\pm\to K^\pm K$ decays, which are the $U$-spin counterparts of $B^\pm\to\pi^\pm K$ [@defan; @BKK]. Another important indicator for large rescattering effects is provided by $B_d\to K^+K^-$ modes, for which there already exist stronger experimental constraints [@groro-FSI].
An improved description of the EW penguins is possible if we use the general expressions for the corresponding four-quark operators, and perform appropriate Fierz transformations. Following these lines [@defan; @neubert], we obtain $$\label{EWP-expr1}
q_{\rm C}\,e^{i\omega_{\rm C}}\equiv\frac{\epsilon_{\rm C}}{r}\,
e^{i(\Delta_{\rm C}-\delta)}=0.66\times \left[\frac{0.41}{R_b}\right]
\times a_{\rm C}\,e^{i\omega_{\rm C}},$$ where $a_{\rm C}\,e^{i\omega_{\rm C}}=a_2^{\rm eff}/a_1^{\rm eff}$ is the ratio of certain generalized “colour factors”. Experimental data on $B\to D^{(\ast)}\pi$ decays imply $a_2/a_1={\cal O}(0.25)$. However, “colour suppression” in $B\to\pi K$ modes may in principle be different from that in $B\to D^{(\ast)}\pi$ decays, in particular in the presence of large rescattering effects [@neubert]. A first step to fix the hadronic parameter $a_{\rm C}\,e^{i\omega_{\rm C}}$ experimentally is provided by the mode $B^+\to\pi^+\pi^0$ [@defan]; interesting constraints were derived in [@GPY; @pirjol]. For a detailed discussion of the impact of rescattering and EW penguin effects on the strategies to probe $\gamma$ with $B^\pm\to\pi^\pm K$ and $B_d\to\pi^\mp K^\pm$ decays, the reader is referred to [@BF; @bfm; @BKK].
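To get a feeling for the size of the colour-suppressed EW penguin parameter, (\[EWP-expr1\]) can be evaluated for illustrative inputs; the values $R_b=0.41$ and a real colour factor $a_{\rm C}=0.25$ (the $B\to D^{(\ast)}\pi$ ballpark quoted above, with $\omega_{\rm C}=0$) are assumptions made here, not fitted numbers:

```python
def q_C(R_b=0.41, a_C=0.25):
    """Magnitude of the colour-suppressed EW penguin parameter, Eq. (EWP-expr1),
    for a real colour factor a_C = a_2^eff/a_1^eff (omega_C = 0 assumed)."""
    return 0.66 * (0.41 / R_b) * a_C

print(q_C())  # 0.165
```

The smallness of this number is the quantitative statement of “colour suppression”; without the hadronic factor $a_{\rm C}$ one would be left with $0.66$.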
### The Charged $B^\pm\to \pi^\pm K$, $B^\pm\to\pi^0K^\pm$ Strategy
Several years ago, Gronau, Rosner and London proposed an interesting $SU(3)$ strategy to determine $\gamma$ with the help of the charged decays $B^{\pm}\to\pi^{\pm} K$, $\pi^0K^{\pm}$, $\pi^0\pi^{\pm}$ [@GRL]. However, as was pointed out by Deshpande and He [@dh], this elegant approach is unfortunately spoiled by EW penguins, which play an important role in several non-leptonic $B$-meson decays because of the large top-quark mass [@rf-ewp]. Recently, this approach was resurrected by Neubert and Rosner [@NR], who pointed out that the EW penguin contributions can be controlled in this case by using only the general expressions for the corresponding four-quark operators, appropriate Fierz transformations, and the $SU(3)$ flavour symmetry (see also [@PAPIII]). Since a more detailed presentation of these strategies can be found in the contribution by D. Pirjol to these proceedings, we will just have a brief look at their most interesting features.
In the case of $B^+\to\pi^+K^0$, $\pi^0K^+$, the $SU(2)$ isospin symmetry implies $$\label{charged-iso}
A(B^+\to\pi^+K^0)\,+\,\sqrt{2}\,A(B^+\to\pi^0K^+)=
-\left[(T+C)\,+\,P_{\rm ew}\right].$$ The phase structure of this relation, which has no $I=1/2$ piece, is completely analogous to the $B^+\to\pi^+K^0$, $B^0_d\to\pi^-K^+$ case (see (\[rel1\])): $$T+C=|T+C|\,e^{i\delta_{T+C}}\,e^{i\gamma},\quad
P_{\rm ew}=-\,|P_{\rm ew}|e^{i\delta_{\rm ew}}\,.$$ In order to probe $\gamma$, it is useful to introduce the following observables [@BF]: $$\begin{aligned}
R_{\rm c}&\equiv&2\left[\frac{\mbox{BR}(B^+\to\pi^0K^+)+
\mbox{BR}(B^-\to\pi^0K^-)}{\mbox{BR}(B^+\to\pi^+K^0)+
\mbox{BR}(B^-\to\pi^-\overline{K^0})}\right]\label{Rc-def}\\
A_0^{\rm c}&\equiv&2\left[\frac{\mbox{BR}(B^+\to\pi^0K^+)-
\mbox{BR}(B^-\to\pi^0K^-)}{\mbox{BR}(B^+\to\pi^+K^0)+
\mbox{BR}(B^-\to\pi^-\overline{K^0})}\right],\label{A0c-def}\end{aligned}$$ which correspond to $R$ and $A_0$; their general expressions can be obtained from those for $R$ and $A_0$ by making the following replacements: $$r\to r_{\rm c}\equiv\frac{|T+C|}{\sqrt{\langle|P|^2\rangle}}\,, \quad
\delta\to \delta_{\rm c}\equiv\delta_{T+C}-\delta_{tc}\,,\quad
P_{\rm ew}^{\rm C}\to P_{\rm ew}.$$ The measurement of $R_{\rm c}$ and $A_0^{\rm c}$ allows us to fix contours in the $\gamma$–$r_c$ plane, in complete analogy to the $B^\pm\to\pi^\pm K$, $B_d\to\pi^\mp K^\pm$ strategy. However, the charged $B\to\pi K$ approach has interesting advantages from a theoretical point of view. First, the $SU(3)$ symmetry allows us to fix $r_c\propto|T+C|$ [@GRL]: $$\label{SU3-rel1}
T+C\approx-\,\sqrt{2}\,\frac{V_{us}}{V_{ud}}\,
\frac{f_K}{f_{\pi}}\,A(B^+\to\pi^+\pi^0)\,,$$ where $r_c$ thus determined is – in contrast to $r$ – not affected by rescattering effects. Second, in the strict $SU(3)$ limit, we have [@NR] $$\label{SU3-rel2}
q\,e^{i\omega}\equiv\left|\frac{P_{\rm ew}}{T+C}\right|\,
e^{i(\delta_{\rm ew}-\delta_{T+C})}=0.66\times
\left[\frac{0.41}{R_b}\right],$$ which does not – in contrast to (\[EWP-expr1\]) – involve a hadronic parameter.
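Both $SU(3)$ inputs of the charged strategy can be sketched numerically. In the snippet below, the branching ratios are purely hypothetical placeholders (no measured values are implied), $|V_{us}/V_{ud}|\approx0.226$ and $f_K/f_\pi\approx1.2$ are ballpark numbers, and common phase-space and lifetime factors are assumed to cancel in the ratio:

```python
import math

def r_c_su3(BR_pipi0, BR_piK, Vus_over_Vud=0.226, fK_over_fpi=1.2):
    """r_c = |T+C|/sqrt(<|P|^2>) via Eq. (SU3-rel1), trading the squared
    amplitudes for CP-averaged branching ratios of B+ -> pi+ pi0 and
    B+ -> pi+ K0 (given in the same units)."""
    return math.sqrt(2) * Vus_over_Vud * fK_over_fpi * math.sqrt(BR_pipi0 / BR_piK)

def q_su3(R_b=0.41):
    """EW penguin parameter of Eq. (SU3-rel2), fixed by SU(3) alone."""
    return 0.66 * (0.41 / R_b)

# Hypothetical branching ratios, both in units of 1e-5:
print(round(r_c_su3(0.5, 1.8), 2), q_su3())  # 0.2 0.66
```

Note that, in contrast to (\[EWP-expr1\]), `q_su3` involves no hadronic “colour factor” at all.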
The contours in the $\gamma$–$r_c$ plane may be affected – in analogy to the $B^\pm\to\pi^\pm K$, $B_d\to\pi^\mp K^\pm$ case – by rescattering effects [@BF]. They can be taken into account with the help of additional data [@defan; @BKK; @FSI-recent]. The major theoretical advantage of the $B^+\to\pi^+K^0$, $\pi^0K^+$ strategy with respect to $B^\pm\to\pi^\pm K$, $B_d\to\pi^\mp K^\pm$ is that $r_c$ and $P_{\rm ew}/(T+C)$ can be fixed by using [*only*]{} $SU(3)$ arguments. Consequently, the theoretical accuracy is mainly limited by non-factorizable $SU(3)$-breaking effects.
Let us finally note that the observable $R_{\rm c}$ – the present CLEO result is $R_{\rm c}=2.1\pm1.1$ – may also imply interesting constraints on $\gamma$ [@NR]. These bounds, which are conceptually similar to [@FM], are related to the extremal values of $R_{\rm c}$ that arise if we keep the strong phase $\delta_{\rm c}$ as an “unknown”, free parameter. As the resulting general expression is rather complicated [@BF], let us expand it in $r_c$ [@NR]. If we keep only the leading-order terms and make use of the $SU(3)$ relation (\[SU3-rel2\]), we obtain $$\label{Rc-expansion}
\left.R_c^{\rm ext}\right|_{\delta_c}^{\rm L.O.}=
1\,\pm\,2\,r_c\,|\cos\gamma-q|.$$ Interestingly, there are no terms of ${\cal O}(\rho)$ present in this expression, i.e. rescattering effects do not enter at this level [@NR]. However, final-state-interaction processes may still have a sizeable impact on the bounds on $\gamma$ arising from the charged $B\to\pi K$ decays. Several strategies to control these uncertainties were considered in the recent literature [@BF; @FSI-recent].
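The leading-order window (\[Rc-expansion\]) is easy to explore numerically; the sketch below assumes $r_c=0.2$ and the $SU(3)$ value $q=0.66$ (for $R_b=0.41$), both illustrative inputs rather than measured quantities:

```python
import math

def Rc_window(gamma_deg, r_c=0.2, q=0.66):
    """Leading-order extremal values of R_c for a free strong phase delta_c,
    Eq. (Rc-expansion): 1 -/+ 2 r_c |cos(gamma) - q|."""
    spread = 2 * r_c * abs(math.cos(math.radians(gamma_deg)) - q)
    return 1 - spread, 1 + spread

lo, hi = Rc_window(50.0)
print(round(lo, 3), round(hi, 3))  # 0.993 1.007
# A measured R_c outside [lo, hi] would exclude gamma = 50 deg.  The window
# shrinks to zero for cos(gamma) = q, i.e. near gamma ~ 49 deg with these
# inputs, where the leading-order bound loses all sensitivity.
```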
### The Neutral $B_d\to \pi^0 K$, $B_d\to\pi^\mp K^\pm$ Strategy
At first sight, the strategies to probe $\gamma$ that are provided by the observables of the neutral decays $B_d\to \pi^0 K$, $\pi^\mp K^\pm$ are completely analogous to the charged $B^\pm\to\pi^\pm K$, $\pi^0K^\pm$ case [@BF], as the corresponding decay amplitudes satisfy a similar isospin relation (see (\[charged-iso\])). However, if we require that the neutral kaon be observed as a $K_{\rm S}$, we have an additional observable at our disposal, which is due to “mixing-induced” CP violation in $B_d\to\pi^0K_{\rm S}$ and allows us to take into account the rescattering effects in the extraction of $\gamma$ [@BF]. To this end, time-dependent measurements are required. The theoretical accuracy of the neutral strategy is only limited by non-factorizable $SU(3)$-breaking corrections, which affect $|T+C|$ and $P_{\rm ew}$.
### Some Thoughts about New Physics
Since $B^0_q$–$\overline{B^0_q}$ mixing ($q\in\{d,s\}$) is a “rare” flavour-changing neutral-current (FCNC) process, it is very likely that it is significantly affected by new physics, which may act upon the mixing parameters $\Delta M_q$ and $\Delta\Gamma_q$ as well as on the CP-violating mixing phase $\phi_q$. Important examples of such scenarios of new physics are non-minimal SUSY models, left–right-symmetric models, models with extended Higgs sectors, four generations, or $Z$-mediated FCNCs [@new-phys]. Since $B_d\to J/\psi\,K_{\rm S}$ and $B_s\to J/\psi\,\phi$ – the benchmark modes to measure $\phi_d$ and $\phi_s$ – are governed by current–current, i.e. “tree”, processes, new physics is expected to affect their [*decay amplitudes*]{} in a minor way. Consequently, these modes still measure $\phi_d$ and $\phi_s$.
In the clean strategies to measure $\gamma$ with the help of pure “tree” decays, such as $B\to DK$, $B_d\to D^{(\ast)\pm}\pi^\mp$ or $B_s\to D_s^\pm K^\mp$, new physics is also expected to play a very minor role. These strategies therefore provide a “reference” value for $\gamma$. Since, on the other hand, the $B\to\pi K$ strategies to determine $\gamma$ rely on the interference between tree and penguin contributions, discrepancies with the “reference” value for $\gamma$ may well show up in the presence of new physics. If we are lucky, we may even get immediate indications for new physics from $B\to\pi K$ decays [@FMat], as the Standard Model predicts interesting correlations between the corresponding observables that are shown in Figs. \[fig:BpiK-char\] and \[fig:BpiK-mix\]. Here the dotted regions correspond to the present CLEO results for $R_{\rm c}$ and $R$. A future measurement of observables lying significantly outside the allowed regions shown in these figures would immediately indicate the presence of new physics. Although the experimental uncertainties are still too large for us to draw definite conclusions, it is interesting to note that the present central value of $R_{\rm c}=2.1$ is not favoured by the Standard Model (see Fig. \[fig:BpiK-char\]). Moreover, if future measurements should stabilize at such a large value, there would essentially be no space left for $A^{\rm c}_0$. These features should be compared with the situation in Fig. \[fig:BpiK-mix\]. The strategies discussed in the following subsections are also well suited to search for new physics.
Extracting $\gamma$ from $B_{s(d)}\to J/\psi\, K_{\rm S}$ {#sec:BsPsiKS}
---------------------------------------------------------
As we have already noted in Subsection \[sec:BdPsiKS\], the “gold-plated” mode $B_d\to J/\psi\, K_{\rm S}$ plays an outstanding role in the determination of the CP-violating weak $B^0_d$–$\overline{B^0_d}$ mixing phase $\phi_d$, i.e. of the CKM angle $\beta$. In this subsection, we will have a closer look at $B_s\to J/\psi\, K_{\rm S}$, which is related to $B_d\to J/\psi\, K_{\rm S}$ by interchanging all down and strange quarks, as can be seen in Fig. \[fig:BsPsiKS\].
Making use of the unitarity of the CKM matrix and applying the Wolfenstein parametrization [@wolf], generalized to include non-leading terms in $\lambda$ [@BLO], we obtain [@BsPsiK] $$\label{Bd-ampl2}
A(B_d^0\to J/\psi\, K_{\rm S})=\left(1-\frac{\lambda^2}{2}\right){\cal A'}
\left[1+\left(\frac{\lambda^2}{1-\lambda^2}\right)a'e^{i\theta'}e^{i\gamma}
\right],$$ where $$\label{Aap-def}
{\cal A'}\equiv\lambda^2A\left(A_{\rm cc}^{c'}+A_{\rm pen}^{ct'}\right),$$ with $A_{\rm pen}^{ct'}\equiv A_{\rm pen}^{c'}-A_{\rm pen}^{t'}$, and $$\label{ap-def}
a'e^{i\theta'}\equiv R_b\left(1-\frac{\lambda^2}{2}\right)\left(
\frac{A_{\rm pen}^{ut'}}{A_{\rm cc}^{c'}+A_{\rm pen}^{ct'}}\right).$$ The amplitudes $A_{\rm cc}^{c'}$ and $A_{\rm pen}^{q'}$ ($q\in\{u,c,t\}$) describe the current–current, i.e. “tree”, and penguin processes in Fig. \[fig:BsPsiKS\], and $A_{\rm pen}^{ut'}$ is defined in analogy to $A_{\rm pen}^{ct'}$. On the other hand, the $B_s^0\to J/\psi\, K_{\rm S}$ decay amplitude can be parametrized as follows: $$\label{Bs-ampl}
A(B_s^0\to J/\psi\, K_{\rm S})=-\lambda\,{\cal A}\left[1-a\, e^{i\theta}
e^{i\gamma}\right],$$ where $${\cal A}\equiv\lambda^2A\left(A_{\rm cc}^{c}+A_{\rm pen}^{ct}\right)$$ and $$\label{a-def}
a\, e^{i\theta}\equiv R_b\left(1-\frac{\lambda^2}{2}\right)\left(
\frac{A_{\rm pen}^{ut}}{A_{\rm cc}^{c}+A_{\rm pen}^{ct}}\right)$$ correspond to (\[Aap-def\]) and (\[ap-def\]), respectively. It should be emphasized that (\[Bd-ampl2\]) and (\[Bs-ampl\]) rely only on the unitarity of the CKM matrix. In particular, these Standard-Model parametrizations of the $B_{d(s)}^0\to J/\psi\, K_{\rm S}$ decay amplitudes also take into account final-state-interaction effects, which can be considered as long-distance penguin topologies with internal up- and charm-quark exchanges [@bfm].
If we compare (\[Bd-ampl2\]) and (\[Bs-ampl\]) with each other, we observe that the quantity $a' e^{i\theta'}$ is doubly Cabibbo-suppressed in the $B_d^0\to J/\psi\, K_{\rm S}$ decay amplitude (\[Bd-ampl2\]), whereas $a\, e^{i\theta}$ enters in the $B_s^0\to J/\psi\, K_{\rm S}$ amplitude (\[Bs-ampl\]) in a Cabibbo-allowed way. Consequently, there may be sizeable CP-violating effects in $B_s\to J/\psi\, K_{\rm S}$. As was pointed out in [@BsPsiK], the $U$-spin flavour symmetry of strong interactions allows us to extract $\gamma$, as well as interesting hadronic quantities, from the CP asymmetries ${\cal A}_{\rm CP}^{\rm dir}
(B_s\to J/\psi\, K_{\rm S})$, ${\cal A}_{\rm CP}^{\rm mix}
(B_s\to J/\psi\, K_{\rm S})$ and the CP-averaged $B_{d(s)}\to J/\psi\, K_{\rm S}$ branching ratios. The theoretical accuracy of this approach is only limited by $U$-spin-breaking corrections, and there are no problems due to final-state-interaction effects. As an interesting by-product, this strategy allows us to take into account the – presumably very small – penguin contributions in the determination of $\phi_d=2\beta$ from $B_d\to J/\psi\, K_{\rm S}$, which is an important issue in view of the impressive accuracy that can be achieved in the LHC era. Moreover, we have an interesting relation between the direct $B_{s(d)}\to J/\psi\, K_{\rm S}$ CP asymmetries and the corresponding CP-averaged branching ratios: $$\frac{{\cal A}_{\rm CP}^{\rm dir}(B_d\to J/\psi\,
K_{\rm S})}{{\cal A}_{\rm CP}^{\rm dir}(B_s\to J/\psi\, K_{\rm S})}\approx
-\,\frac{\mbox{BR}(B_s\to J/\psi\, K_{\rm S})}{\mbox{BR}(B_d\to J/\psi\,
K_{\rm S})}\,.$$ The experimental feasibility of the extraction of $\gamma$ sketched above depends strongly on the size of the penguin effects in $B_s\to J/\psi\, K_{\rm S}$, which are very hard to estimate. A similar strategy is provided by $B_{d (s)}\to D^{\,+}_{d(s)}\, D^{\,-}_{d(s)}$ decays. For a detailed discussion, the reader is referred to [@BsPsiK].
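The approximate relation above follows from the $U$-spin limit $a'=a$, $\theta'=\theta$ of the parametrizations (\[Bd-ampl2\]) and (\[Bs-ampl\]), and can be checked numerically. In the toy check below, all inputs ($\lambda=0.22$, $a=0.2$, $\theta=210^\circ$, $\gamma=76^\circ$) are illustrative, phase-space and lifetime differences are neglected, and the overall asymmetry sign convention cancels in the ratio:

```python
import cmath, math

lam = 0.22                                        # Wolfenstein lambda (assumed)
a, theta, gamma = 0.2, math.radians(210), math.radians(76)  # toy hadronic inputs
eps = lam**2 / (1 - lam**2)

def sq_amps(pref, term):
    """|A|^2 and |Abar|^2 for an amplitude pref*[1 + term*e^{+/- i gamma}]."""
    A    = pref * abs(1 + term * cmath.exp(+1j * gamma))
    Abar = pref * abs(1 + term * cmath.exp(-1j * gamma))
    return A**2, Abar**2

# B_d -> J/psi K_S: the penguin term is doubly Cabibbo-suppressed (+eps a).
bd, bd_bar = sq_amps(1 - lam**2 / 2, eps * a * cmath.exp(1j * theta))
# B_s -> J/psi K_S: the same term enters in a Cabibbo-allowed way (-a).
bs, bs_bar = sq_amps(lam, -a * cmath.exp(1j * theta))

adir = lambda x, xbar: (x - xbar) / (x + xbar)
lhs = adir(bd, bd_bar) / adir(bs, bs_bar)         # ratio of direct CP asymmetries
rhs = -(bs + bs_bar) / (bd + bd_bar)              # minus ratio of CP-averaged rates
print(round(lhs, 4), round(rhs, 4))               # -0.0574 -0.0574
```

The two sides agree up to terms of ${\cal O}(\lambda^4)$, as expected from the Cabibbo factors in the parametrizations.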
Extracting $\beta$ and $\gamma$ from $B_d\to\pi^+\pi^-$ and\
$B_s\to K^+K^-$ {#sec:BsKK}
------------------------------------------------------------
In this subsection, a new way of making use of the CP-violating observables of the decay $B_d\to\pi^+\pi^-$ is discussed [@BsKK]: combining them with those of $B_s\to K^+K^-$ – the $U$-spin counterpart of $B_d\to\pi^+\pi^-$ – a simultaneous determination of $\phi_d=2\beta$ and $\gamma$ becomes possible. This approach is not affected by any penguin topologies – it rather makes use of them – and does not rely on certain “plausible” dynamical or model-dependent assumptions. Moreover, final-state-interaction effects, which attracted considerable attention in the recent literature in the context of the determination of $\gamma$ from $B\to\pi K$ decays (see Subsection \[sec:BpiK\]), do not lead to any problems, and the theoretical accuracy is only limited by $U$-spin-breaking effects. This strategy, which is furthermore very promising to search for indications of new physics [@FMat], is conceptually quite similar to the extraction of $\gamma$ from $B_{s(d)}\to J/\psi\, K_{\rm S}$ discussed in the previous subsection. However, it appears to be more favourable in view of the $U$-spin-breaking effects and the experimental feasibility.
The leading-order Feynman diagrams contributing to $B_d\to\pi^+\pi^-$ and $B_s\to K^+K^-$ are shown in Fig. \[fig:BsKK\]. If we make use of the unitarity of the CKM matrix and apply the Wolfenstein parametrization [@wolf], generalized to include non-leading terms in $\lambda$ [@BLO], the $B_d^0\to\pi^+\pi^-$ decay amplitude can be expressed as follows [@BsKK]: $$\label{Bdpipi-ampl}
A(B_d^0\to\pi^+\pi^-)=e^{i\gamma}\left(1-\frac{\lambda^2}{2}\right){\cal C}
\left[1-d\,e^{i\theta}e^{-i\gamma}\right],$$ where $$\label{C-def}
{\cal C}\equiv\lambda^3A\,R_b\left(A_{\rm cc}^{u}+A_{\rm pen}^{ut}\right),$$ with $A_{\rm pen}^{ut}\equiv A_{\rm pen}^{u}-A_{\rm pen}^{t}$, and $$\label{d-def}
d\,e^{i\theta}\equiv\frac{1}{(1-\lambda^2/2)R_b}
\left(\frac{A_{\rm pen}^{ct}}{A_{\rm cc}^{u}+A_{\rm pen}^{ut}}\right).$$ In analogy to (\[Bdpipi-ampl\]), we obtain for the $B_s^0\to K^+K^-$ decay amplitude $$\label{BsKK-ampl}
A(B_s^0\to K^+K^-)=e^{i\gamma}\lambda\,{\cal C}'\left[1+\left(
\frac{1-\lambda^2}{\lambda^2}\right)d'e^{i\theta'}e^{-i\gamma}\right],$$ where $${\cal C}'\equiv\lambda^3A\,R_b\left(A_{\rm cc}^{u'}+A_{\rm pen}^{ut'}\right)$$ and $$\label{dp-def}
d'e^{i\theta'}\equiv\frac{1}{(1-\lambda^2/2)R_b}
\left(\frac{A_{\rm pen}^{ct'}}{A_{\rm cc}^{u'}+A_{\rm pen}^{ut'}}\right)$$ correspond to (\[C-def\]) and (\[d-def\]), respectively. The general expressions for the $B_d\to\pi^+\pi^-$ and $B_s\to K^+K^-$ observables (\[ee7\]) and (\[ADGam\]) in terms of the parameters specified above can be found in [@BsKK].
As can be seen in Fig. \[fig:BsKK\], $B_d\to\pi^+\pi^-$ and $B_s\to K^+K^-$ are related to each other by interchanging all down and strange quarks. Consequently, the $U$-spin flavour symmetry of strong interactions implies $$\label{U-spin-rel}
d'=d\quad\mbox{and}\quad\theta'=\theta.$$ If we assume that the $B^0_s$–$\overline{B^0_s}$ mixing phase $\phi_s$ is negligibly small, or that it is fixed through $B_s\to J/\psi\,\phi$, the four CP-violating observables provided by $B_d\to\pi^+\pi^-$ and $B_s\to K^+K^-$ depend – in the strict $U$-spin limit – on the four “unknowns” $d$, $\theta$, $\phi_d=2\beta$ and $\gamma$. We have therefore sufficient observables at our disposal to extract these quantities simultaneously. In order to determine $\gamma$, it suffices to consider ${\cal A}_{\rm CP}^{\rm mix}(B_s\to K^+K^-)$ and the direct CP asymmetries ${\cal A}_{\rm CP}^{\rm dir}(B_s\to K^+K^-)$, ${\cal A}_{\rm CP}^{\rm dir}(B_d\to\pi^+\pi^-)$. If we make use, in addition, of ${\cal A}_{\rm CP}^{\rm mix}(B_d\to\pi^+\pi^-)$, $\phi_d$ can be determined as well. The formulae to implement this approach in a mathematical way are given in [@BsKK].
If we use the $B^0_d$–$\overline{B^0_d}$ mixing phase as an input, there is a different way of combining ${\cal A}_{\rm CP}^{\rm dir}
(B_d\to\pi^+\pi^-)$, ${\cal A}_{\rm CP}^{\rm mix}(B_d\to\pi^+\pi^-)$ with ${\cal A}_{\rm CP}^{\rm dir}(B_s\to K^+K^-)$, ${\cal A}_{\rm CP}^{\rm mix}(B_s\to K^+K^-)$. The point is that these observables allow us to fix contours in the $\gamma$–$d$ and $\gamma$–$d'$ planes as functions of the $B^0_d$–$\overline{B^0_d}$ and $B^0_s$–$\overline{B^0_s}$ mixing phases in a [*theoretically clean*]{} way. In Fig. \[fig:BsKKcont\], these contours are shown for a specific example [@BsKK]: $$\label{obs-examp}
\begin{array}{lcllcl}
{\cal A}_{\rm CP}^{\rm dir}(B_d\to\pi^+\pi^-)&=&+24\%,\,\,&
{\cal A}_{\rm CP}^{\rm mix}(B_d\to\pi^+\pi^-)&=&+4.4\%,\\
{\cal A}_{\rm CP}^{\rm dir}(B_s\to K^+K^-)&=&-17\%,\,\,&
{\cal A}_{\rm CP}^{\rm mix}(B_s\to K^+K^-)&=&-28\%,
\end{array}$$ corresponding to the input parameters $d=d'=0.3$, $\theta=\theta'=210^\circ$, $\phi_s=0$, $\phi_d=53^\circ$ and $\gamma=76^\circ$. In order to extract $\gamma$ and the hadronic parameters $d$, $\theta$, $\theta'$ with the help of these contours, the $U$-spin relation $d'=d$ is sufficient. The intersection of the contours shown in Fig. \[fig:BsKKcont\] yields a twofold solution for $\gamma$, given by $51^\circ$ and our input value of $76^\circ$. The resolution of this ambiguity is discussed in [@BsKK]. A first experimental feasibility study for LHCb, using the set of observables given in (\[obs-examp\]), gave an uncertainty of $\left.\Delta\gamma\right|_{\rm exp}=2.3^\circ$ for five years of data taking and looks very promising [@wilkinson].
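The quoted observables can be reproduced directly from the amplitude parametrizations (\[Bdpipi-ampl\]) and (\[BsKK-ampl\]). The sketch below assumes $\lambda=0.22$ and the sign conventions ${\cal A}_{\rm CP}^{\rm dir}=(1-|\xi|^2)/(1+|\xi|^2)$ and ${\cal A}_{\rm CP}^{\rm mix}=-2\,\mbox{Im}\,\xi/(1+|\xi|^2)$ with $\xi=e^{-i\phi}\,\overline{A}/A$; both choices are conventions adopted here to match the quoted signs:

```python
import cmath, math

lam = 0.22                                    # Wolfenstein lambda (assumed)
d = dp = 0.3                                  # input parameters quoted in the text
theta = thetap = math.radians(210)
gamma, phi_d, phi_s = math.radians(76), math.radians(53), 0.0

def cp_asymmetries(phi, coeff):
    """Direct and mixing-induced CP asymmetries for a decay amplitude
    proportional to e^{i gamma}[1 + coeff e^{-/+ i gamma}]."""
    A    = 1 + coeff * cmath.exp(-1j * gamma)   # B^0 decay
    Abar = 1 + coeff * cmath.exp(+1j * gamma)   # CP-conjugate decay
    xi = cmath.exp(-1j * phi) * cmath.exp(-2j * gamma) * Abar / A
    return (1 - abs(xi)**2) / (1 + abs(xi)**2), -2 * xi.imag / (1 + abs(xi)**2)

# B_d -> pi+ pi-: coefficient -d e^{i theta}, cf. (Bdpipi-ampl)
ad_dir, ad_mix = cp_asymmetries(phi_d, -d * cmath.exp(1j * theta))
# B_s -> K+ K-: coefficient +((1-lam^2)/lam^2) d' e^{i theta'}, cf. (BsKK-ampl)
as_dir, as_mix = cp_asymmetries(phi_s, (1 - lam**2) / lam**2 * dp * cmath.exp(1j * thetap))

print([round(100 * x, 1) for x in (ad_dir, ad_mix, as_dir, as_mix)])
# [23.9, 4.4, -17.2, -28.3]  -- cf. the quoted +24%, +4.4%, -17%, -28%
```

Up to rounding, the four quoted asymmetries indeed follow from the stated inputs $d=d'=0.3$, $\theta=\theta'=210^\circ$, $\phi_s=0$, $\phi_d=53^\circ$ and $\gamma=76^\circ$.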
It should be emphasized that the theoretical accuracy of $\gamma$ and of the hadronic parameters $d$, $\theta$ and $\theta'$ is only limited by $U$-spin-breaking effects. In particular, it is not affected by any final-state-interaction or penguin effects. A first consistency check is provided by $\theta=\theta'$. Moreover, we may determine the normalization factors ${\cal C}$ and ${\cal C}'$ of the $B^0_d\to\pi^+\pi^-$ and $B^0_s\to K^+K^-$ decay amplitudes (see (\[Bdpipi-ampl\]) and (\[BsKK-ampl\])) with the help of the corresponding CP-averaged branching ratios. Comparing them with the “factorized” result $$\left|\frac{{\cal C}'}{{\cal C}}\right|_{\rm fact}=\,
\frac{f_K}{f_\pi}\frac{F_{B_sK}(M_K^2;0^+)}{F_{B_d\pi}(M_\pi^2;0^+)}
\left(\frac{M_{B_s}^2-M_K^2}{M_{B_d}^2-M_\pi^2}\right),$$ we have another interesting probe for $U$-spin-breaking effects. Interestingly, $d'e^{i\theta'}=d\,e^{i\theta}$ is not affected by $U$-spin-breaking corrections within a certain model-dependent approach (a modernized version of the “Bander–Silverman–Soni mechanism” [@bss]), making use – among other things – of the “factorization” hypothesis to estimate the relevant hadronic matrix elements [@BsKK]. Although this approach seems to be rather simplified and may be affected by non-factorizable effects, it strengthens our confidence in the $U$-spin relations used for the extraction of $\beta$ and $\gamma$ from the decays $B_d\to\pi^+\pi^-$ and $B_s\to K^+K^-$.
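The factorized ratio can be evaluated once masses and decay constants are put in. In the sketch below, $f_K/f_\pi\approx1.2$ and the meson masses (in GeV) are ballpark numbers, while the form-factor ratio is left as an explicit input, since it is the genuinely model-dependent piece:

```python
def C_ratio_factorized(FF_ratio, fK_over_fpi=1.2,
                       M_Bs=5.367, M_Bd=5.280, M_K=0.494, M_pi=0.140):
    """Factorized |C'/C| as given above; FF_ratio stands for the form-factor
    ratio F_{B_s K}(M_K^2; 0+)/F_{B_d pi}(M_pi^2; 0+), which has to be
    supplied, e.g. from a model calculation."""
    return fK_over_fpi * FF_ratio * (M_Bs**2 - M_K**2) / (M_Bd**2 - M_pi**2)

# In the strict SU(3) limit the two form factors coincide (FF_ratio = 1):
print(round(C_ratio_factorized(1.0), 2))  # 1.23
```

Comparing the experimentally determined $|{\cal C}'/{\cal C}|$ with such numbers is precisely the probe of $U$-spin breaking mentioned above.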
The strategy discussed in this subsection is very promising for second-generation $B$-physics experiments at hadron machines, where the physics potential of the $B_s$ system can be fully exploited. At the asymmetric $e^+e^-$ $B$-factories operating at the $\Upsilon(4S)$ resonance, BaBar and BELLE, which have already seen the first events, this is unfortunately not possible. However, there is also a variant of the strategy to determine $\gamma$, where $B_d\to\pi^\mp K^\pm$ is used instead of $B_s\to K^+K^-$ [@BsKK]. This approach has the advantage that all required time-dependent measurements can in principle be performed at the asymmetric $e^+e^-$ machines. On the other hand, it relies – in addition to the $SU(3)$ flavour symmetry – on the smallness of certain “exchange” and “penguin annihilation” topologies, which may be enhanced by final-state-interaction effects. Consequently, its theoretical accuracy cannot compete with the “second-generation” $B_d\to\pi^+\pi^-$, $B_s\to K^+K^-$ approach, which is not affected by such problems.
Extracting CKM Phases and Hadronic Parameters from Angular Distributions of $B_{d,s}$ Decays {#sec:ang}
--------------------------------------------------------------------------------------------
A very interesting laboratory to explore CP violation and the hadronization dynamics of non-leptonic $B$ decays is provided by quasi-two-body modes $B_q\to X_1\,X_2$ of neutral $B_q$-mesons, where both $X_1$ and $X_2$ carry spin and continue to decay through CP-conserving interactions [@FD; @ang-stud]. In this case, the time-dependent angular distribution of the decay products of $X_1$ and $X_2$ provides valuable information. For a $B_q^0$-meson present initially, i.e. at time $t=0$, it can be written as $$\label{ang}
f(\Theta,\Phi,\Psi;t)=\sum_k{\cal O}^{(k)}(t)g^{(k)}(\Theta,\Phi,\Psi),$$ where we have denoted the angles describing the kinematics of the decay products of $X_1$ and $X_2$ generically by $\Theta$, $\Phi$ and $\Psi$. There are two different kinds of observables ${\cal O}^{(k)}(t)$, describing the time evolution of the angular distribution (\[ang\]): observables $\left|A_f(t)\right|^2$, corresponding to “ordinary” decay rates, and interference terms of the type $$\label{inter}
\mbox{Re}[A_{\tilde f}^\ast(t)A_f(t)],\quad \mbox{Im}[A_{\tilde f}^\ast(t)
A_f(t)],$$ where the amplitudes $A_f(t)$ correspond to a given final-state configuration $[X_1\,X_2]_f$. In comparison with strategies using $B_q\to P_1\,P_2$ decays into two pseudoscalar mesons, the angular distributions of the $B_q\to X_1\,X_2$ modes provide many more cross-checks and allow, in certain cases, the resolution of discrete ambiguities, which usually affect the extraction of CKM phases. The latter feature is due to the observables (\[inter\]).
In a recent paper [@RF-ang], I presented the general formalism to extract CKM phases and hadronic parameters from the time-dependent angular distributions (\[ang\]) of certain $B_q\to X_1\,X_2$ decays, taking also into account penguin contributions. If we fix the mixing phase $\phi_q$ separately, it is possible to determine a CP-violating weak phase $\omega$, which is usually given by the angles of the unitarity triangle shown in Fig. \[fig:UT\](a), and interesting hadronic quantities as a function of a [*single*]{} hadronic parameter (this feature is also discussed in another recent paper [@LSS]). If we determine this parameter, for instance, by comparing $B_q\to X_1\,X_2$ with an $SU(3)$-related mode, all remaining parameters, including $\omega$, can be extracted. If we are willing to make more extensive use of flavour-symmetry arguments, it is in principle possible to determine the $B^0_q$–$\overline{B^0_q}$ mixing phase $\phi_q$ as well. As the technical details of this approach are rather involved, let us just have a brief look at some of its applications.
### $B_d\to J/\psi\,\rho^0$ and $B_s\to J/\psi\, \phi$
The structure of the decay amplitudes of these modes is very similar to the ones of $B_s\to J/\psi\,K_{\rm S}$ and $B_d\to J/\psi\,K_{\rm S}$ discussed in Subsection \[sec:BsPsiKS\]. They can be related to each other through $SU(3)$ and certain dynamical arguments, involving “exchange” and “penguin annihilation” topologies, and allow the extraction of the $B^0_d$–$\overline{B^0_d}$ mixing phase $\phi_d=2\beta$. Because of the interference effects leading to the observables (\[inter\]), both $\sin\phi_d$ and $\cos\phi_d$ can be determined, thereby allowing us to fix $\phi_d$ [*unambiguously*]{}. As we have seen above, this phase is an important input for several strategies to determine $\gamma$. For alternative methods to resolve the twofold ambiguity arising in the extraction of $\phi_d$ from ${\cal A}_{\rm CP}^{\rm mix}(B_d\to J/\psi\, K_{\rm S})=-\sin\phi_d$, the reader is referred to [@ambig].
Should the penguin effects in $B_d\to J/\psi\,\rho^0$ be sizeable, $\gamma$ can be determined as well. As an interesting by-product, this strategy allows us to take into account the penguin effects in the extraction of the $B^0_s$–$\overline{B^0_s}$ mixing phase from $B_s\to J/\psi\,\phi$, which is an important issue for the LHC era. Moreover, valuable insights into $SU(3)$-breaking effects can be obtained.
### $B_d\to\rho^+\rho^-$ and $B_s\to K^{\ast+}\,K^{\ast-}$
The structure of the decay amplitudes of these transitions is completely analogous to the ones of $B_d\to\pi^+\pi^-$ and $B_s\to K^+K^-$ discussed in Subsection \[sec:BsKK\]. They can be related to each other through $U$-spin arguments, thereby allowing the extraction of $\gamma$ and of the $B^0_d$–$\overline{B^0_d}$ and $B^0_s$–$\overline{B^0_s}$ mixing phases. In contrast to the $B_d\to\pi^+\pi^-$, $B_s\to K^+K^-$ strategy, both mixing phases can in principle be determined, and many more cross-checks of interesting $U$-spin relations can be performed.
### $B_d\to K^{\ast0}\,\overline{K^{\ast0}}$ and $B_s\to K^{\ast0}\,\overline{K^{\ast0}}$
These decays are also $U$-spin counterparts and allow the simultaneous extraction of $\gamma$, $\phi_d$ and $\phi_s$. As they are pure penguin-induced modes, they are very sensitive to new physics. A particular parametrization of the $B_d\to K^{\ast0}\,\overline{K^{\ast0}}$ decay amplitude allows us to probe also the weak phase $\phi\equiv\phi_d-2\beta$. Within the Standard Model, we have $\phi=0$. However, this relation may well be affected by new physics, and represents an interesting test of the Standard-Model description of CP violation. Therefore it would be very important to determine this combination of CKM phases experimentally. The observables of the $B_d\to K^{\ast0}[\to\pi^-K^+]\,\overline{K^{\ast0}}[\to\pi^+K^-]$ angular distribution may provide an important step towards this goal.
Since the formalism presented in [@RF-ang], which we have sketched in this subsection, is very general, it can be applied to many other decays. Detailed studies are required to explore which channels are most promising from an experimental point of view.
Conclusions and Outlook {#sec:concl}
=======================
In conclusion, we have seen that the phenomenology of non-leptonic decays of $B$-mesons is very rich and provides a fertile testing ground for the Standard-Model description of CP violation. Research has been very active in this field over the last couple of years, and we have discussed some of the most recent theoretical developments, including determinations of $\gamma$ from $B\to\pi K$ and $B_{s(d)}\to J/\psi\,
K_{\rm S}$ decays, an extraction of $\beta$ and $\gamma$, which is offered by $B_d\to \pi^+\pi^-$ and $B_s\to K^+K^-$, and a general approach to extract CKM phases and hadronic parameters from angular distributions of certain non-leptonic decays of $B_{d,s}$-mesons. In these new strategies, a strong emphasis was given to the $B_s$ system, which has a very powerful physics potential and is of particular interest for $B$-physics experiments at hadron machines.
The $B$-factory era in particle physics has just started, as the BaBar and BELLE detectors have recently observed their first events. In the near future, CLEO-III, HERA-B and CDF-II will also start taking data, and the first results will certainly be very exciting. However, in order to establish the presence of physics beyond the Standard Model, it may well be that we have to wait for second-generation $B$-physics experiments at hadron machines such as LHCb or BTeV, which are expected to start operation around 2005. Hopefully, these experiments will bring several unexpected results, leading to an exciting and fruitful interaction between theorists and experimentalists!
[**[*Acknowledgements*]{}**]{}
I would like to thank the organizers for inviting me to that stimulating conference in a very enjoyable environment.
[99]{}
N. Cabibbo, [*Phys. Rev. Lett.*]{} [**10**]{} (1963) 531; M. Kobayashi and T. Maskawa, [*Progr. Theor. Phys.*]{} [**49**]{} (1973) 652.
R. Aleksan, B. Kayser and D. London, [*Phys. Rev. Lett.*]{} [**73**]{} (1994) 18.
L. Wolfenstein, [*Phys. Rev. Lett.*]{} [**51**]{} (1983) 1945.
L.L. Chau and W.-Y. Keung, [*Phys. Rev. Lett.*]{} [**53**]{} (1984) 1802; C. Jarlskog and R. Stora, [*Phys. Lett.*]{} [**B208**]{} (1988) 268.
A.J. Buras, M.E. Lautenbacher and G. Ostermaier, [*Phys. Rev.*]{} [**D50**]{} (1994) 3433.
For a review, see R. Fleischer, [*Int. J. Mod. Phys.*]{} [**A12**]{} (1997) 2459.
I. Dunietz, [*Phys. Rev.*]{} [**D52**]{} (1995) 3048.
For a recent calculation of $\Delta\Gamma_s$, see M. Beneke, G. Buchalla, C. Greub, A. Lenz and U. Nierste, [*Phys. Lett.*]{} [**B459**]{} (1999) 631.
A.B. Carter and A.I. Sanda, [*Phys. Rev. Lett.*]{} [**45**]{} (1980) 952; [*Phys. Rev.*]{} [**D23**]{} (1981) 1567; I.I. Bigi and A.I. Sanda, [*Nucl. Phys.*]{} [**B193**]{} (1981) 85.
Y. Nir and D. Silverman, [*Nucl. Phys.*]{} [**B345**]{} (1990) 301.
OPAL Collaboration (K. Ackerstaff [*et al.*]{}), [*Eur. Phys. J.*]{} [**C5**]{} (1998) 379; CDF Collaboration (F. Abe [*et al.*]{}), [*Phys. Rev. Lett.*]{} [**81**]{} (1998) 5513; for an updated analysis, see preprint CDF/PUB/BOTTOM/CDF/4855, and the contribution by Petar Maksimovic to these proceedings.
R. Fleischer, [*Eur. Phys. J.*]{} [**C**]{} (1999) DOI 10.1007/s100529900099 \[hep-ph/9903455\].
See, for instance, M. Gronau, [*Phys. Lett.*]{} [**B300**]{} (1993) 163; J.P. Silva and L. Wolfenstein, [*Phys. Rev.*]{} [**D49**]{} (1994) R1151; R. Aleksan [*et al.*]{}, [*Phys. Lett.*]{} [**B356**]{} (1995) 95; A.J. Buras and R. Fleischer, [*Phys. Lett.*]{} [**B360**]{} (1995) 138; F. DeJongh and P. Sphicas, [*Phys. Rev.*]{} [**D53**]{} (1996) 4930; M. Ciuchini [*et al.*]{}, [*Nucl. Phys.*]{} [**B501**]{} (1997) 271; P.S. Marrocchesi and N. Paver, [*Int. J. Mod. Phys.*]{} [**A13**]{} (1998) 251; A. Ali, G. Kramer and C.-D. Lü, [*Phys. Rev.*]{} [**D59**]{} (1999) 014005.
J. Charles, [*Phys. Rev.*]{} [**D59**]{} (1999) 054007.
M. Gronau and D. London, [*Phys. Rev. Lett.*]{} [**65**]{} (1990) 3381.
A.J. Buras and R. Fleischer, preprint CERN-TH/98-319 (1998) \[hep-ph/9810260\], to appear in [*Eur. Phys. J.*]{} [**C**]{}.
M. Gronau, D. Pirjol and T.-M. Yan, preprint CLNS-98-1582 (1998) \[hep-ph/9810482\].
Y. Grossman and H.R. Quinn, [*Phys. Rev.*]{} [**D58**]{}, 017504 (1998).
H. Lipkin, Y. Nir, H. Quinn and A. Snyder, [*Phys.Rev.*]{} [**D44**]{} (1991) 1454; A. Snyder and H. Quinn, [*Phys. Rev.*]{} [**D48**]{} (1993) 2139.
R. Fleischer, [*Phys. Lett.*]{} [**B459**]{} (1999) 306.
D.E. Jaffe (CLEO Collaboration), talk given at the [*8th International Symposium on Heavy Flavour Physics*]{}, Southampton, 25–29 July 1999.
M. Beneke, G. Buchalla, M. Neubert and C.T. Sachrajda, preprint CERN-TH/99-126 (1999) \[hep-ph/9905312\].
R.G. Sachs, preprint EFI-85-22 (1985) (unpublished); I. Dunietz and R.G. Sachs, [*Phys. Rev.*]{} [**D37**]{} (1988) 3186 \[E: [*Phys. Rev.*]{} [**D39**]{} (1989) 3515\]; I. Dunietz, [*Phys. Lett.*]{} [**B427**]{} (1998) 179.
Experimental feasibility studies were performed by J. Gronberg and H. Nelson for [*The BaBar Physics Book*]{}, eds. P.F. Harrison and H.R. Quinn (SLAC report 504, October 1998), and by J. Rademacker and G. Wilkinson for the Workshop on [*Standard Model Physics (and more) at the LHC*]{}, CERN (1999).
A.J. Buras and R. Fleischer, in [*Heavy Flavours II*]{}, eds. A.J. Buras and M. Lindner (World Scientific, Singapore, 1998) \[hep-ph/9704376\].
R. Fleischer and I. Dunietz, [*Phys. Lett.*]{} [**B387**]{} (1996) 361 and [*Phys. Rev.*]{} [**D55**]{} (1997) 259.
R. Aleksan, I. Dunietz and B. Kayser, [*Z. Phys.*]{} [**C54**]{} (1992) 653.
A.S. Dighe, I. Dunietz and R. Fleischer, [*Eur. Phys. J*]{} [**C6**]{} (1999) 647.
R. Fleischer, preprint CERN-TH/99-92 (1999) \[hep-ph/9903540v2\], to appear in [*Phys. Rev.*]{} [**D**]{}.
G. Barenboim, J. Bernabeu, J. Matias and M. Raidal, [*Phys. Rev.*]{} [**D60**]{} (1999) 016003.
M. Smizanska, these proceedings.
M. Calvetti, these proceedings.
M. Gronau and D. Wyler, [*Phys. Lett.*]{} [**B265**]{} (1991) 172; see also I. Dunietz, [*Phys. Lett.*]{} [**B270**]{} (1991) 75.
M. Gronau, J.L. Rosner and D. London, [*Phys. Rev.Lett.*]{} [**73**]{} (1994) 21.
O.F. Hernández, D. London, M. Gronau and J.L. Rosner, [*Phys. Lett.*]{} [**B333**]{} (1994) 500; [*Phys. Rev.*]{} [**D50**]{} (1994) 4529.
D. Atwood, I. Dunietz and A. Soni, [*Phys. Rev. Lett.*]{} [**78**]{} (1997) 3257.
CLEO Collaboration (M. Athanas [*et al.*]{}), [*Phys.Rev. Lett.*]{} [**80**]{} (1998) 5493.
K. Berkelman, these proceedings.
R. Fleischer and T. Mannel, [*Phys. Rev.*]{} [**D57**]{} (1998) 2752.
R. Fleischer, [*Phys. Lett.*]{} [**B365**]{} (1996) 399.
M. Gronau and J.L. Rosner, [*Phys. Rev.*]{} [**D57**]{} (1998) 6843.
M. Neubert and J.L. Rosner, [*Phys. Lett.*]{} [**B441**]{} (1998) 403 and [*Phys. Rev. Lett.*]{} [**81**]{} (1998) 5076; M. Neubert, [*JHEP*]{} 9902: 014, 1999.
R. Fleischer, [*Eur. Phys. J.*]{} [**C6**]{} (1999) 451.
A.J. Buras, R. Fleischer and T. Mannel, [*Nucl. Phys.*]{} [**B533**]{} (1998) 3.
L. Wolfenstein, [*Phys. Rev.*]{} [**D52**]{} (1995) 537; J.-M. Gérard and J. Weyers, [*Eur. Phys. J.*]{} [**C7**]{} (1999) 1; A.F. Falk [*et al.*]{}, [*Phys. Rev.*]{} [**D57**]{} (1998) 4290; D. Atwood and A. Soni, [*Phys. Rev.*]{} [**D58**]{} (1998) 036005.
M. Neubert, [*Phys. Lett.*]{} [**B424**]{} (1998) 152.
R. Fleischer, [*Phys. Lett.*]{} [**B435**]{} (1998) 221.
M. Gronau and J. Rosner, [*Phys. Rev.*]{} [**D58**]{} (1998) 113005.
D. Pirjol, these proceedings.
N.G. Deshpande and X.-G. He, [*Phys. Rev. Lett.*]{} [**74**]{} (1995) 26.
R. Fleischer, [*Z. Phys.*]{} [**C62**]{} (1994) 81; [*Phys. Lett.*]{} [**B321**]{} (1994) 259.
M. Gronau and D. Pirjol, [*Phys. Lett.*]{} [**B449**]{} (1999) 321 and preprint CLNS-99-1604 (1999) \[hep-ph/9902482\]; K. Agashe and N.G. Deshpande, [*Phys. Lett.*]{} [**B451**]{} (1999) 215 and [**B454**]{} (1999) 359.
For reviews, see Y. Grossman, Y. Nir and R. Rattazzi, in [*Heavy Flavours II*]{}, eds. A.J. Buras and M. Lindner (World Scientific, Singapore, 1998) \[hep-ph/9701231\]; M. Gronau and D. London, [*Phys. Rev.*]{} [**D55**]{} (1997) 2845; Y. Nir and H.R. Quinn, [*Annu. Rev. Nucl. Part. Sci.*]{} [**42**]{} (1992) 211; L. Wolfenstein, [*Phys. Rev.*]{} [**D57**]{} (1998) 6857.
R. Fleischer and J. Matias, preprint CERN-TH/99-164 (1999) \[hep-ph/9906274\].
G. Wilkinson, LHCb study for the Workshop on [*Standard Model Physics (and more) at the LHC*]{}, CERN (1999).
M. Bander, D. Silverman and A. Soni, [*Phys. Rev.Lett.*]{} [**43**]{} (1979) 242.
I. Dunietz, H. Quinn, A. Snyder, W. Toki and H.J. Lipkin, [*Phys. Rev.*]{} [**D43**]{} (1991) 2193.
D. London, N. Sinha and R. Sinha, preprint UDEM-GPP-TH-99-61 (1999) \[hep-ph/9905404\].
See, for example, Y. Grossman and H.R. Quinn, [*Phys. Rev.*]{} [**D56**]{} (1997) 7259; J. Charles, A. Le Yaouanc, L. Oliver, O. Pène and J.-C. Raynal, [*Phys. Lett.*]{} [**B425**]{} (1998) 375 and [*Phys. Rev.*]{} [**D58**]{} (1998) 114021; A.S. Dighe, I. Dunietz and R. Fleischer, [*Phys. Lett.*]{} [**B433**]{} (1998) 147.
[^1]: Robert.Fleischer@cern.ch
---
abstract: 'We present the results of a search for optical counterparts to the two quiescent low mass X-ray binaries (X5 and X7) in the globular cluster 47 Tucanae, using high quality [[*Chandra*]{}]{} and [[*HST*]{}]{} images. A faint blue ($V=21.7$; $U-V=0.9$) star within 0.″03 of the eclipsing system X5 shows variability on both short and long timescales, and is the counterpart of the X-ray source. The colors and variability of this object are consistent with the combination of light from an accretion disk and a red main sequence star (possibly somewhat larger than a normal MS star with similar luminosity). No evidence is found for a star showing either variability or unusual colors near the position of X7, but a probable chance superposition of a star with $V=20.25$ limits the depth of our search.'
author:
- 'Peter D. Edmonds,Craig O. Heinke, Jonathan E. Grindlay and Ronald L. Gilliland'
title: '[[*HST*]{}]{} Detection of a Quiescent Low Mass X-Ray Binary Companion in 47 Tucanae '
---
Introduction
============
It has long been thought that quiescent low mass X-ray binaries (qLMXBs) dominate the most luminous of the dim sources in globular clusters (Hertz & Grindlay 1983, Verbunt et al. 1984). Recent observations using Chandra/ACIS imaging and spectroscopy have demonstrated that this is indeed the case. Two systems, X5 and X7 in the massive globular cluster 47 Tuc were previously suspected to be qLMXBs (Hasinger, Johnston & Verbunt 1994, Verbunt & Hasinger 1998) but the sensitivity and resolution of [[*Chandra*]{}]{} was required to confirm this suspicion (Grindlay et al. 2001a, hereafter GHE01a and Heinke et al. 2001a, in preparation, hereafter HGL01). One qLMXB has also been found in each of NGC 6397 (Grindlay et al. 2001b; hereafter GHE01b) and $\omega$ Cen (Rutledge et al. 2001). These 4 qLMXB systems all have thermal spectra that are well modeled by hydrogen atmospheres of hot neutron stars (NSs), with no power law components required. None of them are obviously variable with the exception of X5 (HGL01) which shows deep eclipses as well as dips showing increased neutral hydrogen (X7 shows marginal evidence for a 5.5 hr period).
The logical extension of this work is to search for optical counterparts to these sources, using the potent combination of [[*Chandra*]{}]{} and [[*HST*]{}]{}. With astrometric errors $<$ 0.″1 routinely being achieved for X-ray sources, optical identifications are being reported with unprecedented frequency. These identifications include cataclysmic variables (CVs) and BY Draconis variables (GHE01a, GHE01b), millisecond pulsars (MSPs; Edmonds et al. 2001, hereafter EGH01 and Ferraro et al. 2001) and active LMXBs (Heinke et al. 2001b and White & Angelini 2001).
Searches for optical counterparts to qLMXBs have been less successful. No counterpart has been found for the NGC 6397 qLMXB (any possible companion has [$M_V$]{} $>$ 11; GHE01b) while the $\omega$ Cen qLMXB lies outside the field of view (FOV) of current [[*HST*]{}]{} datasets and stellar crowding will limit deep searches from the ground. Here, we report the use of high quality [[*Chandra*]{}]{} and [[*HST*]{}]{} data to search for optical counterparts to the 47 Tuc qLMXBs X5 and X7. We have discovered a faint, blue and variable counterpart to the eclipsing X5, as reported briefly in GHE01a. This detection, combined with the well determined period, distance, inclination and X-ray spectrum of X5 makes this the best constrained qLMXB known. We also report limits on the qLMXB X7. The astrometry, photometry and time series for both of these searches are described below.
Observations and Analysis
=========================
Details of the [[*Chandra*]{}]{} data used here are given by GHE01a and HGL01. The qLMXBs X5 and X7 have 4576 and 5488 counts respectively (over the 72 ksec observation), with internal, 1$\sigma$ errors of 0.″0082 and 0.″0089 respectively. To search for optical counterparts to X5 and X7, two [[*HST*]{}]{} datasets have been analysed, the 8.3 d observations of Gilliland et al. 2000 (GO-8267: July 3 1999 to July 11 1999) and the archival data of Meylan obtained in three different epochs with $\sim$ 2 year spacings (GO-5912: October 25 1995; GO-6467: November 3 1997; GO-7503: October 28 1999). The Gilliland data provides exquisite $V$ and $I$ time series (with some $U$ data) and the Meylan data provides F300W images in the first two epochs and F300W and F555W images in the third epoch (with limited time series information in each epoch).
Astrometry
----------
Using the zeropoint positional offsets between the [[*Chandra*]{}]{} and [[*HST*]{}]{} coordinate frames, the region within $\sim$2″ of the nominal X5 position lies outside the FOV of the Gilliland data set, but is found on the inner part (with respect to cluster center) of the WF4 chip in the Meylan data. Since no [[*Chandra*]{}]{} source in the WF4 FOV with $>50$ counts (not including X5) currently has a plausible optical counterpart, we used the PC astrometry to align the X-ray and optical coordinate frames (incurring a systematic chip-to-chip error, assumed to be 0.″05, which dominates the total error budget). After this correction we found that only three stars are within 0.″5 of the nominal X5 position, with separations of 0.″033 (0.6$\sigma$; C1), 0.″23 (4.5$\sigma$; C2) and 0.″292 (5.7$\sigma$; C3). The finding chart shows the F300W (Fig. \[fig1\]a) and F555W (Fig. \[fig1\]b) images, from Meylan epoch 3, for the region around X5 and the insets in Fig. 1a show epochs 1 (‘U(1)’) and 2 (‘U(2)’). Since faint red MS stars appear brighter in the F555W image than in the F300W image, Figures 1a and 1b show that C1 has a blue color. However, C3 (just outside the 5$\sigma$ error circle to the NE) has an even stronger blue color and so is also a potentially viable optical counterpart if the astrometric shift between the PC and WF4 chips is much larger than assumed. This ambiguity is resolved by noting that C1, unlike C3, is clearly brighter in epoch 1 than in epoch 2 (see inset) confirming it as the optical counterpart (hereafter [$\mathrm{X5_{opt}}$]{}).
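The quoted significances follow from combining the systematic and internal error terms in quadrature. A minimal numerical sketch (the 0.″05 systematic term is the assumption stated in the text; 0.″0082 is the internal X5 centroid error quoted earlier):

```python
import math

# Combine the assumed 0.05" systematic chip-to-chip error with the 0.0082"
# internal centroid error in quadrature, then express each candidate star's
# offset from the nominal X-ray position in units of that 1-sigma error.
SYS_ERR = 0.05      # arcsec, assumed systematic term (dominant)
RAND_ERR = 0.0082   # arcsec, internal centroid error for X5

sigma_tot = math.hypot(SYS_ERR, RAND_ERR)

candidates = {"C1": 0.033, "C2": 0.23, "C3": 0.292}   # offsets in arcsec
significance = {name: sep / sigma_tot for name, sep in candidates.items()}
```

This reproduces the 0.6$\sigma$, 4.5$\sigma$ and 5.7$\sigma$ values quoted for C1, C2 and C3.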
\[fig1\]
The region within a few arcsec of X7 falls on the PC images of both the Gilliland and Meylan data sets. Since 17 [[*Chandra*]{}]{} sources have likely optical counterparts on the Gilliland PC image, we have corrected for small linear terms in the residual astrometric errors between [[*Chandra*]{}]{} and [[*HST*]{}]{} using least squares fitting. We computed the positional errors for X7 by adding the systematic errors to the random [wavdetect]{} errors in quadrature, resulting in 1$\sigma$ errors of 0.″0065 in RA and 0.″0088 in Dec (see Fig. \[fig1\]c and \[fig1\]d, where 20$\sigma$ error circles are shown). The three nearest stars to X7 in the Gilliland [[*HST*]{}]{} image are 0.″019 (2.3$\sigma$), 0.″12 (14.7$\sigma$) and 0.″23 (28.6$\sigma$) away (N1, N2 and N3 respectively). Clearly, astrometrically, only N1 ($V=20.25$; $U-V=1.72$; [$M_V$]{} = 6.8) is a viable candidate for the optical companion of X7. Given the FOV of the PC and the detected number of stars on the PC chip with $V<20.25$ (6367), only $6.3\times10^{-3}$ stars are expected within 0.″019 of N1, assuming constant density over the PC FOV.
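The chance-superposition estimate is a simple area scaling. A sketch, assuming a WFPC2 PC field of view of roughly 36.4″ × 36.4″ (the FOV value is our assumption, not from the text; a slightly smaller effective search area would reproduce the quoted $6.3\times10^{-3}$ exactly):

```python
import math

# Expected number of unrelated stars within r = 0.019" of N1, assuming the
# 6367 stars with V < 20.25 are spread uniformly over the PC field of view.
N_STARS = 6367
R_MATCH = 0.019          # matching radius, arcsec
FOV_AREA = 36.4 ** 2     # arcsec^2, assumed WFPC2 PC field of view

expected = N_STARS * math.pi * R_MATCH ** 2 / FOV_AREA
```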
Photometry
----------
The Gilliland dataset photometry (containing only X7) is described in Gilliland et al. (2000) and Albrow et al. (2001). The photometry for the Meylan observations (containing X5 and X7) was based on combining the images at each epoch using drizzle routines (Hook, Pirzkal, & Fruchter 1999) in [STSDAS]{} and then using PSF-fitting in [DAOPHOT]{} to calculate instrumental magnitudes. The F555W filter is a good approximation to Johnson $V$ (Holtzman et al. 1995), but F300W differs significantly from the nearest Johnson filter ($U$). Therefore, we used ground-based photometry of 47 Tuc (Sills et al. 2000) and matching of main sequence (MS) turnoffs between the [[*HST*]{}]{} and ground-based datasets to calculate the zeropoint and then applied corrections to F300W-$V$ (by measuring MS ridgelines) to convert it to $U-V$. By definition this MS-ridgeline technique is only applied to stars (like [$\mathrm{X5_{opt}}$]{}) with colors ranging from the main sequence turn-off to the detected end of the MS.
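The MS-ridgeline matching described above can be sketched as follows; the star lists, bin edges and the synthetic check are illustrative placeholders, not the actual 47 Tuc photometry.

```python
import numpy as np

# Sketch of MS-ridgeline matching: compute the median color of stars in
# magnitude bins for two photometric data sets, then take the mean offset
# between the two ridgelines as the zeropoint/color correction.
def ridgeline(mag, color, bins):
    """Median color per magnitude bin; NaN marks empty bins."""
    idx = np.digitize(mag, bins)
    return np.array([np.median(color[idx == i]) if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])

def ridgeline_offset(mag_a, col_a, mag_b, col_b, bins):
    """Mean color offset (b minus a) over bins populated in both data sets."""
    ra = ridgeline(mag_a, col_a, bins)
    rb = ridgeline(mag_b, col_b, bins)
    good = ~np.isnan(ra) & ~np.isnan(rb)
    return float(np.mean(rb[good] - ra[good]))

# Synthetic check: data set "b" is data set "a" shifted by +0.30 mag in
# color, so the recovered offset should be 0.30.
rng = np.random.default_rng(1)
v = rng.uniform(17.0, 21.0, 2000)
uv = 0.5 * (v - 17.0) + rng.normal(0.0, 0.05, v.size)   # toy main sequence
offset = ridgeline_offset(v, uv, v, uv + 0.30, np.arange(17.0, 21.5, 0.5))
```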
\[fig2\]
Since this technique is non-standard we have performed two consistency checks with other calibration methods. We applied this technique to the Meylan PC data and performed a star-by-star comparison between our photometry and the Gilliland et al. (2000) photometry. Mean differences between the two photometric systems were $< 0.05$ mag in both $U$ and $V$. A star-by-star comparison between the MS-ridgeline $V$ calibration for the Meylan WF4 chip (containing [$\mathrm{X5_{opt}}$]{}) and the standard calibration of Holtzman et al. (1995) applied to a 47 Tuc F555W image from the archive (program GO-6095), also gave mean errors < 0.05 mag. Combined with the 0.1 mag rms internal error in $U-V$ at the $U$ mag of [$\mathrm{X5_{opt}}$]{}, we estimate absolute errors for [$\mathrm{X5_{opt}}$]{} of $\sim$0.2 mag in both $U$ and $V$.
The color magnitude diagrams (CMDs) for the Gilliland PC and Meylan WF4 images are shown in Fig. \[fig2\]. Fig. \[fig2\]a shows the mean epoch 3 CMD position of [$\mathrm{X5_{opt}}$]{} ($V=21.7$; $U-V=0.9$; [$M_V$]{} = 8.2), along with reasonable ranges in magnitude and color given the variability (see below). With [$M_V$]{}$=8.2$, [$\mathrm{X5_{opt}}$]{} has a similar absolute magnitude to that of the qLMXBs Cen-X4 ([$M_V$]{}$=7.5-8.5$; Chevalier et al. 1989) and Aql X-1 ([$M_V$]{}$=8.1$; Chevalier et al. 1999). Also shown are CO and He WD cooling sequences (see caption). Clearly, [$\mathrm{X5_{opt}}$]{} is unlikely to be either a CO WD or a He WD, unlike the MSP companion [$\mathrm{U_{opt}}$]{} (EGH01). Instead, it is more likely that the CMD position of [$\mathrm{X5_{opt}}$]{} represents the sum of a red MS star and a blue component from an accretion disk (see below).
\[fig3\]
Fig. \[fig2\]b shows the Gilliland CMD for X7 showing the position of the three nearest stars (N1, N2 and N3). The only viable counterpart astrometrically, N1, falls very close to the MS ridge line (also in $V$ vs $V-I$) and therefore appears like a normal MS star (unlike [$\mathrm{X5_{opt}}$]{}). This CMD position is consistent, within the errors, with the position in the Meylan data, and the F300W magnitudes from the three epochs are consistent with non-variability, again unlike [$\mathrm{X5_{opt}}$]{}. This suggests that N1 is probably not the X7 counterpart, despite the good astrometric match. Assuming that the real counterpart falls in the less confused part of a 5-$\sigma$ error circle we set limits on its detection of $U >$ 23, $V >$ 23, $I >$ 22 (using the Gilliland data).
Time Series {#timeser}
-----------
Since [$\mathrm{X5_{opt}}$]{} is near (or beyond) the limit of detectability in individual F300W exposures, the time series for [$\mathrm{X5_{opt}}$]{} were calculated by co-adding groups of 3-4 images. Figures \[fig3\]a, b and c show the F300W time series for the 3 different epochs. Also shown are the eclipsed portion of the X-ray phase plot from HGL01 (units converted into time) and a 4.333 hour period sinusoid, as appropriate for X5 but with the 8.666 hour X-ray period divided by two to simulate a double-peaked (ellipsoidal) time series. Eclipses are not included in this model. This model has been shifted in time and magnitude so that it plausibly matches the data for each epoch (the period is not known with sufficient accuracy to phase correct from [[*Chandra*]{}]{} to different [[*HST*]{}]{} epochs). Significant variability is seen within all three epochs and [$\mathrm{X5_{opt}}$]{} is clearly brighter in epoch 1 than in the 2nd and 3rd epochs (see Fig. \[fig1\]a). This long-term variability is further evidence for the presence of an accretion disk. Note the deep eclipse observed in both epochs 2 and 3. The F300W eclipse appears to be significantly wider than in X-rays, though we do not observe the system coming out of eclipse. The turnover at the beginning of the second epoch (near Time$=0$ hr) may represent a minimum from ellipsoidal variability, since the timescale of variability agrees well with the ellipsoidal model. The 1st epoch observations may also represent ellipsoidal variations rather than an eclipse, since in its brighter state the relative brightness of the disk compared to the secondary should be enhanced and the eclipse depth should increase. The $V$-band variations, not plotted here, show neither an eclipse nor clear evidence for ellipsoidal variations (expected to have a smaller amplitude than in F300W).
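The ellipsoidal model described here (a sinusoid at half the orbital period, no eclipse, free shifts in time and magnitude) can be written compactly; the 0.2 mag amplitude in the example call is purely illustrative:

```python
import numpy as np

# Double-peaked ellipsoidal model: a sinusoid at half the 8.666-hr orbital
# period, i.e. two photometric maxima per orbit.  No eclipse is included.
P_ORB_HR = 8.666

def ellipsoidal_model(t_hr, amp_mag, m0, t0_hr=0.0):
    """Magnitude vs. time (hours); t0_hr and m0 are the free shifts."""
    return m0 + amp_mag * np.sin(2.0 * np.pi * (t_hr - t0_hr) / (P_ORB_HR / 2.0))

t = np.linspace(0.0, P_ORB_HR, 9)
m = ellipsoidal_model(t, 0.2, 22.5)   # amplitude and zeropoint are illustrative
```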
No suggestion of variability is present in the N1 time series (Fig. \[fig3\]d) and no significant signal is seen in the N1 power spectrum ($V$ or $I$), including the possible 5.5 hr period noted by HGL01. The highest peak in the $V$-band corresponds to a period of 2.52 hours (or twice this), with a false-alarm probability = 0.27. The corresponding $V$ amplitude is 0.0043 $\pm$ 0.0011 ($<$ 4-$\sigma$, insignificant for a blind search; similar results hold for $I$). If N1 [*is*]{} the X7 counterpart and is close to filling its Roche lobe, then an inclination $< 2.5^\circ$ is required to reduce the amplitude for ellipsoidal variations from the maximum expected value of $\sim$0.1, for $90^\circ$ inclination, to $<0.0043$. This implies that N1 is unlikely to be the X7 companion, and we have calculated the brightness limit for a faint variable star (lying near the line of sight of N1) to be missed by our variability search. The X7 coordinates are so close to N1 that it will be included in any time series extraction. A star at $V=22.9$ with intrinsic variations of 0.1 mag superimposed on the time series of N1 would yield an 8-$\sigma$ detection (versus the highest detected peak at $\sim$ 4-$\sigma$). This time series limit of $V\sim23$ for a companion to X7 will decrease for inclinations $< 90^\circ$.
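A blind period search of this kind can be sketched with a Lomb–Scargle periodogram. The light curve below is synthetic (an injected 5.5 hr sinusoid plus noise), not the N1 data, and the sampling parameters are illustrative:

```python
import numpy as np
from scipy.signal import lombscargle

# Blind period search sketch: Lomb-Scargle periodogram over trial periods,
# peak location, and the amplitude estimate A = 2*sqrt(P_peak/N) for the
# unnormalized periodogram.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 8.3 * 24.0, 800))        # hours, ~8.3-day run
y = 0.02 * np.sin(2.0 * np.pi * t / 5.5) + rng.normal(0.0, 0.005, t.size)
y -= y.mean()                                          # center before the fit

periods = np.linspace(2.0, 12.0, 4000)                 # trial periods in hours
power = lombscargle(t, y, 2.0 * np.pi / periods)
best_period = periods[np.argmax(power)]                # recovers ~5.5 hr
best_amp = 2.0 * np.sqrt(power.max() / t.size)         # recovers ~0.02 mag
```

A false-alarm probability for the highest peak then follows from the usual independent-frequency approximation, which is how a value such as 0.27 is assigned to an insignificant peak.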
Discussion
==========
Using the stellar models of Bergbusch & Vandenberg (1992), we estimated the brightest possible secondary consistent with our photometry ([$\mathrm{T_{eff}}$]{} $= 4100$ K, $V=21.7$ and mass = 0.53 [M$_{\odot}$]{}). Using the secondary radius, the X-ray luminosity of X5 and the binary separation (from Kepler’s Third Law) we estimate that the maximum luminosity from heating of the secondary by the NS (when measured as a fraction of the secondary luminosity) is 2.7%. Therefore, secondary heating probably makes only a small contribution to the variability described above. The dominant sources of short-term variability are likely to be a combination of eclipses of the disk and hot spot by the MS star, ellipsoidal variations and flickering. Further observations are required to better define this variability.
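The 2.7% figure can be checked at the order-of-magnitude level: the secondary intercepts a fraction $(R_{\rm sec}/2a)^2$ of the X-ray output. In the sketch below, the X-ray luminosity of X5 ($7.5\times10^{32}$ erg s$^{-1}$) and the secondary radius (0.5 R$_\odot$) are assumed values chosen for illustration, not quantities taken from the text:

```python
import math

# Order-of-magnitude check of the heating fraction: Kepler separation for
# P = 8.666 hr, a blackbody secondary luminosity, and the intercepted
# fraction (R_sec / 2a)^2 of an assumed X-ray luminosity.
G = 6.674e-11            # SI gravitational constant
MSUN = 1.989e30          # kg
RSUN = 6.957e8           # m
SIGMA_SB = 5.670e-8      # W m^-2 K^-4

P = 8.666 * 3600.0                       # orbital period, s
M_ns, M_opt = 1.4 * MSUN, 0.53 * MSUN
a = (G * (M_ns + M_opt) * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

R_sec = 0.5 * RSUN                        # assumed radius of the secondary
T_eff = 4100.0                            # K, from the text
L_sec = 4.0 * math.pi * R_sec**2 * SIGMA_SB * T_eff**4

L_X = 7.5e25                              # W (~7.5e32 erg/s), assumed for X5
heating_fraction = L_X * (R_sec / (2.0 * a)) ** 2 / L_sec
```

With these assumptions the fraction comes out near the quoted few-percent level.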
The likely presence of an accretion disk in [$\mathrm{X5_{opt}}$]{}, from variability and the blue color, may appear to be inconsistent with the lack of X-ray evidence for accretion currently in the X5 system, which should yield either long-term X-ray variations or a power law component, neither of which are seen (HGL01). One possible explanation is that the X5 secondary is no longer filling its Roche lobe causing it to be detached from the disk. Such a disk would no longer be accreting matter from the secondary, possibly causing it to enter a long-term quiescent phase with low density and little or no accretion onto the NS. The X5 disk does appear to be relatively faint compared to 47 Tuc CVs, since the $U-V$ color of [$\mathrm{X5_{opt}}$]{} (0.9) is much redder than that of the 47 Tuc CVs V1, V2, W1 and W2 with $U-V$ colors ranging from $-1.25$ to $-0.4$ (Edmonds et al. 2001, in preparation).
To test this ‘detached disk’ theory we have estimated the degree to which [$\mathrm{X5_{opt}}$]{} fills its Roche-lobe as defined by $F$, the ratio between the stellar radius and the Roche lobe radius. Using the Roche-lobe formula from @pac71 ($r/a=\mathrm{0.462[(M_{opt}/(M_{NS}+M_{opt})]^{1/3}}$, where $r$ is the Roche-lobe radius, $a$ is the binary separation, $\mathrm{M_{opt}}$ is the mass of [$\mathrm{X5_{opt}}$]{}, and $\mathrm{M_{NS}=1.4}$[M$_{\odot}$]{}), the stellar radius for a 4100 K model and the binary separation, we find that [$\mathrm{X5_{opt}}$]{} has $F=0.6$, underfilling its Roche lobe. Fainter cooler secondaries will underfill their Roche lobes by slightly larger amounts (e.g. a star with [$M_V$]{} = 10.0 has $F=0.5$). This behavior is consistent with the ‘detached disk’ theory given above, but would be inconsistent with the possible detection of ellipsoidal variations of relatively large amplitude, requiring the secondary to have $F\sim$1.0. The latter possibility would suggest that the X5 secondary is either bloated or slightly evolved, as appears to be the case for some of the CVs in NGC 6397 (Grindlay et al. 2001, in preparation) and as might be expected for a star undergoing mass loss.
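The filling factor $F$ quoted here can be reproduced from the Paczyński formula and Kepler's Third Law. In this sketch the 0.5 R$_\odot$ secondary radius is an assumed main-sequence value for the 0.53 M$_\odot$ model:

```python
import math

# Roche-lobe filling factor F = R_sec / R_L, using the Paczynski (1971)
# radius r/a = 0.462 [M_opt/(M_NS+M_opt)]^(1/3) and the Kepler separation
# for the 8.666-hr orbital period of X5.
G = 6.674e-11            # SI gravitational constant
MSUN = 1.989e30          # kg
RSUN = 6.957e8           # m

def roche_lobe_radius(m_opt, m_ns, period_s):
    """Paczynski Roche-lobe radius (meters) of the lower-mass star."""
    a = (G * (m_ns + m_opt) * period_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return 0.462 * (m_opt / (m_ns + m_opt)) ** (1.0 / 3.0) * a

R_L = roche_lobe_radius(0.53 * MSUN, 1.4 * MSUN, 8.666 * 3600.0)
F = (0.5 * RSUN) / R_L    # 0.5 Rsun is an assumed secondary radius
```

With these inputs $F$ comes out close to the quoted value of 0.6.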
If the 5.5 hr X-ray period for X7 (HGL01) is real, and if the X7 secondary underfills its Roche lobe by about the same amount as [$\mathrm{X5_{opt}}$]{}, then [$M_V$]{}$\sim10.6$ and $V\sim24.1$, beyond our variability detection limits and beyond our CMD search limit except with the presence of a reasonably bright disk. The close proximity of N1, only $\sim$0.″02 from the line of sight of X7, clearly makes prospects for such an optical identification difficult. Alternatively, the period could be longer and N1 could be the counterpart; however, the complete lack of evidence for a disk or variability argues against this possibility.
The logical follow-up to these observations is a spectroscopic study to measure the radial velocity amplitude of absorption lines from [$\mathrm{X5_{opt}}$]{}. This, combined with the known inclination and spectroscopic and photometric determinations of $\mathrm{M_{opt}}$, would give an estimate of the mass of the X5 NS. Using the X-ray spectrum constraints on the NS radius and redshift (HGL01) combined with the NS mass would give the first compelling test of the equation of state of a NS. Also of interest would be the detection of emission lines in the optical spectrum. Since there is evidence for an accretion disk (from this work) and hot gas in the system (from the X-ray light curve), we expect strong disk or coronal emission lines to be superimposed on the absorption line spectrum of the secondary. Study of the emission line profiles can test for mass outflow (visible as P Cygni profiles) from the system, or detect evidence for a bipolar jet (visible as broadened emission lines).
We thank Ata Sarajedini, Raja Guhathakurta, and Justin Howell for contributing to the photometric analysis and Bryan Gaensler and Frank Verbunt for helpful comments on the manuscript. This work was supported in part by STScI grants GO-8267.01-97A (PDE and RLG) and HST-AR-09199.01-A (PDE).
Albrow, M. D., Gilliland, R. L., Brown, T. M., Edmonds, P. D., Guhathakurta, P., & Sarajedini, A. 2001, ApJ, accepted
Bergbusch, P. A. & Vandenberg, D. A. 1992, , 81, 163
Bergeron, P., Wesemael, F., & Beauchamp, A. 1995, , 107, 1047
Chevalier, C., Ilovaisky, S. A., van Paradijs, J., Pedersen, H., & van der Klis, M. 1989, , 210, 114
Chevalier, C., Ilovaisky, S. A., Leisy, P., & Patat, F. 1999, , 347, L51
Edmonds, P. D., Gilliland, R. L., Heinke, C. O., Grindlay, J. E., & Camilo, F. 2001, ApJ, 557, L57 (EGH01)
Ferraro, F. R., Possenti, A., D’Amico, N., & Sabbi, E. 2001, ApJ, submitted
Freire, P., Camilo, F., Lorimer, D. R., Lyne, A. G., Manchester, R. N. & D’Amico, N. 2000, , in press (astro-ph/0103372)
Gilliland, R. L. et al. 2000, , 545, L47
Grindlay, J. E., Heinke, C. O., Edmonds, P. D. & Murray, S. 2001a, Science, 292, 2290 (GHE01a)
Grindlay, J. E., Heinke, C. O., Edmonds, P. D., Murray, S., & Cool, A. M. 2001b, ApJ, accepted (GHE01b)
Hasinger, G., Johnston, H. M., & Verbunt, F. 1994, , 288, 466
Heinke, C. O., Edmonds, P. D. & Grindlay, J. E. 2001b, ApJ, accepted
Hook, R. N., Pirzkal, N., & Fruchter, A. S. 1999, ASP Conf. Ser. 172: Astronomical Data Analysis Software and Systems VIII, 8, 337
Hertz, P. & Grindlay, J. E. 1983, , 275, 105
Holtzman, J. A., Burrows, C. J., Casertano, S., Hester, J. J., Trauger, J. T., Watson, A. M., & Worthey, G. 1995, , 107, 1065
Paczy[ń]{}ski, B. 1971, , 9, 183
Rutledge, R. E., Bildsten, L., Brown, E. F., Pavlov, G. G., & Zavlin, V. E. 2001, ApJ, submitted (astro-ph/0105405)
Serenelli, A. M., Althaus, L. G., Rohrmann, R. D. & Benvenuto, O. G. 2001, , 325, 607
Sills, A., Bailyn, C. D., Edmonds, P. D., & Gilliland, R. L. 2000, , 535, 298
Verbunt, F., Elson, R., & van Paradijs, J. 1984, , 210, 899
Verbunt, F. & Hasinger, G. 1998, , 336, 895
White, N. E. & Angelini, L. 2001, , accepted (astro-ph/0109359)
| Source | RA (J2000) | Dec (J2000) | x | y | $U$ | $V$ | $P$ (hr) |
|--------|------------|-------------|-----|-----|---------|---------|------|
| X5 | 00 24 00.991(1) | $-$72 04 53.202(7) | 60.7 | 263.0 | 22.5(2) | 21.7(2) | 8.67 |
| X7 | 00 24 03.528(1) | $-$72 04 51.938(6) | 498.5 | 775.5 | | | 5.50 |
---
abstract: 'Auger $LMM$ spectra and preliminary model simulations of Ar$^{9+}$ and metastable Ar$^{8+}$ ions interacting with a clean monocrystalline $n$-doped Si(100) surface are presented. By varying the experimental parameters, several spectroscopic features have been observed providing valuable information for the development of an adequate interaction model. On our apparatus the ion beam energy can be lowered to almost mere image charge attraction. High data acquisition rates could still be maintained yielding an unprecedented statistical quality of the Auger spectra.'
address:
- 'Institut für Kernphysik, Westfälische Wilhelms-Universität Münster, Wilhelm-Klemm-Str. 9, D-48149 Münster, Germany'
- 'Laboratoire de Physico-Chimie Théorique, C.N.R.S. et Université de Bordeaux I, 351 Cours de Libération, 33405 Talence Cedex, France'
author:
- 'J. Ducrée[^1], J. Mrogenda, E. Reckels, M. Rüther, A. Heinen, Ch. Vitt, M. Venier, J. Leuker, and H.J. Andrä'
- 'R. Díez Muiño'
title: 'Interactions of Ar$^{9+}$ and metastable Ar$^{8+}$ with a Si(100) surface at velocities near the image acceleration limit'
---
Introduction {#sec:intro}
============
The interactions of highly charged ions (HCI) with surfaces have long attracted strong interest from several research groups, and the field has received a considerable boost in the last decade owing to the increasing availability of high performance HCI ion sources and improvements in other experimental equipment.
In recent years, various technological applications of HCI surface collisions have been conceived, in particular for the wide field of microscopic and nanoscopic surface modification. In order to foster these efforts, a better understanding of the different stages of the scattering process has to be attained. Experimentalists hope to take advantage of the rapid charge exchange processes and the release of the large amount of potential energy stored in the HCI. Unfortunately, little consensus has been reached among researchers on the time scales and the location of these processes, although a comprehensive series of spectra and interpretations has already been published on this crucial issue.
According to the classical overbarrier model [@Bur91; @Duc97], the neutralization of the HCI sets in at a critical distance of typically $R_c \simeq 15$ [Å]{} in front of the first bulk layer. $R_c$ depends on the target work function $W$ and the initial charge $q$ of the HCI. In the region below $R_c$, target band electrons are successively captured into resonant ionic Rydberg states with $n \simeq q
\sqrt{R_\infty/W}$. As soon as more than two electrons have been transferred, the highly excited hollow atom starts to relax via autoionization processes yielding low-energy electrons. X-ray emission is strongly suppressed for light nuclei. Several studies [@Mey95; @Hat96] have been carried out showing that the overwhelming fraction of the reflected particles is neutral and suggesting that the projectile charge $q$ is already compensated on the incoming path. Nevertheless, it is commonly accepted by now [@Sch94] that the intra-atomic transition rates involved in the cascade are by far too slow to perform a complete relaxation of the neutralized HCI in front of the surface.
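A rough numerical sketch of these overbarrier estimates follows; the Si(100) work function $W \approx 4.8$ eV used here is an assumed representative value, not a number taken from the text:

```python
import math

# Classical overbarrier estimates.  R_c ~ sqrt(2 q)/W (atomic units) is the
# standard COB critical capture distance; n ~ q*sqrt(Rydberg/W) is the
# principal quantum number of the first resonantly populated level.
HARTREE_EV = 27.2114
BOHR_ANGSTROM = 0.529177
RYDBERG_EV = 13.6057

def critical_distance_angstrom(q, work_function_eV):
    """R_c = sqrt(2 q)/W in a.u., converted to Angstrom."""
    w_au = work_function_eV / HARTREE_EV
    return math.sqrt(2.0 * q) / w_au * BOHR_ANGSTROM

def capture_n(q, work_function_eV):
    """Principal quantum number of the first resonantly populated level."""
    return q * math.sqrt(RYDBERG_EV / work_function_eV)

# Ar9+ on Si(100); W ~ 4.8 eV is an assumed representative work function.
R_c = critical_distance_angstrom(9, 4.8)   # ~13 Angstrom, of order the
n_c = capture_n(9, 4.8)                    # quoted 15 Angstrom; n ~ 15
```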
Autoionization spectra originating from highly charged ions containing initial inner-shell vacancies are characterized by a strong and intense low-energy region and a uniquely shaped high-energy branch which can unambiguously be ascribed to intra-atomic transitions involving the inner-shell vacancy. Despite the low transition rates, certain peak structures can even be associated with Auger emission from fully populated shells neighboring the initial core configuration.
In order to clarify the evolution from Rydberg populations to fully occupied lower shells and motivated by new experimental findings [@Mey91; @Koe94; @Hst94] about large fractions of subsurface emission within the autoionization spectra, additional interaction mechanisms have been postulated and worked into simulations [@Lim95a; @Pag95; @Sto95]. Also a comparison between Auger spectra for the same HCI projectile impinging on different target species [@Lim96] and a new theoretical approach [@Arn95a] shed new light on the interaction scenario. It seems that the energetic positions of target and projectile electronic states play an important role in all direct inner-shell filling mechanisms below the surface. After the HCI has penetrated into the bulk, band electrons can shield the HCI core charge and directly feed the lower lying hollow atom states while generating a plasmon or an electron hole pair [@Die95; @Die96] (so called $MCV$ processes). For projectiles with high kinetic energies, electrons can be directly transferred from bulk atom levels into inner projectile levels yielding a velocity dependent filling rate [@Lim95a].
In an attempt to extract information on particular transition types from the spectra, experimentalists have analyzed $L$-Auger spectra of Ar$^{9+}$ ions impinging on tungsten [@Zwa87; @Fol89; @Zwa89; @And91A], copper [@Koe92] and gold [@Mey91]. These early efforts have been obstructed by the large number of initial $M$-shell configurations that had to be considered in the interpretation of the $LMM$ spectra with a few distinctive structures only. In recent years, research activities focused on $K$-Auger spectra of hydrogenlike second row ions C$^{5+}$, N$^{6+}$, O$^{7+}$, F$^{8+}$, Ne$^{9+}$ [@Hst94; @And93; @Fol90; @Lim94a] and Ar$^{17+}$ [@Mey95] instead. Some clearly pronounced peak regions can be identified in most of these spectra and assigned to a comparatively small set of initial $L$-shell configurations. A strong systematic dependence of the relative peak intensities of these $KLL$ spectra on the experimental conditions has provided valuable information about the contributing ionic shell configurations.
In this paper, we present several series of $L$-Auger spectra emitted during the interaction of Ar$^{9+}$ and metastable “heliumlike” Ar$^{8+}$ ions impinging on an n-doped Si(100) crystal with beam energies between 8eV and 4.6 keV in different experimental geometries. For the first time, we discovered significant modifications of the shape of the autoionization spectra for different projectile energies well below 1 keV and for different observation and interaction geometries. These include two geometries that largely suppress the detection of all subsurface electron emission. The spectra obtained in this way exhibit a unique peak profile that deviates strongly from spectra taken under all other experimental geometries.
These effects are very surprising because in the energy regime below 1 keV all collisional $M$-shell sidefeeding can generally be ruled out and $MCV$ rates can be treated in a static approximation. In order to understand the behavior of the spectra at different incident energies, we developed an interaction model taking into account the special role of the $3d$ subshell, which mediates an efficient $M$-shell filling via valence band electrons within the bulk. Incorporating this model into a Monte Carlo simulation, the observed alterations in the subpeak intensities and positions can be reproduced qualitatively. This model is experimentally supported by a series of $L$-Auger spectra emitted by metastable Ar$^{8+}$ projectiles. Under the same experimental conditions, the Ar$^{8+}$ $LMM$ Auger peak structures turn out to be strikingly similar to the Ar$^{9+}$ $LMM$ structures.
Section \[sec:setup\] introduces the experimental setup used for our measurements. In Section \[sec:observations\], we display several sets of autoionization spectra obtained under specified experimental conditions. Section \[sec:grouping\] describes how the $LMM$ subpeaks in our Ar$^{9+}$ spectra can be assigned to particular groups of intra-atomic transitions. Section \[sec:subsim\] outlines the basic ingredients of the subsurface interaction model which we employ for the simulation of the Auger spectra. In Section \[sec:evolution\], we extract information about the evolution of the projectile neutralization and Auger emission from the combined analysis of experimental observations and simulation results. Further experimental support for the proposed interaction mechanism is provided by the discussion of the $L$-Auger spectra of metastable Ar$^{8+}$ projectiles in Section \[sec:Ar8spectra\]. Finally, in Section \[sec:discussion\], we summarize the basic findings of this paper and give a short outlook on future research.
Experimental setup {#sec:setup}
==================
Highly charged ions are extracted by a fixed voltage of –20kV from an ECR ion source developed in our laboratory. The metallic vacuum chamber of the source can be floated at selectable potentials $U_Q$ with respect to earth potential. These ions are $q/m$ separated by a double focusing sector magnet system including an aberration correction lens. Two electrostatic Einzel lenses convey the beam through the intermediate stages of a differentially pumped vacuum system, which is needed to maintain the pressure gradient between the ECR source ($p \simeq 1 \times 10^{-6}$mbar) and the UHV target chamber ($p \simeq 5 \times 10^{-12}$mbar). Before hitting the grounded Si wafer, the ions pass through two deceleration lenses which are optimized to maximize the number of ions deposited on a target area of approximately 1cm$^2$.
The kinetic ion energy distribution is recorded by an ion spectrometer which is mounted on the beam axis close behind the movable target. For Ar$^{9+}$ and Ar$^{8+}$ beams, the full width at half maximum never exceeded 2eV per charge. The center of the peak is a measure for the kinetic projectile energy after deceleration $E_{\mbox{kin}} = q
(U_Q + U_P)$ where $U_P$ is the plasma potential which builds up between the plasma and the walls of the ECR source. An average value of $U_P = 12$V has been observed, with variations of less than $\pm$2V over months. The Si(100) surface has been prepared by successive cycles of Ar$^{1+}$ sputtering at grazing incidence and annealing until all impurities had disappeared from AES spectra and good LEED patterns appeared.
The geometry within the target chamber is displayed in Fig. \[fig:geometry\](a). The beam axis intersects the target surface at an angle $\Theta$. Electrons are detected by an electrostatic entrance lens followed by a $150^\circ$ spherical sector analyzer at an angle $\Psi$ with respect to the surface. In most measurements we chose $\Theta+\Psi=90^\circ$. As $\Psi$ approaches $0^\circ$ in Fig. \[fig:geometry\](b), the path length inside the solid for electrons which are emitted below the surface drastically increases such that the detection of above or near surface emission is clearly favored. Due to the chamber alignment and the large acceptance angle of $\eta = 16\pm6^{\circ}$ of our electron spectrometer entrance lens, below-surface emission is always observed, but to a much smaller extent than above or near surface emission. The absolute spectral intensity in the ($\Psi \simeq 0^\circ$)-geometries greatly diminishes, though. By rotating the target of Fig. \[fig:geometry\](a) around the ion beam axis with the surface normal pointing out of the image plane, the condition of $\Theta+\Psi=90^\circ$ could be relaxed, and geometries with $\Theta=5^\circ$ and $\Psi=0^\circ$ have been achieved.
The effective incident energy of the ions on the surface is given by $E_{\mbox{kin}}$ plus the energy gain resulting from the image charge acceleration [@Win93] $$E_{\mbox{im}} \simeq \frac{W}{3\sqrt{2}} q^{3/2}$$ where the work function $W$ equals 4.6eV for our Si target and $q =
9$. Accordingly, there will always remain a minimum incident energy of approx. 29eV leading to an additional perpendicular projectile velocity component $\Delta v_{\perp} = \sqrt{2 \cdot
E_{\mbox{im}}/m}$. Thus the interaction period of the ion in front of the surface can principally not be stretched above an upper limit depending on $q$ and $W$ even though the original perpendicular velocity component $v_{\perp} = \sqrt{2 E_{\mbox{kin}}/m} \cdot
\cos(\Theta)$ of the projectile may vanish by selecting $U_Q = -U_P$ or $\Theta \mapsto 0^\circ$.
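The image-charge formula above can be evaluated directly; this sketch simply plugs in the paper's numbers ($W = 4.6$ eV, $q = 9$, Ar mass 40 u) and reproduces the quoted minimum incident energy of about 29 eV:

```python
import math

EV_TO_J = 1.602176634e-19     # electron charge / eV-to-joule conversion
AMU_TO_KG = 1.66053907e-27    # atomic mass unit in kg

def image_charge_gain(q, work_function_ev):
    """Energy gain (eV) from image-charge acceleration,
    E_im ~ W / (3*sqrt(2)) * q**(3/2), as given in the text."""
    return work_function_ev / (3.0 * math.sqrt(2.0)) * q ** 1.5

def perp_velocity_gain(e_im_ev, mass_amu):
    """Additional perpendicular velocity component (m/s),
    dv = sqrt(2 * E_im / m)."""
    return math.sqrt(2.0 * e_im_ev * EV_TO_J / (mass_amu * AMU_TO_KG))

E_im = image_charge_gain(q=9, work_function_ev=4.6)   # ~29 eV minimum energy
dv = perp_velocity_gain(E_im, mass_amu=40)            # Ar projectile, ~1.2e4 m/s
```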
When the incident energy $E_{\mbox{kin}}$ is lowered, the beam spreads out at the target (Liouville’s theorem) and incident angles may deviate from their nominal values $\Theta$. In the energy domain $E_{\mbox{kin}} < E_{\mbox{im}}$, the projectile path is strongly bent by the attractive image acceleration, causing increased effective incident angles $\Theta_{\mbox{eff}}$, especially for small $\Theta$. Hence the values given for $\Theta$ in this paper are intended to delineate the chamber geometry rather than the effective scattering geometry of an individual projectile.
Projectile penetration depths at the stage of complete neutralization and deexcitation can be estimated by multiplying $v_\perp$ by a typical overall interaction time of $10^{-14}$s. With $E_\perp =
\frac{1}{2} m v_\perp^2$ expressed in eV, the perpendicular path length $z_{pen}$ of the Ar projectile within the bulk can be obtained from $z_{pen} = 0.22$Å$\times \sqrt{E_\perp [\mbox{eV}]}$. This implies that at energies $E_\perp$ in the range of 100eV, $z_{pen}$ stays below one lattice constant, which amounts to 5.43Å for Si. <span style="font-variant:small-caps;">TRIM</span> simulations [@Zie85] performed for a 10eV and a 100eV Ar$^{1+}$ beam impinging on a Si crystal at perpendicular incidence yield average lateral ranges of 3$\pm$1Å and 10$\pm$4Å, respectively. These distances refer to the total penetration depth until the ion is stopped within the bulk.
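The $z_{pen} = 0.22$ Å $\times \sqrt{E_\perp}$ rule of thumb follows directly from $v_\perp = \sqrt{2E_\perp/m}$ and the assumed $10^{-14}$ s interaction time; a short check for an Ar projectile:

```python
import math

EV_TO_J = 1.602176634e-19
AMU_TO_KG = 1.66053907e-27
INTERACTION_TIME_S = 1e-14   # typical overall interaction time (from the text)

def perp_path_angstrom(e_perp_ev, mass_amu=40):
    """Perpendicular path z_pen = v_perp * t within the bulk.
    For Ar (40 u) this reproduces z_pen = 0.22 A * sqrt(E_perp[eV])."""
    v_perp = math.sqrt(2.0 * e_perp_ev * EV_TO_J / (mass_amu * AMU_TO_KG))
    return v_perp * INTERACTION_TIME_S * 1e10   # convert m to Angstrom

z100 = perp_path_angstrom(100.0)   # ~2.2 A, below the Si lattice constant 5.43 A
```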
Experimental observations {#sec:observations}
=========================
Using the apparatus described in the preceding section, we have measured secondary electron spectra emitted by Ar$^{9+}$ and metastable Ar$^{8+}$ ions during their interaction with the Si wafer. In this work we focus on the well defined high-energy $L$-Auger peaks covering the interval between 120eV and 300eV. The spectra also feature a low-energy part which extends up to more than 100eV. The analysis of electron spectra in this energy domain is complicated by a lack of substructures, the superposition of kinetic and intra-atomic emission, and their sensitivity to stray electromagnetic fields. Regarding the high-energy branch, we point out that no background due to kinetic electron emission has to be considered for $E_{\mbox{kin}} \leq 121$ eV since the collision energies $E_{coll}=E_{\mbox{kin}}+E_{\mbox{im}}$ are smaller than the lower bound of the spectral region to be examined. By selecting $U_Q=-20$V$<U_P= 12 \pm
2$V, we can prevent HCIs from reaching the grounded target. Only projectiles that are partially neutralized before the deceleration stages and secondary electrons generated by collisions of the HCIs with beam transport lens elements (which are on negative potentials) can hit the target, where they may set free secondary electrons. We verified that both contributions are negligible.
In Fig. \[fig:Ar9Si:45deg:energy:normreg\] we present three Ar$^{9+}$ spectra measured under $\Theta=45^\circ$ and with $E_{\mbox{kin}}=9$ eV, 121eV and 1953eV. This and all following spectra are normalized to the total intensity in the $L$-Auger region between 160eV and 240eV. Considering that at most one $L$-Auger process per ion takes place, this normalization is suitable to display the intensity shifts between $L$-transition subgroups discussed in this paper. We note that the calibration of the spectra to the absolute beam intensity is prone to errors which arise from the uncertainty in the correction factors compensating geometrical and kinetic effects.
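The normalization convention just described (unit integrated intensity in the 160–240 eV window) can be sketched as a small helper; the trapezoidal integration here is an assumption, since the paper does not specify how the region intensity is summed:

```python
import numpy as np

def normalize_to_region(energy_ev, counts, lo=160.0, hi=240.0):
    """Normalize a spectrum to the integrated intensity between lo and hi
    (the L-Auger region used for all spectra in the text)."""
    mask = (energy_ev >= lo) & (energy_ev <= hi)
    norm = np.trapz(counts[mask], energy_ev[mask])
    return counts / norm
```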
At first glance we recognize the general shape of an Ar$^{9+}$ $LMM$ spectrum featuring a dominant peak at 211eV, a broad structure reaching down to about 120eV on the low-energy side, and a shoulder sitting on the high energy tail of the spectrum. At $E_{\mbox{kin}}=9$ eV, this shoulder can be resolved into two subpeaks of almost equal height at 224eV and 232eV. Proceeding to higher $E_{\mbox{kin}}$, the 232eV-peak disappears and the 224eV-peak gains intensity. Presumably due to poor statistical quality, the 232eV-substructure cannot unambiguously be identified in de Zwart's [@Zwa89] measurement[^2] which was taken under the same experimental geometry and roughly the same incident energy on a tungsten target.
The data acquisition statistics of our spectra are of remarkably high quality. Beam current shifts during measurements are compensated by an online normalization of the spectra to the overall charge current $I_q$ hitting the target. The accumulated counts per 1eV energy channel in the ($\Theta=45^\circ$, $E_{\mbox{kin}}=121$ eV)-spectrum amount to more than 200,000 at the 211eV-maximum, letting the relative error drop below 0.3%. We note that each spectrum in Fig. \[fig:Ar9Si:45deg:energy:normreg\] has been recorded in a single five minute run. This is possible due to the high current $I_q=125$nA on the target, which can be converted into a particle current $I_p$ by dividing $I_q$ by the projectile charge $q$ and applying a correction factor compensating secondary electron emission. Multiplying $I_p$ by an appropriate geometrical factor, it can be shown that the overall experimental count rate in the high-energy branch roughly corresponds to the emission of one high-energy electron per incoming HCI.
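The two numerical claims in this paragraph are easy to verify; this sketch omits the secondary-electron correction factor mentioned in the text (an acknowledged simplification) and uses Poisson counting statistics for the relative error:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge in coulombs

def particle_rate(i_q_amp, q):
    """Ions per second from the measured charge current I_q,
    neglecting the secondary-electron correction factor."""
    return i_q_amp / (q * E_CHARGE)

rate = particle_rate(125e-9, 9)        # ~8.7e10 Ar(9+) ions/s on target
rel_err = 1.0 / math.sqrt(200_000)     # Poisson error at the 211 eV maximum
```

With 200,000 counts per channel the relative error is about 0.22%, consistent with the "below 0.3%" quoted above.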
The spectral series of Ar$^{9+}$ ions impinging on Si(100) with constant $E_{\mbox{kin}}=121$ eV in Fig. \[fig:Ar9Si:121eV:angle:normreg\] displays the variation of the relative peak intensities with the experimental geometry. Recalling Fig. \[fig:Ar9Si:45deg:energy:normreg\], we discover that the presence of a strong 232eV-subpeak is connected to minimum perpendicular velocities $v_\perp$. In the measurement under $\Theta=90^\circ$, the observation angle $\Psi$ is very flat and a second broad peak region evolves around 198eV. Switching to the other “grazing observation” alignment at $\Theta=5^\circ,\Psi=0^\circ$, this structure is preserved proving that its presence is related to a small observation angle $\Psi$ rather than the direction of incidence $\Theta$ or $\Theta_{\mbox{eff}}$.
Under $\Theta=5^\circ$, the perpendicular projectile penetration into the bulk is principally limited to less than one lattice constant. The severe discrepancy between the two spectra under $\Theta=5^\circ$ and $\Theta=5^\circ,\Psi=0^\circ$ in Fig. \[fig:Ar9Si:121eV:angle:normreg\] illustrates the extreme above-surface sensitivity of the ($\Psi=0^\circ$)-measurements, since the physics of the interaction is determined only by $\Theta_{\mbox{eff}}$ and $E_{\mbox{kin}}$, which remain constant. We deduce that the broad peak region is generated above or at least near the first bulk layer. Because this region loses its weight under $\Psi=45^\circ$, when electrons originating from all interaction phases are detected, above-surface processes only supply a minor fraction of the total high-energy emission. Nevertheless, the ratio between the $detected$ above- and below-surface emission is strongly enhanced at grazing observation $\Psi = 0^\circ$ and small projectile penetration depths.
To obtain a quantitative estimate, we ran <span style="font-variant:small-caps;">TRIM</span> calculations [@Zie85] for Ar$^{1+}$ ions colliding with a Si target. The results show that a few percent of the incoming particles are reflected for $E_{\mbox{kin}}=121$ eV and 2 keV, complying with the preceding interpretation of the ($\Psi = 0^\circ$)-spectra. We point out that one has to be careful about adopting these findings for HCI beams because the <span style="font-variant:small-caps;">TRIM</span> code solely employs potentials which are, strictly speaking, only valid for singly ionized ground state projectiles. For incident energies of less than 10eV, when $E_{\mbox{im}} > E_{\mbox{kin}}$, the code fails to produce physically meaningful output since it obviously misrepresents the potentials evolving from the complex coupling of the HCI-surface system. These potentials are decisive for the calculation of the HCI trajectory along the prolonged interaction period in front of the surface and for the reflection probability. At such low incident energies, no experimental data on reflection coefficients of Ar$^{q+}$ impinging on Si(100) are available in the literature; existing data refer to grazing incidence conditions, where the physics of the interaction is different despite the similar vertical velocity components. The detection of the unique peak profile at grazing observation, combined with the discussion below, may be regarded as indirect experimental evidence for the existence of reflected projectiles.
The shifts of the upper edge of the 211eV-peak in Fig. \[fig:Ar9Si:121eV:angle:normreg\] can consistently be explained by an enhanced below-surface damping of the emitted electrons at $\Theta=90^\circ$ which is more effective than at $\Theta=5^\circ,\Psi=0^\circ$ due to the higher perpendicular velocity component $v_\perp$.
In Fig. \[fig:Ar9Si:9eV:angle:normreg\] we show spectra of Ar$^{9+}$ ions impinging on an n-Si(100) surface under different incident angles with minimal kinetic energy, i.e., $E_{\mbox{kin}} = 9$ eV. For $\Theta=5^\circ$ and $\Theta=45^\circ$, the spectra are nearly identical, reflecting the fact that the self-image attraction is greater than the kinetic projectile energy so that the effective angle of incidence $\Theta_{\mbox{eff}}$ becomes almost independent of its original value $\Theta$. While approaching perpendicular incidence, the same broad region between 160eV and 205eV as in Fig. \[fig:Ar9Si:121eV:angle:normreg\] emerges again. For the two different ($\Psi=0^\circ$)-geometries, the main peaks exhibit about the same height. Since $v_\perp$ is minimal in all four spectra, the upper edge of the 211eV-peak remains sharp and does not shift to lower energies due to bulk damping as in Fig. \[fig:Ar9Si:45deg:energy:normreg\]. Moreover, the high-energy branches above 211eV coincide almost perfectly. Keeping in mind our particular choice of normalization and the minimum incident energy $E_{\mbox{kin}} = 9$ eV, the latter feature suggests that the peak intensity within the high-energy tail region results from above-surface emission, which is insensitive to bulk damping of the outgoing electrons.
In Fig. \[fig:Ar9Si:92deg:energy:normreg\] we present another series of Ar$^{9+}$ spectra taken at a fixed angle $\Theta=90^\circ$ (i.e., $\Psi=0^\circ$) for different incident energies $E_{\mbox{kin}}$. As the point of emission moves deeper into the solid, below-surface contributions are successively filtered out by bulk damping. The double-peak profile transforms into a single unstructured maximum widening toward the low-energy side as $E_{\mbox{kin}}$ increases. The low-energy bounds of the 198eV maximum coincide at $E_{\mbox{kin}}=9$ eV and 121eV. The spectrum measured at $E_{\mbox{kin}}=121$ eV demonstrates that the appearance of the broad peak structure under $\Psi = 0^\circ$ and the 232eV-peak occurring solely at minimal $v_\perp$ are obviously not immediately linked to each other.
The combined analysis of the spectra in Figs. \[fig:Ar9Si:45deg:energy:normreg\]-\[fig:Ar9Si:92deg:energy:normreg\] renders the following preliminary picture, which will be supported by further evidence and simulations in the next sections. The dominant 211eV-peak originates from below-surface emission since its center moves downward, it broadens, and its intensity decreases whenever long path lengths of the emitted electrons through the bulk to the spectrometer entrance can be assumed. Furthermore, it does not disappear with growing $v_\perp$. This also holds for the lower lying part of the spectrum. Two equally intense subpeaks on the high-energy shoulder appear exclusively when $v_\perp$ is minimized. As $v_\perp$ increases, the 224eV-peak gains intensity while the 232eV-peak quickly vanishes. This behavior suggests a dependence of the 232eV-intensity on the above-surface interaction time even though the resulting emission process may occur after surface penetration.
The broad peak region between 160eV and 205eV under $\Psi=0^\circ$ and $E_{\mbox{kin}} \leq 121$ eV represents near- or above-surface emission, since the “detection window” is shallow and the chamber geometry favors detection of above-surface transitions at the same time. Subsurface contributions are shielded by bulk damping. For reasons that will be given in Section \[sec:Ar8spectra\], it is likely that this region is made up of a small fraction of above-surface emission from partially screened incoming or ionized reflected particles. The preceding experimental findings will play a crucial role in the conception of an interaction model in Section \[sec:evolution\].
Energetic grouping of atomic $LMM$ transitions {#sec:grouping}
==============================================
In this section we attribute some spectral features occurring in the energy range between 150eV and 300eV to distinct groups of $LMM$ Auger transitions. The energetic overlap between neighboring groups will “fortunately” turn out to be sufficiently small that relative peak intensities can be related to the participation of distinct Auger processes. Furthermore, certain projectile deexcitation mechanisms can definitively be ruled out if no intensity is measured in their proper energy range. By merely comparing peak energies, we obtain valuable information concerning the HCI-solid interaction which supplements the experimental observations of Section \[sec:observations\] *before* launching any simulation. At the present state of research, peak energies can be evaluated more accurately than transition rates for the HCI-solid system.
We employ the well known Cowan code [@Cow81] in order to compute configuration energies based on spherically symmetrized wave functions for *free* atoms and ions. In order to calculate Auger transition energies *within the bulk*, we have to take into account the effect of the self-induced charge cloud of valence band (VB) electrons which surrounds the HCI. First approaches in this direction have been made [@Arn95a; @Arn95] using density functional theory (DFT). The results show that the nonlinear screening effects due to the electron gas are, to a good approximation, equivalent to the screening by outer shell “spectator” electrons in a free atom.
The hollow atom entering the bulk loses all Rydberg shell electrons due to the screening by the target electron gas. The radii of the resonantly populated orbitals are of the order of the capture distances, i.e., about 10 Å, and therefore much larger than the Thomas-Fermi screening length of less than one ångström as derived in a free electron gas model. Therefore all Rydberg levels will be depleted, leaving behind the original $1s^22s^2p^5$ core configuration and possibly some $M$-, $N$- and $O$-shell electrons. The target electron gas swiftly takes over the role of the outer electrons to screen and thereby neutralize the HCI charge. A good estimate for the reaction rate of the electron gas to the HCI “point charge” perturbation is provided by the plasmon frequency, which lies in the vicinity of $10^{16}$s$^{-1}$ for metals. This is far above typical rates of the other HCI bulk interaction processes, and we can thus assume that the HCI core screening by VB electrons is instantaneous. Except for the special handling of the transitions with $3d$ participation, which will be outlined below, all subsurface Auger transition energies given in this paper will hence be derived for neutral initial states possessing a total of $q$ $M$- and $N$-shell electrons and singly ionized final states.
Let us now look at the grouping of $LMM$ transitions which is plotted in Fig. \[fig:Ar9:LMM:histo:spec\]. The histogram displays the energetic positions of all $LMM$ transitions originating from initial $2p^53s^xp^yd^z$ configurations ($n_M=x+y+z \leq 9$) of “hollow” Ar$^{9+}$ atoms which are neutralized via $q-n_M$ “spectator” electrons in the $N$-shell. Angular momentum coupling as in [@Sch94] is not taken into account. Each transition is weighted by unity in the plot discarding transition rates and statistical factors due to different subshell occupations. For the sake of clarity, the whole spectrum is convoluted by a Gaussian function of constant width 2eV. This modification evens out conglomerations of Auger lines at certain energies which are an artifact of strictly applying the spectator electron approximation. The width is sufficiently small not to lead to an additional overlap of $LMM$ subgroup intensities.
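The unit-weight histogram and its Gaussian broadening can be sketched as follows; whether the quoted 2 eV width is a standard deviation or a FWHM is not specified in the text, so a standard deviation of 2 eV is assumed here:

```python
import numpy as np

def convolve_sticks(line_energies, grid, sigma_ev=2.0):
    """Broaden a stick spectrum (unit weight per transition line, as in the
    histogram described in the text) with a fixed-width Gaussian.

    Each line contributes a normalized Gaussian, so the integrated
    intensity equals the number of lines.
    """
    spectrum = np.zeros_like(grid)
    for e0 in line_energies:
        spectrum += np.exp(-0.5 * ((grid - e0) / sigma_ev) ** 2)
    return spectrum / (sigma_ev * np.sqrt(2.0 * np.pi))

# Hypothetical line positions for illustration only (not Cowan-code output)
grid = np.linspace(150.0, 300.0, 1501)
spec = convolve_sticks([211.0, 224.0], grid)
```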
Within the same group, Auger transition energies generally tend to increase steadily with the overall shell population. For comparison, the dotted line in Fig. \[fig:Ar9:LMM:histo:spec\] represents an autoionization spectrum of Ar$^{9+}$ ions impinging on a Si(100) surface at $E_{\mbox{kin}}=121$ eV and $\Theta=\Psi=45^\circ$ as reproduced from the experimental data in Fig. \[fig:Ar9Si:45deg:energy:normreg\].
Fig. \[fig:Ar9:LMM:histo:spec\] reveals that $LMM$ Auger transitions involving a free and initially neutral Ar atom can cover the energy interval between 166eV ($2p^53s^24s^2p^5 \mapsto
2p^63s^04s^2p^5$) and 267eV ($2p^53d^9 \mapsto 2p^63d^7$). For convenience, the groups of $LMM$ transitions displayed in Fig. \[fig:Ar9:LMM:histo:spec\] and in the following part of the paper are classified by the angular momentum quantum numbers $\ell$ of the two participating $M$-shell electrons. In all cases, the final states are made up of the atomic $2p$ level, the remaining $M$-core states and an appropriate continuum state. For $LMM$ processes, we omit the $2p$ level in our notation.
The low-energy part of the $LMM$ spectrum can be assigned to $3ss$- and $3sp$ transitions. The higher $3sp$ intensity can be explained by their statistical weight and their $3p$ contribution clearly enhancing the transition rates. The fact that the two small peaks arising in some spectra between 190eV and 200eV fall into the $3sp$ peak region in Fig. \[fig:Ar9:LMM:histo:spec\] might be fortuitous. Due to our coarse resolution concerning the energetic grouping, we are not able to ascribe these peaks to particular $3sp$ transitions.
Several observations indicate that the dominant peak region around 211eV is composed of $3pp$ transitions out of a massively occupied $M$-shell rather than $3sd$ transitions, whose energy range also covers this peak region. First, it is intuitively plausible, considering that all three bound state wave functions possess the same angular momentum, that by far the highest $LMM$ rates are calculated for the $3pp$ group. Second, the sharp upper edge of the 211eV-maximum resembles the upper boundary of the $3pp$ curve, which is composed of $3pp$ transitions out of a completed $M$-shell. Due to level filling statistics, a sharp edge is unlikely to form if its corresponding transitions take place out of intermediate shell occupations. Third, atomic structure calculations yield that $3pp$ energies accumulate around 211eV for all initial $3s^2p^yd^z$ configurations ($y+z \geq 5$), regardless of the particular choice of $y$ and $z$. This automatically implies that prior to the majority of all $3pp$ decays either more than seven electrons have to be captured into the $M$-shell or the induced charge cloud provides an equivalent screening effect.
According to the $LMM$ grouping in Fig. \[fig:Ar9:LMM:histo:spec\] we can assign the two subpeaks on the high-energy shoulder of the $LMM$-maximum to $3sd$- and $3pd$ transitions, respectively. $3pp$ processes are unlikely to contribute to the region above 213eV since they require at least one $3s$ vacancy along with a ninefold occupied $M$-shell. Such initial configurations are immediately converted into $3s^2$ configurations by the very fast super Coster-Kronig (sCK) decay channel involving three $M$-shell electron levels.
The spectral range of the $3pd$ peak is cut off at about 235eV, and $3dd$ transitions obviously do not produce enough intensity to appear as a distinct peak region in the spectra. These observations provide experimental evidence that the $3d$ level cannot be completely populated within the bulk and that quick sCK transitions tend to carry $3d$ populations into lower lying sublevels before $LMM$ transitions take place. The missing structures and the spectral range of the high-energy tail extending above 300eV suggest that it consists of the large variety of $LXY$ transitions with $X,Y \in \{N, O\}$ rather than $3dd$ transitions.
The $LMM$ cut-off at 235eV can be understood by taking a deeper look at the effective projectile potential $V_{\mbox{eff}}$ within the bulk (see Fig. \[fig:potential\]) which is deformed with respect to the corresponding free ionic Coulomb potential $V^{free}_{Coul}$. Close to the projectile nucleus $r \ll a_0$, the effective potential $V_{\mbox{eff}}$ converges into $V^{free}_{Coul}$. At intermediate distances $r \simeq a_0$, the screening of outer levels and the electron gas starts to act on the projectile levels. In this domain $V_{\mbox{eff}}$ is well represented by a free atom potential $V^{screen}_{Coul}$ which is screened by outer shell spectator electrons. All $nl$ subshells with energies $E^{nl}_b$ are elevated by a subshell dependent amount of $\Delta E^{nl}_b$ with respect to $V^{free}_{Coul}$. Far away from the nucleus the effective potential $V_{\mbox{eff}}$ merges into $V_0$ denoting the bottom of the valence band.
Fig. \[fig:bind\] displays the $M$-sublevel binding energies $E^{nl}_b$ of Ar$^{9+}$ as a function of the total $M$-shell population $n_M$. The values have been calculated by the Cowan code for spectator electron configurations, i.e., for the potential $V^{screen}_{Coul}$. This modeling has proven to yield good agreement with experimental and more sophisticated theoretical results in the past. In a work by Schippers *et al.* [@Sch94], the main $KLL$ peak energies of the hydrogenlike second row ions C$^{5+}$, N$^{6+}$, O$^{7+}$, F$^{8+}$ and Ne$^{9+}$ have been reproduced. Arnau *et al.* [@Arn95] have demonstrated that the spectator electron model complies with DFT calculations including nonlinear screening effects for hydrogenlike Ne$^{9+}$ ions in an Al target. Detailed calculations even reveal that the induced charge density tries to mimic the shape of the wavefunctions of the neighboring unoccupied atomic level.
In Fig. \[fig:bind\] we added $2p$ binding energies of hydrogenlike C$^{5+}$ and Ne$^{9+}$ as obtained from the spectator model and, for comparison, the DFT calculation for Ne$^{9+}$ as a function of the total $L$-shell population $n_L$. Following [@Arn95], the screening by the atomic spectator electrons resembles the screening by the VB electron gas because the inner atomic levels are energetically separated from the VB much like they are separated from the next higher subshell in a free atom. This argument holds for the Ar$^{9+}$ $3s$- and $3p$ levels and also for nearly all $L$-shell levels in hydrogenlike HCIs, which are situated between the C$^{5+}$ and Ne$^{9+}$ curves.
The evolution of the $3d$ sublevel energies with $n_M$ in Fig. \[fig:bind\] differs from that of the lower-lying subshells, though. The $3d$ binding energies lie significantly closer to the VB and rise above $V_0$ as soon as more than five electrons populate the $M$-shell. We performed a DFT calculation showing that $3d$ electrons are in fact already lost to the VB continuum for $n_M>4$. The spectral cut-off in the $3pd$ transition domain in Fig. \[fig:Ar9:LMM:histo:spec\] can now be explained by omitting all contributions from $3pd$ transitions with $n_M>4$. To correct for the shape of $V_{\mbox{eff}}$, which deviates strongly from $V_{Coul}^{screen}$ for $E_b^{n\ell} \simeq V_0$ (see Figs. \[fig:potential\] and \[fig:bind\]), we shift the atomic $3d$ level to $V_0$ for $n_M \leq 4$, which yields higher transition energies than the mere spectator electron model. In this manner we reproduce the experimental $3sd$- and $3pd$ peak positions on the high-energy shoulder to within 2% and 1%, respectively.
Monte Carlo simulation of the subsurface interaction phase {#sec:subsim}
==========================================================
In order to elucidate the interaction mechanism which eventually generates the measured spectra, we developed a Monte Carlo simulation [@Kal86]. Our goal was to reproduce the intensity shifts of the observed spectra for different incident energies in Fig. \[fig:Ar9Si:45deg:energy:normreg\]. In analogy to previous simulations by Schippers et al. [@Sch94], Page et al. [@Pag95] and Stolterfoht et al. [@Sto95] on the $L$-shell filling of hydrogenlike highly charged ions at metal surfaces, we only keep track of the populations of the two innermost projectile shells containing at least one vacancy and focus on the most dominant transition rates. The ionic cores are neutralized by $N$-shell spectator electrons. Among all intra-atomic Auger processes, only those yielding an electron above the vacuum level are considered.
During the simulation, the three $M$-subshell populations are recorded continuously. Transition rates, transition energies and sublevel energies are evaluated dynamically at each iteration step according to the particular $\{n_{3s}|n_{3p}|n_{3d}\}$ configuration. From one step to the next, only the fastest transition takes place, with waiting times drawn statistically from the nominal rates. The Monte Carlo method requires averaging the simulation results over a sufficient number of projectiles. We find that the simulated spectra converge after $N \simeq 1 \times 10^5$ particle runs and chose $N=1 \times 10^6$. In our implementation of the subsurface cascade, each particle starts at the first bulk layer with a fixed angle of incidence $\Theta=45^\circ$ and energy $E_{\mbox{kin}}$. For $E_{\mbox{kin}}=121$ eV and 2 keV we assume an initially empty $M$-shell.
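The per-step selection of the fastest transition can be pictured as a race of independent exponential clocks, one per allowed transition. The following minimal Python sketch illustrates this idea; the function and variable names are ours, not from the original code, and the actual implementation additionally updates rates, energies and configurations at every step:

```python
import math
import random

def mc_step(rates, rng=random):
    """One simulation step: draw an exponential waiting time for every
    allowed transition and return the one that fires first.

    rates -- dict mapping a transition label (e.g. '3sdd') to its current
             rate in s^-1 for the present {n3s|n3p|n3d} configuration
    """
    waits = {t: -math.log(1.0 - rng.random()) / r
             for t, r in rates.items() if r > 0}
    winner = min(waits, key=waits.get)
    return winner, waits[winner]

# Example: an sCK clock at 1e15 s^-1 competing with an LMM clock at
# 2e14 s^-1 wins in about 5 out of 6 steps.
```

For competing exponential clocks, the probability that transition $i$ fires first is $\Gamma_i/\sum_j \Gamma_j$, which is why faster channels dominate the cascade statistics.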
Intra-atomic rates {#sec:intraatomic}
------------------
The $LMM$ rates are evaluated with a fit expression proposed by Larkins [@Lar71C] for free multiply ionized atoms possessing no $N$-shell spectator electrons. Accordingly, if one or two of the $n$ electrons of a subshell which could contain $n_0$ electrons are involved in an Auger process, the Auger rate calculated for a filled shell, $\Gamma^{\mbox{filled}}_{\ell_1 \ell_2}$, is reduced by $n/n_0$ or $[n(n-1)]/[n_0(n_0-1)]$, respectively. The literature supplies values for $\Gamma^{\mbox{filled}}_{\ell_1 \ell_2}$ only for $3ss$-, $3sp$- and $3pp$ transitions, which account for the greatest part of the overall $LMM$ intensity. For $3sd$-, $3pd$- and $3dd$ transitions, we scale the rates $\Gamma^{\mbox{filled}}_{3\ell d}$ to reproduce the experimental peak heights. Table \[tab:intrarates\] lists the six $\Gamma^{\mbox{filled}}_{\ell_1 \ell_2}$ rates, which are held constant across the different simulations. These $LMM$ rates should not be greatly affected by the embedding of the HCI into the electron gas because they chiefly depend on the radii of the participating $M$-subshells, which remain fairly unchanged. To show this we recall that the shape of the induced charge cloud is similar to that of the $N$-shell. Within the hydrogen atom approximation, the radii of the screening cloud $r_{sc}$ and of the atomic shells (schematically inserted in Fig. \[fig:potential\]) both scale with $(n-1)^2 \{1+\frac{1}{2}[1-\frac{\ell(\ell+1)}{(n-1)^2}]\}$. The ratio $r_{sc}/r_{3p} = 2.5$ with $sc = 4p$ has to be compared with the ratio $r_{3p}/r_{3s}$, which amounts to 0.83. Due to its large extension, the screening electron cloud should therefore have only a minor impact on the $M$-shell orbitals and hence on the $LMM$ rates given in Table \[tab:intrarates\].
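As an illustration, the statistical reduction factor and the radius scaling quoted above can be written out as follows (a sketch under our reading of the Larkins prescription; the function names are ours):

```python
def occupation_factor(n, n0, k):
    """Reduction of a filled-shell Auger rate when k (1 or 2) of the n
    electrons of a subshell with capacity n0 take part in the transition."""
    if k == 1:
        return n / n0
    return n * (n - 1) / (n0 * (n0 - 1))

def radial_factor(n, l):
    """Radius scaling used in the text:
    (n-1)^2 * {1 + 1/2 [1 - l(l+1)/(n-1)^2]}."""
    m = (n - 1) ** 2
    return m * (1.0 + 0.5 * (1.0 - l * (l + 1) / m))

# Reproduces the ratios quoted in the text:
# radial_factor(4, 1) / radial_factor(3, 1) -> 2.5    (r_sc/r_3p, sc = 4p)
# radial_factor(3, 1) / radial_factor(3, 0) -> 0.833  (r_3p/r_3s)
```

For instance, a $3pp$ transition out of a configuration with three $3p$ electrons would be scaled by $3\cdot 2/(6\cdot 5)=0.2$ relative to $\Gamma^{\mbox{filled}}_{3pp}$.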
Since we do not resolve $N$-sublevels, Coster-Kronig $MMN$ transitions have to be handled by a global base rate for each $M$-level pair. In a simple approach, we weight each $MMN$ base rate by the initial $M$-sublevel occupation and the final-state vacancies such that the average rate amounts to $3 \times 10^{14}$s$^{-1}$. For the purposes of this paper, only the order of magnitude with respect to the other transition types matters. We remark that Armen and Larkins [@Arm91] have calculated transition rates for $MMN$ decay channels which are of the order of $4 \times 10^{14}$s$^{-1}$, depending strongly on the angular coupling. This is in sufficiently good agreement with our assumption. Only $MMN$ transitions with a final state above the continuum level are included, leaving only $(3s)(3d)N$ transitions, which are of particular importance for the initial phase of the interaction.
The sCK $MMM$ rates are known to be 10 to 100 times faster than the rates of Auger transitions whose initial and final holes lie in different principal shells. In our simulation, they mainly serve to regroup any $M$-shell configuration into the appropriate $M$-shell ground state before $LMM$ transitions take place. To achieve this, we use a base rate of $1 \times 10^{15}$s$^{-1}$ which is scaled by the $M$-subshell occupation statistics. In Table \[tab:sCKrates\] we summarize the average number of $MMM$ processes per particle and the average $M$-sublevel occupation at the time of $MMM$ emission for the two sCK transitions which are relevant for our simulation.
$MCV$ filling within the bulk {#sec:filling}
-----------------------------
Target levels below $V_0$ can be filled by transitions involving electrons of valence band states (C) which are perturbed by the ionic core. The energy gain is either transferred to another VB electron, which is emitted into the continuum, or creates a collective excitation (plasmon) in the medium. A theoretical approach including the charge displacement in the description of the excited outgoing electrons is much more complicated and, at present, only unperturbed valence band states (V) are included in the calculations [@Die95; @Die96]. The VB electrons take on the role of the outer shells of a free atom.
Using DFT to describe the interaction between the ion and the metal valence band and following the same scheme as in [@Die96], we have derived $MCV$ rates for the Ar$^{9+}$–Si system. Table \[tab:MCVrates\] lists the rates per spin state $\Gamma^{MCV}_{3\ell}$ into the three $M$-sublevels with the number of initial $M$-shell electrons $n_M$ as parameter. These $MCV$ rates still have to be multiplied by the number of unoccupied final states in the particular $M$-sublevel to obtain actual transition rates between two atomic configurations. $\Gamma^{MCV}_{tot}$ denotes the overall $MCV$ rate into the $M$-shell after carrying out the appropriate statistics. Since sCK transitions are much faster than $MCV$s (cf. Table \[tab:sCKrates\]), we only consider “Coster-Kronig final states” as initial configurations in the DFT calculation. The transition rates are independent of the projectile velocity $v_p$, retaining their static values for all incident energies used in this work.
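The conversion from per-spin-state rates to actual configuration-to-configuration rates amounts to a multiplication by the number of empty final states; a minimal sketch (the full occupation statistics entering $\Gamma^{MCV}_{tot}$ are more involved than this simple product):

```python
# M-sublevel degeneracies (2(2l+1) states per sublevel)
DEGENERACY = {"3s": 2, "3p": 6, "3d": 10}

def effective_mcv_rate(gamma_per_spin, sublevel, n_occupied):
    """Actual MCV rate into a sublevel: per-spin-state rate times the
    number of unoccupied final states (vacancies) in that sublevel."""
    vacancies = DEGENERACY[sublevel] - n_occupied
    return gamma_per_spin * max(vacancies, 0)

# e.g. for an empty M-shell the 10 vacancies of the 3d level amplify the
# tabulated per-spin rate of 6.61e13 s^-1 to an effective 6.61e14 s^-1
```

This vacancy factor is the reason the highly degenerate $3d$ level dominates the $MCV$ filling as long as it exists.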
Table \[tab:MCVrates\] reveals that $\Gamma^{MCV}_{3d}$ assumes by far the highest values. Taking into account the high degeneracy of the $3d$ level, the effective rates $\Gamma^{MCV}_{3d}$ exceed $\Gamma^{MCV}_{3p}$ and $\Gamma^{MCV}_{3s}$ by more than one and two orders of magnitude, respectively. With increasing $n_M$, $MCV$ transfer into the $3p$ state accelerates, reaching the $\Gamma^{MCV}_{3d}$ values found at low $n_M$. This is important because for $n_M>4$ the $3d$ shell vanishes, so that $MCV$s into the $3p$ level constitute the most effective $M$-shell filling mechanism, eventually responsible for the formation of the dominant 211eV-peak.
Collisional filling {#sec:collisions}
-------------------
For projectile energies above 1 keV, sidefeeding into the HCI $M$-shell by direct electron transfer from target atom core levels supplies a velocity-dependent filling rate. The transfer cross section increases with the energetic vicinity of inner projectile and target states [@Gre95]; it is largest for the Ar$^{9+}$ $3s$ level paired with the $2p$ bulk level of Si at $E_b^{2p}=109$ eV (cf. Fig. \[fig:bind\]). Experimentally, a Si target $LMM$ Auger peak, directly connected to the vacancy transfer, can be observed in spectra with $E_{\mbox{kin}} \geq 1$ keV. For 2 keV projectiles traveling through a silicon crystal in (100)-direction, collisional filling supplies a $3s$ sidefeeding rate of $\Gamma_{3s}^{coll} = v_p/d = 1.8 \times 10^{14}$s$^{-1}$, assuming one electron transfer per collision. Within the energy range below 1 keV, collision frequencies are small and the distance of closest approach is too large, even for head-on collisions, to allow a sufficient level crossing for sidefeeding [@Gre95].
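The quoted value of $\Gamma_{3s}^{coll}$ is consistent with one transfer per lattice constant traversed. As a quick check, we assume $d$ equal to the Si lattice constant of 5.431 Å (an assumption; the text does not state $d$ explicitly):

```python
import math

EV = 1.602176634e-19     # J per eV
AMU = 1.66053906660e-27  # kg per atomic mass unit

def collisional_rate(e_kin_ev, mass_amu, d_m):
    """Sidefeeding rate v_p / d for one electron transfer per collision."""
    v_p = math.sqrt(2.0 * e_kin_ev * EV / (mass_amu * AMU))  # m/s
    return v_p / d_m

# 2 keV Ar along Si (100); d = 5.431 Angstrom (assumption)
rate = collisional_rate(2000.0, 39.948, 5.431e-10)  # ~1.8e14 s^-1
```

The projectile velocity at 2 keV is about $10^5$ m/s, i.e., the ion passes one lattice constant in roughly 5–6 fs.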
Simulation of the 121eV- and 2 keV-spectra {#sec:121eVspectrum}
------------------------------------------
In Fig. \[fig:Ar:8:9:Si:sim:exp\] we plot the experimental spectra from Fig. \[fig:Ar9Si:45deg:energy:normreg\] in three subplots and compare them with our simulation results, which are convolved with a Gaussian of 3eV width. In this section we discuss the Ar$^{9+}$ spectra and postpone the discussion of the Ar$^{8+}$ spectra displayed in the same plot to Section \[sec:Ar8spectra\]. The difference between the simulated spectra in (a) and (b) stems from collisional filling, which is exclusively enabled for $E_{\mbox{kin}}=2$ keV. In addition, we convolved the 2 keV-spectrum with an exponential function of 3 a.u. decay length to compensate for elastic and inelastic energy losses of electrons on their way through the bulk region. For $E_{\mbox{kin}} < 2$ keV, this damping becomes negligible due to the shallow projectile penetration.
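The two smoothing operations applied to the simulated spectra can be sketched as follows. This is a NumPy sketch, not the original post-processing: we treat the 3eV width as a FWHM, and we model the inelastic damping as a one-sided smearing toward lower energies with a hypothetical energy width, since the mapping of the 3 a.u. path-length decay onto an energy kernel is not spelled out here:

```python
import numpy as np

def gaussian_broaden(e, spec, fwhm_ev=3.0):
    """Convolve a simulated stick spectrum on the uniform grid e (eV)
    with a normalized Gaussian of the given FWHM."""
    sigma = fwhm_ev / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    de = e[1] - e[0]
    k = np.arange(-4.0 * sigma, 4.0 * sigma + de, de)
    g = np.exp(-0.5 * (k / sigma) ** 2)
    return np.convolve(spec, g / g.sum(), mode="same")

def lowside_exp_tail(e, spec, tail_ev=5.0):
    """One-sided exponential smearing toward lower energies, mimicking
    energy losses of escaping electrons (tail_ev is a placeholder width)."""
    de = e[1] - e[0]
    h = np.exp(-np.arange(int(6 * tail_ev / de)) * de / tail_ev)
    h /= h.sum()
    out = np.zeros_like(spec)
    for shift, w in enumerate(h):
        # move a fraction w of the intensity shift bins down in energy
        out[: len(spec) - shift] += w * spec[shift:]
    return out
```

Both kernels are normalized, so the integrated intensity of a peak far from the grid edges is conserved while its mean energy is lowered by the tail.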
The intensity ratios among the different $LMM$ subgroups and their peak positions are approximately reproduced. The $3pp$ region displays too much intensity, though; this might be caused by the $LMM$ rate fit formula (cf. Section \[sec:intraatomic\]) overestimating the $3pp$ rates for high $M$-populations, see also [@Lar71C] (Table VI). The $3ss$ intensity is clearly too low, suggesting that other transition types not considered in our model may contribute to this region. The enhancement of the $3sd$ peak, parallel to the disappearance of the $3pd$ peak and the intensity gain of the $3sp$ region towards the $E_{\mbox{kin}}=2$ keV-spectrum, can nicely be observed as a consequence of the collisional filling (cf. Table \[tab:intrarates\]). The average $M$-sublevel populations at the time of $LMM$ emission (cf. Table \[tab:intrarates\]) indicate that the high-energy shoulder is generated during the early subsurface interaction phase. The dominant $3pp$ peak, on the other hand, occurs at high $M$-populations, benefiting from the growing $MCV$ rates into the $3p$ level and the disappearance of the $3d$ level towards high $n_M$. The missing $3dd$ intensity confirms the presence of the fast $MMN$ and $MMM$ decay channels which inhibit the buildup of $3d$ populations larger than one.
In the experimental spectra, the low-energy tail displays much less structure than in the simulation, indicating that the mere spectator electron model might be incomplete. We carried out further simulations in which 20% of the $LMM$ transitions start out from singly ionized initial configurations, such that the peak regions lose part of their intensity to the low-energy side. In this way the intensity dip around 200eV is partially smoothed out and the low-energy tail stretches beyond 160eV. A similar effect could be induced by including L$_{2,3}MMM$ double Auger processes [@Abe75], for which Carlson and Krause [@Car65] measured a relative contribution of 10$\pm$2% to all radiationless transitions and energy shifts of more than 10eV [@Sie69]. For the sake of clarity of the displayed simulation results we did not implement this correction in Fig. \[fig:Ar:8:9:Si:sim:exp\].
Simulation for a statistical initial $M$-population {#sec:simulation}
---------------------------------------------------
It is surprising that reducing the incident energy from 121eV to 9eV still produces a significant shift in the relative peak intensities. Since velocity-dependent below-surface filling can be ruled out in this energy domain, the effect must originate from different subshell populations at the time of $LMM$ emission. Let us assume for the moment that the individual $M$-subshells of each particle are filled statistically at the first bulk layer (by a Poisson distribution which is cut off at the subshell degeneracy) according to their respective degeneracies, i.e., $\left<n_{3\ell}\right>$ = 2/18, 6/18 and 10/18 of the mean total $M$-shell population $\left<n_M\right>$ for the $3s$-, $3p$- and $3d$ level, respectively. In Fig. \[fig:Ar:8:9:Si:sim:exp\] we present results of a Monte Carlo simulation with $\left<n_M\right>=2$.
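The statistical initial population described above can be sampled as follows (a minimal sketch; the names are ours):

```python
import math
import random

# M-sublevel degeneracies, summing to 18
DEGENERACY = {"3s": 2, "3p": 6, "3d": 10}

def initial_m_population(mean_total=2.0, rng=random):
    """Draw {n_3s, n_3p, n_3d}: each sublevel is Poisson-distributed
    with mean g_l/18 * <n_M> and truncated at its degeneracy g_l."""
    pops = {}
    for lvl, g in DEGENERACY.items():
        lam = g / 18.0 * mean_total
        # Knuth's Poisson sampler, adequate for these small means
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        pops[lvl] = min(k, g)  # cut off at the subshell degeneracy
    return pops
```

Averaged over many draws this gives $\left<n_{3d}\right> \simeq 10/18 \cdot 2 \approx 1.1$, i.e., a statistical preference for the $3d$ level due to its degeneracy.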
For a large part of these initial configurations, new $M$-shell redistribution channels open up via $MMN$s and sCKs which are energetically forbidden for $n_M=0$ and which carry part of the $3d$ population immediately into the $3p$- rather than the $3s$ level. The simulations in Fig. \[fig:Ar:8:9:Si:sim:exp\](b,c) and Table \[tab:intrarates\] indeed reproduce the intensity shift from the $3sd$ peak to the $3pd$ peak at 232eV when going from $E_{\mbox{kin}}=$121eV to 9eV. We remark that this simple model of an initial $M$-shell population before bulk penetration does not hold exactly for the Ar$^{8+}$ simulation, where we set $n_{3s}=1$, $\left<n_{3p}\right>=1$ and $\left<n_{3d}\right>=1$. A physical motivation for the model is provided in Section \[sec:Ar8spectra\].
The evolution of the subsurface cascade {#sec:evolution}
=======================================
According to the experimental clues and arguments of Sections \[sec:observations\] and \[sec:grouping\], the overwhelming part of the high-energy branch originates from below-surface emission. For this phase, we designed the simulation presented in the previous section. In the following we describe the evolution of the subsurface cascade on the basis of the simulation results combined with the experimental data.
As the HCI penetrates into the crystal bulk region all electrons that have previously been captured into outer Rydberg levels will be lost and band electrons will neutralize the core charge over a distance of roughly the Debye screening length of the electron gas. Thus a second generation of hollow atoms emerges within the bulk.
Prior to any electron capture, the $O$-shell of the Ar$^{9+}$ core is the uppermost ionic shell that still fits below $V_0$. As long as not more than two electrons populate inner levels, solely XCV transitions (with X$\in${L, M, N, O}) can proceed. Since the XCV transition probability increases with the effective screening and degeneracy of the final level, XCVs preferably populate the $O$-shell. Before any significant NOO and MNO Auger emission can take place, the rapid XCV filling successively pushes the $O$- and $N$-shell above $V_0$. This period is accompanied by LCV, LNO, LMN transitions etc., creating the smoothly decreasing part of the spectrum above the $3pd$ edge. We note that this early phase of the neutralization may already start before complete bulk penetration, while the projectile travels through the vacuum tail of the valence band.
The loss of whole atomic shells into the valence band stops when the $M$-shell is reached. At this point of the scenario, a low $M$-shell population with a statistical preference for the $3d$ level (due to its high degeneracy) is likely to occur. $MMN$-CK processes transfer these $3d$ electrons quickly into the $3s$ level before a large $3d$ population can accumulate. Other $MMN$ transitions, $(3p)(3d)N$ and later $(3s)(3p)N$, are energetically forbidden. This $M$-shell redistribution is accelerated by fast sCK processes with rates of the order of $10^{15}$s$^{-1}$. Whereas $3sdd$ transitions are immediately possible, $MMM$ transitions into the $3p$ level require $n_M > 3$. During this early $M$-shell redistribution phase the $M$-population nevertheless remains fairly constant at $n_M \simeq 2+n_{3s}$, because one $M$-electron is lost in each $MMM$ process. Thus $3sd$-$LMM$ processes out of initial $3s^2d$ constellations are characteristic of this phase, causing the 224eV-peak in the experimental spectra. This phase lasts comparatively long because the condition $n_M \simeq 2$ keeps the $MCV$ rates (cf. Table \[tab:MCVrates\]) minimal.
The Ar$^{9+}$ core will always be surrounded by an induced VB charge cloud (C) because the number of bound states $n_b$ below $V_0$ is smaller than the projectile core charge $q=9$ (Fig. \[fig:bind\]). Hence $MCV$ processes continue to populate empty $M$-levels ever faster with increasing $n_M$. As soon as $n_M>3$ is satisfied, $3pdd$ sCKs become energetically possible and a $3p$ population builds up, while the $3d$ population remains at approximately one due to the presence of the $MMN$ and sCK decay channels. $3dd$ transitions require the transient formation of very unstable $M$-shell configurations that are unlikely to occur, so they do not appear in the spectra. At $n_M>4$, the $3d$ level vanishes into the valence band, thus interrupting further $3sd$- and $3pd$ emission. Since the $3pp$-$LMM$ transitions possess much higher rates than any other remaining $LMM$ transitions, they clearly prevail during this later stage of the subsurface interaction.
The dominant peak centered at 211eV for $E_{\mbox{kin}} \leq 121$ eV in Fig. \[fig:Ar9Si:45deg:energy:normreg\], corresponding to $3pp$ transitions with $n_M \geq 7$, provides evidence for the described mechanism, in particular for the high $MCV$ rates into the $3d$- and later the $3p$ level. The intensity gain of the $3sd$ peak with respect to the $3pd$ peak for high $E_{\mbox{kin}}$ is consistent with the greater time window of the former transition during the early interaction phase. This effect furthermore confirms the assumption of collisional sidefeeding into the $3s$ level and thereby the 224eV-peak assignment itself. All phases are accompanied by $3ss$- and $3sp$-$LMM$ transitions, which constitute the low-energy tail and the region around the two faint subpeaks between about 180eV and 200eV, respectively.
Spectra of metastable Ar$^{8+}$ projectiles {#sec:Ar8spectra}
==========================================
Seeking additional experimental evidence for the described Ar$^{9+}$ interaction mechanism, we performed a series of measurements with metastable ($2p^53s$) Ar$^{8+}$ ions colliding at $\Theta=45^\circ$ and various kinetic energies with a Si crystal (Fig. \[fig:Ar8Si:45deg:energy:normreg\]). A direct comparison with the corresponding Ar$^{9+}$ series in Fig. \[fig:Ar9Si:45deg:energy:normreg\] shows that the general shape of the spectra is unaffected by the additional $3s$ electron, except for a slight enhancement of the $3ss$- and $3sp$ intensities. In fact, the only new structure observed is a small peak arising at 247eV for $E_{\mbox{kin}}=8$ eV and, more generally, for the lowest perpendicular projectile velocities $v_\perp$, as can also be deduced from Fig. \[fig:Ar8Si:8eV:angle:normreg\].
The 247eV-peak has been discussed in detail in [@Duc97L], along with corresponding peaks which occur under similar conditions in the spectra of second row ions in $1s2s$ configurations. It can be assigned to so-called $LMV_W$ transitions, in the course of which the $3s$ electron jumps into the $2p$ vacancy. The emitted electron comes from a level whose binding energy equals the target work function, $W=4.6$ eV for silicon. Due to the shape of the subsurface potential $V_{\mbox{eff}}$ (Fig. \[fig:potential\]), these levels cannot exist once the projectile has penetrated into the bulk. As mentioned in Section \[sec:grouping\], the strong decrease in spectral intensity above 235eV gives evidence for this assertion.
The identification of an above-surface $LMV_W$ peak suggests that inner atomic shells X$\in${M, N, O, $\ldots$} could be partially filled before bulk penetration by an autoionization process XV$_W$V$_W$. We mentioned earlier that $MCV$ transitions also set in, with continuously increasing rates, as the HCI travels through the vacuum tail of the valence band. Compared to the $MCV$ filling within the bulk, however, these near-surface $M$-shell filling channels are likely to proceed significantly more slowly. Since sCK processes require certain minimum $M$-shell populations, they are largely inhibited for these constellations. One can thus expect that a very slow projectile enters the bulk region with a low $M$-shell population $\left< n_M \right> \simeq 2$ favoring the $3d$ level due to its degeneracy. This motivates the ansatz for the simulation of the spectra at minimum $E_{\mbox{kin}}$ in Section \[sec:simulation\], even though explicit experimental evidence is still missing.
The astonishing similarity of the rest of the Ar$^{8+}$ data to the Ar$^{9+}$ data bears out our previous assumption of fast $MCV$, $MMN$ and sCK processes within the bulk which swiftly redistribute any $M$-shell population into the $3s$ level. In order to compensate for the additional $3s$ electron in the Ar$^{8+}$ $M$-shell, sCKs have to proceed before an $LMM$ transition takes place. This automatically implies that the $M$-shell must be sufficiently populated and quickly replenished at this point. Because a large $M$-shell population far in front of the surface would contradict all previous experiments, we can exclude the above-surface zone as the origin of the emitted electrons. This obviously also holds for the $3pd$ peak at 232eV.
We made use of the close correspondence of the Ar$^{8+}$ and the Ar$^{9+}$ spectra to check the mechanisms and rates entering our interaction model. For the simulations of Ar$^{8+}$ projectiles, which are also shown in Fig. \[fig:Ar:8:9:Si:sim:exp\], we kept the same transition types and rates but added a $3s$ electron to the initial $M$-shell population. Within the accuracy of our interaction model, the similarity of the two series is well reproduced.
Summary and discussion {#sec:discussion}
======================
In this work we have presented detailed experimental results on the interaction of Ar$^{9+}$ and metastable Ar$^{8+}$ ions impinging on a Si(100) crystal, focusing on autoionization spectra measured at low impact energies. In this energy domain, we identified several new spectral features which vary with the perpendicular projectile velocity component and with the angles of incidence and observation. A consistent interaction model has been suggested, in which $MCV$ processes and the energetic vicinity of the Ar$^{9+}$ $3d$ subshell to the bottom of the silicon valence band play a decisive role.
The subsurface interaction phase has been simulated using a Monte Carlo code. Feeding the code with realistic transition rates, we have been able to reconstruct the experimental peak positions and intensity shifts for different projectile energies. Our results give indirect evidence for a very effective below-surface $MCV$ filling as postulated by theory. In contrast to $KLL$ spectra of hydrogenlike second row ions impinging on metal surfaces, the main intensity of the Ar$^{9+}$ $LMM$ spectra is located on the high-energy side of the peak region corresponding to a massively occupied $3p$ subshell. We demonstrated that this peculiar shape of the high-energy region is linked to the special role of the $3d$ subshell which mediates a fast $M$-shell filling in the beginning and later disappears due to the screening of the valence band electron gas.
We presented spectra measured at small observation angles with respect to the surface parallel. They contain a high-intensity peak region which most likely originates from Auger emission of incoming or reflected projectiles which do not yet experience the full bulk screening. In addition, we spotted a distinct peak in the Ar$^{8+}$ spectra for the lowest perpendicular incident velocities which can be explained by a unique above-surface process involving the $L$-vacancy and two electrons from the resonantly populated shells.
HCI beams have for some time been deemed a candidate for future surface modification techniques. It has been demonstrated that single ions can give rise to nanoscale features on certain surfaces [@Par95]. Sputter yields on insulators can also be significantly enhanced by using slow HCIs instead of fast singly charged projectiles. At very low kinetic energies, the energy deposition concentrates on a very small area which extends approximately one lattice constant around the first bulk layer. In this manner, an energy of several keV can be carried into this zone, where it might be converted into activation energy for processes like sputtering, crystal growth and surface catalysis. Research in this field is under way and first results have already been presented.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was sponsored by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie under Contract No. 13N6776/4. We are also grateful for support from the Ministerium für Wissenschaft und Forschung des Landes Nordrhein-Westfalen.
J. Burgdörfer, P. Lerner, and F. W. Meyer, Phys. Rev. A [**44**]{}, 5674 (1991).
J. Ducrée, F. Casali, and U. Thumm, accepted by Phys. Rev. A (1998).
F. W. Meyer, L. Folkerts, H. O. Folkerts, and S. Schippers, Nucl. Instrum. Methods Phys. Res., Sect. B [**98**]{}, 441 (1995).
S. Hatke, A. Hoffknecht, S. Hustedt, J. Limburg, I. G. Hughes, R. Hoekstra, W. Heiland, and R. Morgenstern, Nucl. Instrum. Methods Phys. Res., Sect. B [ **115**]{}, 165 (1996).
S. Schippers, J. Limburg, J. Das, R. Hoekstra, and R. Morgenstern, Phys. Rev. A [**50**]{}, 540 (1994).
F. W. Meyer, S. H. Overbury, C. D. Havener, P. A. [Zeijlmans van Emmichoven]{}, and D. M. Zehner, Phys. Rev. Lett. [**67**]{}, 723 (1991).
R. Köhrbrück, M. Grether, A. Spieler, N. Stolterfoht, R. Page, A. Saal, and J. Bleck-Neuhaus, Phys. Rev. A [**50**]{}, 1429 (1994).
S. Hustedt, J. Freese, S. Mähl, W. Heiland, S. Schippers, J. Bleck-Neuhaus, M. Grether, R. Köhrbrück, and N. Stolterfoht, Phys. Rev. A [**50**]{}, 4993 (1994).
J. Limburg, S. Schippers, I. Hughes, R. Hoekstra, R. Morgenstern, S. Hustedt, N. Hatke, and W. Heiland, Nucl. Instrum. Methods Phys. Res., Sect. B [ **98**]{}, 436 (1995).
R. Page, A. Saal, J. Thomaschewski, L. Aberle, J. Bleck-Neuhaus, R. Köhrbrück, M. Grether, and N. Stolterfoht, Phys. Rev. A [**52**]{}, 1344 (1995).
N. Stolterfoht, A. Arnau, M. Grether, R. Köhrbrück, A. Spieler, R. Page, A. Saal, J. Thomaschewski, and J. Bleck-Neuhaus, Phys. Rev. A [**52**]{}, 445 (1995).
J. Limburg, S. Schippers, R. Hoekstra, R. Morgenstern, H. Kurz, M. Vana, F. Aumayr, and H. Winter, Nucl. Instrum. Methods Phys. Res., Sect. B [**115**]{}, 237 (1996).
A. Arnau, P. A. [Zeijlmans van Emmichoven]{}, J. I. Juaristi, and E. Zaremba, Nucl. Instrum. Methods Phys. Res., Sect. B [**100**]{}, 279 (1995).
R. [Díez Mui[ñ]{}o]{}, A. Arnau, and P. M. Echenique, Nucl. Instrum. Methods Phys. Res., Sect. B [**98**]{}, 420 (1995).
R. [Díez Mui[ñ]{}o]{}, N. Stolterfoht, A. Arnau, A. Salin, and P. M. Echenique, Phys. Rev. Lett. [**76**]{}, 4636 (1996).
S. T. de Zwart, Nucl. Instrum. Methods Phys. Res., Sect. B [**23**]{}, 239 (1987).
L. Folkerts and R. Morgenstern, Journal de Physique [**Colloque C1, suppl. no 1**]{}, 541 (1989).
S. T. de Zwart, A. G. Drentje, A. L. Boers, and R. Morgenstern, Surf. Sci. [ **217**]{}, 298 (1989).
H. J. Andrä, A. Simionovici, T. Lamy, A. Brenac, G. Lamboley, S. Andriamonje, J. J. Bonnet, A. Fleury, M. Bonnefoy, M. Chassevent, and A. Pesnelle, Z. Phys. D [**21, suppl.**]{}, 135 (1991).
R. Köhrbrück, K. Sommer, J. P. Biersack, J. Bleck-Neuhaus, S. Schippers, P. Ronci, D. Lecler, F. Fremont, and N. Stolterfoht, Phys. Rev. A [**45**]{}, 4653 (1992).
H. J. Andrä, A. Simionovici, T. Lamy, A. Brenac, and A. Pesnelle, Europhys. Lett. [**23**]{}, 361 (1993).
L. Folkerts and R. Morgenstern, Europhys. Lett. [**13**]{}, 377 (1990).
J. Limburg, J. Das, S. Schippers, R. Hoekstra, and R. Morgenstern, Phys. Rev. Lett. [**73**]{}, 786 (1994).
H. Winter, C. Auth, R. Schuch, and E. Beebe, Phys. Rev. Lett. [**71**]{}, 1939 (1993).
J. F. Ziegler, J. P. Biersack, and U. Littmark, in [*The [S]{}topping and [R]{}ange of [I]{}ons in [S]{}olids*]{}, edited by J. F. Ziegler (Pergamon Press, New York, 1985), Vol. 1.
R. D. Cowan, [*The Theory of Atomic Structure and Spectra*]{} (University of California Press, Berkeley, 1981).
A. Arnau, R. Köhrbrück, M. Grether, A. Spieler, and N. Stolterfoht, Phys. Rev. A [**51**]{}, R3399 (1995).
M. H. Kalos and P. A. Whitlock, [*Monte [C]{}arlo [M]{}ethods*]{} (John Wiley & Sons, New York, 1986), Vol. I: Basics.
F. P. Larkins, J. Phys. B [**4**]{}, L29 (1971).
G. B. Armen and F. P. Larkins, J. Phys. B [**24**]{}, 741 (1991).
M. Grether, A. Spieler, R. Köhrbrück, and N. Stolterfoht, Phys. Rev. A [ **52**]{}, 426 (1995).
T. [Å]{}berg, in [*Atomic Inner-Shell Processes I: Ionization and Transition Probabilities*]{}, edited by B. Crasemann (Academic Press, New York, 1975), Chap. 9, pp. 353-375.
T. A. Carlson and M. O. Krause, Bull. Amer. Phys. Soc. [**10**]{}, 455 (1965).
K. Siegbahn, C. Nordling, A. Fahlman, R. Nordberg, K. Hamrin, J. Hedman, G. Johansson, T. Bergmark, L. O. Werme, R. Manne, and Y. Baer, [*ESCA Applied to Free Molecules*]{} (North-Holland, Amsterdam, 1969).
J. Ducrée, J. Mrogenda, E. Reckels, M. Rüther, A. Heinen, Ch. Vitt, M. Venier, J. Leuker, and H. J. Andrä, submitted to Phys. Rev. Lett. (unpublished).
D. C. Parks, R. Bastasz, R. W. Schmieder, and M. Stöckli, J. Vac. Sci. Technol. B [**13**]{}, 941 (1995).
process $\Gamma^{\mbox{filled}}_{3\ell_1\ell_2}$ $E_{\mbox{kin}}=9$ eV ($\left<n_M \right>=2$) $E_{\mbox{kin}}=121$ eV $E_{\mbox{kin}}=2$ keV
--------- ------------------------------------------ ----------------------------------------------- ------------------------- -------------------------
$3ss$ $3.31 \times 10^{12}$ 0.8% (2.0$|$4.5$|$0.1) 1.0% (2.0$|$4.3$|$0.1) 2.0% (2.0$|$4.1$|$0.1)
$3sp$ $5.29 \times 10^{13}$ 15.9% (1.6$|$5.1$|$0.0) 17.1% (1.7$|$5.1$|$0.0) 22.1% (2.0$|$5.0$|$0.0)
$3pp$ $1.98 \times 10^{14}$ 72.5% (1.4$|$5.5$|$0.0) 70.0% (1.5$|$5.5$|$0.0) 66.8% (2.0$|$5.4$|$0.0)
$3sd$ $6.20 \times 10^{14}$ 5.3% (1.2$|$0.5$|$1.4) 7.2% (1.2$|$0.4$|$1.5) 7.4% (2.0$|$0.3$|$1.4)
$3pd$ $1.65 \times 10^{15}$ 4.6% (0.6$|$1.5$|$1.4) 3.7% (0.8$|$1.4$|$1.4) 1.4% (1.5$|$1.2$|$1.2)
$3dd$ $4.13 \times 10^{14}$ 0.8% (0.4$|$0.4$|$2.4) 1.0% (0.5$|$0.2$|$2.3) 0.4% (1.2$|$0.1$|$2.2)
: Monte Carlo simulation results on $LMM$ processes for Ar$^{9+}$ impinging on Si(100) with $E_{\mbox{kin}}=9$ eV, 121eV and 2 keV. $\Gamma^{\mbox{filled}}_{3\ell_1\ell_2}$ gives the $LMM$ rate for a filled $M$-shell as required for the implemented fit formula [@Lar71C]. For each simulation, we list the relative intensity and, in brackets, the average ($n_{3s}|n_{3p}|n_{3d}$)-configuration at the time of $LMM$ decay which provides information about the evolution of the subsurface cascade.[]{data-label="tab:intrarates"}
process $E_{\mbox{kin}}=9$ eV ($\left<n_M \right>=2$) $E_{\mbox{kin}}=121$ eV $E_{\mbox{kin}}=2$ keV
--------- ----------------------------------------------- ------------------------- -------------------------
$3sdd$ 66.2% (0.2$|$0.4$|$2.4) 81.8% (0.2$|$0.2$|$2.4) 11.7% (1.0$|$0.1$|$2.2)
$3pdd$ 16.3% (1.0$|$0.2$|$2.8) 21.0% (1.0$|$0.2$|$1.5) 17.9% (1.5$|$0.1$|$2.4)
: Monte Carlo simulation results on $MMM$ processes for Ar$^{9+}$ impinging on Si(100) with $E_{\mbox{kin}}=9$ eV, 121 eV, and 2 keV. The table lists the average occurrence of each transition type and, in brackets, the average $M$-sublevel population at the time of $MMM$ emission. Other sCK transitions are energetically forbidden.[]{data-label="tab:sCKrates"}
$n_M$ $\Gamma^{MCV}_{3s}$ \[s$^{-1}$\] $\Gamma^{MCV}_{3p}$ \[s$^{-1}$\] $\Gamma^{MCV}_{3d}$ \[s$^{-1}$\] $\Gamma^{MCV}_{tot}$ \[s$^{-1}$\]
------- ---------------------------------- ---------------------------------- ---------------------------------- -----------------------------------
0 $9.92 \times 10^{11}$ $2.07 \times 10^{12}$ $6.61 \times 10^{13}$ $8.10 \times 10^{14}$
1 $1.21 \times 10^{13}$ $2.54 \times 10^{13}$ $9.18 \times 10^{13}$ $1.08 \times 10^{15}$
2 - $3.26 \times 10^{13}$ $1.22 \times 10^{14}$ $1.41 \times 10^{15}$
3 - $4.46 \times 10^{13}$ $1.44 \times 10^{14}$ $1.70 \times 10^{15}$
4 - $6.53 \times 10^{14}$ - $2.61 \times 10^{15}$
5 - $5.78 \times 10^{14}$ - $1.74 \times 10^{15}$
6 - $4.48 \times 10^{14}$ - $9.17 \times 10^{14}$
7 - $3.27 \times 10^{14}$ - $3.27 \times 10^{14}$
: $MCV$ rates for the Ar$^{9+}$/Si system. The table lists $MCV$ transition rates per spin state $\Gamma^{MCV}_{3\ell}$ for each $M$-sublevel and the overall $MCV$ rate $\Gamma^{MCV}_{tot}$ taking into account occupation statistics as evaluated by DFT calculations. $n_M$ gives the initial number of $M$-electrons. The rates refer to initial $M$-shell ground state configurations. For $n_M=0$, $MCV$ processes filling the $3d$ level possess by far the highest rates. As the subsurface cascade proceeds and $n_M>4$, the $3d$ level vanishes and the $MCV$s into the $3p$ level rapidly populate the $M$-shell.[]{data-label="tab:MCVrates"}
[^1]: Author to whom correspondence should be addressed. Electronic address: ducree@uni-muenster.de
[^2]: There has obviously been a mistake in the calibration of the plot on the energy axis that has been corrected in [@Fol89].
---
abstract: 'Algorithm configuration methods optimize the performance of a parameterized heuristic algorithm on a given distribution of problem instances. Recent work introduced an algorithm configuration procedure (“Structured Procrastination”) that provably achieves near optimal performance with high probability and with nearly minimal runtime in the worst case. It also offers an *anytime* property: it keeps tightening its optimality guarantees the longer it is run. Unfortunately, Structured Procrastination is not *adaptive* to characteristics of the parameterized algorithm: it treats every input like the worst case. Follow-up work (“LeapsAndBounds”) achieves adaptivity but trades away the anytime property. This paper introduces a new algorithm, “Structured Procrastination with Confidence”, that preserves the near-optimality and anytime properties of Structured Procrastination while adding adaptivity. In particular, the new algorithm will perform dramatically faster in settings where many algorithm configurations perform poorly. We show empirically both that such settings arise frequently in practice and that the anytime property is useful for finding good configurations quickly.'
author:
- |
Robert Kleinberg\
Department of Computer Science\
Cornell University\
`rdk@cs.cornell.edu` Kevin Leyton-Brown\
Department of Computer Science\
University of British Columbia\
`kevinlb@cs.ubc.ca` Brendan Lucier\
Microsoft Research\
`brlucier@microsoft.com` Devon Graham\
Department of Computer Science\
University of British Columbia\
`drgraham@cs.ubc.ca`
bibliography:
- 'main.bib'
title: 'Procrastinating with Confidence: Near-Optimal, Anytime, Adaptive Algorithm Configuration'
---
Introduction
============
Algorithm configuration is the task of searching a space of *configurations* of a given algorithm (typically represented as joint assignments to a set of algorithm parameters) in order to find a single configuration that optimizes a performance objective on a given distribution of inputs. In this paper, we focus exclusively on the objective of minimizing average runtime. Considerable progress has recently been made on solving this problem in practice via general-purpose, heuristic techniques such as ParamILS [@hutter-aaai07a; @hutter-jair09a], GGA [@ansotegui-cp09a; @ansotegui-ijcai15a], irace [@birattari-gecco02a; @lopez-ibanez-tech11a] and SMAC [@hutter-bayesopt11; @hutter-lion11a]. Notably, in the context of this paper, all these methods are *adaptive*: they surpass their worst-case performance when presented with “easier” search problems.
Recently, algorithm configuration has also begun to attract theoretical analysis. While there is a large body of less-closely related work that we survey in Section \[sec:related\], the first nontrivial worst-case performance guarantees for general algorithm configuration with an average runtime minimization objective were achieved by a recently introduced algorithm called *Structured Procrastination (SP)* [@ijcai17]. This work considered a worst-case setting in which an adversary causes every deterministic choice to play out as poorly as possible, but where observations of random variables are unbiased samples. It is straightforward to argue that, in this setting, any fixed, deterministic heuristic for searching the space of configurations can be extremely unhelpful. The work therefore focuses on obtaining candidate configurations via random sampling (rather than, e.g., following gradients or taking the advice of a response surface model). Besides its use of heuristics, SMAC also devotes half its runtime to random sampling. Any method based on random sampling will eventually encounter the optimal configuration; the crucial question is the amount of time that this will take. The key result of @ijcai17 is that SP is guaranteed to find a near-optimal configuration with high probability, with worst-case running time that nearly matches a lower bound on what is possible and that asymptotically dominates that of existing alternatives such as SMAC.
Unfortunately, there is a fly in the ointment: SP turns out to be impractical in many cases, taking an extremely long time to run even on inputs that existing methods find easy. At the root, the issue is that SP treats every instance like the worst case, in which it is necessary to achieve a fine-grained understanding of every configuration’s runtime in order to distinguish between them. For example, if every configuration is very similar but most are not quite ${\varepsilon}$-optimal, subtle performance differences must be identified. SP thus runs every configuration enough times that with high probability the configuration’s runtime can accurately be estimated to within a $1+{\varepsilon}$ factor.
<span style="font-variant:small-caps;">LeapsAndBounds</span> and <span style="font-variant:small-caps;">CapsAndRuns</span> {#sec:beyond}
--------------------------------------------------------------------------------------------------------------------------
@weisz2018leapsandbounds introduced a new algorithm, <span style="font-variant:small-caps;">LeapsAndBounds (LB)</span>, that improves upon Structured Procrastination in several ways. First, LB improves upon SP’s worst-case performance, matching its information-theoretic lower bound on running time by eliminating a log factor. Second, LB does not require the user to specify a runtime cap that they would never be willing to exceed on any run, replacing this term in the analysis with the runtime of the optimal configuration, which is typically much smaller. Third, and most relevant to our work here, LB includes an adaptive mechanism, which takes advantage of the fact that when a configuration exhibits low variance across instances, its performance can be estimated accurately with a smaller number of samples. However, the easiest algorithm configuration problems are probably those in which a few configurations are much faster on average than all other configurations. (Empirically, many algorithm configuration instances exhibit just such non-worst-case behaviour; see our empirical investigation in the Supplementary Materials.) In such cases, it is clearly unnecessary to obtain high-precision estimates of each bad configuration’s runtime; instead, we only need to separate these configurations’ runtimes from that of the best alternative. LB offers no explicit mechanism for doing this. LB also has a key disadvantage when compared to SP: it is not anytime, but instead must be given fixed values of ${\varepsilon}$ and $\delta$. Because LB is adaptive, there is no way for a user to anticipate the amount of time that will be required to prove $({\varepsilon},\delta)$-optimality, forcing a tradeoff between the risks of wasting available compute resources and of having to terminate LB before it returns an answer.
<span style="font-variant:small-caps;">CapsAndRuns (CR)</span> is a refinement of LB that was developed concurrently with the current paper; it has not been formally published, but was presented at an ICML 2018 workshop [@weisz2018capsandruns]. CR maintains all of the benefits of LB, and furthermore introduces a second adaptive mechanism that does exploit variation in configurations’ mean runtimes. Like LB, it is not anytime.
Our Contributions
-----------------
Our main contribution is a refined version of SP that maintains the anytime property while aiming to observe only as many samples as necessary to separate the runtime of each configuration from that of the best alternative. We call it “Structured Procrastination with Confidence” (SPC). SPC differs from SP in that it maintains a novel form of lower confidence bound as an indicator of the quality of a particular configuration, while SP simply uses that configuration’s sample mean. The consequence is that SPC spends much less time running poorly performing configurations, as other configurations quickly appear better and receive more attention. We initialize each lower bound with a trivial value: each configuration’s runtime is bounded below by the fastest possible runtime, $\kappa_0$. SPC then repeatedly evaluates the configuration that has the most promising lower bound.[^1] We perform these runs by “capping” (censoring) runs at progressively doubling multiples of $\kappa_0$. If a run does not complete, SPC “procrastinates”, deferring it until it has exhausted all runs with shorter captimes. Eventually, SPC observes enough completed runs of some configuration to obtain a nontrivial upper bound on its runtime. At this point, it is able to start drawing high-probability conclusions that other configurations are worse.
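The loop described above can be made concrete with a short sketch. The following Python code is only an illustration of the SPC idea — optimistic selection by lower confidence bound, doubling captimes, and "procrastination" on censored runs — not the procedure analyzed in the paper; the helper names (`spc_sketch`, `run`) and the particular confidence-bound formula are our own assumptions.

```python
import math

def spc_sketch(configs, run, kappa0, budget):
    """Toy sketch of the SPC loop (illustrative, not the authors' procedure).

    run(i, j, cap) returns the runtime of configuration i on instance j,
    censored at cap (it returns cap when the run does not finish).
    kappa0 is a lower bound on any runtime; budget limits total time spent.
    """
    state = {i: {"j": 0, "cap": 2 * kappa0, "times": [], "tries": 0}
             for i in configs}

    def lcb(i):
        # Lower confidence bound on the mean runtime.  With no completed
        # runs it is the trivial bound kappa0; the shrinkage term below is
        # a crude stand-in for the paper's confidence-bound construction.
        t = state[i]["times"]
        if not t:
            return kappa0
        mean = sum(t) / len(t)
        return max(kappa0, mean * (1 - 1 / math.sqrt(len(t))))

    spent = 0.0
    while spent < budget:
        # Optimism: run the configuration with the most promising (smallest)
        # lower bound, breaking ties toward the least-tried configuration.
        i = min(configs, key=lambda c: (lcb(c), state[c]["tries"]))
        s = state[i]
        t = run(i, s["j"], s["cap"])
        spent += t
        s["tries"] += 1
        if t < s["cap"]:          # run completed: record the observation
            s["times"].append(t)
            s["j"] += 1
        else:                     # censored: procrastinate, doubling the cap
            s["cap"] *= 2
    completed = {i: s["times"] for i, s in state.items() if s["times"]}
    return min(completed, key=lambda i: sum(completed[i]) / len(completed[i]))
```

On a toy input with two configurations of true mean runtimes 1 and 10, the sketch concentrates its runs on the faster configuration and returns it, since the slower one's lower bound quickly exceeds the leader's.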
Our paper is focused on a theoretical analysis of SPC. We show that it identifies an approximately optimal configuration using running time that is nearly the best possible in the worst case; however, so does SP. The key difference, and the subject of our main theorem, is that SPC also exhibits near-minimal runtime beyond the worst case, in the following sense. Define an $({\varepsilon},\delta)$-suboptimal configuration to be one whose average runtime exceeds that of the optimal configuration by a factor of more than $1+{\varepsilon}$, even when the suboptimal configuration’s runs are capped so that a $\delta$ fraction of them fail to finish within the time limit. A straightforward information-theoretic argument shows that in order to verify that a configuration is $({\varepsilon},\delta)$-suboptimal it is sufficient—and may also be necessary, in the worst case—to run it for $O({\varepsilon}^{-2} \cdot \delta^{-1} \cdot {\text{OPT}})$ time. The running time of SPC matches (up to logarithmic factors) the running time of a hypothetical “optimality verification procedure” that knows the identity of the optimal configuration, and for each suboptimal configuration $i$ knows a pair $({\varepsilon}_i,\delta_i)$ such that $i$ is $({\varepsilon}_i,\delta_i)$-suboptimal and the product ${\varepsilon}_i^{-2} \cdot \delta_i^{-1}$ is as small as possible.
SPC is anytime in the sense that it first identifies an $({\varepsilon},\delta)$-optimal configuration for large values of ${\varepsilon}$ and $\delta$ and then continues to refine these values as long as it is allowed to run. This is helpful for users who have difficulty setting these parameters up front, as already discussed. SPC’s strategy for progressing iteratively through smaller and smaller values of ${\varepsilon}$ and $\delta$ also has another advantage: it is actually faster than starting with the “final” values of ${\varepsilon}$ and $\delta$ and applying them to each configuration. This is because extremely weak configurations can be dismissed cheaply based on large $({\varepsilon}, \delta)$ values, instead of taking more samples to estimate their runtimes more finely.
Other Related Work {#sec:related}
------------------
There is a large body of related work in the multi-armed bandits literature, which does not attack quite the same problem but does similarly leverage the “optimism in the face of uncertainty” paradigm and many tools of analysis [@lai1985asymptotically; @auer2002finite; @bubeck2012regret]. We do not survey this work in detail as we have little to add to the extensive discussion by @ijcai17, but we briefly identify some dominant threads in that work. Perhaps the greatest contact between the communities has occurred in the sphere of hyperparameter optimization [@bergstra2011algorithms; @thornton2013auto; @li2016hyperband] and in the literature on bandits with correlated arms that scale to large experimental design settings [@kleinberg2006anytime; @kleinberg2008multi; @chaudhuri2009parameter; @bubeck2011x; @srinivas2012information; @cesa2012combinatorial; @munos2014bandits; @shahriari2016taking]. In most of this literature, all arms have the same, fixed cost; others [@guha2007approximation; @tran2012knapsack; @badanidiyuru2013bandits] consider a model where costs are variable but always paid in full. (Conversely, in algorithm configuration we can stop runs that exceed a captime, yielding a potentially censored sample at bounded cost.) Some influential departures from this paradigm include @kandasamy2016multi, @ganchev2010censored, and most notably @li2016hyperband; reasons why these methods are nevertheless inappropriate for use in the algorithm configuration setting are discussed at length by @ijcai17.
Recent work has examined the learning-theoretic foundations of algorithm configuration, inspired in part by an influential paper of @gupta2017pac that framed algorithm configuration and algorithm selection in terms of learning theory. This vein of work has not aimed at a general-purpose algorithm configuration procedure, as we do here, but has rather sought sample-efficient, special-purpose algorithms for particular classes of problems, including combinatorial partitioning problems (clustering, max-cut, etc) [@balcan2017learning], branching strategies in tree search [@balcan2018learning], and various algorithm selection problems [@Vitercik2018]. Nevertheless, this vein of work takes a perspective similar to our own and demonstrates that algorithm configuration has moved decisively from being solely the province of heuristic methods to being a topic for rigorous theoretical study.
Model
=====
We define an algorithm configuration problem by the 4-tuple $(N, \Gamma, R, \kappa_0)$, where these elements are defined as follows. $N$ is a family of (potentially randomized) algorithms, which we call *configurations* to suggest that a single piece of code instantiates each algorithm under a different parameter setting. We do not assume that different configurations exhibit any sort of performance correlations, and so can capture the case of $n$ distinct algorithms by imagining a “master algorithm” with a single, $n$-valued categorical parameter. Parameters are allowed to take continuous values: $|N|$ can be uncountable. We typically use $i$ to index configurations. $\Gamma$ is a probability distribution over input instances. When the instance distribution is given implicitly by a finite benchmark set, let $\Gamma$ be the uniform distribution over this set. We typically use $j$ to index (input instance, random seed) pairs, to which we will hereafter refer simply as instances. $R(i,j)$ is the execution time when configuration $i \in N$ is run on input instance $j$. Given some value of $\theta > 0$, we define $R(i,j,\theta) = \min\{R(i,j), \theta\}$, the runtime capped at $\theta$. $\kappa_0 > 0$ is a constant such that $R(i,j) \geq \kappa_0$ for all configurations $i$ and inputs $j$.
For any timeout threshold $\theta$, let $R_\theta(i) = {\mathrm{E}}_{j \sim \Gamma}[R(i,j,\theta)]$ denote the average $\theta$-capped running time of configuration $i$, over input distribution $\Gamma$. Fixing some running time $\bar{\kappa} = 2^{\beta} \kappa_0$ that we will never be willing to exceed, the quantity $R_{\bar{\kappa}}(i)$ corresponds to the expected running time of configuration $i$ and will be denoted simply by $R(i)$. We will write $OPT = \min_i R(i)$. Given $\epsilon > 0$, a goal is to find $i^* \in N$ such that $R(i^*) \leq (1+\epsilon) OPT$. We also consider a relaxed objective, where the running time of $i^*$ is [*capped*]{} at some threshold value $\theta$ for some small fraction of (instance, seed) pairs $\delta$.
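The empirical analogues of the $\theta$-capped mean runtime $R_\theta(i)$ and of the fraction of capped runs are simple to compute from a sample of observed runtimes. A minimal Python sketch (helper names are ours, not the paper's):

```python
def capped_mean(runtimes, theta):
    """Empirical analogue of R_theta(i): the mean of min(R(i,j), theta)."""
    return sum(min(t, theta) for t in runtimes) / len(runtimes)

def censored_fraction(runtimes, theta):
    """Empirical fraction of instances whose runs exceed the cap theta."""
    return sum(t > theta for t in runtimes) / len(runtimes)
```

For example, runtimes `[1, 2, 10]` capped at `theta = 4` yield a capped mean of $(1+2+4)/3$ with one run in three censored.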
\[def:eps-delta-opt\] A configuration $i^*$ is *$(\epsilon,\delta)$-optimal* if there exists some threshold $\theta$ such that $R_{\theta}(i^*) \leq
(1+\epsilon) OPT$, and $\Pr_{j \sim \Gamma} \big( R(i^*,j) > \theta \big) \leq \delta$. Otherwise, we say $i^*$ is *$(\epsilon, \delta)$-suboptimal*.
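One might test this definition on a finite sample as follows; since the capped mean is nondecreasing in $\theta$, the smallest cap compatible with $\Pr(R > \theta) \leq \delta$ — the empirical $(1-\delta)$-quantile — is the only $\theta$ that needs checking. The quantile convention and the helper name below are our own illustrative choices.

```python
import math

def is_eps_delta_optimal(runtimes, opt, eps, delta):
    """Sketch of a finite-sample check of (eps, delta)-optimality.

    Takes theta to be the empirical (1 - delta)-quantile, the smallest cap
    for which at most a delta fraction of sampled runs are censored, and
    tests whether the theta-capped mean is within (1 + eps) of opt.
    """
    ts = sorted(runtimes)
    # Smallest index k such that at most delta * n samples exceed ts[k].
    k = max(0, math.ceil((1 - delta) * len(ts)) - 1)
    theta = ts[k]
    return sum(min(t, theta) for t in ts) / len(ts) <= (1 + eps) * opt
```

With runtimes `[1, 1, 1, 100]` and `opt = 1`, the check passes for $(\epsilon,\delta)=(0.1, 0.25)$ — the one slow run may be capped — but fails for $\delta = 0$, where no censoring is allowed.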
[^1]: While both SPC and CR use confidence bounds to guide search, they take different approaches. Rather than rejecting configurations whose lower bounds get too large, SPC focuses on configurations with small lower bounds. By allocating a greater proportion of total runtime to such promising configurations we both improve the bounds for configurations about which we are more uncertain and allot more resources to configurations with relatively low mean runtimes about which we are more confident.
---
abstract: 'We propose a new massive integrable model in quantum field theory. This model is obtained as a perturbed model of the minimal conformal field theories on the hyper-elliptic surfaces by a particular relevant operator $V_{(1,1)}^{(t)}$. The non-local conserved charges of the model and their $q$-deformed algebra are also constructed explicitly.'
author:
- |
S. A. Apikyan$^\dagger$\
Theoretical Physics Department\
Yerevan Physics Institute\
Alikhanyan Br.st.2, Yerevan, 375036 Armenia\
\
C. J. Efthimiou$^\ddagger$\
Newman Laboratory of Nuclear Studies\
Cornell University\
Ithaca, NY 14853-5001, USA
title: |
$V_{(1,1)}^{(t)}$ -PERTURBED MODELS OF CFT\
AND\
THEIR QUANTUM GROUP SYMMETRY
---
------------------------------------------------------------------------
\
$\dagger$ e-mail address: apikyan@vx2.yerphi.am\
$\ddagger$ e-mail address: costas@hepth.cornell.edu
Introduction
============
In recent years, essential progress has been achieved in the investigation of integrable quantum field theories. Such success owes much to the fact that these models are characterized by infinite-dimensional Hopf algebra symmetries, known as affine quantum group symmetries. These symmetries are generated by non-local conserved currents which in many cases can be constructed explicitly. Such an approach to quantum field theory makes it possible to obtain non-perturbative solutions using algebraic methods [@smi]-[@bab]. The situation is analogous to the one in Conformal Field Theory (CFT). In particular, in CFT, as a result of the infinite-dimensional Virasoro algebra (or other extended algebras), exact solutions are successfully obtained with the help of the Ward identities [@BPZ].
Explicit currents that generate a $q$-deformation of affine Kac-Moody algebras [@dri],[@jim] were constructed for the Sine-Gordon theory and its generalization to imaginary coupling affine Toda theory in [@BL], and shown to completely characterize the $S$-matrices. At special values of the coupling where these quantum field theories have ordinary Lie group $G$ invariance, the quantum affine symmetry becomes the $G$-Yangian symmetry [@ber],[@lus].
The affine quantum group invariance fixes the $S$-matrices up to overall scalar factors, which in turn can be fixed using crossing symmetry, unitarity and analyticity. These quantum group invariant $S$-matrices, which are the specializations of the $R$-matrices satisfy the Yang-Baxter equation.
In the present work, a series of new integrable models is identified and their $q$-deformed structure is studied. The organization of the paper is as follows. In section \[hyperelliptic-surfaces\], a brief description is given of the minimal conformal models on hyper-elliptic surfaces, which can be represented as two-sheet coverings of a ramified sphere. In section \[new-model\], a model of perturbed CFT is proposed; the relevant perturbation is the highest weight vector of the Virasoro algebra at the branching points. The characters of this model are calculated and the existence of an infinite series of Integrals of Motion (IMs) is proved; the integrability of the model is thus established. Furthermore, the $\beta$-function of the model is calculated and it is shown that the theory is massive. In the last section, section \[nonlocal-charges\], the non-local currents are constructed. These are related by non-trivial braiding relations which lead to the $q$-deformed algebra of the conserved charges of the model.
CFT on Hyper-Elliptic Surfaces {#hyperelliptic-surfaces}
==============================
Conformal field theories on compact Riemann surfaces, and in particular on hyper-elliptic surfaces, have been considered by many authors. One of the pioneering works on hyper-elliptic surfaces was Zamolodchikov’s work for the Ashkin-Teller models [@zam87]; another important contribution was Knizhnik’s work [@kni] on two-loop calculations in string theory. Finally, in [@CSS], the minimal models on hyper-elliptic surfaces were thoroughly discussed.
Let $\Gamma$ be a compact Riemann surface of genus $g\geq 1$. If $\Gamma$ is a Riemann surface of an algebraic function $y=y(z)$ given by the equation $$R(y,z)=y^{n}+a_{1}(z)y^{n-1}+\ldots+a_{n}(z)=0~,$$ where $R(y,z)$ is a polynomial of the form shown above, then the affine part of $\Gamma$ coincides with the complex algebraic curve (1,1) in ${\Bbb C}^2$ in case this curve is ordinary (smooth). Of special importance to us is the example of hyper-elliptic curves given by equations of the form $$\label{form1}
y^2=P_{2g+1}(z)~,$$ or $$\label{form2}
y^2=P_{2g+2}(z)~,$$ where $P_h(z),~h=2g+1,2g+2,$ is a polynomial of degree $h$ without roots of multiplicity $h$. In both cases, the genus of the corresponding Riemann surface is $g$. It is noteworthy that any Riemann surface of genus $g=1$ or $g=2$ has a representation in one of the forms [(\[form1\])]{} or [(\[form2\])]{}, while the same statement is not true for surfaces of genus $g=3$. We label the two sheets of the Riemann surface $\Gamma$ by the numbers $l=0,1$: $$y^{(l)}(z)= e^{i\pi l}\,P_h^{1/2}(z)=
e^{i\pi l}\,\prod_{i=1}^h\, (z-w_i)^{1/2}~.$$ Let $A_a,\, B_a,~a=1,2,\dots,g$ be the basic cycles of the surface. As we encircle the point $w_i$ along the contours $A_a,\, B_a$, in the case of an $A_a$ cycle we stay on the same sheet, while in the case of a $B_a$ cycle we pass from the $l$-th sheet to the $(l+1)$-th one. We shall denote the process of encircling the points $w_i$ on the cycles $A_a, \, B_a$ by the symbols $\hat{\pi}_{A_a}$, $\hat{\pi}_{B_a}$ respectively. These generators form the monodromy group, which in our case of a two-sheet covering of the sphere coincides with the ${\Bbb Z}_{2}$ group.
We consider the energy-momentum tensor with representation $T^{(l)}(z)$ on each of these sheets. The above definition of the monodromy properties along the cycles $A_a,~B_a$ implies that the following boundary conditions should be satisfied by the energy-momentum tensor: $$\hat{\pi}_{A_{a}}T^{(l)}=T^{(l)} ,\quad \hat{\pi}_{B_{a}}T^{(l)}=T^{(l+1)}~.$$ It is convenient to pass to a basis, in which the operators $\hat{\pi}_{A_a}$, $\hat{\pi}_{B_a}$ are diagonal $$\begin{aligned}
T=T^{(0)}+T^{(1)}~,&&\quad T^{-}=T^{(0)}-T^{(1)}~,\\
\hat{\pi}_{A_{a}}T=T~,&& \quad \hat{\pi}_{A_{a}}T^{-}=T^{-}~,
\label{BC1}\\
\hat{\pi}_{B_{a}}T=T~,&& \quad \hat{\pi}_{B_{a}}T^{-}=-T^{-}~.
\label{BC2}\end{aligned}$$ The corresponding operator product expansions (OPEs) of the $T,~T^-$ fields can be determined by taking into account the OPEs of $T^{(l)},~T^{(l')}$. On the same sheet, the OPEs of the two fields $T^{(l)}(z_{1})T^{(l)}(z_{2}),$ are the same as that on the sphere, while on different sheets they do not correlate, i.e. $T^{(l)}(z_{1})T^{(l+1)}(z_{2})\sim {\rm reg}$. Thus, in the diagonal basis the OPEs can be found to be $$\begin{aligned}
T(z_{1})T(z_{2})&=&{c\over 2\,z_{12}^4}+
{2\,T(z_2)\over z_{12}^2}+
{T'(z_2)\over z_{12}} + {\rm reg}~,
\label{OPE1} \\
T^{-}(z_1)T^{-}(z_{2})&=&{c\over 2\,z_{12}^4}+
{2\,T(z_2)\over z_{12}^2}+
{T'(z_2)\over z_{12}} + {\rm reg}~,
\label{OPE2}\\
T(z_1)T^{-}(z_2)&=&{2\over z_{12}^2}\,T^{-}(z_2)+
{T'^{-}(z_2)\over z_{12}}+ {\rm reg}~,
\label{OPE3}\end{aligned}$$ where $c=2\hat{c}$, and $\hat{c}$ is the central charge in the OPE of $T^{(l)}(z_{1})T^{(l)}(z_{2})$. It is seen from [(\[OPE3\])]{} that $T^-$ is primary field with respect to $T$. To write the algebra [(\[OPE1\])]{}-[(\[OPE2\])]{} in the graded form we determine the mode expansion of $T$ and $T^-$: $$\begin{aligned}
T(z)V_{(k)}(0)&=&\sum_{n\in {\Bbb Z}}\, z^{n-2}L_{-n}V_{(k)}(0)~,\\
T^-(z)V_{(k)}(0)&=&\sum_{n\in {\Bbb Z}}\, z^{n-2-k/2}L_{-n+k/2}^-V_{(k)}(0)~,\end{aligned}$$ where $k$ ranges over the values 0,1 and determines the parity sector in conformity with the boundary conditions [(\[BC1\])]{} and [(\[BC2\])]{}. Standard calculations lead to the following algebra for the operators $L_{-n}$ and $L_{-n+k/2}^{-}$: $$\begin{aligned}
\lbrack L_n,L_m\rbrack &=& (n-m)\,L_{n+m}+\frac{c}{12}\,
(n^3-n)\,\delta_{m+n,0}~,\nonumber\\
\lbrack L_{m+k/2}^{-},L_{n+k/2}^{-}\rbrack
&=&(m-n)\,L_{n+m+k}+\frac{c}{12}\lbrack (m+k/2)^3-
(m+k/2)\rbrack \, \delta_{n+m+k,0}~,~~~~~~
\label{algebra} \\
\lbrack L_m,L_{n+k/2}^- \rbrack &=& \lbrack m-n-k/2\rbrack \, L_{m+n+k/2}^-~.
\nonumber\end{aligned}$$ The operators $\overline{L}_n,~ \overline{L}_{m+k/2},~\overline{L}_n^-,~
\overline{L}_{m+k/2}^-$ satisfy the same relations and $\overline{L}_n,$ $\overline{L}_{m+k/2},$ $\overline{L}_n^-,$ $\overline{L}_{m+k/2}^-$ commute with $L_n,~
L_{m+k/2},~L_n^-,~L_{m+k/2}^-$.
To describe the representations of the algebra [(\[algebra\])]{}, it is necessary to consider separately the non-twisted sector with $k=0$ and the twisted sector with $k=1$. In order to write the $\lbrack V_{(k)}\rbrack$ representation of the algebra [(\[algebra\])]{} in a more explicit form, it is convenient to consider the highest weight states. In the $k=0$ sector, the highest weight state $\vline\, \Delta , \Delta^-\rangle$ is determined with the help of a primary field $V_{(0)}$ by means of the formula $$\label{state1}
\vline \,\Delta , \Delta^-\rangle=V_{(0)}\, \vline\, \emptyset
\rangle ~.$$ Using the definition of vacuum, it is easy to see that $$\begin{array}{l}
L_0\,\vline\,\Delta, \Delta^-\rangle=\Delta
\, \vline\, \Delta ,\Delta^-\rangle~ ,\quad
L_0^-\,\vline\, \Delta, \Delta^-\rangle=
\Delta^-\,\vline\, \Delta ,\Delta^-\rangle~, \\
\nonumber\\
L_n\,\vline\, \Delta, \Delta^-\rangle =
L_n^-\,\vline\, \Delta, \Delta^-\rangle=0 ,
\quad n \geq 1~.
\end{array}$$ In the $k=1$ sector, we define the vector of highest weight $|\Delta\rangle$ of the algebra to be $$\label{state2}
\vline\, \Delta \rangle=V_{(1)}\,\vline \,\emptyset\rangle~,$$ where $V_{(1)}$ is a primary field with respect to $T$. In analogy with the non-twisted sector we obtain $$L_0\,\vline \,\Delta \rangle=\Delta \,\vline \,\Delta \rangle,\quad
L_n\,\vline\, \Delta \rangle=L_{n-1/2}^- \,\vline\, \Delta \rangle=0,
\quad n \geq 1~.$$ Thus, the Verma module over the algebra [(\[algebra\])]{} is obtained by the action of any number of $L_{-m}$ and $L_{-m+k/2}^-$ operators with $n,m>0$ on the states [(\[state1\])]{} and [(\[state2\])]{}. As was shown in ref. [@CSS] by means of GKO (coset construction) method, the central charge of a reducible unitary representation of the algebra [(\[algebra\])]{} has the form $$\label{ccharge}
c=2-\frac{12}{p(p+1)}=2\hat{c}~ ,\quad p=3,4,\ldots~.$$
Using ref. [@FF], Dotsenko and Fateev [@DF] gave the complete solution for the minimal model correlation functions on the sphere. They were able to write down the integral representation for the conformal blocks of the chiral vertices in terms of the correlation functions of the vertex operators of a free bosonic scalar field $\Phi$ coupled to a background charge $\alpha_0$. This construction has become known as the Coulomb Gas Formalism (CGF). In the present case, this approach is also applicable by considering a Coulomb gas for each sheet separately but coupled to the same background charge: $$\begin{array}{l}
T^{(l)}=-\frac{1}{4}(\partial_z\Phi^{(l)})^{2} + i\alpha_0\partial_z^2
\Phi^{(l)}~,\quad
\langle\Phi^{(l)}(z)\Phi^{(l')}(z')\rangle=-\delta^{ll'}
\,\ln|z-z'|^2~,\\
\\
\hat{\pi}_{A_a}\partial_z\Phi^{(l)}=\partial_z\Phi^{(l)}~ ,\quad
\hat{\pi}_{B_a}\partial_z\Phi^{(l)}=\partial_z\Phi^{(l+1)}~,
\nonumber
\end{array}$$ where $c=2-24\alpha_0^2$, i.e. $\alpha_0^2=1/\lbrack 2p(p+1)\rbrack$.
Passing to the basis which diagonalizes the operators $\hat{\pi}_{A_a}$ , $\hat{\pi}_{B_a}$, i.e. $$\begin{aligned}
\Phi=\Phi^{(0)} + \Phi^{(1)}~,\quad \Phi^- = \Phi^{(0)} - \Phi^{(1)}
~,\nonumber\\
\hat{\pi}_{A_a}\partial_z\Phi = \partial_z\Phi~ ,\quad
\hat{\pi}_{B_a}\partial_z\Phi = \partial_z\Phi~,\\
\hat{\pi}_{A_a}\partial_z\Phi^- = \partial_z\Phi^-~ ,\quad
\hat{\pi}_{B_a}\partial_z\Phi^- = -\partial_z\Phi^-~,
\nonumber\end{aligned}$$ we finally obtain the bosonization rule for the operators $T$ , $T^-$ in the diagonal basis $$\begin{aligned}
T &=& -\frac{1}{4}(\partial_z\Phi)^2 + i\alpha_0\partial_z^2\Phi -
\frac{1}{4}(\partial_z \Phi^-)^2~,\nonumber \\
\\
T^- &=& -\frac{1}{2}\partial_z\Phi\partial_z\Phi^- +
i\alpha_0\partial_z^2\Phi^- ~.
\nonumber\end{aligned}$$ In conventions of ref. [@CSS], the vertex operator with charges $\alpha$, $\beta$ in the $k=0$ (non-twisted) sector is given by $$\label{vertex1}
V_{\alpha\beta}(z) = e^{i\alpha\Phi+i\beta\Phi^-}~,$$ with conformal weights $\Delta=\alpha^2-2\alpha_0\alpha
+\beta^2$ and $\Delta^-=2\alpha\beta-2\alpha_0\beta$.
In the $k=1$ (twisted) sector the situation is slightly different. Here we have an antiperiodic bosonic field $\Phi^-$, i.e. $\Phi^-(e^{2\pi i}z) = -\Phi^-$; this leads to the deformation of the geometry of space-time. If we recall that the circle is parametrized by $\Phi^- \in S^1 \lbrack 0,2\pi R\rbrack$, the condition $\Phi^- \sim -\Phi^-$ means that pairs of points of $S^1$ have been identified. Thus, $\Phi^-$ lives on the orbifold $S^1/{\Bbb Z}_2$; under the identification $\Phi^- \sim -\Phi^-$ the two points $\Phi^-=0$ and $\Phi^-=\frac{1}{2}(2\pi R)$ are fixed points. One can try to define the twist fields $\sigma_\epsilon(z),~\epsilon=0,1,$ for the bosonic field $\Phi^-$, with respect to which $\Phi^-$ is antiperiodic. Notice that there is a separate twist field for each fixed point. The OPE of the current $I^-=i\partial_z\Phi^-$ with the field $\sigma_\epsilon$ is then $$\begin{array}{l}
I^-(z)\sigma_{\epsilon}(0)=\frac{1}{2}z^{-1/2}\hat{\sigma}_{\epsilon}(0) +
\ldots~,\\
\nonumber\\
I^-(z)\hat{\sigma}_{\epsilon}(0)=\frac{1}{2}z^{-3/2}
\sigma_{\epsilon}(0) + 2z^{-1/2}\sigma'_{\epsilon}(0) + \ldots~.
\end{array}$$ The twist fields $\sigma_\epsilon$ and $\hat{\sigma}_\epsilon$ are primary fields of the orbifold stress tensor $T_{\rm orb}=-\frac{1}{4}(\partial_z\Phi^-)^{2}$, with dimensions $\Delta_{\epsilon}=1/16$ and $\hat{\Delta}_{\epsilon}=
9/16$ respectively. So, in the twisted sector the highest weight vectors (or primary fields) can be written as follows $$\label{vertex2}
V_{\gamma\,\epsilon}^{(t)}=e^{i\gamma\Phi}\sigma_{\epsilon}~ ,\quad
\Delta^{(t)}=\gamma^2-2\alpha_0\gamma+{1\over 16}~.$$ In ref. [@CSS], the anomalous dimensions of the primary fields of the minimal models for the algebra [(\[algebra\])]{} were obtained both in the non-twisted and twisted sectors in conformity with the spectrum of the central charge [(\[ccharge\])]{}; in particular, it was found that the charges $\alpha,\beta,\gamma$ of the primary fields corresponding to $k=0$ and $k=1$ sectors have the form: $$\begin{array}{l}
\alpha_{n'm'}^{nm}={2-n-n'\over 2}\,\alpha_{+}+
{2-m-m'\over 2}\,\alpha_{-}~,\\
\nonumber\\
\beta_{n'm'}^{nm}={n-n'\over 2}\,\alpha_{+}+
{m-m'\over 2}\,\alpha_{-}~,\\
\nonumber\\
\gamma_{nm}={2-n\over 2}\,\alpha_{+}+
{2-m\over 2}\,\alpha_{-}~,\\
\nonumber\\
1\leq n,n'\leq p ,\quad 1\leq m,m'\leq p-1~,
\end{array}$$ where the constants $\alpha_{\pm}$ are expressed in terms of the background charge $\alpha_0$: $$\alpha_{\pm}=\alpha_{0}/2 \pm \sqrt{\alpha_{0}^{2}/4+1/2} ~.$$ We denote the corresponding fields by $V^{nm}_{n'm'}$, $V^{(t)}_{nm}$ and their conformal weights by $\Delta^{nm}_{n'm'}$, $\Delta^{(t)}_{nm}$.
We can thus represent the CFT on a hyper-elliptic surface as a CFT on the plane with an additional symmetry, exactly as described by the algebra [(\[algebra\])]{}. The corresponding highest weight vectors of the algebra are given by [(\[vertex1\])]{} and [(\[vertex2\])]{}; finally, the central charge is given by [(\[ccharge\])]{}.
We will confine ourselves to the minimal models on hyper-elliptic surfaces as presented above; keeping this in mind we pass to the construction of perturbed models of these CFTs.
Perturbation by $V_{nm}^{(t)}$ and Integrals of Motion {#new-model}
======================================================
Let $S_p$ be the action of the $p$-th conformal minimal model on the hyper-elliptic surface $\Gamma$: $$S_p\lbrack\Phi,\Phi^-\rbrack\,\sim \, \int\,d^2z\,(
\, \partial_z \Phi \partial_{\overline z}\Phi - i\alpha_0R\Phi) +
\int\,d^2z\,\partial_z\Phi^-\partial_{\overline z}\Phi^-~.$$ We now consider the perturbation of this conformal field theory by the degenerate relevant operator $V_{nm}^{(t)}$. $$S_\lambda\,=\,S_p\lbrack\Phi,\Phi^-\rbrack
+\lambda\,\int\,d^2\,z\,e^{i\gamma_{nm}
\Phi(z,\overline{z})}\,\sigma_{\epsilon}(z,\overline{z})~.$$ The parameter $\lambda$ is a coupling constant with conformal weight $(1-\Delta_{nm}^{(t)}\, , \, 1-\Delta_{nm}^{(t)})$.
Obviously, for a generic perturbation the new action $S_\lambda$ does not describe an integrable model. We are going to choose the perturbation in such a way that the corresponding field theory is integrable. To prove the integrability of this massive theory (the massiveness itself is established at the end of the present section), one must calculate the characters of the modules of the identity $I$ and of $V_{nm}^{(t)}$.
The “basic" currents $T(z)$ and $T^-(z)$ generate an infinite-dimensional vector subspace $\Lambda$ in the representation space. This subspace can be constructed by successive applications of the generators $L_{-n}$ and $L_{-m}^-$ with $n,m>0$ to the identity operator $I$. $\Lambda$ can be decomposed to a direct sum of eigenspaces of $L_0$, i.e. $$\Lambda\,=\,\bigoplus_{s=0}^{\infty} \Lambda_{s}~,\quad
L_0\,\Lambda_s = s\,\Lambda_s~.$$ The space $\Lambda$ contains the subspace $\Lambda'=\partial_z\Lambda$. Therefore, in order to separate the maximal linearly independent set, one must take the factor space $\hat{\Lambda}=\Lambda/(L_{-1}\Lambda\,\bigoplus\,L_{-1}^{-}
\Lambda)$ instead of $\Lambda$. The space $\hat{\Lambda}$ admits a similar decomposition as a direct sum of eigenspaces of $L_0$. It follows that the formula of the character for $\hat{\Lambda}$ takes the form $$\chi_0 = (1-q)^2 \prod_{n=1}^{+\infty}\,\frac{1}{(1-q^n)^2}~.$$ The dimensionalities of the subspaces $\hat{\Lambda}_s$ can be determined from the character formula $$\sum_{s=0}^{\infty} \, q^s\, \dim(\hat{\Lambda}_s) = (1-q)\,\chi_0 + q~.$$ On the other hand, the module $V$ of the primary field $V_{nm}^{(t)}$ can be constructed by successively applying the generators $L_{-k}$ and $L_{1/2-l}^-$ with $k,l>0$ to the primary field $V_{nm}^{(t)}$. This space $V$ and the corresponding factor space $\widehat{V} =
V/L_{-1}V$ may also be decomposed in a direct sum of $L_0$ eigenspaces: $$V=\bigoplus_{s=0}^{\infty}\,V_s^{(t)}~,\quad L_0\,V_s^{(t)}=s\,
V_s^{(t)}~.$$ The dimensionalities of $V_s^{(t)}$ in this factor space associated with the relevant field $$V_{(1,1)}^{(t)}=e^{i\frac{\alpha_0}{2}\Phi}\sigma_{\epsilon}$$ are given by the character formula $$\sum_{s={\Bbb N}/2}^{+\infty}\, q^{s+\Delta_{(1,1)}^{(t)}}\,
\dim(\hat{V}_s^{(t)})=
\chi_{\Delta_{(1,1)}^{(t)}}\, (1-q)~,
\label{char1}$$ where $$\begin{aligned}
\label{char2}
\chi_{\Delta_{(1,1)}^{(t)}}&=&q^{\Delta_{(1,1)}^{(t)}}
\prod_{n=1}^{+\infty}\frac{1}{(1-q^{n})(1-q^{n-1/2})}~,\\
\Delta_{(1,1)}^{(t)}&=&\frac{1}{16}\left(1-{6\over p(p+1)}\right)~.\end{aligned}$$ When the dimensionalities of $\widehat{V}_s^{(t)}$ (calculated from [(\[char1\])]{}, [(\[char2\])]{}) are compared to those of $\hat{\Lambda}_{s+1}$, we see that for $s=1,3,5,\dots$ the dimension $\dim(\widehat{\Lambda}_{s+1})$ exceeds $\dim(\widehat{V}^{(t)}_s)$ by at least one, i.e. $\dim(\widehat{\Lambda}_{s+1})>
\dim(\widehat{V}^{(t)}_s),~s=1,3,5,\dots~.$ This proves that the model $$\label{action}
S_{\lambda}=S_p + \lambda\,\int\,d^2z\,e^{i\frac{\alpha_0}{2}
\Phi(z,\overline{z})}\,\sigma_{\epsilon'}(z,\overline{z})$$ possesses an infinite set of non-trivial IMs. We note here that there are no such IMs for perturbations by the operators $V_{nm}^{(t)}$ with $n,m>1$.
We now briefly study the renormalization group flow behaviour in the vicinity of the fixed point.
Solving the Callan-Symanzik equation [@IZ] up to third order, one can obtain the $\beta$-function $$\beta=\varepsilon\, g\, \left( 1 + \frac{Y}{6}\, g^2\right) + {\cal O}(g^4) ~.$$ In the above equation, we have denoted $$\varepsilon = 1-\Delta_{(1,1)}^{(t)}$$ and $$Y = \int d^2 z_1 \int d^2 z_2 \,\langle V_{(1,1)}^{(t)}(z_1,\overline{z}_1)
V_{(1,1)}^{(t)}(z_2,\overline{z}_2)V_{(1,1)}^{(t)}(1,1)
V_{(1,1)}^{(t)}(0,0) \rangle ~.$$ Since $Y>0$, we conclude that there is no reason to expect the existence of any non-trivial zeros of the $\beta$-function. In the absence of zeros, the field theory described by the action [(\[action\])]{} has a finite correlation length $R_c\sim \lambda^{-1/(2\varepsilon)}$ and the spectrum consists of particles with non-zero mass of order $m\sim R_c^{-1}$. In this case, the IMs force the scattering of the particles to be factorizable, i.e. there is no particle production, the set of particle momenta is preserved, the $n$-particle $S$-matrix is a product of 2-particle $S$-matrices, etc.
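The scaling of the correlation length quoted above follows from simple dimensional analysis, which we record for completeness. Since the coupling carries conformal weight $(1-\Delta_{(1,1)}^{(t)}\,,\,1-\Delta_{(1,1)}^{(t)})$, it has mass dimension $2\varepsilon$: $$\lbrack\lambda\rbrack = ({\rm mass})^{2\varepsilon}
\quad\Longrightarrow\quad
R_c \sim \lambda^{-1/(2\varepsilon)}~,\qquad
m \sim R_c^{-1} \sim \lambda^{1/(2\varepsilon)}~.$$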
Infinite Quantum Group Symmetry {#nonlocal-charges}
===============================
In this section we briefly review the method developed in ref. [@BL] and then we apply it to our model.
We consider a CFT perturbed by a relevant operator with zero Lorentz spin. The Euclidean action is given by $$\label{pert-action}
S_\lambda=S_{\rm CFT}+\frac{\lambda}{2\pi}
\,\int\,d^2z\,V_{\rm pert}(z,\overline{z})~,$$ where the perturbation field can be written as $V_{\rm pert}(z,\overline{z})=
V_{\rm pert}(z)\overline{V}_{\rm pert}(\overline{z})$ (or a sum of such terms but in our case this is irrelevant). Let us assume that for the conformal invariant action $S_{\rm CFT}$ there exist the chiral currents $J(z)$, $\overline{J}(\overline{z})$ satisfying equations $\partial_{\overline
z}J(z)=0$, $\partial_z\overline{J}(\overline{z})=0$. Then for the action [(\[pert-action\])]{} $S_\lambda$, the perturbed currents, which are local with respect to the perturbing field, up to the first order, are given by Zamolodchikov’s equations [@zam89] $$\begin{array}{l}
\partial_{\overline z}J(z,\overline{z})=\lambda\oint_z\,
{d\omega\over 2\pi i}\, V_{\rm pert}
(\omega,\overline{z})J(z)~,\\ \\
\partial_z\overline{J}(z,\overline{z})=\lambda\oint_{\overline{z}}\,
{d\overline{\omega}\over 2\pi i}\,
V_{\rm pert}(z,\overline{\omega})\overline{J}(\overline{z})~.
\end{array}$$ The condition for the conservation of the currents up to first order in perturbation theory is that the residues of OPEs appearing in the above contour integrals are total derivatives: $$\begin{array}{l}
{\rm Res}\Big(V_{\rm pert}(\omega)J(z)\Big)=\partial_zh(z)~,
\\ \\
{\rm Res}\Big(\overline{V}_{\rm pert}(\overline{\omega})
\overline{J}(\overline{z})\Big)
=\partial_{\overline{z}}
\overline{h}(\overline{z})~.
\end{array}$$ Then Zamolodchikov’s equations for the currents are written in the form $$\label{continuity-equation}
\begin{array}{l}
\partial_{\overline{z}}J(z,\overline{z})=\partial_zH(z,\overline{z})~,\\
\\
\partial_z\overline{J}(z,\overline{z})=
\partial_{\overline{z}}\overline{H}(z,\overline{z})~,
\end{array}$$ where the fields $H$, $\overline{H}$ are $$\begin{array}{l}
H(z,\overline{z})=\lambda\, \lbrack h(z)\overline{V}_{\rm pert}(\overline{z})
+\dots\rbrack~,\\ \\
\overline{H}(z,\overline{z})=\lambda\,\lbrack
V_{\rm pert}(z)\overline{h}(\overline{z})+\dots\rbrack~,
\end{array}$$ where the dots represent contributions coming from terms in the OPEs which are more singular than the residue term. The conserved charges following from the conserved currents [(\[continuity-equation\])]{} are $$\label{charges}
\begin{array}{l}
Q=\int\,{dz\over 2\pi i}\,J+\int {d\overline{z}\over 2\pi i}\, H~,\\ \\
\overline{Q}=\int\,{d\overline{z}\over 2\pi i}\,\overline{J}
+\int\,{dz\over 2\pi i}\,\overline{H}~.
\end{array}$$ Using the non-trivial braiding relations between the conserved currents, one can obtain the $q$-deformed affine Lie algebra for the conserved charges [(\[charges\])]{}.
We are now going to implement the above construction of non-local charges for the theory described by the action [(\[action\])]{}. We will thus derive the $q$-deformed Lie algebra underlying the theory. Using the construction explained above, we can show that the action [(\[action\])]{} admits the following non-local conserved quantum currents: $$\label{continuity2}
\begin{array}{l}
\partial_{\overline{z}}J =\partial_zH~,\\
\nonumber\\
\partial_z\overline{J}=\partial_{\overline z}
\overline{H}~,
\end{array}$$ where $$\label{currents}
\begin{array}{l}
J=\colon e^{ia\varphi(z)}\,
e^{ib\varphi^-(z)}\colon\, \sigma(z)~,\\ \\
\overline{J}=
\colon e^{ia\overline{\varphi}(\overline{z})}e^{ib
\overline{\varphi}^-(\overline{z})}\colon
\,\overline{\sigma}(\overline{z})~,\\ \\
H(z,\overline{z})=\lambda\, A \, \colon
e^{i(a+\alpha_0/2) \varphi (z)}e^{i(b+k) \varphi^-(z)}
\overline{\sigma}(\overline{z})
e^{i\frac{\alpha_{0}}{2}
\overline{\varphi}(\overline{z})}\colon~,\\ \\
\overline{H}(z,\overline{z})=\lambda\, A\,
\colon e^{i(a+\alpha_0/2) \overline{\varphi}(\overline{z})}
e^{i(b+k) \overline{\varphi}^-(\overline{z})}
\sigma (z)e^{i\frac{\alpha_0}{2}\varphi(z)}
\colon~,
\end{array}$$ and $$\begin{aligned}
a &=& -(15/8+k^{2})/(\alpha_{0}+4k^{2}/\alpha_{0})~,
\nonumber\\
b &=& 2k a/\alpha_0~,
\label{constants}\\
A &=& \alpha_0/2(a + \alpha_0/2)~.\nonumber\end{aligned}$$ In the derivation of [(\[currents\])]{}, we used the OPEs $$\begin{array}{l}
\sigma(z)\, \sigma(x)=(z-x)^
{k^2-1/8}:e^{ik\varphi^{-}(x)}:+\ldots~,\\ \\
\overline{\sigma}(\overline z)\overline{\sigma}(\overline x)=
(\bar z-\bar x)^{\overline{k}^2-1/8}
\,:e^{i\overline{k}\overline{\varphi}^-(\overline{x})}:+\ldots~.
\end{array}$$ From the continuity equations [(\[continuity2\])]{} we define the conserved charges $$\begin{array}{l}
Q =\int\,\frac{dz}{2\pi i}\,J + \int\,\frac{d\overline{z}}{2\pi i}\,H
~,\\
\nonumber\\
\overline{Q} =\int\,\frac{dz}{2\pi i}\,\overline{H} +
\int\frac{d\overline{z}}{2\pi i}\,\overline{J}~.
\end{array}$$ To find the commutation relations between the charges $Q$ and $\overline{Q}$, we must first derive the braiding relations of the non-local conserved currents $J$, $\overline{J}$. To this end we will make use of the well known identity $$e^A\,e^B=e^B\,e^A\,e^{\lbrack A,B\rbrack}~,
\quad \lbrack A,\lbrack A, B\rbrack\rbrack=
\lbrack B,\lbrack A,B\rbrack\rbrack=0~.$$ We then obtain the following braiding relations $$\begin{array}{ll}
e^{ia\varphi(z)}e^{ib\varphi(z')}=
e^{\mp i\pi ab}\,e^{ib\varphi(z')}e^{ia\varphi(z)}~,
&\quad z\lessgtr z'~,\\ \\
e^{ia\varphi^{-}(z)}e^{ib\varphi^{-}(z')}=
e^{\mp i\pi ab}\,e^{ib\varphi^-(z')}e^{ia\varphi^-(z)}
~, &\quad z\lessgtr z'~,\\ \\
e^{ia\overline{\varphi}(\overline{z})}
e^{ib\overline{\varphi}(\overline{z}')}=
e^{\pm i\pi ab}\,e^{ib\overline{\varphi}(\overline{z}')}
e^{ia\overline{\varphi}(\overline{z})}~,
&\quad \overline{z}\lessgtr \overline{z}'~,\\ \\
e^{ia\overline{\varphi}^-(\overline{z})}
e^{ib\overline{\varphi}^-(\overline{z}')}=
e^{\pm i\pi ab}\,e^{ib\overline{\varphi}^-(\overline{z}')}
e^{ia\overline{\varphi}^-
(\overline{z})}~, &\quad \overline{z}\lessgtr \overline{z}'~,\\ \\
e^{ia\varphi(z)}e^{ib\overline{\varphi}(\overline{z}')}=e^{i\pi ab}
\,e^{ib\overline{\varphi}(\overline{z}')}
e^{ia\varphi(z)}~, &\quad \forall z,\overline{z}'~,\\ \\
e^{ia\varphi^-(z)}e^{ib\overline{\varphi}^-(\overline{z}')}=
e^{i\pi ab}\,e^{ib\overline{\varphi}^-(\overline{z}')}e^{ia\varphi^-(z)}~,
&\quad \forall z,\overline{z}'~.
\end{array}$$ Using the representation of the twist fields $\sigma,
\overline{\sigma}$ in terms of scalar bosonic fields which was proposed in ref. [@AZ], we can derive the following braiding relations: $$\begin{array}{ll}
\sigma(z)\sigma(z')=e^{\mp i\pi/8}\,\sigma(z')\sigma(z)~,&\quad z\lessgtr z'~,
\\ \\
\overline{\sigma}(\overline{z})\overline{\sigma}
(\overline{z}')=e^{\pm i\pi/8}\,\overline{\sigma}(\overline{z}')
\overline{\sigma}(\overline{z})~,&\quad \overline{z}\lessgtr \overline{z}'~,
\\ \\
\sigma(z)\overline{\sigma}(\overline{z}')=
e^{+i\pi/8}\,\overline{\sigma}(\overline{z}')\sigma(z)~,&
\quad \forall z,\overline{z}'~.
\nonumber
\end{array}$$ Consequently the non-local conserved currents have the non-trivial braiding relations $$J(x,t)\overline{J}(y,t)=
q^{\nu}\,\overline{J}(y,t)J(x,t)~,$$ where $$q=e^{-i\pi}~,\quad
\nu = 1/8-a^2-b^2~.$$ Using the above braiding relations and the expressions [(\[currents\])]{}, one finds that the conserved charges satisfy the relations $$\begin{aligned}
Q\overline{Q}-q^{\nu}\,\overline{Q}Q
=\frac{\lambda}{2\pi i}\,\int_t\,
(dz\partial_z+d\overline{z}\partial_{\overline{z}})\,
A\, e^{i(a+\alpha_0/2)\varphi(z)}
e^{i(b+k)\varphi^{-}(z)}\times \nonumber\\
\times A\,e^{i(a+\alpha_{0}/2)\overline{\varphi}(\overline{z})}
e^{i(b+k)\overline{\varphi}^-(\overline{z})}~.
\label{QQ}\end{aligned}$$
Now let us recall that the scalar field $\varphi^-$ lives on the orbifold $S^1 / {\Bbb Z}_2$ and hence the momentum $k$ must be quantized. Therefore, the above relations must be transformed to $$\begin{aligned}
\widehat Q_{\epsilon}\widehat{\overline{Q}}_{\overline{\epsilon}}-
q^{\nu_{\epsilon\overline{\epsilon}}}\,
\widehat{\overline{Q}}_{\overline{\epsilon}}\widehat Q_{\epsilon}&=&
{\lambda\over 2\pi i}\, \sum\,
A_L^{nm}A_R^{nm}\, \int_t \, (dz\,\partial_z+
d\overline{z}\,\partial_{\overline{z}})\times \nonumber\\
&\times & e^{i(a_L^{nm}+\alpha_0/2)\varphi(z)+
i(a_R^{nm}+\alpha_0/2)\overline{\varphi}(\overline{z})}\times\nonumber\\
&\times & e^{i(b_L^{nm}+k_L^{nm})\varphi^-(z)+
i(b_R^{nm}+k_R^{nm})\overline{\varphi}^-(\overline{z})}~,\end{aligned}$$ where $$\begin{array}{l}
\nu_{\overline{\epsilon}\epsilon}=
1/8-a_L^{nm}a_R^{nm}-b_L^{nm}b_R^{nm}~,
\\ \\
k_L^{nm}=k_L^{nm}(\epsilon,\epsilon')=
{n\over R} + \left( m+{\epsilon+\epsilon'\over 2}
\right)\,{R\over 2}~,
\\ \\
k_R^{nm}=k_R^{nm}(\overline{\epsilon},\epsilon')=
{n\over R}-\left( m+
{\overline{\epsilon}+\epsilon'\over 2}\right) \,{R\over 2}~.
\nonumber
\end{array}$$ The constants $a_L^{nm}$, $a_R^{nm}$, $b_L^{nm}$, $b_R^{nm}$, $A_L^{nm}$, $A_R^{nm}$ are obtained from the relations [(\[constants\])]{} and $\epsilon,\overline{\epsilon},
\epsilon'\in\{0,1\}$.
Finally, the topological charge for the model [(\[action\])]{} is defined as follows: $$\begin{aligned}
{\cal T}_{\rm top}&=&\int_{-\infty}^{+\infty}\,dx\,\partial_x\Phi(x)+
\int_{-\infty}^{+\infty}\,dx\,\partial_x\Phi^-(x)\nonumber\\
&=&\int_{-\infty}^{+\infty}\,dx\,\partial_x\,(\varphi +
\overline{\varphi})+
\int_{-\infty}^{+\infty}\,dx\,\partial_x(\varphi^- +
\overline{\varphi}^-)
\nonumber\\
&=&T_{\rm top}+\overline{T}_{\rm top}+
T_{\rm top}^-+\overline{T}_{\rm top}^-~,
\label{top-charg}\end{aligned}$$ where $\Phi$, $\Phi^-$ and the quasi-chiral components $\varphi, \overline{\varphi},
\varphi^-,\overline{\varphi}^-$ are related by the following equations: $$\begin{array}{l}
\varphi(x,t)=\frac{1}{2}\,
\left(\Phi(x,t)+\int_{-\infty}^x\, dy\, \partial_t
\Phi(y,t)\right)~,\\
\nonumber\\
\overline{\varphi}(x,t)=\frac{1}{2}\,\left(\Phi(x,t)-
\int_{-\infty}^x\, dy\,\partial_t
\Phi(y,t)\right)~,\\
\nonumber\\
\varphi^-(x,t)=\frac{1}{2}\,\left(\Phi^-(x,t)+
\int_{-\infty}^x\,dy\,\partial_t\Phi^-(y,t)\right)~,\\
\nonumber\\
\overline{\varphi}^-(x,t)=\frac{1}{2}\,\left(\Phi^-(x,t)-
\int_{-\infty}^x\,dy\,
\partial_t\Phi^-(y,t)\right)~,
\end{array}$$ These equations guarantee that $\Phi=\varphi+\overline{\varphi}$ and $\Phi^-=\varphi^-+
\overline{\varphi}^-$. Taking all this into account, the right-hand side of equation [(\[QQ\])]{} can be re-expressed in terms of the topological charges defined in [(\[top-charg\])]{}: $$\begin{aligned}
\widehat{Q}_\epsilon\widehat{\overline{Q}}_{\overline{\epsilon}} -
q^{\nu_{\overline{\epsilon}\epsilon}}\,
\widehat{\overline{Q}}_{\overline{\epsilon}}\widehat{Q}_\epsilon
= \frac{\lambda}{2\pi i}\,
\sum\, A_L^{nm}A_R^{nm}\,
\lbrack 1-e^{i(a_L^{nm}+\alpha_0/2)T_{\rm top}+
i(a_R^{nm}+\alpha_0/2)\overline{T}_{\rm top}}\times\nonumber\\
\times e^{i(b_L^{nm}+k_L^{nm})T_{\rm top}^-+
i(b_R^{nm}+k_R^{nm})\overline{T}_{\rm top}^-}\rbrack~.~~~~~
\label{QQ2}\end{aligned}$$ Then, one can easily calculate the commutators $$\label{TQ}
\begin{array}{l}
\lbrack T_{\rm top},Q_\epsilon^{nm}\rbrack=
a_L^{nm}\, Q_{\epsilon}^{nm}~,\quad
\lbrack \overline{T}_{\rm top},
\overline{Q}_{\overline{\epsilon}}^{nm}\rbrack=
a_R^{nm}\,\overline{Q}_{\overline{\epsilon}}^{nm}~, \\ \\
\lbrack T_{\rm top}^-,Q_{\epsilon}^{nm}\rbrack=
b_{L}^{nm}\, Q_{\epsilon}^{nm}~,\quad
\lbrack\overline{T}_{\rm top}^-,
\overline{Q}_{\overline{\epsilon}}^{nm}\rbrack=
b_R^{nm}\,\overline{Q}_{\overline{\epsilon}}^{nm}~.
\end{array}$$ Thus, these commutation relations [(\[TQ\])]{}, together with the relations [(\[QQ2\])]{}, constitute the algebra, to the lowest non-trivial order in perturbation theory, which is the symmetry of the $S$-matrix of the theory.
Unfortunately, the isomorphism between the algebra [(\[QQ2\])]{}, [(\[TQ\])]{} and a Hopf algebra has not been established yet, and hence the universal $R$-matrix of this hidden Hopf algebra has not been studied. We intend to return to these open questions in the near future.
Conclusions
===========
To summarize, in the present paper we have introduced a new integrable model in quantum field theory. The novelty of the model resides in the fact that it is built on a hyper-elliptic surface instead of the usual Euclidean plane. The quantum symmetry of the model has been identified in terms of the non-local conserved charges. This has led to a generalization of the method first introduced by Bernard and LeClair [@BL] for the affine Toda field theories where only boson fields are involved. As is understood very well by now, the quantum non-local conserved charges provide a quantum field theoretic basis for understanding quantum groups. Unfortunately, the mapping from the physical algebra satisfied by the non-local charges to the $q$-deformed Lie algebra has not been discovered yet. If this mapping is found, one will be able to study the universal $R$-matrix and consequently uncover the structure of the $S$-matrix.
We would like to thank A. LeClair, F. Smirnov and R. Poghossian for helpful discussions.
[99]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{} [ ]{}
---
abstract: 'We find the exact value of the Ramsey number $R(C_{2\ell},K_{1,n})$, when $\ell$ and $n=O(\ell^{10/9})$ are large. Our result is closely related to the behaviour of the Turán number ${\mathrm{ex}}(n, C_{2\ell})$ for an even cycle whose length grows quickly with $n$.'
address:
- |
Adam Mickiewicz University\
Faculty of Mathematics and Computer Science\
Umultowska 87, 61-614 Poznań, Poland
- |
Adam Mickiewicz University\
Faculty of Mathematics and Computer Science\
Umultowska 87, 61-614 Poznań, Poland
- |
Hebei Normal University\
18. School of Mathematical Sciences\
Shijiazhuang, P.R.China
author:
- Tomasz Łuczak
- Joanna Polcyn
- Yanbo Zhang
date: 'March 6th, 2020'
title: |
The Ramsey number of\
a long even cycle versus a star
---
[^1]
Introduction
============
For a graph $H$ by $${\mathrm{ex}}(n,H)=\max\{|E|: G=(V,E)\not\supseteq H \ \&\ |V|=n\}$$ we denote its Turán number. Let us recall that for graphs $H$ with chromatic number at least three the asymptotic value of ${\mathrm{ex}}(n,H)$ was determined over fifty years ago by Erdős and Stone [@ESt], and Erdős and Simonovits [@ESi], while for most bipartite graphs $H$ the behaviour of ${\mathrm{ex}}(n,H)$ is not well understood. Let us recall some results on the case when $H$ is an even cycle $C_{2\ell}$. The best upper bound for ${\mathrm{ex}}(n, C_{2\ell})$ for general $\ell$ is due to Bukh and Jiang [@BJ], who improved the classical theorem of Bondy and Simonovits [@BS] to $${\mathrm{ex}}(n, C_{2\ell})\le 80\sqrt{\ell}\,\ln \ell\, n^{1+1/\ell}+ 10\ell^2 n.$$ The best lower bound which holds for all $\ell$ follows from the construction of a regular graph of large girth by Lubotzky, Phillips, and Sarnak [@LPS], which gives $${\mathrm{ex}}(n, C_{2\ell})\ge n^{1+(2+o(1))/3 \ell}.$$ The correct exponent $\alpha_\ell$ for which ${\mathrm{ex}}(n,C_{2\ell})=n^{\alpha_\ell+o(1)}$ is known only for $\ell=2,3,5$, when it is equal to $1+1/\ell$ (see the survey of Füredi and Simonovits [@FS] and references therein), and finding it for every $\ell$ is one of the major open problems in extremal graph theory. Can it become easier when we allow the length of the even cycle to grow with $n$? This paper was inspired by this question. However, instead of the original problem we consider its nearly equivalent partition version. Thus, instead of ${\mathrm{ex}}(n,C_{2\ell})$, we study the Ramsey number $R(C_{2\ell}, K_{1,n})$. Note that from the result of Bukh and Jiang and the construction of Lubotzky, Phillips, and Sarnak mentioned above we get $$\label{eq1}
n+ n^{(2+o(1))/3\ell} \le R(C_{2\ell}, K_{1,n})\le n+ 81\sqrt{\ell} \ln \ell n^{1/\ell}+ 11\ell^2.$$ Since a graph on $N$ vertices with minimum degree at least $N/2$ is hamiltonian (Dirac [@D]), and if its minimum degree is larger than $N/2$, it is pancyclic (Bondy [@B]), for $\ell \ge n\ge 2 $, we have $R(C_{2\ell},K_{1,n})=2\ell$. Moreover, Zhang, Broersma, and Chen [@ZBC] showed that if $n/2<\ell<n$ then $R(C_{2\ell},K_{1,n})=2n$, while for $3n/8+1\le \ell\le n/2$, we get $R(C_{2\ell},K_{1,n})=4\ell-1$. Our main result determines the value of $R(C_{2\ell},K_{1,n})$ for all large $\ell$ and $n \le 0.1\ell^{10/9}$.
\[thm1\] For every $t\ge 2$, $\ell\ge (19.1t)^9$, and $n$ such that $(t-1)(2\ell-1)\le n-1 < t(2\ell-1)$, we have $$\label{eq:main}
R(C_{2\ell}, K_{1,n}) =f_t(\ell,n)+1,$$ where $$\label{eq:main2}
f_t(\ell,n)=\max\{t(2\ell-1), n+ \lfloor (n-1)/t\rfloor \}.$$
We do not know how much one can relax the condition $n\le 0.1\ell^{10/9}$ in Theorem \[thm1\]. We suspect that the result holds for $n$ growing polynomially with $\ell$, but it is conceivable that it remains true even for $n$ which grows exponentially with $\ell$. On the other hand, because of [(\[eq1\])]{}, the assertion of Theorem \[thm1\] fails for, say, $n\ge \ell^{2\ell}$.
We remark that, as mentioned above, one can use a similar technique to find the value of ${\mathrm{ex}}(n,C_{2\ell})$ when $n$ is not much larger than $\ell$. The difference between this problem, where we try to maximize the number of edges in the graph, and the Ramsey setting we chose, where we maximize its minimum degree, is not substantial. However, the result for ${\mathrm{ex}}(n,C_{2\ell})$ is more predictable, since in this case one needs to maximize the number of blocks of size $2\ell-1$ and supplement them with at most one smaller block. The behaviour of $R(C_{2\ell},K_{1,n})$ seems to us more intriguing. Indeed, for a given $\ell$ and $(t-1)(2\ell-1)\le n-1< \frac{t^2}{t+1}(2\ell-1)$ we have $$f_t(\ell,n) =(2\ell-1)t,$$ i.e. for this range of $n$ the value of $R(C_{2\ell}, K_{1,n})$ does not depend on the size of the star. On the other hand, as shown in the next section, for $\frac{t^2}{t+1}(2\ell-1)\le n-1< t(2\ell-1)$, when $$f_t(\ell,n) = n+ \lfloor (n-1)/t\rfloor\,$$ the ‘extremal graphs’ which determine the value of $R(C_{2\ell}, K_{1,n})$ typically have all blocks much smaller than $2\ell-1$.
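The two regimes of $f_t(\ell,n)$ described above are easy to check numerically. The following short Python script (our illustration only; the parameters $t=3$, $\ell=100$ are far below the bound $\ell\ge(19.1t)^9$ required by Theorem \[thm1\], but the formula [(\[eq:main2\])]{} itself can be evaluated for any $t,\ell$) verifies where the maximum switches from the clique term to the star term:

```python
# Where does the maximum in f_t(l, n) switch from the clique term to the
# star term?  Illustrative parameters only: Theorem 1 needs l >= (19.1 t)^9,
# but the formula f_t makes sense for any t, l.
def f(t, l, n):
    return max(t * (2 * l - 1), n + (n - 1) // t)

t, l = 3, 100
threshold = t * t * (2 * l - 1) / (t + 1)        # n - 1 below this: clique regime

for n in range((t - 1) * (2 * l - 1) + 1, t * (2 * l - 1) + 1):
    if n - 1 < threshold:
        assert f(t, l, n) == t * (2 * l - 1)     # independent of the star size
    else:
        assert f(t, l, n) == n + (n - 1) // t    # star term takes over
```

For $t=3$, $\ell=100$ the crossover happens at $n-1=\tfrac{t^2}{t+1}(2\ell-1)=447.75$, i.e. between $n=448$ and $n=449$.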
The lower bound for $R(C_{2\ell}, K_{1,n})$ {#sec:lower}
===========================================
In this section we show that for given integers $t$, $\ell$, and $n$ such that $(t-1)(2\ell-1) \le n-1 < t(2\ell - 1)$, we have $$\label{eq:lower}
R(C_{2\ell},K_{1,n}) >f_t(\ell,n)=\max\{t(2\ell-1), n+ \lfloor (n-1)/t\rfloor \}.$$
Let us consider first the graph $H_1$ which consists of $t$ vertex-disjoint copies of the complete graph $K_{2\ell-1}$. Clearly, $|V(H_1)| = t(2\ell-1)$ and $H_1\nsupseteq C_{2\ell}$. Moreover, $\Delta(\overline{H}_1) = (t-1)(2\ell-1) \le n-1$, yielding $\overline{H}_1 \nsupseteq K_{1,n}$. Hence $$R(C_{2\ell},K_{1,n}) >t(2\ell-1)\,.$$
Now let $k=n-1 - t\lfloor (n-1)/t\rfloor$ and $m=\lfloor (n-1)/t\rfloor + 1$. We define a graph $H_2$ as a union of $k$ vertex-disjoint complete graphs $K_m$ and $t+1-k$ further copies of $K_m$, all of which share one common vertex but are otherwise vertex-disjoint. Then $$\begin{aligned}
|V(H_2)|&=km+(t+1-k)(m-1) + 1=(t+1)m-(t-k)\\
&=(t+1)(\lfloor (n-1)/t\rfloor + 1) -t + n-1 - t\lfloor (n-1)/t\rfloor\\
&=n+\lfloor (n-1)/t\rfloor. \end{aligned}$$ Note also that $n-1< t(2\ell-1)$, and so $m=\lfloor (n-1)/t\rfloor +1\le 2\ell-1$. Hence $H_2\not\supseteq C_{2\ell}$. Finally, $$\begin{aligned}
\Delta(\overline{H}_2) = |V| - m= n+\lfloor (n-1)/t\rfloor -\lfloor (n-1)/t\rfloor - 1 =n-1.\end{aligned}$$ Therefore $$R(C_{2\ell},K_{1,n}) >|V(H_2)|= n+ \lfloor (n-1)/t\rfloor,$$ and [(\[eq:lower\])]{} follows.
Let us remark that the two graphs $H_1$ and $H_2$ we used above are by no means the only ‘extremal graphs’ with $R(C_{2\ell},K_{1,n})-1$ vertices. Let us take, for example, $n=4.1\ell$. Then $R(C_{2\ell},K_{1,n})=3(2\ell-1)+1$ and the lower bound for $R(C_{2\ell},K_{1,n})$ is ‘certified’ by the graph $H'_1$ which consists of three vertex disjoint cliques $K_{2\ell-1}$. However, if we replace each of these cliques by a graph on $2\ell-1$ vertices and the minimum degree $1.91\ell$, the complement of the resulting graph will again contain no $K_{1,n}$, so each such graph shows that $R(C_{2\ell},K_{1,n})>3(2\ell-1)$ as well. On the other hand, adding to $H'_1$ a triangle with vertices in different cliques does not result in a copy of $C_{2\ell}$, so $H'_1$ is not even a maximal extremal graph certifying that $R(C_{2\ell},K_{1,n})>3(2\ell-1)$.
Cycles in 2-connected graphs
============================
In order to show the upper bound for $R(C_{2\ell},K_{1,n})$ we have to argue that large graphs with a high enough minimum degree contain $C_{2\ell}$. In this section we collect a number of results on cycles in 2-connected graphs we shall use later on.
Let us recall first that the celebrated theorem of Dirac [@D] states that each 2-connected graph $G$ on $n$ vertices contains a cycle of length at least $\min\{ 2\delta(G),n\}$, and, in particular, each graph with the minimum degree at least $n/2$ is hamiltonian. Below we mention some generalizations of this result. Since we are interested mainly in even cycles, we start with the following observation due to Voss and Zuluaga [@VZ].
\[l:VZ\] Every 2-connected graph $G$ on $n$ vertices contains an even cycle $C$ of length at least $\min\{2\delta(G), n-1\}$.
The following result by Bondy and Chvátal [@BC] shows that the condition $\delta(G)\ge n/2$, sufficient for hamiltonicity, can be replaced by a somewhat weaker one. Recall that the closure of a graph $G=(V,E)$ is the graph obtained from $G$ by recursively joining pairs of non-adjacent vertices whose degree sum is at least $|V|$ until no such pair remains.
\[l:BC\] A graph $G$ is hamiltonian if and only if its closure is hamiltonian.
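As an aside, the closure of Lemma \[l:BC\] is easy to compute by the obvious greedy procedure; the Python sketch below (our illustration, not part of the argument) implements it and checks the lemma on $C_4$, whose closure is $K_4$, and on $C_5$, which is its own closure:

```python
# Bondy–Chvátal closure: repeatedly join non-adjacent pairs whose degree
# sum is at least n, until no such pair remains.
def closure(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u in range(n):
            for v in range(u + 1, n):
                if v not in adj[u] and len(adj[u]) + len(adj[v]) >= n:
                    adj[u].add(v)
                    adj[v].add(u)
                    changed = True
    return adj

# C_4 has all degrees 2, so every non-adjacent pair has degree sum 4 >= n:
# the closure is K_4, hence C_4 is hamiltonian (as it obviously is).
c4 = closure(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert all(len(c4[v]) == 3 for v in range(4))

# C_5 is its own closure (all degree sums equal 4 < 5).
c5 = closure(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
assert all(len(c5[v]) == 2 for v in range(5))
```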
If we allow $\delta(G)>n/2$, then, as observed by Bondy [@B], $G$ becomes pancyclic. We use the following strengthening of this result, proved under slightly stronger assumptions, due to Williamson [@W].
\[l:W\] Every graph $G=(V,E)$ on $n$ vertices with $\delta(G)\ge n/2+1$ has the following property. For every $v,w\in V$ and every $k$ such that $2\le k\le n-1$, $G$ contains a path of length $k$ which starts at $v$ and ends at $w$. In particular, $G$ is pancyclic.
Finally, we state a theorem of Gould, Haxell, and Scott [@GHS], which is crucial for our argument. Here and below $\textrm{ec}(G)$ denotes the length of the longest even cycle in $G$.
\[l:GHS\] Let $a>0$, $\hat K = 75\cdot 10^4 a^{-5}$, and $G$ be a graph with $n\ge 45 \hat K/a^4$ vertices and minimum degree at least $an$. Then for every even $r\in [4,\textrm{ec}(G)-\hat K]$, $G$ contains a cycle of length $r$.
Let us also note the following consequence of the above results.
\[l:small\] For $c\ge 1$ we set $$\label{eq:ks}
K(c) = 24\cdot 10^6 c^5 = 75\cdot 10^4 (1/2c)^{-5},$$ and let $\ell \ge 360c^4K(c)$. Then for every 2-connected $C_{2\ell}$-free graph $H=(V,E)$ such that $|V|\le 2\ell c$ and $\delta(H) \ge \ell + K(c)$, we have $$|V|\le 2\ell - 1.$$
Let us consider first the case $|V| < 2\ell + 2K(c)-2$. Then, since $$\delta(H) \ge \ell +K(c) > { |V|}/{2}+1,$$ from Lemma \[l:W\] we infer that $H$ is pancyclic. But $C_{2\ell} \nsubseteq H$, meaning that $|V|\le 2\ell - 1$, as required.
On the other hand, for $|V|\ge 2\ell + 2K(c)-2$ Lemma \[l:VZ\] implies that $$\textrm{ec}(H) \ge 2\ell + 2K(c)-2>2\ell+K(c)~.$$ Moreover, as $|V|\le 2\ell c$ and $\ell \ge 360c^4K(c)$, one gets $$\delta(H)> \ell \ge \frac{1}{2c}|V|\quad \textrm{and}\quad |V| > 2\ell \ge 45\left(\frac{1}{2c}\right)^{-4}K(c).$$ Therefore, from Lemma \[l:GHS\] applied to $H$ with $a=1/(2c)$, we infer that $H$ contains a cycle of length ${2\ell}$, contradicting the $C_{2\ell}$-freeness of $H$.
Proof of the main result
========================
The two examples of graphs we used to verify the lower bound for $R(C_{2\ell}, K_{1,n})$ (see Section \[sec:lower\]) suggest that a natural way to deal with the upper bound for $R(C_{2\ell}, K_{1,n})$ is to show first that each $C_{2\ell}$-free graph $G$ with a large minimum degree has all blocks smaller than $2\ell$. However, most results on the existence of cycles in 2-connected graphs rely on a minimum degree condition, and even if the minimum degree of $G$ is large, some of its blocks may contain vertices of small degree. Nonetheless, we shall prove that each such $G$ contains a ‘block-like’ family of 2-connected subgraphs without vertices of very small degree. Then, based on the results of the last section, we argue that each subgraph in such a family is small. In the third and final part of our proof we show that if this is the case, then $G$ has at most $f_t(\ell, n)$ vertices.
Before the proof of Theorem \[thm1\] we state two technical lemmata. The first one will become instrumental in the first part of our argument, when we decompose the graph $G$ into 2-connected subgraphs without vertices of small degree.
\[l:dec\] Let $n\ge k\ge 2$. For each graph $G$ with $n$ vertices and minimum degree $\delta(G)\ge n/k + k$, there exists an $s<k$ and a set of vertices $U\subset V(G)$, $|U| \le s-1$, such that $G-U$ is a union of $s$ vertex-disjoint 2-connected graphs.
Consider a sequence $U_0,U_1,\dots, U_{t}=U$ of subsets of $V$ which starts with $U_0=\emptyset$ and in which, as long as $G-U_i$ contains a cut vertex $v_i$, we put $U_{i+1}=U_i\cup \{v_i\}$. The process terminates when each component of $G-U_i$ is 2-connected. Note that in each step the number of components increases by at least one, so $G-U_i$ has at least $i+1=|U_i|+1$ components. Moreover, the process must terminate with $t< k-1$, since otherwise the graph $G-U_{k-1}$ would have $n-k+1$ vertices, at least $k$ components, and minimum degree at least $n/k+1$, which is clearly impossible: $k$ components of minimum degree at least $n/k+1$ would require more than $n$ vertices. Hence the graph $G-U=G-U_t$ has $n-t$ vertices, $s\ge |U|+1= t+1$ components, and minimum degree larger than $n/k+1$. Finally, let us notice that, again, since each component has more than $n/k$ vertices, we must have $s<k$.
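The iterative deletion process behind Lemma \[l:dec\] can be sketched in code. The following plain-Python illustration (ours, not from the paper) finds cut vertices by brute force, deleting one at a time until every remaining component is 2-connected:

```python
from collections import deque

def components(adj, alive):
    """Connected components of the subgraph of `adj` induced on `alive`."""
    seen, comps = set(), []
    for s in alive:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

def decompose(adj):
    """Sketch of the process in Lemma [l:dec]: as long as some component of
    G - U has a cut vertex, delete it; return (U, final components)."""
    alive, U = set(adj), []
    while True:
        cut = None
        for comp in components(adj, alive):
            for v in comp:
                if len(comp) > 2 and len(components(adj, comp - {v})) > 1:
                    cut = v
                    break
            if cut is not None:
                break
        if cut is None:
            return U, components(adj, alive)
        alive.remove(cut)
        U.append(cut)

# Two 5-cliques glued at vertex 4: the only cut vertex is 4, and G - {4}
# splits into two 2-connected components (copies of K_4).
adj = {v: set() for v in range(9)}
for clique in (range(0, 5), range(4, 9)):
    for u in clique:
        for w in clique:
            if u != w:
                adj[u].add(w)
U, comps = decompose(adj)
assert U == [4] and sorted(len(c) for c in comps) == [4, 4]
```

The sketch removes an arbitrary cut vertex at each step; the lemma's counting argument shows that, under the stated minimum degree condition, at most $k-2$ deletions can occur.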
The following result is crucial for the final stage of our argument, when we show that each graph $G$ with a large minimum degree, which admits a certain block-like decomposition into small 2-connected subgraphs, cannot be too large.
\[l:V\] For a given set $V$ and positive integers $\ell, s,t,n\ge 2$, satisfying $(t-1)(2\ell-1)\le n-1 < t(2\ell-1)$, let $V_1, V_2, \dots, V_s$ be subsets of $V$ such that
1. \[it:1\] $V = V_1\cup V_2\cup \dots \cup V_s$,
2. \[it:2\] $|V_i|\le 2\ell-1$ for $i=1,2,\dots,s$,
3. \[it:3\] $|V\setminus V_i|\le n-1$ for $i=1,2,\dots,s$,
4. \[it:4\] $|V_1| + |V_2| + \cdots + |V_s| \le |V| + s -1$.
Then $$|V|\le f_t(\ell,n)= \max \{t(2\ell-1),n+\lfloor (n-1)/t\rfloor\}\,.$$
Note first that if $s\le t$, then \[it:1\] and \[it:2\] imply that $|V|\le t(2\ell-1)$. Thus, let us assume that $s\ge t+1$. Then, $$\begin{aligned}
s(n-1)
&\overset{\textrm{\ref{it:3}}}{\ge}&
\sum_{i=1}^s |V\setminus V_i|=s|V| - (|V_1|+|V_2|+\dots+|V_s|) \\
&\overset{\textrm{\ref{it:4}}}{\ge }&
s|V| - (|V|+s-1) = (s-1)|V| - (s-1),
\end{aligned}$$ and thereby $$|V| \le \frac{s}{s-1}(n-1) +1 = n + \frac{n-1}{s-1}\le n+ \frac{n-1}{t}.$$ Since $|V|$ is an integer, the assertion follows.
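The closing estimate can be checked mechanically: for every $s\ge t+1$ the derived bound $n+(n-1)/(s-1)$ rounds down to at most $n+\lfloor(n-1)/t\rfloor$. A small exact-arithmetic sketch (Python, for illustration only):

```python
from fractions import Fraction
import math

# For s >= t+1 the proof gives |V| <= s/(s-1)*(n-1) + 1 = n + (n-1)/(s-1);
# since |V| is an integer and s - 1 >= t, this is at most n + floor((n-1)/t).
for t in range(2, 10):
    for n in range(2, 200):
        for s in range(t + 1, 3 * t + 2):
            bound = Fraction(s, s - 1) * (n - 1) + 1
            assert math.floor(bound) <= n + (n - 1) // t
```

Using `Fraction` avoids floating-point rounding at the floor step; the check exercises the identity $\frac{s}{s-1}(n-1)+1 = n + \frac{n-1}{s-1}$ used in the proof.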
Since we have already bounded $R(C_{2\ell},K_{1,n})$ from below in Section \[sec:lower\], we are left with the task of showing that $$\label{l:up}
R(C_{2\ell},K_{1,n}) \le f_t(\ell,n)+1.$$ For this purpose, let $t\ge 2$, $$\label{eq:el}
\ell \ge (19.1t)^9> 360(t+1)^4\cdot K(t+1),$$ where $K(t+1) = 24\cdot 10^6 (t+1)^5$ is the function defined in \[eq:ks\], and $$(t-1)(2\ell-1) \le n-1 < t(2\ell-1).$$ Moreover, let $G=(V,E)$ be a $C_{2\ell}$-free graph on $$\label{eq:Nf}
|V|
=f_t(\ell,n)+1$$ vertices such that $\overline{G} \nsupseteq K_{1,n}$ (or equivalently, $\Delta(\overline{G}) \le n-1$).
Recall that $f_t(\ell, n) = \max\{t(2\ell-1), n+ \lfloor (n-1)/t\rfloor \}$ and observe that $$\label{eq:estimate}
(n-1)+\frac{ t(2\ell-1)}{t+1} <
f_t(\ell,n) < (t+1)(2\ell-1)\,.$$ Indeed, the upper bound follows immediately from the fact that $n-1 < t(2\ell - 1)$, so it is enough to verify the lower bound for $f_t(\ell,n)$. If $$(n-1) + \frac{t(2\ell-1)}{t+1} < {t(2\ell-1)},$$ then we are done. Otherwise we have $$\frac{t(2\ell-1)}{t+1}\le\frac{n-1}{t}$$ and, since $\lfloor (n-1)/t\rfloor > (n-1)/t - 1$, we get $$f_t(\ell,n)\ge n+\Big\lfloor \frac{n-1}{t}\Big\rfloor > (n-1)+\frac{t(2\ell-1)}{t+1},$$ so the lower bound holds in this case as well.
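Both numeric claims above, the constant inequality in the choice of $\ell$ and the two-sided estimate for $f_t(\ell,n)$, are easy to verify by exact computation. An illustrative Python check (ours, not part of the proof; $19.1$ is handled as $191/10$ to keep the arithmetic exact):

```python
from fractions import Fraction

def f(t, ell, n):
    """f_t(ell, n) = max{t(2l-1), n + floor((n-1)/t)}."""
    return max(t * (2 * ell - 1), n + (n - 1) // t)

# (19.1 t)^9 > 360 (t+1)^4 * K(t+1), where K(c) = 24 * 10^6 c^5,
# checked exactly over integers: (191 t)^9 > 360*24*10^6 * (t+1)^9 * 10^9.
for t in range(2, 60):
    assert (191 * t) ** 9 > 360 * 24 * 10**6 * (t + 1) ** 9 * 10**9

# (n-1) + t(2l-1)/(t+1) < f_t(l, n) < (t+1)(2l-1)
# for all n with (t-1)(2l-1) <= n-1 < t(2l-1).
for t in range(2, 8):
    for ell in range(2, 12):
        for n in range((t - 1) * (2 * ell - 1) + 1, t * (2 * ell - 1) + 1):
            lo = (n - 1) + Fraction(t * (2 * ell - 1), t + 1)
            assert lo < f(t, ell, n) < (t + 1) * (2 * ell - 1)
```

The first inequality is tight at $t=2$ (the two sides differ by under two percent), which explains the seemingly arbitrary constant $19.1$.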
Our aim is to show that $G$ contains a family of 2-connected subgraphs $G_i=(V_i,E_i)$, $i=1,2,\dots,s$, such that their vertex sets fulfil the conditions \[it:1\]-\[it:4\] listed in Lemma \[l:V\]. We first apply Lemma \[l:dec\] to $G$ with $k = \frac{(t+1)^2+1}{t}$. We are allowed to do this, because \[eq:Nf\] and \[eq:estimate\] tell us that $$\label{eq:dd}
\begin{aligned}
\delta(G) &= |V|-1 - \Delta(\overline{G}) \ge f_t(\ell,n)-(n-1)> \frac{t(2\ell-1)}{t+1}\ge \frac{t|V|}{(t+1)^2}
\end{aligned}$$ Moreover, both $|V|$ and $\ell$ are much larger than $t$; in particular, $|V|\ge 2\ell> (19.1t)^9$. Hence, $$\delta(G) \ge \frac{t|V|}{(t+1)^2}
>
\frac{t}{(t+1)^2+1}|V| + \frac{(t+1)^2+1}{t}$$ and the assumptions of Lemma \[l:dec\] hold with $k= \frac{(t+1)^2+1}{t}\le t+3$. Thus, there exist $s\le t+2$ and a set of vertices $U\subset V$, $|U|\le s-1$, such that $G-U$ is a union of $s$ vertex-disjoint, 2-connected graphs $G'_i=(V'_i, E'_i)$. Note that, since $|U|\le t+1$ and $\ell > 4K(t+1)$, $$\label{eq:deltaup}
\delta(G'_i) \ge \delta(G)-|U| > \frac{2(2\ell-1)}{3} - (t+1) > \ell + K(t+1).$$ Moreover, clearly, $|V'_i|\le |V| <(t+1)2\ell$, so Lemma \[l:small\] applied to $G'_i$, with $c=t+1$, gives $$\label{eq:Gi}
|V'_i| \le 2\ell - 1 \quad \textrm{for}\quad i=1,2,\dots,s.$$ Now, for every $i=1,2,\dots,s$, we define $$U_i = \{u\in U: \deg_G(u, V'_i) \ge 4t\},\quad V_i = V'_i\cup U_i, \quad \textrm{and}\quad\ G_i = G[V_i].$$ We will show that the sets $V_1, V_2, \dots, V_s$ satisfy the conditions \[it:1\]-\[it:4\] of the hypothesis of Lemma \[l:V\].
In order to verify \[it:1\] observe that since the minimum degree of $G$ is large, i.e. $\delta(G)\ge 8t^2$, every vertex $u\in U$ belongs to at least one of the sets $U_i$, and therefore $V = V_1 \cup V_2 \cup \dots\cup V_s$.
To prove that $|V_i|\le 2\ell -1$, let us assume that $|V_i|\ge 2\ell$. Now take any subset $\hat U_i$ of $U_i$ with $|\hat U_i| = 2\ell - |V_i'|$ elements and set $H_i=G[V'_i\cup \hat U_i]$. Note that $H_i$ has $2\ell$ vertices. We will argue that $H_i$ is hamiltonian. To this end, consider the closure of $H_i$. From \[eq:deltaup\] we know that all vertices from $V'_i$ have degree at least $\delta(G'_i)> \ell+K(t+1)$, so in the closure of $H_i$ the set $V'_i$ spans a clique of size at least $2\ell-|U|\ge 2\ell -t-1$. On the other hand, each vertex from $\hat U_i$ has at least $4t$ neighbours in $V'_i$, so the closure of $H_i$ is the complete graph and therefore, by Lemma \[l:BC\], $H_i$ is hamiltonian. However, this means that $C_{2\ell}\subseteq H_i\subseteq G$, which contradicts our assumption that $G$ is $C_{2\ell}$-free. Consequently, for every $i=1,2,\dots, s$, we have $|V_i|\le 2\ell-1$, as required by \[it:2\].
Note that from \[eq:deltaup\] it follows that $|V'_i|> \delta(G'_i)> \ell$. Since $U\setminus U_i$ sends at most $4t|U|\le 4t(t+1)<\ell$ edges to the set $V'_i$, there exists a vertex $v_i\in V'_i\subseteq V_i$ which has all its neighbours in $G_i$. Since $\overline{G}\not\supseteq K_{1,n}$, this means that the set $V\setminus V_i$, which contains only vertices not adjacent to $v_i$, has at most $n-1$ elements, and so \[it:3\] holds.
Finally, to verify \[it:4\] consider an auxiliary bipartite graph $F=(V_F, E_F)$, where $V_F=\{V'_1, V'_2, \dots, V'_s\}\cup U$ and $$E_F=\{uV'_i:u\in U_i\}.$$ We claim that $F$ is a forest. Indeed, assume for the sake of contradiction that $F$ contains a cycle $C=V'_{i_1}u_{j_1}\dots V'_{i_w}u_{j_w}V'_{i_{w+1}}$, ${i_1} = {i_{w+1}}$. Observe that every vertex $u_{j_x}$, $x= 1,2,\dots,w$, has at least two neighbours in each of the sets $V'_{i_x}$ and $V'_{i_{x+1}}$. Moreover, $\delta(G'_i) > \ell+1$ and $|V'_i| \le 2\ell - 1$, so from Lemma \[l:W\] it follows that any two vertices of $V'_i$ can be connected by a path of length $y$ for every $y=2,3,\dots, |V'_i|-1$. Therefore, since $w\le |U|\le t+1\le \ell/4$, the existence of $C$ in $F$ implies the existence of a cycle $C_{2\ell}$ in $G$, contradicting the fact that $G$ is $C_{2\ell}$-free.
Since $F$ is a forest it contains at most $|U|+s-1$ edges, i.e. $$\sum_{u\in U}\deg_F(u) \le |U|+s-1.$$ Note that in the sum $|V_1|+|V_2|+\cdots+|V_s|$ each vertex from $\bigcup_i V'_i = V\setminus U$ is counted once, and each vertex $u\in U$ is counted precisely $\deg_F(u)$ times, so $$|V_1|+\dots +|V_s| = |V| -|U|+\sum_{u\in U}\deg_F(u) \le |V|+s-1,$$ as required by \[it:4\].
Now we can apply Lemma \[l:V\] and infer that $|V|\le f_t(\ell,n)$, whereas we have assumed that $|V|=f_t(\ell,n)+1$. This final contradiction completes the proof of the upper bound for $R(C_{2\ell}, K_{1,n})$ and, together with the lower bound from Section \[sec:lower\], concludes the proof of Theorem \[thm1\].
[^1]: The first author was partially supported by National Science Centre, Poland, grant 2017/27/B/ST1/00873. The third author was partially supported by NSFC under grant numbers 11601527, 11971011, and 11801520.
---
abstract: 'Let $A$ and $B$ be commutative rings with unity, $f:A\to B$ a ring homomorphism and $J$ an ideal of $B$. Then the subring $A\bowtie^fJ:=\{(a,f(a)+j)|a\in A$ and $j\in J\}$ of $A\times B$ is called the amalgamation of $A$ with $B$ along $J$ with respect to $f$. In this paper, we study the property of Cohen-Macaulayness in the sense of ideals, introduced by Asgharzadeh and Tousi as a generalization of the usual Cohen-Macaulay property (in the Noetherian case), on the ring $A\bowtie^fJ$. Among other things, we obtain a generalization of the well-known result characterizing when Nagata’s idealization is Cohen-Macaulay.'
address:
- 'Department of Mathematics, University of Tabriz, Tabriz, Iran.'
- 'Department of Mathematics, University of Tabriz, Tabriz, Iran.'
- 'Department of Mathematics, University of Tabriz, Tabriz, Iran.'
author:
- 'Y. Azimi, P. Sahandi, and N. Shirmohammadi'
title: 'Cohen-Macaulay properties under the amalgamated construction'
---
Introduction {#intro}
============
The theory of Cohen-Macaulay rings is a major area of study in commutative algebra and algebraic geometry. Since its appearance, the notion of Cohen-Macaulayness has admitted a rich theory in commutative *Noetherian* rings. There have been attempts to extend this notion to commutative *non-Noetherian* rings ever since Glaz raised the question of whether there exists a generalization of the notion of Cohen-Macaulayness, with certain desirable properties, to non-Noetherian rings [@G0], [@G]. In order to provide an answer to the question of Glaz [@G Page 220], several notions of Cohen-Macaulayness for non-Noetherian rings and modules were recently introduced in [@H], [@HM], and [@AT]. Among them is Cohen-Macaulayness in the sense of $\mathcal{A}$, introduced by Asgharzadeh and Tousi [@AT], where $\mathcal{A}$ is a non-empty subclass of the ideals of a commutative ring (the definition will be given in Section 2).
In [@DFF] and [@DFF2], D’Anna, Finocchiaro, and Fontana have introduced the following new ring construction. Let $A$ and $B$ be commutative rings with unity, let $J$ be an ideal of $B$ and let $f:A\to B$ be a ring homomorphism. The *amalgamation of $A$ with $B$ along $J$ with respect to $f$* is the following subring $$A\bowtie^fJ:=\{(a,f(a)+j)|a\in A\text{ and }j\in J\}$$ of $A\times
B$. This construction generalizes the amalgamated duplication of a ring along an ideal (introduced and studied in [@D], [@DF]). Moreover, several classical constructions such as the Nagata’s idealization (cf. [@Na page 2], [@Hu Chapter VI, Section 25]), the $A + XB[X]$ and the $A+XB[[X]]$ constructions can be studied as particular cases of this new construction (see [@DFF Examples 2.5 and 2.6]).
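For a concrete feel for the construction, one can enumerate $A\bowtie^fJ$ for small finite rings and verify directly that it is a subring of $A\times B$. The following toy Python illustration (our choices of $A=\mathbb{Z}/8$, $B=\mathbb{Z}/4$, $f$ the reduction map, and $J=2\mathbb{Z}/4$ are for illustration only, not from the paper):

```python
from itertools import product

def amalgamation(n, m, J):
    """A = Z/n, B = Z/m with m | n, f the reduction map, and J an ideal of
    B given as a set of residues. Returns A |><|^f J as a set of pairs."""
    assert n % m == 0
    f = lambda a: a % m
    return {(a, (f(a) + j) % m) for a in range(n) for j in J}

def is_subring(R, n, m):
    """Brute-force check that R is a unital subring of Z/n x Z/m."""
    if (1 % n, 1 % m) not in R:
        return False
    for (a, b), (c, d) in product(R, repeat=2):
        if ((a + c) % n, (b + d) % m) not in R:
            return False
        if ((a - c) % n, (b - d) % m) not in R:
            return False
        if ((a * c) % n, (b * d) % m) not in R:
            return False
    return True

R = amalgamation(8, 4, J={0, 2})   # J = 2Z/4, an ideal of Z/4
assert is_subring(R, 8, 4)
assert len(R) == 16                # here |A |><|^f J| = |A| * |J|
```

With $J=\{0\}$ the construction collapses to the graph of $f$, a copy of $A$, which matches the general fact that $A\bowtie^fJ$ contains $\iota_A(A)$ as a subring.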
Below, we review briefly some known results about the behavior of Cohen-Macaulayness under the amalgamated construction and its particular cases.
Let $M$ be an $A$-module. In 1955, Nagata introduced a ring extension of $A$ called the *trivial extension* of $A$ by $M$ (or the *idealization* of $M$ in $A$), denoted here by $A\ltimes M$. Now, assume that $A$ is Noetherian local and that $M$ is finitely generated. It is well known that the trivial extension $A\ltimes M$ is Cohen-Macaulay if and only if $A$ is Cohen-Macaulay and $M$ is maximal Cohen-Macaulay, see [@AW Corollary 4.14].
Let $A$ be a Noetherian local ring and $I$ be an ideal of $A$. Consider the amalgamated duplication $A\bowtie I:=\{(a,a+i)|a\in
A\text{ and }i\in I\}$ as in [@D], [@DF]. The properties of being Cohen-Macaulay, generalized Cohen-Macaulay, Gorenstein, quasi-Gorenstein, $(S_n)$, $(R_n)$ and normality under the construction of amalgamated duplication were studied further in many research papers such as [@D], [@DFF1], [@BSTY], and [@SSh].
In [@DFF1], under the condition that $A$ is Cohen-Macaulay (Noetherian local) and $J$ is a finitely generated $A$-module, it is observed that $A\bowtie^f J$ is a Cohen-Macaulay ring if and only if it is a Cohen-Macaulay $A$-module if and only if $J$ is a maximal Cohen-Macaulay module. Then, in [@SSS], assuming $(A,{\mathfrak m})$ is Noetherian local, $J$ is contained in the Jacobson radical of $B$ such that ${\operatorname{depth}}_AJ<\infty$ and that $f^{-1}({\mathfrak q})\neq{\mathfrak m}$, for each ${\mathfrak q}\in{\operatorname{Spec}}(B)\backslash V(J)$, it is shown that $A\bowtie^f J$ is Cohen-Macaulay if and only if $A$ is Cohen-Macaulay and $J$ is a big Cohen-Macaulay $A$-module (i.e. ${\operatorname{depth}}_AJ=\dim A$).
The next natural step is to seek when the amalgamated algebra $A\bowtie^fJ$ is Cohen-Macaulay without Noetherian assumption.
In this paper, we investigate the property of Cohen-Macaulay in the sense of ideals (resp. maximal ideals, finitely generated ideals) on the amalgamation. More precisely, in Section 2, we recall some essential definitions and results on which we base our approach. In Section 3, we fix our notation and give some elementary results on the behavior of the Koszul grade with respect to amalgamation. In Section 4, we give some necessary and sufficient conditions for the amalgamated algebra $A\bowtie^f J$ to be Cohen-Macaulay in the sense of ideals (resp. maximal ideals, finitely generated ideals) (Theorems \[m\], \[jzirenil\] and \[gd\]). Among the applications of our results is the characterization of when the trivial extension $A\ltimes M$ and the amalgamated duplication $A\bowtie I$ are Cohen-Macaulay in the sense of ideals (Corollaries \[t\] and \[d\]).
Preliminaries
=============
To facilitate the reading of the paper, we recall in this section some preliminary definitions and properties to be used later.
Let ${\mathfrak b}$ be a finitely generated ideal of a commutative ring $A$ and $M$ be an $A$-module. Assume that ${\mathfrak b}$ is generated by the sequence ${{\bf x}}= x_1,\ldots ,x_\ell$. We denote the Koszul complex related to ${{\bf x}}$ by $\mathbb{K}_{\bullet}({{\bf x}})$. The *Koszul grade of ${\mathfrak b}$ on $M$* is defined by $$\operatorname{K.grade}_A ({\mathfrak b},M):= \inf \{i\in {\mathbb{N}}\cup \{0\}|H^i( {\operatorname{Hom}}_A(\mathbb{K}_{\bullet}({{\bf x}}),M))\neq 0\}.$$ It follows from [@BH98 Corollary 1.6.22] and [@BH98 Proposition 1.6.10(d)] that this does not depend on the choice of generating sets of ${\mathfrak b}$.
Let ${\mathfrak a}$ be an arbitrary ideal of $A$. One can then define the Koszul grade of ${\mathfrak a}$ on $M$ by setting $$\operatorname{K.grade}_A ({\mathfrak a}, M) := \sup \{ \operatorname{K.grade}_A ({\mathfrak b}, M)| {\mathfrak b}\text{ is a finitely generated subideal of }{\mathfrak a}\}.$$ In view of [@BH98 Proposition 9.1.2(f)], this definition coincides with the original one for finitely generated ideals. In particular, when $(A,{\mathfrak m})$ is local Noetherian, ${\operatorname{depth}}_AM$ was defined by $\operatorname{K.grade}_A ({\mathfrak m}, M)$ in [@BH98 Section 9.1].
The *Čech grade of ${\mathfrak b}$ on $M$* is defined by $$\operatorname{\check{C}.grade}_A ({\mathfrak b}, M):= \inf \{i\in {\mathbb{N}}\cup \{0\}| H^i_{{{\bf x}}} (M) \neq 0\}.$$ Here $H^i_{{{\bf x}}}(M)$ denotes the $i$-th cohomology of the Čech complex of $M$ related to ${{\bf x}}$. It follows from [@HM Proposition 2.1(e)] that $H^i_{{{\bf x}}}(M)$ is independent of the choice of sequence of generators for ${\mathfrak b}$. One can then define $$\operatorname{\check{C}.grade}_A ({\mathfrak a}, M) := \sup \{ \operatorname{\check{C}.grade}_A ({\mathfrak b}, M)| {\mathfrak b}\text{ is a finitely generated subideal of }{\mathfrak a}\}.$$ By virtue of [@HM Proposition 2.7], one has $\operatorname{\check{C}.grade}_A ({\mathfrak a},
M)=\operatorname{K.grade}_A ({\mathfrak a},M)$.
Let ${\mathfrak p}$ be a prime ideal of $A$. By $\operatorname{ht}_M{\mathfrak p}$, we mean the Krull dimension of the $A_{\mathfrak p}$-module $M_{\mathfrak p}$. Also, $$\operatorname{ht}_M{\mathfrak a}:=\inf \{\operatorname{ht}_M{\mathfrak p}|{\mathfrak p}\in {\operatorname{Supp}}_A(M) \cap V({\mathfrak a}) \}.$$
Let $\mathcal{A}$ be a non-empty subclass of the class of all ideals of the ring $A$ and $M$ be an $A$-module. We say that $M$ is *Cohen-Macaulay in the sense of $\mathcal{A}$* if $\operatorname{ht}_M({\mathfrak a})=\operatorname{K.grade}_A({\mathfrak a},M)$ for all ideals ${\mathfrak a}$ in $\mathcal{A}$, see [@AT Definition 3.1]. The classes we are interested in are the class of all maximal ideals, the class of all ideals and the class of all finitely generated ideals. Assume that $A$ is Noetherian. It is well-known that $A$ is Cohen-Macaulay (in the sense of the original definition in the Noetherian setting) if and only if it is Cohen-Macaulay in the sense of ideals (resp. maximal ideals, finitely generated ideals); see [@BH98 Corollary 2.1.4].
The Koszul grade on amalgamation
================================
Let us fix some notation which we shall use frequently throughout the paper: $A,\ B$ are two commutative rings with unity, $f:A\to B$ is a ring homomorphism, and $J$ denotes an ideal of $B$. Thus $J$ is an $A$-module via the homomorphism $f$. In the sequel, we consider the contraction and extension with respect to the natural embedding $\iota _A: A\to A\bowtie^f J$ defined by $\iota _A
(x)=(x,f(x))$, for every $x\in A$. In particular, for every ideal ${\mathfrak a}$ of $A$, ${\mathfrak a}^e$ means ${\mathfrak a}(A\bowtie^f J)$.
This section is devoted to prove some lemmas on the behavior of the Koszul grade on amalgamation. These lemmas provide the key for some crucial arguments later in this paper. In the proof of the next lemma, we use $H_{i}({{\bf x}},M)$ to denote the $i$th Koszul homology of an $A$-module $M$ with respect to a finite sequence ${{\bf x}}\subset A$.
\[kgr1\] Let the notation and hypotheses be as in the beginning of this section. Then
1. for any finitely generated ideal ${\mathfrak b}$ of $A$, one has the equality $$\operatorname{K.grade}_{A\bowtie^f J} ({\mathfrak b}^e,A\bowtie^f J)= \min
\{\operatorname{K.grade}_A({\mathfrak b},A),\operatorname{K.grade}_A({\mathfrak b},J)\}.$$
2. for any ideal ${\mathfrak a}$ of $A$, one has the inequality $$\operatorname{K.grade}_{A\bowtie^f J} ({\mathfrak a}^e,A\bowtie^f J)\le \min
\{\operatorname{K.grade}_A({\mathfrak a},A),\operatorname{K.grade}_A({\mathfrak a},J)\}.$$
Assume that ${\mathfrak b}$ is a finitely generated ideal of $A$ and that ${\mathfrak b}$ is generated by a finite sequence ${{\bf x}}$ of length $\ell$. Then, using [@AT Proposition 2.2(iv)] together with [@HM Proposition 2.7], we have $$\begin{aligned}
& \operatorname{K.grade}_{A\bowtie^f J} ({\mathfrak b}^e,A\bowtie^f J)\\
=&\operatorname{K.grade}_A ({\mathfrak b},A\bowtie^f J)\\
=&\sup \{k\ge 0 | H_{\ell -i}({{\bf x}}, A\bowtie^f J)=0\ \text{for all}\ i<k \}\\
=&\sup \{k\ge 0 | H_{\ell -i}({{\bf x}}, A)\oplus H_{\ell -i}({{\bf x}}, J)=0\ \text{for all}\ i<k \}\\
=&\min \{\operatorname{K.grade}_A({\mathfrak b},A),\operatorname{K.grade}_A({\mathfrak b},J)\}.\end{aligned}$$ For the third equality, one notices that the amalgamation $A\bowtie^f J$, as an $A$-module, is isomorphic to the direct sum of $A\oplus J$ using [@DFF Lemma 2.3(4)]. This proves (1). To obtain (2), assume that ${\mathfrak a}$ is an ideal of $A$. Let $\Sigma$ be the class of all finitely generated subideals of ${\mathfrak a}$. It follows from the definition that $$\begin{aligned}
&\operatorname{K.grade}_A ({\mathfrak a},A\bowtie^f J)\\
=&\sup \{ \operatorname{K.grade}_A ({\mathfrak b},A\bowtie^f J) | {\mathfrak b}\in \Sigma \}\\
=& \sup \{ \min \{\operatorname{K.grade}_A({\mathfrak b},A),\operatorname{K.grade}_A({\mathfrak b},J)\} | {\mathfrak b}\in \Sigma \}\\
\le& \min \{\sup \{\operatorname{K.grade}_A({\mathfrak b},A)| {\mathfrak b}\in \Sigma \}, \sup \{\operatorname{K.grade}_A({\mathfrak b},J)| {\mathfrak b}\in \Sigma \}\}\\
=&\min \{\operatorname{K.grade}_A({\mathfrak a},A),\operatorname{K.grade}_A({\mathfrak a},J)\}.\end{aligned}$$ Again, using this in conjunction with [@AT Proposition 2.2(iv)], one deduces that $$\begin{aligned}
\operatorname{K.grade}_{A\bowtie^f J}({\mathfrak a}^e,A\bowtie^f J)
&=\operatorname{K.grade}_A({\mathfrak a},A\bowtie^f J)\\
&\le \min \{\operatorname{K.grade}_A({\mathfrak a},A),\operatorname{K.grade}_A({\mathfrak a},J)\}.\end{aligned}$$
\[kgr\] Assume that $A$ is Cohen-Macaulay in the sense of (finitely generated) ideals and $\operatorname{K.grade}_A({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every (finitely generated) ideal ${\mathfrak a}$ of $A$. Then $$\operatorname{K.grade}_{A\bowtie^f J} ({\mathfrak a}^e,A\bowtie^f J)=\operatorname{K.grade}_A({\mathfrak a},A)\le \operatorname{K.grade}_A({\mathfrak a},J)$$ for any (finitely generated) ideal ${\mathfrak a}$ of $A$.
Assume that ${\mathfrak a}$ is a (finitely generated) ideal of $A$ and let $\Sigma$ be the class of all finitely generated subideals of ${\mathfrak a}$. Then, as in the proof of Lemma \[kgr1\], again, using [@AT Proposition 2.2(iv)], we have $$\begin{aligned}
&\operatorname{K.grade}_{A\bowtie^f J}({\mathfrak a}^e,A\bowtie^f J)\\
=&\operatorname{K.grade}_A ({\mathfrak a},A\bowtie^f J)\\
=&\sup \{ \operatorname{K.grade}_A ({\mathfrak b},A\bowtie^f J) | {\mathfrak b}\in \Sigma \}\\
=&\sup \{ \min \{\operatorname{K.grade}_A ({\mathfrak b},A),\operatorname{K.grade}_A ({\mathfrak b},J)\} | {\mathfrak b}\in \Sigma \}\\
=&\sup \{ \operatorname{K.grade}_A ({\mathfrak b},A) | {\mathfrak b}\in \Sigma \}\\
=&\operatorname{K.grade}_A ({\mathfrak a},A).\end{aligned}$$ The fourth equality follows from [@AT Lemma 3.2] and our assumption. This completes the proof.
The following lemma is a slight modification of [@AT Lemma 3.2].
\[tamime 3.2\] Let ${\mathfrak a}$ be an ideal of $A$ and $M$ be an $A$-module.
1. Let $A$ be quasi-local with the maximal ideal ${\mathfrak m}$. If $\operatorname{K.grade}_A
({\mathfrak m}, M)<\infty$, then $\operatorname{K.grade}_A ({\mathfrak m}, M)\leq\dim A$.
2. If, for every minimal prime ideal ${\mathfrak p}$ over ${\mathfrak a}$, $\operatorname{K.grade}_A({\mathfrak p}R_{\mathfrak p}, M_{\mathfrak p})< \infty$ (e.g. when $M$ is finitely generated), then $\operatorname{K.grade}_A({\mathfrak a}, M)\le \operatorname{ht}{\mathfrak a}.$
\(1) Using [@HM Proposition 2.7], it is enough to show that $\operatorname{\check{C}.grade}_A ({\mathfrak m}, M)\leq\dim A$. In order to prove this, assume that $\dim A<\infty$ and let ${\bf x}$ be a finite sequence of elements in ${\mathfrak m}$. It follows from [@HM Proposition 2.4] that $\operatorname{\check{C}.grade}_A({\bf x},M)\le \dim A$. Therefore $\operatorname{\check{C}.grade}_A({\mathfrak m}, M)\leq\dim A$.

(2) Notice, by [@AT Proposition 2.2(iii)], that $\operatorname{K.grade}_A({\mathfrak a}, M)< \infty$. Then, by [@AT Proposition 2.2(ii) and (iii)], one may assume that $A$ is quasi-local with the maximal ideal ${\mathfrak m}$. Now (1) completes the proof.
Main results
============
Assume that $A$ is Noetherian local, and that $J$ is contained in the Jacobson radical of $B$ and it is a finitely generated $A$-module. Recall that a finitely generated module $M$ over $A$ is called a *maximal Cohen-Macaulay $A$-module* if ${\operatorname{depth}}_AM=\dim
A$. Note that, in this circumstance, ${\operatorname{depth}}_AM$ equals the common length of the maximal $M$-regular sequences in the maximal ideal of $A$. In [@SSS Corollary 2.5], it is shown that $A\bowtie^f J$ is Cohen-Macaulay if and only if $A$ is Cohen-Macaulay and $J$ is a maximal Cohen-Macaulay $A$-module. Our first main result improves this corollary by removing the Noetherian assumption.
The reader should be aware that when we say $A\bowtie^f J$ is Cohen-Macaulay in the sense of a non-empty class of ideals, we mean $A\bowtie^f J$ is Cohen-Macaulay as a ring.
\[m\] Assume that $(A,{\mathfrak m})$ is quasi-local such that ${\mathfrak m}$ is finitely generated. Assume that $J$ is contained in the Jacobson radical of $B$ and it is finitely generated as an $A$-module. Then $A\bowtie^f
J$ is Cohen-Macaulay (ring) in the sense of maximal ideals if and only if $A$ is Cohen-Macaulay in the sense of maximal ideals and $\operatorname{K.grade}_A({\mathfrak m},J)=\dim A$.
Assume that ${\mathfrak m}$ is generated by the sequence ${\bf
a}=a_1,\ldots,a_n$ and that $J$ is generated by the sequence ${{\bf b}}=b_1,\ldots,b_m$. Hence ${\mathfrak m}^{\prime_f}={\mathfrak m}\bowtie^f J$, the unique maximal ideal of $A\bowtie^f J$ [@DFF1 Corollary 2.7(3)], is generated by the sequence ${\bf
c}=(a_1,f(a_1)),\ldots, (a_n,f(a_n)),(0,b_1),\ldots,(0,b_m)$. Notice, by [@DFF1 Corollary 3.2 and Remark 3.3], that one has $\sqrt{\iota_A({{\bf a}})(A\bowtie^f J)}=\sqrt{{\mathfrak m}(A\bowtie^f
J)}={\mathfrak m}^{\prime_f}={\bf c}(A\bowtie^f J)$. Therefore $$\begin{aligned}
\operatorname{K.grade}_{A\bowtie^f J}({\mathfrak m}^{\prime_f},A\bowtie^f J)
&=\operatorname{\check{C}.grade}_{A\bowtie^f J}({\mathfrak m}^{\prime_f},A\bowtie^f J)\\
& = \inf \{i|H^i_{\bf c}(A\bowtie^f J) \neq 0\}\\
& = \inf \{i|H^i_{\iota_A({{\bf a}})}(A\bowtie^f J) \neq 0\} \\
& = \inf \{i|H^i_{{{\bf a}}}(A\bowtie^f J) \neq 0\} \\
& = \inf \{i|H^i_{{{\bf a}}}(A)\oplus H^i_{{{\bf a}}}(J) \neq 0\} \\
& = \min \{\operatorname{\check{C}.grade}_A({\mathfrak m},A),\operatorname{\check{C}.grade}_A({\mathfrak m},J)\}\\
& = \min \{\operatorname{K.grade}_A({\mathfrak m},A),\operatorname{K.grade}_A({\mathfrak m},J)\}.\end{aligned}$$ The first equality holds by [@HM Proposition 2.7], the third follows from [@HM Proposition 2.1(e)] in conjunction with $\sqrt{\iota_A({{\bf a}})(A\bowtie^f J)}={\bf c}(A\bowtie^f J)$, the fourth follows from [@HM Proposition 2.1(f)], and the fifth holds since, as an $A$-module, $A\bowtie^f J\cong A\oplus J$ [@DFF Lemma 2.3(4)].
Consequently, the conclusion follows from the equality $$\operatorname{K.grade}_{A\bowtie^f J}({\mathfrak m}^{\prime_f},A\bowtie^f J)=\min \{\operatorname{K.grade}_A({\mathfrak m},A),\operatorname{K.grade}_A({\mathfrak m},J)\}$$ together with $\dim A\bowtie^f J=\dim A$. This last equality holds since $A\bowtie^f J$ is integral over $A$ (see [@DFF2 Proposition 4.2]).
(See [@SSS Corollary 2.5]) Assume that $A$ is Noetherian local, and that $J$ is contained in the Jacobson radical of $B$ and it is finitely generated as an $A$-module. Then $A\bowtie^f J$ is Cohen-Macaulay (ring) if and only if $A$ is Cohen-Macaulay and $J$ is a maximal Cohen-Macaulay $A$-module.
The key to the next theorem is given by the following elementary lemmas. Their proofs are straightforward, so we omit them. Recall from [@DFF1 Corollary 2.5] that the prime ideals of $A\bowtie^fJ$ are of the type $\overline{{\mathfrak q}}^f$ or ${\mathfrak p}^{\prime_f}$, for ${\mathfrak q}$ varying in ${\operatorname{Spec}}(B)\backslash V(J)$ and ${\mathfrak p}$ in ${\operatorname{Spec}}(A)$, where $$\begin{aligned}
{\mathfrak p}^{\prime_f}:= & {\mathfrak p}\bowtie^fJ:=\{(p,f(p)+j)|p\in {\mathfrak p}, j\in J\}, \\[1ex]
\overline{{\mathfrak q}}^f:= & \{(a,f(a)+j)|a\in A, j\in J, f(a)+j\in {\mathfrak q}\}.\end{aligned}$$
\[e\] Assume that ${\mathfrak a}$ is an ideal of $A$, ${\mathfrak p}$ is a prime ideal of $A$ and that ${\mathfrak q}$ is a prime ideal of $B$. Then
1. ${\mathfrak a}^e\subseteq{\mathfrak p}^{\prime_f}$ if and only if ${\mathfrak a}\subseteq{\mathfrak p}$.
2. ${\mathfrak a}^e\subseteq\bar{{\mathfrak q}}^{f}$ if and only if $f({\mathfrak a})\subseteq{\mathfrak q}$.
In the sequel, we use ${\operatorname{Nil}}(B)$ to denote the nil radical of the ring $B$.
\[ht\] Assume that ${\mathfrak a}$ is an ideal of $A$, $J\subseteq {\operatorname{Nil}}(B)$ and that ${\mathfrak p}$ is a prime ideal of $A$. Then
1. ${\mathfrak p}\in{\operatorname{Min}}({\mathfrak a})$ if and only if ${\mathfrak p}^{\prime_f}\in{\operatorname{Min}}({\mathfrak a}^e)$.
2. $\operatorname{ht}{\mathfrak a}=\operatorname{ht}{\mathfrak a}^e$.
3. ${\operatorname{Min}}({\mathfrak p}^e)=\{{\mathfrak p}^{\prime_f}\}$. In particular $\operatorname{ht}{\mathfrak p}^e=\operatorname{ht}{\mathfrak p}^{\prime_f}$.
\[j\] Let $\mathcal{A}$ be a non-empty class of ideals of $A$. Assume that $\operatorname{ht}{\mathfrak a}^e \ge \operatorname{ht}{\mathfrak a}$ for each ${\mathfrak a}\in \mathcal{A}$. If $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of $\mathcal{A}^e:=\{{\mathfrak a}^e|{\mathfrak a}\in\mathcal{A}\}$, then $A$ is Cohen-Macaulay in the sense of $\mathcal{A}$ and $\operatorname{K.grade}_A({\mathfrak a},J)\ge
\operatorname{ht}{\mathfrak a}$ for each ${\mathfrak a}\in \mathcal{A}$.
Assume that ${\mathfrak a}\in \mathcal{A}$. Then, by Lemma \[kgr1\](2), we have $$\begin{aligned}
\operatorname{K.grade}_A({\mathfrak a},A)
&\geq \operatorname{K.grade}_{A\bowtie^f J}({\mathfrak a}^e,A\bowtie^f J)\\
&=\operatorname{ht}{\mathfrak a}^e\\
&\geq\operatorname{ht}{\mathfrak a}\\
&\ge \operatorname{K.grade}_A({\mathfrak a},A).\end{aligned}$$ Thus $\operatorname{K.grade}_A({\mathfrak a},A)=\operatorname{ht}{\mathfrak a}$. This means that $A$ is Cohen-Macaulay in the sense of $\mathcal{A}$. Similarly, one obtains $\operatorname{K.grade}_A({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$.
It is not clear to us whether, in general, the inequality $\operatorname{ht}{\mathfrak a}^e \ge \operatorname{ht}{\mathfrak a}$ holds for each ${\mathfrak a}\in \mathcal{A}$. However, under the assumption $J\subseteq {\operatorname{Nil}}(B)$, one has the equality $\operatorname{ht}{\mathfrak a}^e=\operatorname{ht}{\mathfrak a}$ for each ideal ${\mathfrak a}$ by Lemma \[ht\].
The second main result of the paper is the following theorem.
\[jzirenil\] Assume that $J\subseteq {\operatorname{Nil}}(B)$. Then $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of ideals if and only if $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$.
One implication follows from Proposition \[j\] and Lemma \[ht\](2). Then, to prove the converse, assume that $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A({\mathfrak a},J)\ge
\operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$. Let ${\mathfrak a}$ be an ideal of $A$. First observe, by Lemmas \[kgr\] and \[ht\](2), that $$\begin{aligned}
\operatorname{K.grade}_{A\bowtie^f J}({\mathfrak a}^e,A\bowtie^f J)
&=\operatorname{K.grade}_A ({\mathfrak a},A)\\
&=\operatorname{ht}{\mathfrak a}\\
&=\operatorname{ht}{\mathfrak a}^e.\end{aligned}$$ Now, let $I$ be an arbitrary proper ideal of $A\bowtie^f J$. Then, by [@N Theorem 16 of Chapter 5], there exists a prime ideal $\mathcal{P}$ of $A\bowtie^f J$ containing $I$ such that $\operatorname{K.grade}_{A\bowtie^f J}(I,A\bowtie^f J)=\operatorname{K.grade}_{A\bowtie^f
J}(\mathcal{P},A\bowtie^f J)$. Notice that $\mathcal{P}={\mathfrak p}^{\prime_f}$ for some prime ideal ${\mathfrak p}$ of $A$ by [@DFF1 Corollaries 2.5 and 2.7]. Hence, by Lemma \[ht\](3), one has $$\begin{aligned}
\operatorname{ht}I
&\geq \operatorname{K.grade}_{A\bowtie^f J}(I,A\bowtie^f J)\\
&=\operatorname{K.grade}_{A\bowtie^f J}({\mathfrak p}^{\prime_f},A\bowtie^f J)\\
&\geq \operatorname{K.grade}_{A\bowtie^f J}({\mathfrak p}^e,A\bowtie^f J)\\
&= \operatorname{ht}{\mathfrak p}^e\\
&= \operatorname{ht}{\mathfrak p}^{\prime_f}\\
&\geq \operatorname{ht}I.\end{aligned}$$ Therefore $A\bowtie^f J$ is Cohen-Macaulay in the sense of ideals.
The next example shows that if the hypothesis $J\subseteq {\operatorname{Nil}}(B)$ is dropped from the above theorem, then the corresponding statement may fail.
\[ex\] Let $k$ be a field and let $X,Y$ be algebraically independent indeterminates over $k$. Set $A:=k[[X]]$, $B:=k[[X,Y]]$ and let $J:=(X,Y)$. Let $f: A\to B$ be the inclusion. Note that $A$ is Cohen-Macaulay and $\operatorname{K.grade}_A({\mathfrak a},J)=\operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$. Indeed, if ${\mathfrak a}$ is a non-zero proper ideal of $A$ and $a$ is a non-zero element of ${\mathfrak a}$, then one has $$1\leq\operatorname{K.grade}_A(aA,J)\leq\operatorname{K.grade}_A({\mathfrak a},J)\leq\operatorname{ht}_J{\mathfrak a}\leq\operatorname{ht}{\mathfrak a}\leq1.$$ The first and second inequalities follow from [@BH98 Proposition 9.1.2(a),(f)], respectively; the third follows from Lemma \[tamime 3.2\](ii), and the remaining ones are obvious. However, $A\bowtie^f J$, which is isomorphic to $k[[X,Y,Z]]/\bigl((Y,Z)\cap(X-Y)\bigr)$, is not Cohen-Macaulay.
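For the reader's convenience, here is a standard argument (ours, not part of the original text) for why this quotient fails to be Cohen-Macaulay:

```latex
% The ideal (Y,Z) \cap (X-Y) has two minimal primes of different heights:
\operatorname{Min}\bigl((Y,Z)\cap(X-Y)\bigr)=\{(Y,Z),\;(X-Y)\},\qquad
\operatorname{ht}(Y,Z)=2,\quad \operatorname{ht}(X-Y)=1.
```

Hence the local ring $k[[X,Y,Z]]/\bigl((Y,Z)\cap(X-Y)\bigr)$ has irreducible components of dimensions $1$ and $2$, so it is not equidimensional; since a Cohen-Macaulay local ring is equidimensional (see, e.g., [@BH98 Corollary 2.1.4]), it cannot be Cohen-Macaulay.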
Let $M$ be an $A$-module. Then $A\ltimes M$ denotes the *trivial extension* of $A$ by $M$. It should be noted that $0\ltimes M$ is an ideal in $A\ltimes M$ and $(0\ltimes M)^2=0$. As in [@DFF Example 2.8], if $B:=A\ltimes M$, $J:=0\ltimes M$, and $f:A\to B$ is the natural embedding, then $A\bowtie^f J\cong A\ltimes M$. Hence the next result follows from Theorem \[jzirenil\]. With it, we not only offer an application of Theorem \[jzirenil\], but also provide a generalization of the well-known characterization of when the trivial extension is Cohen-Macaulay in the Noetherian (local) case; see [@AW Corollary 4.14].
\[t\] Let $M$ be an $A$-module. Then $A\ltimes M$ is Cohen-Macaulay (ring) in the sense of ideals if and only if $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A ({\mathfrak a},M)\ge \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$.
Assume that $A$ is Noetherian. In [@SSS Corollary 2.7], the authors showed that $A$ is Cohen-Macaulay whenever $A\bowtie^f J$ is Cohen-Macaulay, provided that $f^{-1}({\mathfrak q})\neq {\mathfrak m}$ for each ${\mathfrak q}\in{\operatorname{Spec}}(B)\backslash V(J)$ and each ${\mathfrak m}\in{\operatorname{Max}}(A)$. In the following corollary we improve the conclusion of that result in the case where $J\subseteq {\operatorname{Nil}}(B)$.
Assume that $A$ is Noetherian and $M$ is a finitely generated $A$-module. It can be seen that $\operatorname{ht}{\mathfrak a}\leq{\operatorname{grade}}_A({\mathfrak a},M)(=\operatorname{K.grade}_A({\mathfrak a},M))$ for every ideal ${\mathfrak a}$ of $A$ if and only if $M_{{\mathfrak p}}$ is maximal Cohen-Macaulay for every prime ideal ${\mathfrak p}\in{\operatorname{Supp}}_A(M)$. Indeed, assume that $M_{{\mathfrak p}}$ is maximal Cohen-Macaulay for every prime ideal ${\mathfrak p}\in{\operatorname{Supp}}_A(M)$, and that ${\mathfrak a}$ is an ideal of $A$. There is nothing to prove if ${\mathfrak a}M=M$, since in this case ${\operatorname{grade}}_A({\mathfrak a},M)=\infty$. So assume that ${\mathfrak a}M\neq M$. Then using [@BH98 Proposition 1.2.10(a)], there is a prime ideal ${\mathfrak p}$ containing ${\mathfrak a}$ such that ${\operatorname{grade}}_A({\mathfrak a},M)={\operatorname{depth}}M_{{\mathfrak p}}$. Hence by assumption one has ${\operatorname{grade}}_A({\mathfrak a},M)={\operatorname{depth}}M_{{\mathfrak p}}=\dim A_{{\mathfrak p}}=\operatorname{ht}{\mathfrak p}\geq \operatorname{ht}{\mathfrak a}$. To prove the converse, assume that ${\mathfrak p}\in{\operatorname{Supp}}_A(M)$. Then again in view of [@BH98 Proposition 1.2.10(a)], one has $\dim A_{{\mathfrak p}}=\operatorname{ht}{\mathfrak p}\leq{\operatorname{grade}}_A({\mathfrak p},M)\leq{\operatorname{depth}}M_{{\mathfrak p}}$. Thus $M_{{\mathfrak p}}$ is maximal Cohen-Macaulay.
Assume that $A$ is Noetherian and that $J\subseteq {\operatorname{Nil}}(B)$ is finitely generated as an $A$-module. Then $A\bowtie^f J$ is Cohen-Macaulay if and only if $A$ is Cohen-Macaulay and $J_{{\mathfrak p}}$ is maximal Cohen-Macaulay for every prime ideal ${\mathfrak p}\in{\operatorname{Supp}}_A(J)$.
The next proposition provides another necessary and sufficient condition for $A\bowtie^f J$ to be Cohen-Macaulay in the sense of ideals.
\[int\] With the notation and hypotheses of the beginning of Section 3, one has
1. Let $\mathcal{A}$ be a non-empty class of ideals of $A$. Assume that $\operatorname{ht}f^{-1}({\mathfrak q})\leq\operatorname{ht}{\mathfrak q}$ for every ${\mathfrak q}\in{\operatorname{Spec}}(B)\backslash V(J)$. If $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of $\mathcal{A}^e:=\{{\mathfrak a}^e|{\mathfrak a}\in\mathcal{A}\}$, then $A$ is Cohen-Macaulay in the sense of $\mathcal{A}$ and $\operatorname{K.grade}_A
({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every ${\mathfrak a}\in\mathcal{A}$.
2. Assume that $\operatorname{ht}\mathcal{P}\leq\operatorname{ht}\mathcal{P}^c$ for every $\mathcal{P}\in{\operatorname{Spec}}(A\bowtie^f J)$, where the contraction $\mathcal{P}^c$ is given with respect to $\iota_A$. If $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A
({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$, then $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of ideals.
\(1) Assume that $A\bowtie^f J$ is Cohen-Macaulay ring in the sense of $\mathcal{A}^e$. In order to prove the assertion, by Proposition \[j\], it is enough for us to show that $\operatorname{ht}{\mathfrak a}^e \ge \operatorname{ht}{\mathfrak a}$ for each ideal ${\mathfrak a}\in\mathcal{A}$. To this end, assume that ${\mathfrak a}\in\mathcal{A}$ and that $\mathcal{P}$ is a prime ideal of $A\bowtie^f J$ containing ${\mathfrak a}^e$. In view of [@DFF1 Corollaries 2.5 and 2.7], one has the following three cases to consider.
**Case 1.** If ${\mathcal{P}}={\mathfrak p}^{\prime_f}$ for some prime ideal ${\mathfrak p}$ of $A$ such that $f^{-1}(J)\nsubseteq{\mathfrak p}$, then $$\operatorname{ht}\mathcal{P}=\operatorname{ht}{\mathfrak p}^{\prime_f}=\dim(A\bowtie^f
J)_{{\mathfrak p}^{\prime_f}}=\dim A_{{\mathfrak p}}=\operatorname{ht}{\mathfrak p}\geq\operatorname{ht}{\mathfrak a},$$ by [@DFF1 Proposition 2.9] and Lemma \[e\](1).
**Case 2.** If ${\mathcal{P}}={\mathfrak p}^{\prime_f}$ for some prime ideal ${\mathfrak p}$ of $A$ such that $f^{-1}(J)\subseteq{\mathfrak p}$, then $$\begin{aligned}
\operatorname{ht}\mathcal{P}
&= \operatorname{ht}{\mathfrak p}^{\prime_f}\\
&=\dim(A\bowtie^f J)_{{\mathfrak p}^{\prime_f}}\\
&=\dim(A_{{\mathfrak p}}\bowtie^{f_{{\mathfrak p}}} J_{S_{{\mathfrak p}}})\\
&=\max\{\dim A_{{\mathfrak p}},\dim (f_{\mathfrak p}(A_{\mathfrak p})+J_{S_{\mathfrak p}})\}\\
&\geq \dim A_{{\mathfrak p}}\\
&= \operatorname{ht}{\mathfrak p}\\
&\geq\operatorname{ht}{\mathfrak a},\end{aligned}$$ by [@DFF1 Proposition 2.9], [@DFF2 Proposition 4.1] and Lemma \[e\](1), where $S_{\mathfrak p}:=f(A\backslash{\mathfrak p})+J$.
**Case 3.** If ${\mathcal{P}}=\bar{{\mathfrak q}}^f$ for some prime ideal ${\mathfrak q}$ of $B$, then $$\begin{aligned}
\operatorname{ht}\mathcal{P}
&=\operatorname{ht}\bar{{\mathfrak q}}^f\\
&= \dim(A\bowtie^fJ)_{\bar{{\mathfrak q}}^f}\\
&=\dim B_{{\mathfrak q}}\\
&=\operatorname{ht}{{\mathfrak q}}\\
&\geq \operatorname{ht}f^{-1}({\mathfrak q})\\
&\geq \operatorname{ht}{\mathfrak a}.\end{aligned}$$ The third equality follows by [@DFF1 Proposition 2.9], the first inequality holds by assumption, and the second one follows by Lemma \[e\]. This completes the proof of the first assertion.
\(2) Assume that $A$ is Cohen-Macaulay in the sense of ideals and that $\operatorname{K.grade}_A({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$. As indicated by [@AT Theorem 3.3], it is enough to show that $$\operatorname{K.grade}_{A\bowtie^f J}({\mathcal{P}},A\bowtie^f J)= \operatorname{ht}{\mathcal{P}}$$ for every prime ideal ${\mathcal{P}}$ of $A\bowtie^f J$. Let ${\mathcal{P}}$ be a prime ideal of $A\bowtie^f J$. Then $$\begin{aligned}
\operatorname{ht}{\mathcal{P}}&\leq \operatorname{ht}{\mathcal{P}}^c\\
&=\operatorname{K.grade}_A({\mathcal{P}}^c,A)\\
&=\operatorname{K.grade}_{A\bowtie^f J}({\mathcal{P}}^{ce},A\bowtie^f J)\\
&\le \operatorname{K.grade}_{A\bowtie^f J}({\mathcal{P}},A\bowtie^f J)\\
&\le \operatorname{ht}{\mathcal{P}}.\end{aligned}$$ The first inequality holds by assumption, the second inequality is [@BH98 Proposition 9.1.2(f)], and the last one follows from Lemma \[tamime 3.2\](2); the first equality holds since $A$ is Cohen-Macaulay in the sense of ideals, and the second equality follows from Lemma \[kgr\].
We are now in a position to present our third main result.
\[gd\] With the notation and hypotheses of the beginning of Section 3, the following statements hold:
1. Let $\mathcal{A}$ be a non-empty class of ideals of $A$. Assume that the homomorphism $f:A\to B$ satisfies the going-down property. If $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of $\mathcal{A}^e:=\{{\mathfrak a}^e|{\mathfrak a}\in\mathcal{A}\}$, then $A$ is Cohen-Macaulay in the sense of $\mathcal{A}$ and $\operatorname{K.grade}_A ({\mathfrak a},J)\ge
\operatorname{ht}{\mathfrak a}$ for every ${\mathfrak a}\in\mathcal{A}$.
2. Assume that $\iota _A: A\to A\bowtie^f J$ is an integral ring extension. If $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A ({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$, then $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of ideals.
It is well known that if the homomorphism $f:A\to B$ satisfies the going-down property, then $\operatorname{ht}f^{-1}({\mathfrak q})\leq\operatorname{ht}{\mathfrak q}$ for every ${\mathfrak q}\in{\operatorname{Spec}}(B)$ [@M Exercise 9.9]. In the light of Proposition \[int\], this proves (1). To prove (2), keeping Proposition \[int\] in mind, notice that, for every $\mathcal{P}\in{\operatorname{Spec}}(A\bowtie^f J)$, the inequality $\operatorname{ht}\mathcal{P}\leq\operatorname{ht}\mathcal{P}^c$ holds since $\iota _A: A\to A\bowtie^f J$ is an integral ring extension [@M Exercise 9.8], where the contraction $\mathcal{P}^c$ is taken with respect to $\iota_A$.
Note that Example \[ex\] also shows that we cannot drop the integrality assumption in part (2) of the above theorem.
1. Assume that $A$ is an integral domain with $\dim A\leq1$ and that $B$ is an integral domain containing $A$. Assume that $J$ is an ideal of $B$ which is finitely generated as an $A$-module. Hence, as in Example \[ex\], one has $\operatorname{K.grade}_A({\mathfrak a},J)=\operatorname{ht}{\mathfrak a}$ for every proper ideal ${\mathfrak a}$ of $A$. Notice that $A$ is Cohen-Macaulay in the sense of ideals by [@AT Page 2305]. Therefore $A\bowtie^f J$ is Cohen-Macaulay in the sense of ideals by Theorem \[gd\].
2. To construct a concrete example for (1), set $A:=\mathbb{Q}+X\mathbb{R}[X]$, where $\mathbb{Q}$ is the field of rational numbers, $\mathbb{R}$ is the field of real numbers and $X$ is an indeterminate over $\mathbb{R}$. It is easy to see that $A$ is a one-dimensional domain that is not integrally closed. Put $B:=A[\sqrt{2}]$, which is finitely generated as an $A$-module. Let $J$ be a finitely generated ideal of $B$. Consequently, by (1), $A\bowtie^f J$ is Cohen-Macaulay in the sense of ideals.
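As a quick sanity check (ours, not the authors'), the failure of integral closedness can be seen directly:

```latex
\sqrt{2}=\frac{\sqrt{2}\,X}{X}\in\operatorname{Frac}(A),\qquad
\bigl(\sqrt{2}\bigr)^{2}-2=0,\qquad
\sqrt{2}\notin A=\mathbb{Q}+X\mathbb{R}[X],
```

since every element of $A$ has rational constant term; thus $\sqrt{2}$ is integral over $A$ and lies in its fraction field, but not in $A$.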
3. Assume that $A$ is a valuation domain, $B$ an arbitrary integral domain containing $A$ and that $J$ is an ideal of $B$. Then by [@D1 Corollary 4] and [@D2 Theorem 1], the inclusion homomorphism $f:A\hookrightarrow B$ satisfies the going-down property. Also notice, by [@AT Proposition 3.12], that $A$ is Cohen-Macaulay in the sense of ideals if and only if $\dim A\leq1$. Further, assume that $\dim A>1$. Then $A\bowtie^f J$ can never be Cohen-Macaulay in the sense of ideals by Theorem \[gd\]. In particular, the composite ring extensions $A+XB[X]$ and $A+XB[[X]]$ can never be Cohen-Macaulay in the sense of ideals.
Note that if $J$ is finitely generated as an $A$-module, then $\iota _A: A\to A\bowtie^f J$ is an integral ring extension, and that, in this case, $\operatorname{K.grade}_A({\mathfrak a}, J)\le \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$ by Lemma \[tamime 3.2\]. Hence the following corollaries are immediate.
Assume that the homomorphism $f:A\to B$ satisfies the going-down property and that $J$ is finitely generated as an $A$-module. Then $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of ideals if and only if $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A ({\mathfrak a},J)= \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$.
\[int1\] Assume that $f:A\to B$ is a monomorphism of integral domains, that $A$ is integrally closed, and that $B$ is integral over $A$. Then $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of ideals if and only if $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$.
By [@M Theorem 9.4], $f:A\to B$ satisfies the going-down property. Also, $\iota _A: A\to A\bowtie^f J$ is an integral ring extension by assumption and [@DFF2 Lemma 3.6].
\[int2\] Assume that $f:A\to B$ is a flat and integral homomorphism. Then $A\bowtie^f J$ is Cohen-Macaulay (ring) in the sense of ideals if and only if $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$.
By [@M Theorem 9.5], $f:A\to B$ satisfies the going-down property. Also, $\iota _A: A\to A\bowtie^f J$ is an integral ring extension by assumption and [@DFF2 Lemma 3.6].
To conclude, we apply Corollary \[int2\] to the amalgamated duplication. Recall that if $f:=id_A$ is the identity homomorphism on $A$ and $J$ is an ideal of $A$, then $A\bowtie J:=A\bowtie^{id_A} J$ is called the amalgamated duplication of $A$ along $J$. Assume that $(A,{\mathfrak m})$ is Noetherian local. In [@D Discussion 10], assuming that $A$ is Cohen-Macaulay, D’Anna showed that $A\bowtie J$ is Cohen-Macaulay if and only if $J$ is maximal Cohen-Macaulay. Later, in [@SSh Corollary 2.7], the authors improved D’Anna’s result: $A\bowtie J$ is Cohen-Macaulay if and only if $A$ is Cohen-Macaulay and $J$ is maximal Cohen-Macaulay. Our final corollary generalizes these results.
\[d\] Let $J$ be an ideal of $A$. Then $A\bowtie J$ is Cohen-Macaulay (ring) in the sense of ideals if and only if $A$ is Cohen-Macaulay in the sense of ideals and $\operatorname{K.grade}_A({\mathfrak a},J)\ge \operatorname{ht}{\mathfrak a}$ for every ideal ${\mathfrak a}$ of $A$.
This immediately follows from Corollary \[int2\], since $f=id_A:A\to A$ is flat and integral.
[**Acknowledgements.**]{} The authors are deeply grateful to the referee for a very careful reading of the manuscript and for many valuable suggestions.
[99]{} D. D. Anderson, M. Winders, [*Idealization of a module*]{}, J. Commut. Algebra, [**1**]{}, (2009), 3–56.
M. Asgharzadeh and M. Tousi, [*On the notion of Cohen-Macaulayness for non-Noetherian rings*]{}, J. Algebra, [**322**]{}, (2009), 2297–2320.
A. Bagheri, M. Salimi, E. Tavasoli and S. Yassemi, [*A construction of quasi-Gorenstein rings*]{}, J. Algebra Appl. [**11**]{}, No. 1, (2012), 1250013, (9 pages).
M. P. Brodmann and R. Y. Sharp, [*Local Cohomology: An Algebraic Introduction with Geometric Applications*]{}, Cambridge Studies in Advanced Mathematics, [**136**]{}, Cambridge University Press, Cambridge, 2013.
W. Bruns and J. Herzog, [*Cohen-Macaulay rings. Rev. ed.*]{} Cambridge Studies in Advanced Mathematics [**39**]{}, Cambridge, Cambridge University Press 1998.
M. D’Anna, [*A construction of Gorenstein rings*]{}, J. Algebra, [**306**]{}, (2006), 507–519.
M. D’Anna, C. A. Finocchiaro, and M. Fontana, [*Amalgamated algebras along an ideal*]{}, in: Commutative Algebra and Applications, Proceedings of the Fifth International Fez Conference on Commutative Algebra and Applications, Fez, Morocco, 2008, W. de Gruyter Publisher, Berlin, 2009, pp. 155–172.
M. D’Anna, C. A. Finocchiaro, and M. Fontana, [*Properties of chains of prime ideals in an amalgamated algebra along an ideal*]{}, J. Pure Appl. Algebra, [**214**]{}, (2010), 1633–1641.
M. D’Anna, C. A. Finocchiaro, and M. Fontana, [*New algebraic properties of an amalgamated algebra along an ideal*]{}, Commun. Alg. [**44**]{}, (2016), 1836–1851.
M. D’Anna and M. Fontana, [*An amalgamated duplication of a ring along an ideal: the basic properties*]{}, J. Algebra Appl. [**6**]{}, No.3, (2007), 443–459.
D. E. Dobbs, [*On going-down for simple overrings*]{}, Proc. Amer. Math. Soc. [**39**]{}, (1973), 515–519.
D. E. Dobbs and I. J. Papick, [*On going-down for simple overrings III*]{}, Proc. Amer. Math. Soc. [**54**]{}, (1976), 35–38.
S. Glaz, [*Coherence regularity and homological dimensions of commutative fixed rings*]{}, in: Ngô Viêt Trung, Aron Simis, Guiseppe Valla (Eds.), Commutative Algebra, World Scientific, Singapore, 1992, pp. 89–106.
S. Glaz, [*Homological dimensions of localizations of polynomial rings*]{}, in: Zero-Dimensional Commutative Rings, Knoxville, TN, 1994, in: Lect. Notes Pure Appl. Math., vol. 171, Marcel Dekker, New York, 1995, pp. 209–222.
T. D. Hamilton, [*Weak Bourbaki unmixed rings: A step towards non-Noetherian Cohen-Macaulayness*]{}, Rocky Mountain J. Math. [**34**]{}, (2004), 963–977.
T. D. Hamilton and T. Marley, [*Non-Noetherian Cohen-Macaulay rings*]{}, J. Algebra, [**307**]{}, (2007), 343–360.
J. Huckaba, [*Commutative Rings with Zero Divisors*]{}, M. Dekker, New York, 1988.
H. Matsumura, [*Commutative Ring Theory*]{}, Cambridge Stud. Adv. Math., vol. [**8**]{}, Cambridge University Press, 1986.
M. Nagata, [*Local Rings*]{}, Interscience, New York, 1962.
D. G. Northcott, [*Finite Free Resolutions*]{}, Cambridge Tracts in Math., vol. [**71**]{}, 1976.
P. Sahandi and N. Shirmohammadi, [*Notes on amalgamated duplication of a ring along an ideal*]{}, Bull. Iranian Math. Soc. [**41**]{}, (2015), 749–757.
P. Sahandi, N. Shirmohammadi and S. Sohrabi, [*Cohen-Macaulay and Gorenstein properties under the amalgamated construction*]{}, Commun. Alg. [**44**]{}, (2016), 1096–1109.
---
abstract: 'Let $\Ome=G/K$ be a bounded symmetric domain in a complex vector space $V$ with the Lebesgue measure $dm(z)$ and the Bergman reproducing kernel $h(z, w)^{-p}$. Let $d\mu_{\a}(z)=h(z, \bar z)^{\a}dm(z)$, $\a>-1$, be the weighted measure on $\Ome$. The group $G$ acts unitarily on the space $L^2(\Ome, \mu_{\a})$ via change of variables together with a multiplier. We consider the discrete parts, also called the relative discrete series, in the irreducible decomposition of the $L^2$-space. Let $\bar D=B(z, \bar z)\partial $ be the invariant Cauchy-Riemann operator. We realize the relative discrete series as the kernels of the power $\bar D^{m+1}$ of the invariant Cauchy-Riemann operator $\bar D$ and thus as nearly holomorphic functions in the sense of Shimura. We prove that, roughly speaking, the operators $\bar D^m$ are intertwining operators from the relative discrete series into the standard modules of holomorphic discrete series (as Bergman spaces of vector-valued holomorphic functions on $\Ome$).'
address:
- 'Department of Mathematics, University of Karlstad, S-651 88 Karlstad, Sweden'
- '(current address) Department of Mathematics, Chalmers University of Technology and Göteborg University, S-412 96 Göteborg, Sweden'
author:
- Genkai Zhang
title: 'Nearly Holomorphic Functions and Relative Discrete Series of Weighted $L^2$-Spaces on Bounded Symmetric Domains'
---
1.40pc
[Introduction]{}
Let $\Ome$ be a bounded symmetric domain in a complex vector space $V$ with the Lebesgue measure $dm(z)$. The Bergman reproducing kernel is, up to a constant, $h(z, \bar w)^{-p}$, where $h(z, \bar w)$ is an irreducible polynomial holomorphic in $z$ and antiholomorphic in $w$. We consider the weighted measure $d\mu_{\a}(z)=h(z, \bar z)^{\a}dm(z)$ for $\a>-1$ and the corresponding $L^2$-space $L^2(\Ome, \mu_{\a})$ on $\Ome$. The group $G$ of biholomorphic mappings of $\Ome$ acts unitarily on the $L^2$-space via change of variables together with a multiplier, and the weighted Bergman space is then an irreducible invariant subspace. The irreducible decomposition of the $L^2$-space under the $G$-action has been given by Shimeno [@Shimeno]. It is proved there abstractly (via identifying the infinitesimal characters) that all the discrete parts (called relative discrete series) appearing in the decomposition are holomorphic discrete series. In this paper we consider their explicit realization.
To illustrate our main results we consider the case of the unit disk. The Bergman reproducing kernel is $(1-z\bar w)^{-2}$, and the weighted measure in question is $d\mu_{\a}(z)=(1-|z|^2)^\a dm(z)$. The group $G=SU(1, 1)$ acts unitarily on $L^2(D, \mu_\a)$ via a projective representation $$\pi_{\nu}(g)f(z)= f(g^{-1}z)(cz +d)^{-\nu}, \quad
g^{-1}=\begin{pmatrix}
a& b\\c&d\end{pmatrix}$$ where $\nu=\a+2$. To study the relative discrete series we introduce the invariant Cauchy-Riemann operator $\bar D=(1-|z|^2)^2\bar \partial $. The operator $\bar D$ intertwines the action $\pi_{\nu}$ with the action $\pi_{\nu-2}$, which can be proved by direct calculation. The kernel $\ker \bar D$ of $\bar D$ on the weighted $L^2$-space is the weighted Bergman space $L^2_a(D, \mu_\a)$ of holomorphic functions, which gives one of the relative discrete series. It is natural to expect that the kernel $\ker \bar D^{m+1}$ of the iterates of $\bar D$ will give us the other relative discrete series. The functions in the kernel $\ker \bar D^{m+1}$ can be written as polynomials in $q(z)=\frac{\bar z}{1-|z|^2}$ of degree $\le m$ with coefficients being holomorphic functions. Such functions, following Shimura, are called *nearly holomorphic functions*. The function $q(z)$ is, in fact, the holomorphic differential of the Kähler potential $\log (1-|z|^2)^{-2}$: indeed $q(z)=\frac 12 \partial_z \log (1-|z|^2)^{-2}$. Moreover it has a Jordan-theoretic meaning as the *quasi-inverse* of $\bar z$ with respect to $z$ in $\bar{\mathbb C}$ with the Jordan triple product $\{\bar u z \bar v\}= 2\bar u z \bar v$. The key result is that each power $q(z)^m=\frac{\bar z^m}{(1-|z|^2)^m}$, for $0\le m< \frac{\a+1}2$, generates a relative discrete series. Denote the corresponding relative discrete series by $A^{2, \a}_m$. Then the operator $\bar D^m$ is an intertwining operator from $A^{2, \a}_m$ into the weighted Bergman space in $L^2(D, \mu_{\a-2m})$, namely $L^2_a(D, \mu_{\a-2m})$. Moreover all relative discrete series are obtained in this way.
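For the unit disk, the claim $q(z)=\frac 12 \partial_z \log (1-|z|^2)^{-2}$ can be checked symbolically. The following sketch (ours, using sympy) treats $z$ and $\bar z$ as independent variables, the standard device for Wirtinger derivatives:

```python
import sympy as sp

# Treat z and its conjugate zbar as independent symbols; then the
# Wirtinger derivative d/dz differentiates in z with zbar held fixed.
z, zbar = sp.symbols('z zbar')

# Kaehler potential of the disk: Psi = log (1 - |z|^2)^{-2}.
Psi = sp.log((1 - z*zbar)**(-2))

# Claimed identity: q(z) = (1/2) dPsi/dz = zbar / (1 - |z|^2).
q = sp.Rational(1, 2) * sp.diff(Psi, z)
assert sp.simplify(q - zbar/(1 - z*zbar)) == 0
```

The assertion passes, confirming the stated formula for $q(z)$.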
When $\Ome=G/K$ is a general bounded symmetric domain the corresponding function $q(z)$, defined as the differential of the Kähler potential, can indeed be expressed in terms of the quasi-inverse in the Jordan triple $V$; see Proposition 3.1. Let $\bar D$ be the invariant Cauchy-Riemann operator. Then it is proved in [@pz-cr] that the iterate $\bar D^m$ maps a function on $\Ome$ to a function with values in the symmetric subtensor space $S_m(V)$ of $\otimes^m V$. Decompose $S_m(V)$ into irreducible subspaces under $K$. Let $\m$ be the signature of an irreducible subspace and $\bar \Del_{\m}$ the highest weight vector in that space, considered as a polynomial function on $V^\prime$. Now the function $q(z)$ is a $V^\prime$-valued function on $\Ome$, so $\bar\Del_{\m}(q(z))$ is a scalar-valued function on $\Ome$. We prove that $\bar\Del_{\m}(q(z))$ is in the space $L^2(\Ome, \mu_{\a})$ when $\m$ satisfies a certain condition; see Proposition 4.1. We further prove that it generates an irreducible subspace, namely a relative discrete series, of which it is the highest weight vector, and that the operators $\bar D^m$ are intertwining operators from the relative discrete series onto the weighted Bergman space of holomorphic functions with values in the irreducible subspace $S_{\m}(V)$ of the symmetric tensor $S_m(V)$, the latter being a standard module of holomorphic discrete series. We thus realize the relative discrete series in the kernel of the power $\bar D^{m+1}$ and as nearly holomorphic functions in the sense of Shimura ([@Shimura-annmath-86] and [@Shimura-mathann-87]).
Finally in the last section we consider as an example the unit ball in $\mathbb C^n$. We calculate directly, via the adjoint operator $\bar D^\ast$, the highest weight vector in the relative discrete series. The realization of the relative discrete series has also been studied in [@pz-cr].
Our results explain geometrically why the relative discrete series are equivalent to the weighted Bergman spaces with values in the *symmetric tensor space* of the tangent space. Moreover, since the highest weight vectors are given quite explicitly, we understand better the analytic nature of the functions in the discrete series. We hope that our result will be helpful in understanding the $L^p$-spectral properties of the irreducible decomposition, for example, the $L^p$-boundedness of the orthogonal projection onto the relative discrete series.
Acknowledgments. {#acknowledgments. .unnumbered}
-----------------
I would like to thank Jaak Peetre and Harald Upmeier for some illuminating discussions. I would also like to thank the Erwin Schrödinger Institute for mathematical physics, Vienna, for providing a stimulating environment.
[Invariant Cauchy-Riemann Operator $\bar D$ and Nearly Holomorphic Functions on Kähler Manifolds]{}
We recall in this section briefly some preliminary results on invariant Cauchy-Riemann operators and nearly holomorphic functions on Kähler manifolds; see [@Shimura-annmath-86], [@Englis-Peetre], [@gz-shimura], and [@gz-invdiff].
Let $\Ome$ be a Kähler manifold with the Kähler metric locally given by the matrix $(h_{i\bar j})$, with $h_{i\bar j}=\frac{\partial^2 \Psi}{\partial z_i\partial \bar z_j}$ for a potential $\Psi$. Let $T^{(1, 0)}$ be its holomorphic tangent bundle. Let $W$ be a Hermitian vector bundle over $\Ome$, and $C^\infty(\Ome, W)$ its smooth sections. The invariant Cauchy-Riemann operator $\bar D$ is locally defined as follows. If $f=\sum_{\a}f_\a e_\a$ is any section of $W$, then $$\bar D f=\sum_{\a, i, j} h^{\bar \jmath i} \frac{\partial f_{\a}} {\partial
{\bar z^j}}\partial_i \otimes e_{\a}.$$ It maps $f\in C^\infty(\Ome, W)$ to $\bar Df\in C^\infty(\Ome, T^{(1, 0)}\otimes W)$. Denote by $S_m(T^{(1, 0)})$ the symmetric tensor subbundle of $\otimes^mT^{(1, 0)}$. We recall some known properties of the operator $\bar D$; see [@pz-cr].
The following assertions hold.
(1)
: The operator $\bar D$ is an intertwining operator: If $g$ is a biholomorphic mapping of $\Ome$, then $$\label{cov-pro}
\bar D(g_W f)=((dg)^{-1}\otimes g_W) Df,$$ where $g_W$ is the induced action of $g$ on sections of $W$ and $dg(z)$: $T^{(1, 0)}_z\mapsto T^{(1, 0)}_{gz}$ is the differential of $g$.
(2)
: The iterate $\bar D^m$ of $\bar D$ maps $C^{\infty}(\Ome, W)$ to $C^{\infty}(\Ome, W\otimes S_m(T^{(1, 0)}))$.
For our later purposes we may assume that $\Ome$ is a domain in a vector space $V$ with coordinates $\{z_j\}$, and that all the bundles are trivial. The space $T^{(1, 0)}_z$ will be identified with $V$. So let $W$ be a vector space and consider the space $C^\infty(\Ome, W)$ of $W$-valued $C^\infty$-functions on $\Ome$.
Let $$q(z)=\partial \Psi =\sum_j \frac{\partial\Psi}{\partial z_j} dz_j.$$ Here $\Psi$ is the Kähler potential, and $\{dz_j\}$ is the dual basis for the holomorphic cotangent space $V^\prime$. Thus $q(z)$ is a function with values in $V^\prime$. Following Shimura [@Shimura-annmath-86] we call a $W$-valued function $f\in C^\infty(\Ome, W)$ nearly holomorphic if $f$ is a polynomial in $q(z)$ with holomorphic coefficients. We denote by $\mathcal N_m$ the space of scalar-valued nearly holomorphic functions that are polynomials of degree $\le m$, namely those functions $f(z)=\sum_{|\underline{\beta}|\le m}c_{\underline{\beta}}(z) q(z)^{\underline{\beta}}$ where the $c_{\underline{\beta}}(z)$ are holomorphic functions.
We denote by $\text{Id}$ the identity tensor in the tensor product $V\otimes V^{\prime}$. By direct calculation we have $$\label{dn}
\bar D q(z)= \text{Id};$$ see [@gz-shimura]. We generalize this formula as follows; the proof is quite straightforward and we omit it.
We have the following differentiation formula $$\label{dnm}
\bar D^m (\otimes^m q(z))= m!\,\text{Id},$$ where $\text{Id}$ on the right-hand side denotes the identity tensor in the tensor product $(S_mV)\otimes S_m (V^{\prime})=
(S_m V)\otimes(S_m V)^{\prime}$.
The formula (\[dn\]) was observed earlier by Shimura [@Shimura-annmath-86] and Peetre [@Peetre-cr-manu]; in the latter paper explicit formulas were given for the Laplace operators on weighted $L^2$-spaces on bounded symmetric domains, where the function $q(z)$ also appears.
We consider the case of the unit disk. The operator is $\bar D=(1-|z|^2)^2\bar \partial$. The function $q(z)$ is $\frac{\bar z}{1-|z|^2}$ (or, more precisely, $\frac{\bar z}{1-|z|^2}\,dz$). The above formula amounts to $$\bar D^m
(\frac{\bar z}{1-|z|^2})^m
=m!,$$ which can be proved by direct calculations. It can also be proved by using the formula $$\bar D^m= (1-|z|^2)^{m+1}
(\frac{\partial}{\partial \bar z})^m
(1-|z|^2)^{m-1},$$ see [@gz-shimura]. Indeed, $$\begin{split}
&\quad \bar D^m(\frac
{\bar z}{1-|z|^2})^m\\
&= (1-|z|^2)^{m+1}
(\frac{\partial}{\partial \bar z})^m
(1-|z|^2)^{m-1}(\frac{\bar z}{1-|z|^2})^m\\
&= (1-|z|^2)^{m+1}
(\frac{\partial}{\partial \bar z})^m
\frac{\bar z^m}{1-|z|^2}\\
&= (1-|z|^2)^{m+1}\sum_{l=0}^m\binom{m}{l} m(m-1)\cdots(m-l+1)\bar z^{m-l}
\frac{(m-l)! z^{m-l}}{(1-|z|^2)^{m-l+1}}\\
&=m!\sum_{l=0}^m\binom{m}{l}(z \bar z)^{m-l}(1-|z|^2)^{l}
\\
&=m!
\end{split}$$ The calculations are somewhat combinatorially intriguing.
Using the above result we immediately get the following characterization of nearly holomorphic functions. This is proved in [@Shimura-annmath-86], Proposition 2.4, for classical domains. It can be proved for all Kähler manifolds by the same methods.
\[ker-d-m\]Consider the operator $\bar D^{m+1}$ on the space $C^{\infty}(D)$ of $C^\infty$-functions on $D$. Then $$\text{Ker}\,\bar D^{m+1} =\mathcal N_m.$$
We recall the identification of polynomial functions with symmetric tensors. This will clarify conceptually some calculations in the next section. There is a pairing $$(\phi, \psi)\in S_m(V)\times S_m(V^\prime)\mapsto [\phi, \psi]\in \mathbb C,$$ between the symmetric tensor spaces $S_m(V)$ and $S_m(V^\prime)$, via the natural pairing between $\otimes^m V$ and $\otimes^m V^\prime$. Now for each element $\phi$ in the symmetric tensor space $S_m(V)$ there corresponds a homogeneous polynomial function of degree $m$ on the space $V^\prime$, also denoted by $\phi$, such that $$\label{pol-sym-iden}
[\phi, v^\prime\otimes v^\prime\otimes \cdots \otimes v^\prime]=\phi(v^\prime)$$ for any $v^\prime\in V^\prime$.
Using this convention we see that a function $f\in C^\infty(\Ome)$ is in $\mathcal N_m$ if and only if there exist holomorphic functions $g_k$ with values in the tensor product $ S_k(V)$, $k=0, 1, \dots, m,$ such that $$\label{exp-nhf}
f(z)=\sum_{k=0}^m g_k(q(z)).$$
[Nearly Holomorphic Functions on Bounded Symmetric Domains]{}
In this section we assume that $\Ome=G/K$ is a bounded symmetric domain of rank $r$ in a complex vector space $V$. Here $G$ is the identity component of the group of biholomorphic mappings of $\Ome$ and $K$ is the isotropy group at $0\in V$. Let $\mathfrak g =\mathfrak k +\mathfrak p$ be the Cartan decomposition of $\mathfrak g$. The space $V$ has a Jordan triple structure by which the space $\mathfrak p$ is explicitly described; see [@Loos-bsd], whose notation and results will be used here. So let $Q(z): \bar V\to V$ be the quadratic operator. Then $$\mathfrak p=\{\xi_v=v-Q(z)\bar v\},$$ the elements being viewed as holomorphic vector fields on $\Ome$. Let $D(z, \bar v)w=\{z\bar v w\}=(Q(z+w)-Q(z)-Q(w))\bar v$ be the Jordan triple product. We normalize the $K$-invariant Hermitian inner product $\langle z,w \rangle$ on $V$ so that a minimal tripotent has norm $1$. This inner product can also be computed as $$\label{normal-V}
\langle z,w \rangle=\frac 1p \Tr D(z, \bar w)$$ where $p$ is an integer called the genus of $\Ome$. We identify then the vector space $V^\prime $ with $\bar V$ via this scalar product.
Let $dm(z)$ be the corresponding Lebesgue measure on $V$. The Bergman reproducing kernel of $\Ome$ is $c\,h(z, \bar w)^{-p}$ for some positive constant $c$. Let $$B(z, \bar w)=I-D(z, \bar w)+Q(z)Q(\bar w)$$ be the Bergman operator. $B(z, \bar w)$ is holomorphic in the first argument and anti-holomorphic in the second. (We write $B(z, \bar w)$ instead of $B(z, w)$ as in [@Loos-bsd] in order to distinguish it from $B(\bar z, w)$, which acts on the space $\bar V$.) The Bergman metric at $z\in \Ome$, defined by the coefficients $\partial_j\bar \partial_k
\log h(z, \bar z)^{-p}$ on $\Ome$, is then $$p\langle B(z, \bar z)^{-1}v, w\rangle$$ on tangent vectors $v, w$; and $$\det B(z, \bar z)= h(z, \bar z)^p;$$ see [@Loos-bsd]. For computational convenience we will choose and fix the metric on $\Ome$ to be $$\boxed{\langle B(z, \bar z)^{-1}v, w\rangle}.$$
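For the unit ball in $\mathbb C^n$ (genus $p=n+1$) the identity $\det B(z,\bar z)=h(z,\bar z)^p$ can be verified numerically, using the rank-one formula $B(z,\bar z)=(1-|z|^2)(1-z\otimes z^\ast)$ recalled in the final section. The sketch below (assuming NumPy) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z *= 0.6 / np.linalg.norm(z)                   # a point inside the unit ball
h = 1.0 - np.vdot(z, z).real                   # h(z, \bar z) = 1 - |z|^2
B = h * (np.eye(n) - np.outer(z, z.conj()))    # B(z, \bar z) on the ball
p = n + 1                                      # genus of the unit ball
assert np.allclose(np.linalg.det(B).real, h**p)
```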
The invariant Cauchy-Riemann operator is $$\bar D =B(z,\bar z)\bar\partial,$$ and the $N$-function defined in the previous section is now (with a normalizing constant) $$\label{N-bsd}
\boxed{q(z)=\frac 1p \partial \log \det B(z, \bar z)^{-1}}$$
We shall find an explicit formula for the function $q(z)$ on $\Ome$. Recall first the notion of *quasi-inverse* in the Jordan triple $V$; see [@Loos-bsd]. Let $z\in V$ and $\bar w\in \bar V$. The element $z$ is called quasi-invertible with respect to $w$ if $B(z, \bar w)$ is invertible, in which case its quasi-inverse is given by $$z^{\bar w}=B(z, \bar w)^{-1}(z-Q(z)\bar w).$$ Similarly we define the quasi-inverse of an element $\bar z\in \bar V$ with respect to $w\in V$.
The function $q(z)$ on $\Ome$ is given by $$q(z)=\bar z^{z}
=B(\bar z, z)^{-1}(\bar z-Q(\bar z) z)$$
For some computational convenience we consider, instead of the holomorphic differential in (\[N-bsd\]), the anti-holomorphic differential $$\frac 1p\bar \partial\log \det B(z, \bar z)^{-1}=
-\frac 1p\bar \partial\log \det B(z,\bar z).$$ Let $\bar v\in \bar V$. By the definition of $B$-operator we have $$\begin{split}
&\quad\, B(z, \bar z+t\bar v)\\
&=1-D(z, \bar z+t\bar v)+Q(z)Q(\bar z+t\bar v)\\
&=1-D(z, \bar z)+Q(z)Q(\bar z)+ t (-D(z, \bar v)+Q(z)Q(\bar z, \bar v))
+t^2 Q(z)Q(\bar v)\\
&=B(z, \bar z)\left(I + tB(z, \bar z)^{-1}(-D(z, \bar v)
+Q(z)Q(\bar z, \bar v)) +t^2B(z, \bar z)^{-1}Q(z)Q(\bar v)\right).
\end{split}$$ Thus the first order term in $t$ in $\log \det B(z, \bar z +t \bar v)$ is $$\label{1-order}
\Tr \left(B(z, \bar z)^{-1}(-D(z, \bar v)+ Q(z)Q(\bar z, \bar v))\right).$$
We recall a formula in [@Loos-bsd] (see (JP30)) $$B(z, \bar z)D(z^{\bar z}, v)=D(z, \bar v)- Q(z)Q(\bar z, \bar v).$$ Therefore (\[1-order\]) is $$-\Tr D(z^{\bar z}, v)=-p\langle z^{\bar z}, v\rangle$$ by the formula (\[normal-V\]). Summarizing we find $$\frac 1p
\bar\partial_{v}\log \det B(z, \bar z)^{-1}=
\langle z^{\bar z}, v\rangle,$$ which is the desired formula.
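On the unit ball the proposition can be confirmed by hand, since there $\det B(z,\bar z)^{-1}=h(z,\bar z)^{-p}$ with $h=1-|z|^2$, $p=n+1$, and the quasi-inverse is $\bar z^z=\bar z/(1-|z|^2)$. A symbolic sketch for $n=2$ (assuming SymPy, with $w_j$ standing for $\bar z_j$):

```python
import sympy as sp

z1, z2, w1, w2 = sp.symbols('z1 z2 w1 w2')   # w_j stands for \bar z_j
h = 1 - z1*w1 - z2*w2                        # h(z, \bar z) on the ball in C^2
p = 3                                        # genus p = n + 1
detB_inv = h**(-p)                           # det B(z, \bar z)^{-1} = h^{-p}
for zj, wj in [(z1, w1), (z2, w2)]:
    qj = sp.simplify(sp.diff(sp.log(detB_inv), zj) / p)
    assert sp.simplify(qj - wj/h) == 0       # component of \bar z^z = \bar z/(1-|z|^2)
```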
Now the group $K$ acts on $\Ome$ and keeps the function $h(z, \bar z)$ invariant. Thus we get, in view of the formula (\[N-bsd\]), $$\label{k-N}
q(kz)=(k^{-1})^\prime q(z),$$ where $(k^{-1})^\prime $ on $q(z)\in V^\prime$ is the dual of $k^{-1}$ on $V$.
In particular, since the function $q(z)$ is a $V^\prime$-valued function on $\Ome$, we have, for any homogeneous polynomial function $f$ on $V^\prime$, a scalar-valued function $f(q(z))$. The following lemma then follows from (\[k-N\]) and the $K$-invariance of the pairing between $S_m(V^\prime)$ and $S_m(V)$.
\[k-intert\] The map $$v\in S_m(V)\mapsto v(q(z))= [v, \otimes^m q(z)]$$ is an invertible $K$-intertwining operator between the $K$-action on $S_m(V)$ and its regular action on functions on $\Ome$.
We recall now the decomposition of $S_m(V)$ under $K$. To state the result we fix some notation. The complexification $\fg^{\mathbb C}$ of the Lie algebra $\fg$ has a decomposition $\fg^{\mathbb C}=\fp^{+}+
\fk^{\mathbb C}+\fp^{-}$, with $\fk^{\mathbb C}$ the complexification of the Lie algebra $\fk$ of $K$ and $\fp^{+}=V$. Let $\{e_1, \dots, e_r\}$ be a frame of tripotents in $V$. Fix a Cartan subalgebra of $\mathfrak k^{\mathbb C}$, and let $\ga_1> \cdots >\ga_r$ be the Harish-Chandra strongly orthogonal roots so that $e_1, \dots, e_r$ are the corresponding root vectors. The ordering of the roots of $\fg^{\mathbb C}$ is chosen so that $\fp^{+}$ is the sum of positive non-compact root vectors. We shall then speak of *highest weight modules* of $\fg^{\mathbb C}$ with respect to this ordering.
\[Hua\] ([@Hua], [@Schmid] and [@FK]) The space $S_m(V)$ (respectively $S_m(V^\prime)$) under $K$ is decomposed into irreducible subspaces with multiplicity one as $$S_m(V)=\sum_{\m}S_{\m}(V), \qquad (\text{resp.}\, S_m(V^\prime)=
\sum_{\m}S_{\m}(V^\prime))$$ where each $S_{\m}(V)$ (resp. $S_{\m}(V^\prime)$) is of highest weight $\m=m_1\ga_1 +\cdots +m_r
\ga_r$ (resp. lowest weight $-(m_1\ga_1+\cdots + m_r\ga_r)$) with $ m_1\ge m_2\ge\cdots\ge m_r\ge 0$, and the summation is over all $\m$ with $|\m|=m_1+m_2 +\cdots +m_r =m$.
The highest weight vectors of $S_{\m}(V)$ (respectively lowest weight vectors of $S_{\m}(V^\prime)$) have been constructed explicitly; see [@FK] and references therein. Let $\Del_j$ be the lowest weight vector of the fundamental representation with $\m=\ub1j=\ga_1+\dots+\ga_j$, $j=1, \dots, r$. The polynomial $\Del=\Del_r$ is the determinant function of the Jordan triple $V$. Then the lowest weight vector of $S_{\m}(V^\prime)$ is $$\Del_{\underline{\bold{m}}}(v)=\Del_1(v)^{m_1-m_2}
\cdots \Del_{r-1}(v)^{m_{r-1}-m_r}
\Del_{r}(v)^{m_r},$$ viewed as polynomial of $v\in V$. Via the natural pairing between $S_{\m}(V^\prime)$ and $S_{\m}(V)$ we find that the highest weight vector of $S_{\m}(V)$ is $\bar \Del_{\m}$ and $$\label{con-hwv}
\bar \Del_{\underline{\bold{m}}}(w)=\bar \Del_1(w)^{m_1-m_2}
\cdots \bar \Del_{r-1}(w)^{m_{r-1}-m_r}
\bar\Del_{r}(w)^{m_r},$$ viewed as polynomial of $w\in V^\prime=\bar V$.
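As a concrete orientation (the standard example from [@FK; @Hua], stated here only as an illustration, so the precise normalizations should be taken with care): for a type $I_{r,r}$ domain one has $V=M_r(\mathbb C)$, and the $\Del_j$ may be taken to be principal minors,

```latex
\Delta_j(v) \;=\; \det\bigl( (v_{kl})_{1\le k,l\le j} \bigr), \qquad j=1,\dots,r,
\qquad \Delta \;=\; \Delta_r \;=\; \det,
```

so that $\Del_{\underline{\bold{m}}}(v)=\Delta_1(v)^{m_1-m_2}\cdots\Delta_r(v)^{m_r}$ is, up to normalization, the classical weight vector familiar from the representation theory of $GL_r(\mathbb C)$ acting on polynomials on $M_r(\mathbb C)$.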
[The Relative Discrete Series of $L^2(\Ome, \mu_{\a})$]{}
In this section we find a family of relative discrete series by constructing vectors that lie in the $L^2$-space and are highest weight vectors, namely annihilated by the positive root vectors in $\fg^{\mathbb C}$ via the action induced by (\[pi-nu\]) (see below).
Let $\alpha >-1$ and consider the weighted measure $$d\mu_{\a}=h(z,\bar z)^{\a}dm(z).$$ The group $G$ acts unitarily on the space $L^2(\Ome, d\mu_{\a})$ via $$\label{pi-nu}
\pi_{\nu}(g)f(z)=f(g^{-1}z){J_{g^{-1}}(z)}^{\frac{\nu}{p}}, \quad g\in G,$$ where $\nu =\a +p$ and $J_g$ is the Jacobian determinant of $g$. We denote by $L^2_a(\Ome, \mu_\a)$ the weighted Bergman space of holomorphic functions in $L^2(\Ome, \mu_\a)$.
We introduce now the weighted Bergman spaces of vector-valued holomorphic functions that will be used to realize the relative discrete series in $L^2(\Ome, d\mu_{\a})$. Fix a signature $\m$ with $m=m_1+\dots+m_r$. We denote by $L^2_a(\Ome, S_{\m}(V), \mu_\a)$ the weighted Bergman space of $S_{\m}(V)$-valued holomorphic functions for which the following norm is finite: $$\Vert f\Vert^2
=\int_{\Ome} \ainnerp{ (\otimes^m B(z, \bar z) ^{-1}) f(z)}{ f(z)}\,
d\mu_{\a}(z).$$ The group $G$ acts unitarily on $L^2_a(\Ome, S_{\m}(V), \mu_\a)$ via $$\label{g-on-berg}
g\in G: f(z)\mapsto (J_{g^{-1}}(z))^{\frac \nu p}\otimes ^m (dg^{-1}(z))^{-1} f(g^{-1}z).$$ This space is nontrivial and forms an irreducible representation of $G$ when $\m$ satisfies the following condition: $$\label{con-rds}
\frac{\a+1}2>m_1\ge m_2 \ge \cdots \ge m_r \ge 0.$$ This follows directly from Theorem 6.6 in [@Kn-book]; see also [@Shimeno]. (We note here that non-triviality of the space can also be proved directly by expressing the inverse $B(z, \bar z)^{-1}$ of the Bergman operator via the quasi-inverse developed in [@Loos-bsd], quite similar to the proof of Proposition 4.1 below. However we will not go into the details here.)
Our first result is a construction of certain vectors in $L^2(\Ome, \mu_{\a})$.
\[inL2\] Suppose $\m$ satisfies the condition (\[con-rds\]). Then the function $\bar{\Del}_{\m}(q(z))$ is in $L^2(\Ome, \mu_{\a})$ and in $\text{Ker}\, \bar D^{m+1}$.
We begin with fundamental representations $S_{\m}(V)$ with signatures $\m=\ub1j=\ga_1+\dots +\ga_j$ and highest weight vectors $\bar\Del_j$, $j=1, \dots, r$.
Then the function $\bar\Del_{j}(q(z))$ is of the form $$\label{del-n-1}
\bar\Del_{j}(q(z))=\frac{P(z, \bar z)}{h(z, \bar z)}$$ where $P(z, \bar z)$ is a polynomial in $(z, \bar z)$ of total degree not exceeding $2r$. In particular if $j=r$, $$\label{del-n-2}
\bar\Del(q(z))= \frac{\bar \Del(z)}{h(z, \bar z)}.$$
It follows from the Faraut-Koranyi expansion that $$\label{fk-h}
h(v, \bar w)=\sum_{s=0}^{r}(-1)^sc_sK_{\ub1s}(v,\bar w),$$ where $K_{\m}$ is the reproducing kernel of the subspace $\mathcal P^{\m}(V)$ of $\mathcal P(V)$ with signature $\m$ with respect to the Fock norm $\langle\cdot, \cdot\rangle_{\mathcal F}$, and the $c_s$ are positive constants; see [@FK]. Taking the Fock-space inner product of $h(\cdot, \bar w)$ with the function $\Del_{j}$ and using (\[fk-h\]) we find that $$\langle h(\cdot, \bar w), \Del_{j}\rangle_{\mathcal F} =
(-1)^j c_j\Vert\Del_j\Vert_{\mathcal F}^2 \bar \Del_j(\bar w);$$ namely, $$\label{del-n-3}
\bar\Del_j(\bar w)=
\frac 1{(-1)^j c_{j}\Vert\Del_j\Vert^2_{\mathcal F}}
\langle h(\cdot, \bar w), \Del_{j}\rangle_{\mathcal F}.$$ We take now $\bar w =\bar z^z$. Recall [@Loos-bsd], Lemma 7.5, that $$\label{loos-h-h}
h(v, \bar z^{z})
=\frac{h(v+z, \bar z)}{h(z, \bar z)}.$$ Substituting this into the previous formula we get $$\label{del-n-4}
\bar \Del_j(\bar z^z)=
\frac 1{(-1)^j c_{j}\Vert\Del_j\Vert^2_{\mathcal F}\, h(z, \bar z)}
\langle h(\cdot +z, \bar z), \Del_{j}\rangle_{\mathcal F}.$$ Since $h(v+z, \bar z)$ is a polynomial in $z$ and $\bar z$ of degree $2r$, we see that $\bar\Del_j(\bar z^{z})$ is of the declared form.
If $j=r$, we can calculate $\langle h(\cdot+z, \bar z), \Del_{r}\rangle_{\mathcal F}
$ further. Expand $h(v+z, \bar z)$ again using (\[fk-h\]). We have $$\begin{split}
\langle h(\cdot+z, \bar z), \Del_{r}\rangle_{\mathcal F}
&=\sum_{s=0}^{r}(-1)^s c_{s}\langle K_{\ub1s}(\cdot+z, \bar z), \Del_{r}\rangle_{\mathcal F}\\
&=(-1)^r c_{r}
\langle K_{\ub1r}(\cdot+z, \bar z), \Del_{r}\rangle_{\mathcal F},
\end{split}$$ because $\Del_r$ is of degree $r$ and is orthogonal to the terms of lower degree. But $$K_{\ub1r}(v+z, \bar z)=
K_{\ub1r}(v, \bar z) + \dots$$ where the remaining terms are of lower degree in $v$. Therefore, for the same reason and by the reproducing property, $$\langle h(\cdot+z, \bar z), \Del_{r}\rangle_{\mathcal F}
=(-1)^r c_{r}
\langle K_{\ub1r}(\cdot, \bar z), \Del_{r}\rangle_{\mathcal F}
=(-1)^r c_{r}\Vert\Del_r\Vert^2_{\mathcal F}\overline{\Del_r(z)}.$$ Substituting this into (\[del-n-4\]) we then get (\[del-n-2\]).
The norm $\Vert\Del_{\m}\Vert_{\mathcal F}$ is calculated in [@FK], though we will not need it in the present paper.
Recall formula (\[con-hwv\]) for the highest weight vector $\bar\Del_{\m}$. As a corollary we find immediately that
\[del-n\] The function $\bar \Del_{\m}(q(z))$ is of the form $$\bar \Del_{\m}(q(z))=
\frac{P(z, \bar z)}{h(z, \bar z)^{m_1}}$$ where $P(z, \bar z)$ is a polynomial in $(z, \bar z)$.
We now prove Proposition \[inL2\].
We estimate the norm of $\bar \Del_{\m}(q(z))$ in $L^2(\Ome, \mu_\a)$ by using the above Corollary. The polynomial $P(z, \bar z)$ is bounded on $\Ome$, say $|P(z, \bar z)|^2\le C$. We have $$\int_{\Ome}\Big|\frac {P(z, \bar z)}{h(z, \bar z)^{m_1}}\Big|^2d\mu_{\a}
\le C \int_{\Ome} h(z, \bar z)^{\a-2m_1}dm(z).$$ By the condition (\[con-rds\]) we see that $\a-2m_1 > -1$, thus the above integral is finite (see [@FK]), namely the function is in the $L^2$-space. That $\bar \Del_{\m}(q(z))$ is in $\text{Ker}\,\bar D^{m+1}$ follows directly from Lemma \[ker-d-m\].
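On the unit disk ($r=1$, $m_1=m$) the estimate is elementary: $\bar\Del_{\m}(q(z))=\bar z^m/(1-|z|^2)^m$, and in polar coordinates the norm integral $\int_0^1 r^{2m}(1-r^2)^{\a-2m}\,2\pi r\,dr$ converges precisely when $\a-2m>-1$. A small symbolic check of a convergent case (assuming SymPy; the value $\pi/6$ is for $m=1$, $\a=3$):

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def disk_norm2(m, alpha):
    # || zbar^m / (1-|z|^2)^m ||^2 in L^2(D, (1-|z|^2)^alpha dm), polar coordinates
    return sp.integrate(r**(2*m) * (1 - r**2)**(alpha - 2*m) * 2*sp.pi*r, (r, 0, 1))

assert disk_norm2(1, 3) == sp.pi/6     # alpha = 3 > 2m - 1 = 1: finite norm
```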
The action $\pi_{\nu}$ of $G$ on $L^2(\Ome, \mu_\a)$ induces an action of $\fg^{\mathbb C}$ on the space of $C^\infty$-functions. We prove next that the function $\bar\Del_{\m}(q(z))$ is annihilated by the positive root vectors in $\fg^{\mathbb C }$. The elements of $\fp$, when viewed as holomorphic vector fields, are of the form $\xi_v=v-Q(z)\bar v$; thus when acting on $C^\infty$-functions on $\Ome$ induced from the regular action of $G$, they act as $$(\partial_v
-\partial_{Q(z)\bar v})f
+(\partial_{\bar v} -\partial_{Q(\bar z) v})f.$$ From this it follows that the element $v\in \fp^+=V$ acts on $C^\infty$-functions induced from $\pi_{\nu}$ of $G$ as $$\label{pi-nu-v}
\pi_{\nu}(v)f=\partial_v f -\partial_{Q(\bar z) v}f,$$ since the infinitesimal action of $v\in \fp^+$ is a translation and will not contribute to the determinant factor in (\[pi-nu\]).
To study the action of $\fp^+$ on $\bar \Del_{\m}(q(z))$, we first calculate the derivatives of $q(z)$.
The following differentiation formulas hold $$\label{d-v-q}
\partial_v q(z) =Q(q(z))v,
\quad
\partial_{\bar w}q(z)=B(\bar z, z)^{-1}\bar w.$$ In particular if $\bar w=Q(\bar z)v$, $$\label{d-v-q-1}
\partial_{Q(\bar z)v}q(z)=B(\bar z, z)^{-1}Q(\bar z)v=Q(q(z))v,$$ and $$\label{van-q}
(\partial_v -\partial_{Q(\bar z)v })q(z)= 0.$$
We use the addition formulas in [@Loos-bsd], Appendix, for the quasi-inverses. As special cases we have $$\label{q-i-1}
\bar z^{z+t v} = (\bar z^{z})^{tv}=
B(\bar z^z, t v)^{-1}(\bar z^z-tQ(\bar z^z)v),$$ and $$\label{q-i-2}
(\bar z+ t\bar w)^{z} = \bar z^z + B(\bar z, z)^{-1}B(t\bar w, z^{\bar z})^{-1}
(t \bar w -Q(t\bar w) z^{\bar z}).$$ The first order term in $t$ in (\[q-i-1\]) is easily seen to be $$D(\bar z^z, v)\bar z^z -Q(\bar z^z)v
= Q(\bar z^z)v,$$ which proves the first formula in (\[d-v-q\]). Similarly we can calculate the first order term in (\[q-i-2\]) and prove the second formula; using this formula and $$B(\bar z, z)^{-1}Q(\bar z)=Q(\bar z^z) = Q(q(z)),$$ we then get (\[d-v-q-1\]).
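For the unit disk all three formulas can be read off directly, since there $Q(x)v=x^2v$ and $B(\bar z,z)=(1-|z|^2)^2$. A symbolic sketch (assuming SymPy, with $w$ standing for $\bar z$):

```python
import sympy as sp

z, w = sp.symbols('z w')        # w stands for \bar z
q = w / (1 - z*w)               # q(z) = \bar z/(1-|z|^2) on the disk
assert sp.simplify(sp.diff(q, z) - q**2) == 0                  # d_v q = Q(q)v
assert sp.simplify(sp.diff(q, w) - (1 - z*w)**(-2)) == 0       # d_{\bar w} q = B(\bar z,z)^{-1}\bar w
assert sp.simplify(sp.diff(q, z) - w**2 * sp.diff(q, w)) == 0  # (d_v - d_{Q(\bar z)v}) q = 0
```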
We can thus calculate $\pi_{\nu}(v)$ on $\bar\Del_{\m}(q(z))$ by using (\[pi-nu-v\]). In view of (\[van-q\]) we have $$\pi_\nu(v)\bar\Del_{\m}(q(z))=0.$$ This, together with Lemma \[k-intert\], implies that
The vector $\bar\Del_{\m}(q(z))$, under the action $\pi_{\nu}$ of $\fg^{\mathbb C}$, is annihilated by the positive root vectors.
We let $A^{2, \a}_{\m}(\Ome)$ be the subspace of $L^2(\Ome, \mu_{\a})$ generated by the function $\bar\Del_{\m}(q(z))$, for $\m$ satisfying (\[con-rds\]). Thus it is a highest weight representation of $G$. Now it follows from the formula (\[dnm\]) that $$\bar D^m (\bar \Del_{\m}(q(z)))=m! \bar\Del_{\m}.$$ The vector $\bar\Del_{\m}$ is the highest weight vector of the weighted Bergman space $L^2_a(\Ome, S_{\m}(V), \mu_{\a})$, and $\bar D^m$ intertwines the $G$-action $\pi_{\nu}$ on $A^{2, \a}_{\m}(\Ome)$ with that on $L^2_a(\Ome, S_{\m}(V), \mu_{\a})$ (see (\[g-on-berg\])), by Lemma 2.1. Thus it is a non-zero intertwining operator between the two spaces. We summarize our results in the following
\[main-th\] The relative discrete series $A^{2, \alpha}_{\m}(\Ome)$ is $G$-equivalent to the weighted Bergman space $L^2_a(\Ome, S_{\m}(V), \mu_{\a})$, and the corresponding intertwining operator is given by $\bar D^{|\m|}$. The highest weight vector of $A^{2, \alpha}_{\m}(\Ome)$ is given by $\bar\Del_{\m}(q(z))$. In particular, the space $A^{2, \alpha}_{\m}(\Ome)$ consists of nearly holomorphic functions.
By the results of Shimeno [@Shimeno] we see that all the relative discrete series are obtained in this way.
When $\Ome$ is of tube type and $\m
=m\ub1r=(m, m, \dots, m)$, the above result is also proved in [@ahd+bo+gkz] by considering tensor products of Bergman spaces of holomorphic functions with the polynomial space of anti-holomorphic functions, and in [@oz-reldis] by the Capelli identity.
[An example: The case of the unit ball in $\mathbb C^n$]{}
In this section we consider the example of the unit ball in $V=\mathbb C^n$. The rank is $r=1$, $h(z)=1- |z|^2$, and the symmetric tensor power $S_{m}(V)$ is itself irreducible under $K$. We study the adjoint operator $\bar D^\ast$ of $\bar D$ instead of $\bar D$. The operator $(\bar D^\ast)^m$ is thus an intertwining operator from the weighted Bergman space $L^2_a(\Ome, S_{m}(V), \mu_{\a})$ of vector-valued holomorphic functions into the relative discrete series $A^{2, \a}_{m}(\Ome)$. Now $L^2_a(\Ome, S_{m}(V), \mu_{\a})$ has highest weight vector $\otimes^m e_1$. Thus $(\bar D^\ast)^m (\otimes^m e_1)$ is the highest weight vector in $A^{2, \a}_{m}$. Here we calculate this vector directly.
Let $D=\bar D^\ast $. It has the following expression on a function $f$ with values in $\otimes^m V$: $$Df=h(z)^{-\a} \otimes^{m-1}B(z, \bar z) \Tr \partial\,
\left[h(z)^{\a}(I\otimes \otimes^{m-1}B(z,\bar z)^{-1})f\right].$$ To explain the formula we note that the operator $\partial$ acting on a $\otimes^m V$-valued function gives a function with values in $V^\prime\otimes(\otimes^m V)
=(V^\prime\otimes V)
\otimes(\otimes^{m-1} V)$; the operator $\Tr$ is the contraction (bilinear pairing) of the first factor $V^\prime\otimes V$. Recall that the Bergman operator on the unit ball is $$B(z, \bar z)=(1-|z|^2)(1-z\otimes z^\ast),$$ where $z\otimes z^\ast$ is the rank one operator on $V$, $z\otimes z^\ast(v)=\langle v, z\rangle z$; see [@pz-cr]. Take $f =\otimes^m e_1$. The above formula then reads $$\begin{split}
&\quad \, D\otimes^m e_1\\
& =
h(z)^{-\a+(m-1)} \otimes^{m-1}(1-z\otimes z^\ast)
\Tr \partial
\left[h(z)^{\a-2(m-1)}(e_1\otimes \otimes^{m-1}((1-|z|^2)e_1+\bar z_1z))\right]
\end{split}$$ Performing the differentiation using the Leibniz rule we first differentiate the term $h(z)^{\a-2(m-1)}$, and get $$(2(m-1)-\a)
h(z)^{\a-2(m-1)-1}(\sum_j \bar z_j dz_j)\otimes
(e_1\otimes \otimes^{m-1}((1-|z|^2)e_1+\bar z_1z)).$$ Taking the trace $\Tr$ and applying the outer operator, this term yields $$(2(m-1)-\a) (1-|z|^2)^{-1}\bar z_1\otimes^{m-1} e_1.$$ Next we differentiate each factor $(1-|z|^2)e_1+\bar z_1z$ in the tensor, and get $$(-\sum_j \bar z_j dz_j)\otimes e_1+\bar z_1
\sum_j dz_j\otimes e_j.$$ We perform the operation $\Tr$ and observe that the resulting contribution vanishes: $$\Tr\, e_1\otimes
((-\sum_j \bar z_j dz_j)\otimes e_1+\bar z_1
\sum_j dz_j\otimes e_j)
=0.$$ Thus only the first differentiation contributes to the final result, that is $$D (\otimes^m e_1)
=(2(m-1)-\a) (1-|z|^2)^{-1}\bar z_1\otimes^{m-1} e_1.$$ By induction we get $$D^m e_1^m
=C(1-|z|^2)^{-m}\bar z_1^m$$ where $$C=\prod_{j=0}^{m-1}(2(m-1-j)-\a +j).$$
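The induction, including the constant $C$, can be checked symbolically in the scalar case $n=1$, where $B(z,\bar z)=(1-|z|^2)^2$ and all tensor powers are scalars, so that $D$ acting on a tensor of degree $\text{deg}$ reduces to $f\mapsto h^{-\a+2(\text{deg}-1)}\partial_z[h^{\a-2(\text{deg}-1)}f]$. The following sketch (assuming SymPy) is illustrative:

```python
import sympy as sp

z, w, a = sp.symbols('z w a')      # w = \bar z, a = alpha
h = 1 - z*w

def D_step(f, deg):
    # n = 1 (unit disk) form of D = \bar D^* acting on a tensor of degree deg:
    # D f = h^{-a+2(deg-1)} d/dz [ h^{a-2(deg-1)} f ]
    b = a - 2*(deg - 1)
    return sp.simplify(h**(-b) * sp.diff(h**b * f, z))

for m in range(1, 5):
    f = sp.Integer(1)              # \otimes^m e_1 is the constant 1 when n = 1
    for j in range(m):
        f = D_step(f, m - j)       # degree drops by one at each application
    C = sp.prod([2*(m - 1 - j) - a + j for j in range(m)])
    assert sp.simplify(f - C * w**m / h**m) == 0
```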
The function $(1-|z|^2)^{-m}\bar z_1^m$ is in $L^2(\Ome, \mu_{\a})$ if and only if $0\le m<\frac {\a+1}2$. In that case $D^m (\otimes^m e_1)$ is a non-zero multiple of $(1-|z|^2)^{-m}\bar z_1^m$. The quasi-inverse is $q(z)={(1-|z|^2)^{-1}}{\bar z}$, and the vector constructed in Theorem \[main-th\] is $[\otimes^m e_1, \otimes^m q(z)]=(1-|z|^2)^{-m}\bar z_1^m$; thus the two methods give the same result.
One might also from the beginning work with the operator $D=\bar D^\ast$ instead of $\bar D$. However, we note that for a general bounded symmetric domain the formula for the operator $D$ is much more involved.
[10]{}
A. H. Dooley, B. Ørsted, and G. Zhang, *Relative discrete series of line bundles over bounded symmetric domains*, Annales de l'Institut Fourier **46** (1996), 1011–1026.
M. Englis and J. Peetre, *Covariant [Laplacean]{} operators on [Kähler]{} manifolds*, J. Reine Angew. Math. **478** (1996), 17–56.
J. Faraut and A. Koranyi, *Function spaces and reproducing kernels on bounded symmetric domains*, J. Funct. Anal. **88** (1990), 64–89.
L. K. Hua, *Harmonic analysis of functions of several complex variables in the classical domains*, Amer. Math. Soc., Providence, Rhode Island, 1963.
A. Knapp, *Representation theory of semisimple groups*, Princeton University Press, Princeton, New Jersey, 1986.
O. Loos, *Bounded symmetric domains and [Jordan]{} pairs*, University of California, Irvine, 1977.
B. Ørsted and G. Zhang, *Capelli identity and relative discrete series of line bundles over tube domains*, in preparation.
J. Peetre, *Covariant Laplaceans and Cauchy-Riemann operators for a Cartan domain*, manuscript, 1993.
J. Peetre and G. Zhang, *Invariant [Cauchy-Riemann]{} operators and realization of relative discrete series of line bundle over the unit ball of [$\mathbf C^n$]{}*, Michigan Math. J. **45** (1998), 387–397.
W. Schmid, *Die [Randwerte]{} holomorpher [Funktionen]{} auf hermitesch symmetrischen [Räumen]{}*, Invent. Math. **9** (1969), 61–80.
N. Shimeno, *The [Plancherel]{} formula for spherical functions with one-dimensional [$K$]{}-type on a simply connected simple [Lie]{} group of hermitian type*, J. Funct. Anal. **121** (1994), 331–388.
G. Shimura, *On a class of nearly holomorphic automorphic forms*, Ann. Math. **123** (1986), no. 2, 347–406.
, *Nearly holomorphic functions on hermitian symmetric spaces*, Math. Ann. **278** (1987), 1–28.
G. Zhang, *Shimura invariant differential operators and their eigenvalues*, (1999), preprint.
, *Invariant differential operators on hermitian symmetric spaces and their eigenvalues*, Israel J. Math., to appear.
---
abstract: 'We discuss the forward and inverse problems between the potential $V(x)$ measured in a heart chamber and its sources represented by a dipole density $d(y)$ located on the heart wall. We show that the mapping from $d(y)$ to $V(x)$ is a compact integral operator. Its inverse is unbounded which makes the inverse problem ill-posed in the mathematical sense. We investigate methods to solve the inverse problem approximately in view of the mapping of complicated cardiac arrhythmias. We point out an analogy between phase mapping and 2-dimensional hydrodynamics.'
author:
- |
Günter Scharf[^1]\
Physics Institute, University of Zürich\
Lam Dang[^2]\
HerzGefässZentrum, Klinik im Park\
8022 Zürich
date:
title: DIPOLE DENSITY INSTEAD OF POTENTIALS IN ELECTROCARDIOLOGY
---
Introduction
============
Electrocardiology rests upon the study of electric fields generated by the heart. As a physicist one immediately asks: what are the sources of these fields? This question has two answers. (i) On the microscopic scale the sources are $K^+$ and $Na^+$ ions and negatively charged $Cl^-$ ions and proteins. But there is no large separation of positive and negative charges. The membrane of the active cells is able to open channels for the positive ions only; the negative ones remain confined in the cell, and the result is a microscopic dipole. (ii) On the macroscopic scale, where individual cells cannot be resolved, we then have a macroscopic dipole density but no charge density (monopole). The variation of this dipole density in space and time spreads through the tissue as a propagating wave of depolarization.
If the dipole density as the source is known it is straightforward to calculate the corresponding electric potential which can be measured. This is the so-called forward problem which in mathematical terms is the solution of Poisson’s equation. However, the desired medical information is given by the sources, i.e. by the macroscopic dipole density. Therefore we must solve the inverse problem of calculating the dipole density from potential measurements. In the past most authors have considered a different inverse problem, namely the determination of the potential near the heart wall from potential measurements in the heart chamber or on the body surface. In this case no assumption about the sources is made, one solves a boundary-value problem for Laplace’s equation. However, the potential near or on the heart wall is a superposition of the fields generated by near and distant active regions. As a consequence the potential only presents a broad and smooth depiction of the local electric activity.
From these arguments it is clear that the dipole density is better suited for the mapping of complicated arrhythmias than the potential. One may ask why this approach has not been tried before, or, to be precise, hardly at all. There is pioneering work by A. van Oosterom \[1\], who always tried to model the sources instead of calculating potentials. He already discusses the scalar dipole density as a model of cardiac activity under the name “equivalent double layer model” (EDL). The reason why other authors did not follow this route may be the fact that the dipole density is harder to obtain than the contact potential at the heart wall; we shall discuss this in the next section. Computing the dipole density requires much better input data. Indeed, four years ago we made test calculations with data which were collected by the EnSite system of St. Jude. The results were unsatisfactory. Only the new device from Acutus Medical Inc., called the AcQMap System, seems to be suited for our purpose. It has a recording catheter with 48 electrodes plus 48 ultrasound transducers for distance measurements. It allows simultaneous determination of the heart chamber geometry and the potential measurements. This eliminates the error due to the motion of the measuring electrodes and of the heart itself.
The paper is organized as follows. In the next section we discuss the forward and inverse problems for the dipole density. We show that the mapping between the dipole density and the potential is a compact integral operator which can well be represented by a finite dimensional matrix. But its inverse is an unbounded operator. This has the bad consequence that the inverse problem is a so-called ill-posed problem which requires special techniques of solution. These facts are illustrated in section 3 by a simple solvable model which we also use to test the general numerical code for solving the inverse problem for the dipole density. In the last section we consider phase mapping, which seems to be the suitable technique to map complicated arrhythmias such as atrial fibrillation. We compare the dynamics of phase singularities with the vortex dynamics in 2-dimensional hydrodynamics.
Forward and inverse problem for the dipole density
==================================================
We apply macroscopic electrodynamics in matter to the heart filled with blood. At this point one usually refers to Jackson \[2\] and starts from the phenomenological Maxwell’s equations $${1\over c}{\d D\over\d t}={\rm curl}\,H-{4\pi\over c}j,\quad {\rm div}\,D=\ro\eqno(2.1)$$ $${1\over c}{\d B\over\d t}=-{\rm curl}E,\quad {\rm div}B=0.\eqno(2.2)$$ Unfortunately Jackson does not make a clear distinction between microscopic electrodynamics in vacuum and macroscopic electrodynamics in matter. So we prefer the concise book by the first author \[3\] and refer to the derivation of the macroscopic Maxwell’s equations given there. Since the temporal variation of the cardiac fields is slow compared to the propagation of the fields through the body (with the speed of light), we can use the quasi-static approximation and neglect the time derivatives of $D$ and $B$. Then by (2.1) the conduction current $j(x)$ is divergence-less $${\rm div}\,j(x)=0\eqno(2.3)$$ and by (2.2) the electric field is curl-less $${\rm curl}\,E(x)=0.\eqno(2.4)$$ This implies that the electric field has a scalar potential $$E={\rm grad}\,V.\eqno(2.5)$$ The blood in the heart is a homogeneous medium with constant conductivity $\sigma$, so that $j=\sigma E$. Substituting this into equation (2.3) yields ${\rm div}E=0$. Then equation (2.5) implies Laplace’s equation $${\rm div\, grad}\,V=\triangle V=0.\eqno(2.6)$$ In the quasi-static approximation the time dependence of the fields has completely disappeared. One calculates the potential or dipole density at fixed time, and then makes a movie for successive time instants. Summing up we describe the heart as a uniform volume conductor \[4\] with electric dipole sources in the heart wall.
The Laplace equation holds inside the heart chamber filled with blood. In the whole body the situation is much more complicated. Here we have large inhomogeneities (lungs and bones) so that $\sigma$ is no longer constant. If one measures potentials on the body surface (ECG) one must construct a detailed model of the body in order to derive the sources from those data. To avoid this severe problem we assume that we have a multi-electrode catheter in the heart chamber which measures the potential $V(t,x)$ at various locations. In the heart wall we have the dipole sources which we assume to be localized on a 2-dimensional surface $S$. Then instead of (2.6) we have Poisson’s equation of the form $$\triangle V(x)=-4\pi{\rm div}(nd(x)\delta_S(x))\eqno(2.7)$$ where $\delta_S(x)$ is the Dirac measure on $S$, $n$ is the outer normal at the point $x$ on the surface and $d(x)$ is the surface dipole density (dipole strength per area). Such a source is also called a dipole layer or double layer in electrostatics. The direction of the dipole moment is normal to the surface $S$. Note that this source describes the $macroscopic$ dipole density. The microscopic dipoles can have different directions, but this microscopic structure can hardly be resolved in detail by macroscopic non-contact techniques. The solution of (2.7) for $V$ is given by the surface integral $$V(x)=\int\limits_S d(y){\d\over\d n_y}{1\over |x-y|}dS_y\eqno(2.8)$$ where $\d/\d n_y$ is the derivative in the normal direction at the point $y$ on $S$ and $dS_y$ is the surface measure. This integral can be rewritten as follows $$V(x)=\int\limits_S d(y){\cos\fii_{xy}\over |x-y|^2}dS_y\eqno(2.9)$$ where $\fii_{xy}$ is the angle between the vector $x-y$ and the normal $n$ \[5\]. If the dipole density $d(y)$ is given, the calculation of the potential $V(x)$ is straightforward; this is the forward problem.
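As an illustration of the forward problem (not part of the original analysis), the integral (2.9) can be discretized by quadrature on a closed surface. A classical check is the solid-angle identity: a closed surface carrying a uniform double layer of unit strength produces, with the kernel of (2.8) and the outward normal, the constant potential $-4\pi$ at every interior point. The sketch below assumes NumPy; the function names and the Fibonacci-sphere quadrature are our own illustrative choices.

```python
import numpy as np

def fibonacci_sphere(n, radius=1.0):
    """Quasi-uniform nodes on a sphere with equal-area weights 4*pi*R^2/n."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i           # golden-angle increment
    zc = 1.0 - 2.0 * (i + 0.5) / n
    rc = np.sqrt(1.0 - zc**2)
    pts = radius * np.stack([rc*np.cos(phi), rc*np.sin(phi), zc], axis=1)
    w = np.full(n, 4.0*np.pi*radius**2 / n)          # surface measure per node
    return pts, w

def forward_potential(x, y, normals, d, w):
    """Discretized V(x) = sum_y d(y) (x-y)·n_y / |x-y|^3 dS_y, cf. (2.8)/(2.9)."""
    diff = x[None, :] - y                            # vectors x - y
    r3 = np.linalg.norm(diff, axis=1)**3
    kernel = np.einsum('ij,ij->i', diff, normals) / r3
    return np.sum(d * kernel * w)

y, w = fibonacci_sphere(5000)
normals = y                                          # outward normals on the unit sphere
d = np.ones(len(y))                                  # uniform unit dipole layer
V_center = forward_potential(np.zeros(3), y, normals, d, w)
V_off = forward_potential(np.array([0.3, 0.1, 0.0]), y, normals, d, w)
assert np.isclose(V_center, -4*np.pi)                # exact by symmetry at the center
assert np.isclose(V_off, -4*np.pi, rtol=1e-2)        # constant inside, up to quadrature error
```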
The inverse problem of computing $d(y)$ from measured $V(x)$ is much harder. The reason is the following. The integral operator (2.8) which maps $d(y)$ to the potential $V(x)$ is a $compact$ operator in the mathematical sense. This important fact must be proved.
[**Proof:**]{}
Here we follow the best mathematical reference we know \[6\]. Let the heart wall be a closed smooth surface $S$ where the dipole density $d(y)$ is located. This is no serious restriction because at the valves $d(y)$ can be put equal to 0. Let $S'$ be another smooth closed surface completely inside the blood volume where electrodes are placed to measure the potential $V(x)$. The dipole density $d(y)$ is assumed to be bounded and continuous on $S$. This implies that $V(x)$ is bounded and continuous on $S'$, because the kernel in (2.9) is continuous (note that $y\in S$ and $x\in S'$ so that we always have $x\ne y$). As usual, let $C(S)$ be the Banach space of bounded continuous functions on $S$ and similarly $C(S')$. Then the integral operator (2.9) maps $C(S)$ into $C(S')$ and to simplify the notation we write it as $$V(x)=\int\limits_S d(y)\,K(x,y)\,dS_y.\eqno(2.10)$$ This is a bounded operator $K$ with the operator norm $$\Vert K\Vert=\max_{x\in S'}\int\limits_S\vert K(x,y)\vert dS_y.\eqno(2.11)$$
To prove that $K$ is even compact we use a decomposition of unity. This is a sequence of positive continuous functions $e_j(x)$ with compact support on $S'$ with $$\sum_{j=1}^ne_j(x)=1$$ and the following property: for every compact set $M\subset S'$ the intersection of $M$ and the support of $e_j$ is non-empty for only finitely many $j$. Since the kernel $K(x,y)$ is uniformly continuous on $S\times S'$ there exists a decomposition of unity and points $x_j$ in the support of $e_j$ such that $$\vert K(x,y)-\sum_{j=1}^n e_j(x)K(x_j,y)\vert <\eps\eqno(2.12)$$ for all $x\in S'$, $y\in S$ and arbitrarily small $\eps>0$. The approximating integral operator $K_\eps$ defined by the sum of product kernels in (2.12) is clearly compact. The approximation is in the operator norm because $$\Vert K-K_\eps\Vert=\max_{x\in S'}\int\limits_{S}\vert K(x,y)-\sum_{j=1}^n e_j(x)K(x_j,y)\vert dS_y <\eps\vert S\vert\eqno(2.13)$$ where $\vert S\vert$ is the total area of $S$. This proves that $K$ is the limit of a converging sequence of compact operators and is, therefore, compact.
[**End of proof.**]{}
This fact has one good and one bad consequence. The good one is that compact operators can well be approximated by finite-dimensional matrices. Then (2.9) becomes a matrix equation $$V(x_j)=\sum_kW_{jk}d_k.\eqno(2.14)$$ The solution of the inverse problem is then given by the inverse matrix $$d_k=\sum_jW_{kj}^{-1}V(x_j).\eqno(2.15)$$ Compact operators have infinitely many eigenvalues $\lambda_n$ which accumulate only at 0, $\lambda_n\to 0$ for $n\to\infty$ (theorem of F.Riesz \[6\]). As a consequence the approximating matrix $W$ in (2.14) has small eigenvalues and this is unavoidable. In the inverse (2.15) we then have $\lambda_n^{-1}\to\infty$, so that the corresponding part in the data $V(x_j)$ gets strongly amplified. This is the ill-posed nature of the inverse problem. To avoid a huge amplification of the noise one must cut off the smallest eigenvalues. This is a convenient regularization method called truncated singular value decomposition (TSVD).
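The effect of truncating the small eigenvalues can be illustrated with a small NumPy sketch (a toy system, not the actual clinical code): a synthetic matrix with rapidly decaying singular values mimics the compact-operator spectrum, and the naive inverse (2.15) amplifies the measurement noise enormously while the TSVD solution stays close to the true density.

```python
import numpy as np

def tsvd_solve(W, V, k):
    """Truncated SVD inverse: keep only the k largest singular values of W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ V))

# toy ill-conditioned system: singular values decaying to 0 mimic the
# eigenvalues lambda_n -> 0 of the compact operator discussed above
rng = np.random.default_rng(0)
n = 50
Uo, _ = np.linalg.qr(rng.standard_normal((n, n)))
Qo, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** (-np.arange(n) / 4)                   # lambda_n -> 0
W = Uo @ np.diag(s) @ Qo.T
d_true = rng.standard_normal(n)
V = W @ d_true + 1e-6 * rng.standard_normal(n)    # noisy "measurements"

d_naive = np.linalg.solve(W, V)                   # noise amplified by 1/lambda_n
d_tsvd = tsvd_solve(W, V, k=20)                   # small eigenvalues cut off
```

The truncation level `k` plays the role of the regularization parameter: too large and noise leaks in, too small and genuine structure of $d$ is lost.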
Another widely used method of regularization is the one of Tikhonov \[7\]. To solve the linear equation (2.14) $Wd - V = 0$ one considers the variational principle $$(Wd-V,\,Wd-V)+\gamma (Rd,\,Rd)=\Phi(d)=\min,\eqno(2.16)$$ where $R$ is a “regularizing” operator (mostly $R=1$) and $\gamma$ is the regularization parameter. Setting the variational derivative $\Phi'(d)$ equal to 0 we obtain the equation $$(W^+W+\gamma R^+R)d=W^+V\eqno(2.17)$$ where the cross denotes the adjoint operator. For positive $\gamma$ the accumulation of eigenvalues at 0 is removed in the operator on the left. The latter can be inverted (the inverse is bounded) and the dipole density can be computed. The advantage of this regularization method is that the regularization parameter $\gamma$ can be varied continuously. The optimal choice of $\gamma$ is a serious problem which is discussed in the next section. For $R=1$ one has the zeroth-order Tikhonov regularization.
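A minimal sketch of the Tikhonov step (2.17) with $R=1$, assuming Python/NumPy; for a well-conditioned system and $\gamma\to 0$ it reduces to the ordinary least-squares solution, which is the sanity check performed below.

```python
import numpy as np

def tikhonov_solve(W, V, gamma, R=None):
    """Solve (W^+ W + gamma R^+ R) d = W^+ V, eq. (2.17); default R = identity."""
    n = W.shape[1]
    R = np.eye(n) if R is None else R
    return np.linalg.solve(W.T @ W + gamma * (R.T @ R), W.T @ V)

# sanity check on a well-conditioned system: for gamma -> 0 the regularized
# solution reduces to the exact (least-squares) solution
rng = np.random.default_rng(1)
W = rng.standard_normal((40, 20))
d_true = rng.standard_normal(20)
V = W @ d_true
d_reg = tikhonov_solve(W, V, gamma=1e-12)
```

In the ill-conditioned case one would instead scan $\gamma$ over several decades and pick the value balancing the data misfit $\Vert Wd-V\Vert$ against the solution norm $\Vert Rd\Vert$.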
The inverse problem of electrocardiology in the standard sense is a voltage to voltage approach where one calculates the potential $V(y_S)$ at points $y_S$ on or near the wall $S$. If the dipole density $d$ is known this is a forward calculation (2.9) $$V(y_S)=Ld\eqno(2.18)$$ with a new integral operator $L$ because $y_S$ is now on or near the surface. If $y_S$ is on the wall $S$ the kernel of $L$ has a singularity at $y=y_S$. Nevertheless $L$ is still compact because it is again the limit of compact operators (corresponding to a sequence of surfaces $S'$ converging to $S$ from the interior). Using $d=K^{-1}V$ with the unbounded operator $K^{-1}$, we can eliminate $d$ and get $$V(y_S)=LK^{-1}V(x).\eqno(2.19)$$ This operator is better behaved than $K^{-1}$ alone, because $L$ damps the large eigenvalues of $K^{-1}$. Therefore the standard voltage inverse problem is easier to solve, which means that less strong regularization is necessary. But as discussed in the introduction, it gives less precise information on the electrical activity of the heart.
A solvable model and the numerical code
=======================================
To compare dipole density and potential and to test candidate regularization methods, a mathematical model with exactly known dipole density $d(y)$ was defined for which the corresponding potential $V(x)$ can be calculated exactly. This model represents the dipole density “frozen” at one instant of time. A simple solvable model is obtained as follows. Let a sphere of radius 1 represent the heart wall (endocardial surface) $S$ and choose the dipole density on it according to the formula $$d(y)=d_0\exp(-p\cos\te)\eqno(3.1)$$ where $\te$ is the polar angle with respect to the $z$-axis, $p$ is a positive parameter and $d_0$ a normalization factor. This distribution is rotationally symmetric around the $z$-axis; it has a maximum at the south pole $\te=\pi$ and diminishes toward the north pole. A large value of the parameter $p$ makes the maximum region at the south pole narrow, whereas a small value of $p$ makes it broad. The normalization factor $d_0$ conveniently scales the density values so that the polar profile integrates to 1, $\int_{-1}^{1}d_0\,e^{-p\xi}\,d\xi=1$ with $\xi=\cos\te$: $$d_0={p\over \exp{p}-\exp{(-p)}}.\eqno(3.2)$$
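The normalization (3.2) can be checked numerically (a sketch, assuming SciPy): with this $d_0$ the polar profile $\int_{-1}^{1}d_0\,e^{-p\xi}\,d\xi$ equals 1, and the density indeed peaks at the south pole.

```python
import numpy as np
from scipy.integrate import quad

p = 5.0
d0 = p / (np.exp(p) - np.exp(-p))               # normalization factor (3.2)

def d_density(theta):
    """Model dipole density (3.1) on the unit sphere."""
    return d0 * np.exp(-p * np.cos(theta))

# with this d0 the integral of d0*exp(-p*xi) over xi = cos(theta) in [-1,1]
# equals 1, fixing the overall scale of the model
norm, _ = quad(lambda xi: d0 * np.exp(-p * xi), -1.0, 1.0)
```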
The corresponding voltage $V(x)$ for this dipole density can be exactly calculated as follows. We expand $d(y)$ in terms of Legendre polynomials $P_l(\cos\te)$ with respect to the polar angle $\te$ using the integral $$\int\limits_{-1}^{1}e^{-p\xi}P_l(\xi)d\xi=(-1)^l\sqrt{{2\pi\over p}}I_{l+1/2}(p)\eqno(3.3)$$ where $I_{l+1/2}$ is the modified spherical Bessel function. Then we obtain $$d(y)=\sum_{l=0}^\infty y^lD_lP_l(\cos\te)\eqno(3.4)$$ with $$D_l=(-1)^l(2l+1)\sqrt{{\pi p\over 2}}{e^p\over e^{2p}-1}I_{l+1/2}(p).\eqno(3.5)$$
On the other hand the kernel in the potential integral (2.8) can also be expanded in terms of Legendre polynomials. We start from the well-known expansion $${1\over \vert x-y\vert}={1\over y}\sum_{l=0}^\infty\B({r\over y}\B)^lP_l(\cos\al)\eqno(3.6)$$ where $r=\vert x\vert$, $\al$ is the angle between the vectors $x$ and $y$ and $\vert x\vert<\vert y\vert$. The normal derivative $d/dn$ in (2.8) on the unit sphere is equal to $d/dy$, hence $${d\over dn_y}{1\over\vert x-y\vert}=-\sum_l(l+1){r^l\over y^{l+2}}P_l(\cos\al).\eqno(3.7)$$ Substituting this into (2.8) we arrive at $$V(x)=-\sum_l (l+1)r^l\int\limits_S d(y){P_l(\cos\al)\over y^{l+2}}dS_y.\eqno(3.8)$$ A general bounded continuous dipole density on the unit sphere can be expanded in terms of spherical harmonics $$d(y)=\sum_ly^l\sum_{m=-l}^lD_{lm}Y^m_l(\te,\fii).\eqno(3.9)$$ Inserting this into (3.8) and using the following integral over the angles $$\int Y_{l'}^{m'}(\te',\fii')P_l(\cos\al)d\cos\te' d\fii'={4\pi\over 2l+1}\delta_{ll'}Y_l^{m'}(\te,\fii)\eqno(3.10)$$ we get the desired potential in the form $$V(x)=-4\pi\sum_l{l+1\over 2l+1}r^l\sum_mD_{lm}Y_l^m(\te,\fii).\eqno(3.11)$$ This general result can be applied to our rotationally symmetric dipole density (3.4) which gives the potential everywhere in the unit sphere: $$V(x)=\sum_l x^lV_lP_l(\cos\te)\eqno(3.12)$$ with $$V_l=-4\pi (-1)^l(l+1)I_{l+1/2}(p)\sqrt{{\pi p\over 2}}{e^p\over e^{2p}-1}.\eqno(3.13)$$ The sum over $l$ is rapidly converging so that one can stop at a finite value $l_{\rm max}$ and gets any desired accuracy.
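The coefficient formulas (3.5) and (3.13) can be verified numerically (a sketch using SciPy's modified Bessel and Legendre routines): the partial sums of (3.4) reproduce the closed form (3.1) on the unit sphere, and each $V_l$ equals $-4\pi\,(l+1)/(2l+1)\,D_l$ as dictated by (3.11).

```python
import numpy as np
from scipy.special import iv, eval_legendre

p, l_max = 5.0, 60
l = np.arange(l_max + 1)
pref = np.sqrt(np.pi * p / 2) * np.exp(p) / (np.exp(2 * p) - 1)
D = (-1.0) ** l * (2 * l + 1) * pref * iv(l + 0.5, p)          # eq. (3.5)
Vl = -4 * np.pi * (-1.0) ** l * (l + 1) * pref * iv(l + 0.5, p)  # eq. (3.13)

# resum the Legendre series (3.4) at y = 1 and compare with (3.1)
theta = np.linspace(0.0, np.pi, 181)
xi = np.cos(theta)
P = np.array([eval_legendre(n, xi) for n in l])                 # P_l(cos theta)
d_series = D @ P
d0 = p / (np.exp(p) - np.exp(-p))
d_exact = d0 * np.exp(-p * xi)
```

The terms decay super-exponentially in $l$ (the modified Bessel functions $I_{l+1/2}(p)$ die off rapidly for $l\gg p$), so `l_max = 60` is far more than enough for $p=5$.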
In Figure A we compare $d(x_S)$ and $V(x_S)$ for $p=5$. We have normalized both quantities to a maximum value 1 for the purpose of direct comparison. As we follow both distributions from maximum (south pole) toward the minimum (north pole), the voltage has a long rightward tail (power law), compared to the rapid (exponential) descent of the dipole density. This shows clearly the local nature of the dipole density in contrast to the broad distribution of the potential.
![ Dipole density and voltage on the wall as function of the polar angle.](Figure1.png){width="\textwidth"}
Next we consider the general numerical code. The heart wall $S$ is covered by a triangular mesh. Since we want to have a continuous dipole density $d(y)$ we approximate it by piecewise linear functions $$h_n(y)={{\rm det}(x_k, x_l,y)\over {\rm det}(x_k,x_l,x_n)},\quad y\in\triangle_{kln}.\eqno(3.14)$$ and zero otherwise. Here $x_k$, $x_l$, $x_n$ are the vectors of the corner points of the triangle $\triangle_{kln}$ and det is the $3\times 3$ determinant of the 3 vectors. These functions satisfy the condition $$h_n(x_m)=\delta_{mn}.\eqno(3.15)$$ The unknown dipole density is expanded in the form $$d(y)=\sum_{n=1}^N d_nh_n(y).\eqno(3.16)$$ Then the integral equation (2.8) becomes the following set of linear equations for $N$ unknowns $d_n$ $$V(x_m)=\sum_{n=1}^N W_{mn}d_n,\quad m=1,\ldots N\eqno(3.17)$$ where $$W_{mn}=-\int\limits_S h_n(y)\nabla_y{1\over\vert y-x_m\vert}\cdot dS_y.\eqno(3.18)$$
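The basis functions (3.14) are ordinary barycentric coordinates of the triangle, provided the triangle plane does not pass through the origin (so that the denominator determinant is nonzero); a small NumPy sketch checks the interpolation property (3.15) and the partition of unity on a sample triangle.

```python
import numpy as np

def hat(xk, xl, xn, y):
    """Piecewise-linear basis h_n of eq. (3.14) on the triangle (xk, xl, xn)."""
    num = np.linalg.det(np.column_stack([xk, xl, y]))
    den = np.linalg.det(np.column_stack([xk, xl, xn]))
    return num / den

# sample triangle on the unit sphere (its plane misses the origin, den != 0)
xk = np.array([1.0, 0.0, 0.0])
xl = np.array([0.0, 1.0, 0.0])
xn = np.array([0.0, 0.0, 1.0])

# the three barycentric hats (cyclic in k, l, n) sum to 1 at any point of
# the triangle, here checked at the centroid
y_c = (xk + xl + xn) / 3.0
s = hat(xk, xl, xn, y_c) + hat(xl, xn, xk, y_c) + hat(xn, xk, xl, y_c)
```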
The integral (3.18) is a sum of integrals over the triangles with corner $n$. These integrals can be calculated analytically \[8\]. The result for one triangle $\triangle_{kln}$ is equal to $$M_{mn}={1\over A^2}\B[\vec z_n\cdot\vec n\Omega+d(\vec y_k-\vec y_l)\cdot\vec S\B].\eqno(3.19)$$ Here $$\vec y_k=x_k-x_m,\quad \vec y_l=x_l-x_m,\quad \vec y_n=x_n-x_m$$ $$\vec z_n=\vec y_k\times \vec y_l,\quad d=\vec y_k\cdot(\vec y_l\times\vec y_n),\eqno(3.20)$$ and $\vec n$ is the normal of the triangle and $A$ its magnitude $$\vec n=(\vec y_l-\vec y_k)\times (\vec y_n-\vec y_k),\quad A=\vert\vec n\vert=2F\eqno(3.21)$$ where $F$ is the area of the triangle. The vector $\vec S$ is given by $$\vec S=(\vec y_k-\vec y_l)\gamma_k+(\vec y_l-\vec y_n)\gamma_l+(\vec y_n-\vec y_k)\gamma_n\eqno(3.22)$$ with $$\gamma_k={1\over\vert\vec y_k-\vec y_l\vert}\log{{\vert\vec y_l\vert\vert\vec y_l-\vec y_k\vert+\vec y_l\cdot (\vec y_l-\vec y_k)
\over \vert\vec y_k\vert\vert\vec y_l-\vec y_k\vert+\vec y_k\cdot (\vec y_l-\vec y_k)}}\eqno(3.23)$$ and cyclic $k,l,n$. Finally $\Omega$ is the solid angle of the triangle subtended at the view point $x_m$. A convenient formula for $\Omega$ has been given by van Oosterom and Strackee \[9\].
If we replace $h_n$ in (3.18) by 1, we obtain the so-called Gauss-integral which is equal to the solid angle $\Omega$. Since $S$ is a closed surface and $x_m$ is in the interior we get $4\pi$. This leads to the sum rule $$\sum_n W_{mn}=4\pi\eqno(3.24)$$ which holds exactly because the discrete triangulated surface subtends the same solid angle $4\pi$. The sum rule is an important test of the code; it must be satisfied with machine accuracy. In other words, the matrix $W_{mn}$ is a stochastic matrix times $4\pi$: it has the eigenvector $d=(1,1,\ldots)$ with eigenvalue $4\pi$. Unfortunately, it also has very small eigenvalues because it approximates a compact operator. Then the inverse problem requires regularization. This leads to some error in the resulting dipole density. We now discuss this essential problem in our solvable model.
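The solid angle $\Omega$ and the sum rule (3.24) can be checked with the van Oosterom-Strackee formula \[9\] (a sketch, assuming Python/NumPy): for the four faces of a tetrahedron enclosing the view point, the subtended solid angles add up to $4\pi$, exactly as for the triangulated heart wall.

```python
import numpy as np

def solid_angle(y1, y2, y3):
    """Solid angle of the triangle (y1, y2, y3) seen from the origin,
    van Oosterom & Strackee [9]: tan(Omega/2) = num/den (sign = orientation)."""
    l1, l2, l3 = (np.linalg.norm(v) for v in (y1, y2, y3))
    num = np.dot(y1, np.cross(y2, y3))
    den = (l1 * l2 * l3 + np.dot(y1, y2) * l3
           + np.dot(y1, y3) * l2 + np.dot(y2, y3) * l1)
    return 2.0 * np.arctan2(num, den)

# sum rule: the four faces of a regular tetrahedron enclosing the origin
# subtend a total solid angle of 4*pi (here each face gives exactly pi)
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
faces = [(1, 2, 3), (0, 3, 2), (0, 1, 3), (0, 2, 1)]
total = sum(abs(solid_angle(*verts[list(f)])) for f in faces)
```

The `arctan2` form is numerically robust even when a face subtends exactly $\pi$ (denominator zero), which is the case here.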
To be near reality we define a spherical “basket” of radius $r=0.5$ which we first place concentrically with the unit sphere representing the heart wall. We calculate the exact potential values at 186 evenly distributed points on this basket. This represents the measured values on an array of electrodes of a basket catheter. Finally, we calculate the dipole density on the heart wall ($r=1$) by solving the inverse problem $$d=W^{-1}V\eqno(3.25)$$ where $W^{-1}$ is a regularized inverse. Since we know the exact dipole density we can choose the regularization parameter in an optimal way. Using truncated singular value decomposition (TSVD) with 110 singular values out of the total 186 we obtain very good results as shown in Figure B plotted in red, compared with the exact dipole density values plotted as the blue curve. The normalized RMS error is 0.01. In the case of real data from living hearts a trained eye of the medical doctor is required to find the optimal regularization parameter. If we use Tikhonov regularization instead of TSVD we find no statistically significant difference in the resulting calculated dipole density. If the basket is not placed in the center the results get worse, but not dramatically. However, using only 48 electrodes instead of 186 gives poor results, showing that we have a large discretization error in this case. The remedy in view of the real situation with the AcQMap system is interpolation of the measured potential values. For this interpolation on a triangular surface the method of Oostendorp, van Oosterom and Huiskamp \[10\] is very useful, because it minimizes $\Delta V$. This is the best strategy because the exact potential satisfies Laplace’s equation $\Delta V=0$.
Phase dynamics
==============
Cardiac fibrillation is the main cause of death in the western world. Nevertheless, its underlying mechanisms of activation are still poorly understood. Obviously, mapping of cardiac potentials is not sensitive enough to improve the situation. There is considerable hope that dipole density maps can help. These maps show the amplitude of the dipole density $d(t,x)$ distributed over the heart wall ($x\in S$) as a function of time $t$. But in addition to the amplitude, the phase of the dipole density gives important information, as is the case in the phase analysis of electrograms \[11-14\] (and references given there).
To define the phase, the dipole density $d$ is considered as the real part of a complex function whose imaginary part is given by the Hilbert transform $$(Hd)(t)={1\over\pi}P\int\limits_{-\infty}^{+\infty}{d(t')\over t-t'}dt'\eqno(4.1)$$ where $P$ stands for the principal value integral. The phase $\Phi(t)$ is then equal to the phase of the complex number $d+iHd$, that means $$\Phi(t)=\arctan{-d\over Hd}.\eqno(4.2)$$ If the phase moves out of the interval $[-\pi/2, \pi/2]$ it must be continued continuously to the full interval $[-\pi, \pi]$. This arctan-function is denoted by arctan2 in Matlab, so that the general definition is $$\Phi(t)=\arctan 2(-d, Hd).\eqno(4.3)$$ In this definition we have assumed that the mean value of $d(t)$ over time is zero. By adding or subtracting $2\pi$, $\Phi(t)$ can be made continuous in $t$.
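A sketch of the phase extraction (4.1)-(4.3) using SciPy (whose `hilbert` routine returns the analytic signal $d+iHd$, so its imaginary part is the Hilbert transform of (4.1)): for a zero-mean pure tone the unwrapped phase grows linearly at the angular frequency of the signal.

```python
import numpy as np
from scipy.signal import hilbert

# phase of a dipole-density trace via eq. (4.3): Phi = arctan2(-d, H d)
fs, f0, n_cycles = 1000.0, 5.0, 20
t = np.arange(int(fs * n_cycles / f0)) / fs     # integer number of periods
d = np.cos(2 * np.pi * f0 * t)                  # zero-mean test signal
Hd = np.imag(hilbert(d))                        # Hilbert transform (4.1)
phi = np.unwrap(np.arctan2(-d, Hd))             # continuous phase in t

# for a pure tone the phase increases linearly with slope 2*pi*f0
slope = np.polyfit(t, phi, 1)[0]
```

For real dipole-density traces the mean must first be removed, matching the zero-mean assumption stated above.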
Since the phase can be calculated at every point $x$ where the dipole density $d(t,x)$ has been determined, we actually get a phase map $\Phi(t,x)$ on the heart wall for every instant $t$. This map shows singular points where the phase is undetermined. Such a phase singularity is actually a singularity of the gradient of $\Phi(t,x)$. In fact, if we integrate $\nabla_x\Phi(t,x)$ along a closed curve we get zero, unless a singularity of the gradient is enclosed. This is the same situation as in 2-dimensional hydrodynamics, where the flow velocity integrated along a closed curve gives the circulation, which vanishes unless a vortex is enclosed. Considering $\vec v(t,x)=\nabla_x\Phi(t,x)$ as a flow velocity we get a complete hydrodynamical analogy. We have a potential flow, and the phase is the velocity potential. In hydrodynamics the circulation is conserved in the course of time. We want to investigate the same property for the phase singularities on the heart wall.
The first observation is that the phase singularities are quantized vortices. That means the contour integral of $\nabla_x\Phi(t,x)$ (the circulation) always has the same value $\pm 2\pi$. Studying various phase maps on the heart wall we have found that the vortices always appear in pairs: one with circulation $+2\pi$ and a second one with circulation $-2\pi$. This shows that the circulation is indeed conserved as in hydrodynamics. One vortex cannot appear or disappear suddenly; it can only annihilate together with a partner of opposite circulation. In the healthy heart there seems to exist only one pair of rather stable vortices. The two vortices can be joined by a line where the flow velocity $\vec v(x)$ is maximal (see Figure 2). This line may be considered as the activation front. The front is most easily found by a jump from $+\pi$ to $-\pi$ in the phase. During one heart beat this activation front moves over the heart wall, while its endpoints at the vortices remain more or less fixed. If an arrhythmia develops, more and more vortex pairs appear and move around. Besides vortices, sometimes sources and perhaps also sinks show up which have a closed activation front. It is clear that the study of this flow dynamics will be an important tool for understanding complicated arrhythmias.
![Human left atrium with a pair of $\pm$ vortices (black and white) together with the phase flow. The black line is the activation front. LAA: Left Atrial Appendage, LSPV: Left Superior Pulmonary Vein, LIPV: Left Inferior Pulmonary Vein, RSPV: Right Superior Pulmonary Vein, RIPV: Right Inferior Pulmonary Vein, MV: Mitral Valve. ](Figure2.png){width="\textwidth"}
[**Acknowledgment**]{}
We thank Graydon Beatty from Acutus Medical for innumerable elucidating discussions and communication of information. Thanks are also due to other members of the Acutus team, in particular Min Zhu and Xinwei Shi and, of course, Randy Werneth.
van Oosterom A, Solidifying the Solid Angle, J. of Electrocard. 35, No.4, part B, 181-192 (2002)
Jackson JD, Classical Electrodynamics, second ed. New York; Wiley (1975)
Scharf G, From Electrostatics to Optics, Springer, Berlin, Heidelberg, New York (1994)
Plonsey RF, Bioelectric Phenomena, New York; McGraw-Hill (1969)
Wladimirov WS, Equations of mathematical physics; Moscow, Mir Publishers, (1984)
Jörgens K, Lineare Integraloperatoren, B.G. Teubner, Stuttgart (1970)
Tikhonov AN, Arsenin VY, Solutions of Ill-Posed Problems, Halsted Press, New York (1977)
de Munck JC, IEEE Trans. Biomed. Engineering, 39, 986 (1992)
van Oosterom A, Strackee J, The Solid Angle of a Plane Triangle, IEEE Trans. Biomed. Engineering, BME-30, 125 (1983)
Oostendorp TF, van Oosterom A, Huiskamp G, Interpolation on a Triangulated 3D Surface, J. Comp. Phys. 80, 331 (1989)
Gray RA, Pertsov AM, Jalife J, Spatial and temporal organisation during cardiac fibrillation, Nature 392, 75 (1998)
Jalife J, Gray RA, Chen J in Cardiac Electrophysiology, Zipes DP, Jalife J editors, Saunders, Philadelphia (2000)
Nash PN, Mourad AM, Clayton RH, Sutton PM, Bradley CP, Hayward M, Paterson DJ, Taggart P, Evidence for multiple mechanisms in human ventricular fibrillation, Circulation 114, 536 (2006)
Kuklik P, Zeemering S, Maesen B, Maesson J, Crijns HJ, Verheule S, Ganesan AN, Scotten U, Reconstruction of instantaneous phase of unipolar atrial contact electrogram, IEEE Trans. Biomed. Engineering, 62, 296 (2015)
[^1]: e-mail: scharf@physik.uzh.ch
[^2]: e-mail: lam.dang.ch@gmail.com
---
abstract: 'This is a review of some recent works which demonstrate how the classical equations of gravity in AdS themselves hold the key to understanding their holographic origin in the form of a strongly coupled large $N$ QFT whose algebra of local operators can be generated by a few (single-trace) elements. I discuss how this can be realised by reformulating Einstein’s equations in AdS in the form of a non-perturbative RG flow that further leads to a new approach towards constructing strongly interacting QFTs. In particular, the RG flow can self-determine the UV data that are otherwise obtained by solving classical gravity equations and demanding that the solutions do not have naked singularities. For a concrete demonstration, I focus on the hydrodynamic limit in which case this RG flow connects the AdS/CFT correspondence with the membrane paradigm, and also reproduces the known values of the dual QFT transport coefficients.'
address: |
Institut für Theoretische Physik, Technische Universität Wien, Wiedner Hauptstrasse 8-10, A-1040 Vienna, Austria and\
CERN, Theoretical Physics Department, 1211 Geneva 23, Switzerland\
ayan@hep.itp.tuwien.ac.at
author:
- Ayan Mukhopadhyay
bibliography:
- 'paddyat60ayan.bib'
title: Understanding the holographic principle via RG flow
---
A route to explore the holographic origin of gravity {#sec:1}
====================================================
It is widely believed that *the holographic principle* holds the key to merging quantum theory and gravity into a consistent framework. This principle broadly postulates that the gravitational dynamics in a given *volume* of spacetime can be described using degrees of freedom living at the *boundary* [@tHooft:1999bw; @Susskind:1994vu; @Bousso:2002ju; @Maldacena:2003nj]. Thus gravity and at least one dimension of spacetime should *both* emerge together from familiar quantum dynamics of many-body systems living on a holographic screen, whose embedding in the emergent spacetime should depend on the observer and the measurement process. A precise general statement of the holographic principle is still elusive, although we do have a very concrete realisation in the form of the AdS/CFT correspondence of string theory, in which certain supergravity theories with stringy corrections in anti-de Sitter (AdS) space have been shown to have *dual* descriptions given by specific types of conformal Yang-Mills theories without gravity living at the boundary [@Maldacena:1997re; @Witten:1998qj; @Gubser:1998bc].
Recently, a specific approach towards understanding the holographic principle (especially [@Kuperstein:2011fn; @Kuperstein:2013hqa; @Behr:2015yna; @Behr:2015aat]) has been developed, directly influenced by the broad philosophy that the classical gravity equations themselves hold the key to unravelling gravity’s holographic origin. In the context of the AdS/CFT correspondence, which is the most concrete example of holography, this question can be formulated in a precise way. Let us however state this question from a broader point of view by considering a class of gravitational spacetimes where one can naturally define a *spatial holographic direction* related to a *decreasing* energy scale. Such spacetimes include asymptotically anti-de Sitter (aAdS) spaces and those with horizons (such as black holes), where this holographic direction is the radial direction associated with a *warp factor* or a *blackening function*. *If gravity is holographic, then the holographic radial direction should be related to the scale of a precise kind of renormalisation group flow of the dual quantum system, implying that holographic screens at constant values of this radial coordinate should contain complete information about a specific kind of coarse-grained description of the dual quantum system.* The broad question is partly how we make this statement precise, and also how we relate the freedom of choosing a local coarse-graining for doing measurements in the dual quantum system to the emergence of diffeomorphism symmetry in the gravitational theory.
In the case of the AdS/CFT correspondence, the precise microscopic QFT described by the data on the holographic screen at infinity (the boundary of the aAdS spacetime) is precisely known. Nevertheless, the precise general relation between the scale of the QFT and the emergent radial coordinate, meaning a correspondence between RG flow in the QFT and the radial evolution of data on holographic screens via gravitational dynamics, is still unknown. In this article, we will discuss how a great deal about this mysterious map between RG flow and radial evolution can be learned by an appropriate reformulation of the classical gravity equations themselves.
There is another aspect of the holographic origin of gravity which is also very enigmatic. Typically, the map between classical gravity with a few fields and a dual QFT works only when the latter is strongly coupled [@Maldacena:1997re; @Witten:1998qj; @Gubser:1998bc]. This feature has revolutionised our understanding of strong coupling dynamics in quantum many-body systems. At strong coupling, the perturbative machinery of calculations with Feynman diagrams does not work, and so far there is no better alternative to the holographic duality (whenever it is applicable) for calculating real-time quantum dynamics in presence of strong interactions. In order to calculate physical quantities via holography, one simply solves for the *asymptotic* data that lead to solutions in the dual gravity theory which are *free of naked singularities*. These lead to relations between the a priori independent leading (non-normalisable) and subleading (normalisable) modes of the gravitational fields near the boundary of AdS which satisfy two-derivative equations (such as Einstein’s equations and the covariant Klein-Gordon equation). Each such field corresponds to an operator of the quantum theory. The non-normalisable modes correspond to the *sources* for local operators, and the normalisable modes correspond to the *expectation values* of the corresponding operators. Solving for the relations between these two that lead to dual geometries without naked singularities, we can obtain correlation functions, transport coefficients, etc. of the dual QFT. In fact, even if the Lagrangian description of the dual QFT is unknown, the dual gravity description gives us a concrete machinery to calculate all physical observables.
The enigmatic aspect is as follows. If the classical gravity equations can be reformulated as a RG flow, then this RG flow itself should know which microscopic UV data should lead to dual spacetimes in the theory of gravity without naked singularities. The RG flow is a first order evolution with the holographic radial direction moving towards the infrared of the dual QFT. The criterion for absence of naked singularities should be better obtained from the infrared behaviour of the RG flow, as often the dual field theory can become weakly coupled in the ultraviolet so that the holographic classical gravitational description may no longer be valid [@Klebanov:2000hb]. Therefore, demanding appropriate requirements on the infrared holographic screen where the RG flow ends should ensure absence of naked singularities in dual gravity. Typically, this infrared holographic screen is the horizon. However, this infrared horizon screen should not be a fixed point of the RG flow so that the microscopic UV data can be recovered from the endpoint data by following back the first order scale evolution. *The question then is what is this infrared behaviour of RG flow that should be specified at the endpoint (the holographic screen coinciding with the horizon) which should lead us to the same UV data that is usually specified at the AdS boundary to ensure that the dual spacetimes do not have naked singularities.*
The data at the holographic horizon screen are expected to be very universal and characterised by a few parameters. For example, although the microscopic UV data in the hydrodynamic limit consist of an infinite number of transport coefficients which should be specified at the AdS boundary to obtain regular future horizons [@Policastro:2001yc; @Baier:2007ix; @Bhattacharyya:2008jc], the dynamics of the horizon is known to be characterised universally by a *non-relativistic incompressible Navier-Stokes fluid* with the shear viscosity being the only parameter, as demonstrated via the membrane paradigm [@Damour:1978cg; @1986bhmp.book.....T]. Therefore, somehow the endpoint of the RG flow that reformulates classical gravitational dynamics should be specified by only a *few* parameters which should determine the infinite number of physical observables of the dual QFT. A natural implication then is that *in any fixed number of dimensions only a class of gravitational theories (which may be constituted by finite or infinite number of higher derivative corrections to Einstein’s equations) can be holographic – such gravitational theories can be parametrised by a finite number of infrared parameters*. Furthermore, *this can possibly be revealed by reformulating the classical equations of gravity in the form of RG flows, and then finding out when the absence of naked singularities in solutions of the gravitational theory can be translated into an appropriate criterion for the endpoint of the RG flow which can be described by simple dynamical equations involving a few parameters only.*
In the following section, I will describe how such a reformulation of classical gravity equations in AdS in the form of RG flows work and also how the infrared criterion for the RG flow can determine the microscopic UV data of the dual field theories. In Section 3, I will describe the construction of the RG flow in the field theory which will also define the latter in a constructive way in special limits. Special emphasis will be given on the hydrodynamic sector. I will conclude with an outlook.
Reformulating gravity as a *highly efficient RG flow*
=====================================================
The map between gravity in AdS and RG flow can be readily understood in the Fefferman-Graham coordinates which is well adapted for the description of the asymptotic behaviour of the spacetime metric and other gravitational fields from which the microscopic UV data of the dual field theory can be readily extracted. Therefore, we first describe how the map works in the Fefferman-Graham coordinates. The map can of course be expressed in any coordinate system and this will be related to the freedom of choosing the scale of observation in the dual field theory locally as we will show later. We will also consider pure Einstein’s gravity for most of our discussion.
Any asymptotically AdS (aAdS) spacetime metric can be expressed in the Fefferman-Graham coordinates in the form: $$\label{FG}
{\rm d}s^2 = \frac{l^2}{r^2}\left({\rm d}r^2 + g_{\mu\nu}(r,x){\rm d}x^\mu {\rm d}x^\nu \right).$$ This coordinate system should be valid in a finite patch ending at the boundary which is at $r= 0$. Also $l$ is called the AdS radius and in the holographic correspondence it provides units of measurement of bulk gravitational quantities which then corresponds to parameters and couplings of the dual field theory. The *boundary metric* $g^{\rm (b)}_{\mu\nu}$ defined as: $$g^{\rm (b)}_{\mu\nu}(x) \equiv g_{\mu\nu}(r = 0, x)$$ is identified with the metric on which the dual field theory lives. For the sake of simplicity, unless stated otherwise we will assume that $g^{\rm (b)}_{\mu\nu} = \eta_{\mu\nu}$ so that the dual field theory lives in flat Minkowski space. Our results of course can be generalised readily to any arbitrary curved boundary metric.
For later purposes, it is useful to define: $$z^\mu_{\phantom{\mu}\nu} \equiv g^{\mu\rho}\frac{\partial}{\partial r}g_{\rho\nu}.$$ Einstein’s equations with a negative cosmological constant $\Lambda = -d(d-1)/(2l^2)$ in $(d+1)-$ dimensions in Fefferman-Graham coordinates can be written in the following form [@Kuperstein:2013hqa]: $$\begin{aligned}
\label{Einstein}
\frac{\partial}{\partial r}{z^\mu_{\phantom{\mu}\nu}} - \frac{d-1}{r} z^\mu_{\phantom{\mu}\nu} + {\rm Tr}\, z\left(\frac{1}{2}z^\mu_{\phantom{\mu}\nu}-\frac{1}{r}\delta^\mu_{\phantom{\mu}\nu}\right)
&=& 2 \, R^\mu_{\phantom{\mu}\nu},\label{tensor-equation}\nonumber\\
\nabla_\mu\left(z^\mu_{\phantom{\mu}\nu} -{\rm Tr}\, z\delta^\mu_{\phantom{\mu}\nu}\right)&=& 0, \label{vector-equation}\nonumber\\
\frac{\partial}{\partial r}{{\rm Tr}\, z} -\frac{1}{r}{\rm Tr}\, z +\frac{1}{2}{\rm Tr}\,z^2 &=& 0.\end{aligned}$$ Above, all indices have been lowered or raised with $g_{\mu\nu}$ or its inverse respectively. The first equation is the genuine dynamical equation, while the latter two are constraints that the data at the boundary $r=0$ should satisfy. The radial dynamical evolution preserves the constraints, so if they are satisfied at $r=0$, they are satisfied everywhere (for further details see [@Gupta:2008th]). Note that in the above form the AdS radius $l$ does not appear in the equations of motion.
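As an illustration of these equations (our own check, not part of the original derivation, assuming `sympy`), the planar AdS$_5$ black brane, whose Fefferman-Graham form is $g_{tt} = -(1-u)^2/(1+u)$, $g_{ii} = 1+u$ with $u = r^4/r_h^4$ and horizon at $r = r_h$, can be verified to satisfy them symbolically; since this $g_{\mu\nu}$ depends on $r$ only, the slice Ricci tensor $R^\mu_{\phantom{\mu}\nu}$ vanishes and the vector constraint is trivial:

```python
import sympy as sp

# Illustrative check: the planar AdS_5 black brane in Fefferman-Graham gauge,
#   g_{tt} = -(1-u)^2/(1+u),  g_{ii} = 1+u,  u = r^4/rh^4  (horizon at r = rh),
# satisfies the FG-form Einstein equations for d = 4.  As g depends on r only,
# R^mu_nu[g] = 0 on the r = constant slices and the vector constraint is trivial.
r, rh = sp.symbols('r r_h', positive=True)
u = r**4/rh**4
g = sp.diag(-(1 - u)**2/(1 + u), 1 + u, 1 + u, 1 + u)

zm = g.inv()*g.diff(r)           # z^mu_nu = g^{mu rho} dg_{rho nu}/dr
trz = zm.trace()
d = 4

tensor_eq = zm.diff(r) - (d - 1)/r*zm + trz*(zm/2 - sp.eye(d)/r)
scalar_eq = sp.diff(trz, r) - trz/r + (zm*zm).trace()/2

tensor_ok = sp.simplify(tensor_eq) == sp.zeros(d)
scalar_ok = sp.simplify(scalar_eq) == 0
```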
Let us now proceed *without* assuming the traditional rules of AdS/CFT correspondence. We only assume that corresponding to any *solution* of $(d+1)-$dimensional Einstein’s equations with a negative cosmological constant that is free of naked singularities, there should exist a *state* in the dual $d-$dimensional field theory. The $(d+1)-$dimensional metric should then contain information about $\langle {t_{\mu\nu}}^\infty \rangle$, the expectation value of the microscopic energy-momentum tensor operator in the dual QFT state. For later convenience in analysing the hydrodynamic limit, we will consider $\langle {t^\mu_{\phantom{\mu}\nu}}^\infty \rangle$ instead of $\langle {t_{\mu\nu}}^\infty \rangle$. When the boundary metric is $\eta_{\mu\nu}$, $\langle {t^\mu_{\phantom{\mu}\nu}}^\infty \rangle$ should satisfy the Ward identities: $$\label{WIinfty}
\partial_\mu \langle {t^\mu_{\phantom{\mu}\nu}}^\infty \rangle = 0, \quad {\rm Tr}\, \langle t^\infty \rangle = 0.$$ The first is the local conservation of the energy-momentum tensor and the second follows from conformal invariance (we will see later why the dual quantum theory should be conformally invariant). The question is of course how to extract $\langle {t^\mu_{\phantom{\mu}\nu}}^\infty \rangle$ from the dual spacetime metric. At this stage, it should be intuitively obvious that the microscopic Ward identities (\[WIinfty\]) should be related to the constraints of Einstein’s equations (\[Einstein\]).
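As a concrete example of such an extraction (our own sketch; the black brane metric and the normalisation convention are assumptions here), the $r^4$ coefficient of the near-boundary expansion of the FG-gauge planar AdS$_5$ black brane yields a traceless $\langle {t^\mu_{\phantom{\mu}\nu}}^\infty \rangle$ of thermal form, trivially conserved since it is constant:

```python
import sympy as sp

# Illustration: read off <t^mu_nu> (up to overall normalisation) from the r^4
# coefficient of the near-boundary expansion of the FG-gauge AdS_5 black brane,
# and check the trace Ward identity Tr t = 0.
r, rh = sp.symbols('r r_h', positive=True)
u = r**4/rh**4
eta = sp.diag(-1, 1, 1, 1)
g = sp.diag(-(1 - u)**2/(1 + u), 1 + u, 1 + u, 1 + u)

# g_{mu nu}(r) = eta_{mu nu} + r^4 g4_{mu nu} + O(r^8)
g4 = sp.Matrix(4, 4, lambda i, j: sp.limit((g[i, j] - eta[i, j])/r**4, r, 0))
t_mixed = eta.inv()*g4           # proportional to <t^mu_nu>: diag(-3,1,1,1)/rh^4

traceless = sp.simplify(t_mixed.trace()) == 0
# a constant tensor is trivially conserved; energy density ~ 3/rh^4, pressure ~ 1/rh^4
```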
We should now view the problem of identifying $\langle {t^\mu_{\phantom{\mu}\nu}}^\infty \rangle$ from the broader perspective of connecting data on holographic screens at $r = \textit{constant}$ with an appropriate RG flow in the dual QFT. First, we identify the radial coordinate $r$ with the inverse of the scale $\Lambda$ of the dual quantum theory, i.e. we impose the relation $$r = \Lambda^{-1}.$$ If the relation between $r$ and $\Lambda$ is required to be (i) state (i.e. solution) independent, and (ii) such that the AdS radius $l$, which has no direct interpretation in the dual QFT, plays no role in the mutual identifications, then the above is the only possibility given that $r=0$ corresponds to the UV. On holographic screens at $r = \textit{constant}$ we must then identify the pair of data $g_{\mu\nu}(\Lambda)$ and $\langle {t^\mu_{\phantom{\mu}\nu}}(\Lambda) \rangle$. The effective metric $g_{\mu\nu}(\Lambda)$ can be seen as a generalised effective scale-dependent coupling, or rather the source for the effective operator $\langle {t^\mu_{\phantom{\mu}\nu}}(\Lambda) \rangle$. We should identify $g_{\mu\nu}(\Lambda)$ with the $g_{\mu\nu}(r)$ that appears in the Fefferman-Graham metric (\[FG\]) at $r=\Lambda^{-1}$ for reasons similar to those mentioned above: firstly, as evident from (\[tensor-equation\]), with this identification the evolution equations for $g_{\mu\nu}(\Lambda)$ do not involve $l$, which has no direct meaning in the dual QFT, and secondly, the identification is state (solution) independent. Furthermore, $g_{\mu\nu}(\Lambda)$ then coincides with the metric $\eta_{\mu\nu}$ on which the dual QFT lives at $\Lambda = \infty$. In usual perturbative RG flows we do not speak of a background metric $g_{\mu\nu}(\Lambda)$ that evolves with the scale; however, it makes perfect sense to do so in a special limit, as explained below.
At this stage, we can introduce the notion of *highly efficient RG flow* [@Behr:2015yna; @Behr:2015aat]. To understand this notion, it is first useful to classify operators in a QFT as *single-trace* and *multi-trace* operators. Single-trace operators are those which are gauge-invariant and which form the minimal set of generators of the algebra of all local gauge-invariant operators. All other gauge-invariant operators, which are multi-trace, are formed out of products of the single-trace operators and their spacetime derivatives. It is these single-trace operators which are dual to gravitational fields in the holographic correspondence. The *large $N$ limit* (where $N$ is usually the rank of the gauge group in the QFT) is that in which the expectation values of the multi-trace operators *factorise* into those of the constituent single-trace operators. It is only in this limit that a QFT can have a holographic dual in the form of a classical gravity theory. Furthermore, when the QFT is strongly interacting, we expect there to be only a few single-trace operators with small scaling dimensions, because unless protected by symmetries the anomalous dimensions receive large quantum corrections at strong coupling. The remaining single-trace operators will decouple from the RG flow. The holographically dual classical gravity should then have only a few fields, dual to the single-trace operators with small scaling dimensions.
Even in the large $N$ and strong coupling limit, the single-trace operators can mix with multi-trace operators along the RG flow [@Heemskerk:2010hk]. However, the RG flow can be thought of as classical equations for the scale evolution of single-trace operators in the sense that, due to large $N$ factorisation, the multi-trace operators can be readily replaced by products of the constituent single-trace operators when their expectation values are evaluated in *any* state. It is expected that the gravitational theory can be truncated to pure gravity with a (negative) cosmological constant, implying that there should be a consistent truncation of the dual RG flow equations to $$\label{schematic1}
\frac{\partial}{\partial \Lambda} t^\mu_{\phantom{\mu}\nu}(\Lambda) = F^\mu_{\phantom{\mu}\nu}[t^\mu_{\phantom{\mu}\nu}(\Lambda), \Lambda],$$ with $F^\mu_{\phantom{\mu}\nu}$ being non-linear in $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ so that it mixes with multi-trace operators built out of products of itself and its derivatives along the RG flow.
In the strong interaction and large $N$ limits, it is then useful to conceive a RG flow such that (despite $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ mixing with multi-trace operators constructed from products of itself and its derivatives) at each scale there should exist an effective metric $g_{\mu\nu}(\Lambda)$ which is a non-linear functional of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ and $\Lambda$, i.e. of the form $$\label{schematic2}
g_{\mu\nu} (\Lambda) = g_{\mu\nu}[t^\mu_{\phantom{\mu}\nu}(\Lambda), \Lambda],$$ which is constructed in the fixed background metric $\eta_{\mu\nu}$ such that $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ preserves the form of the Ward identity $$\label{WILambda}
\nabla_{(\Lambda)\mu} t^\mu_{\phantom{\mu}\nu}(\Lambda) = 0,$$ with $\nabla_{(\Lambda)}$ being the covariant derivative constructed from $g_{\mu\nu}(\Lambda)$. Therefore, an evolving metric $g_{\mu\nu}(\Lambda)$ which is a classical functional of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ (in the sense mentioned before) emerges as a tool for defining an efficient RG flow, which invokes an efficient mixing of single-trace operators with multi-trace operators such that the Ward identity for local energy and momentum conservation takes the same form at each scale despite *coarse-graining*. In order to find a dual field theory description we need to understand precisely how such a coarse-graining can be performed. *This property of preservation of the form of the Ward identity for local conservation of energy-momentum constitutes the major ingredient for defining a highly efficient RG flow.* This definition is not complete, as it does not tell us how such a RG flow can be constructed in the field theory – this will be described in the following section. Furthermore, we will also discuss the utility of such a RG flow in constructing strongly interacting large $N$ field theories.
The major motivation of constructing a highly efficient RG flow is that it readily gives rise to a holographically dual classical gravity theory with full diffeomorphism invariance in one higher dimension due to the following theorem [@Behr:2015yna].
Let us consider the $d-$dimensional scale evolution of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ taking the schematic form (\[schematic1\]) in a *fixed* background metric $g^{\rm (b)}_{\mu\nu}$ such that there exists a background metric $g_{\mu\nu}(\Lambda)$ which is a functional of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ and $\Lambda$ in the same *fixed* background metric $g^{\rm (b)}_{\mu\nu}$ as schematically represented by (\[schematic2\]), and in which $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ satisfies the local conservation equation (\[WILambda\]) at each $\Lambda$. Also let $g_{\mu\nu}(\Lambda)$ coincide with the fixed background metric $g^{\rm (b)}_{\mu\nu}$ at $\Lambda = \infty$ so that ${t^\mu_{\phantom{\mu}\nu}}^\infty$ satisfies $\nabla_{\rm(b)\mu}{t^\mu_{\phantom{\mu}\nu}}^\infty = 0$ with $\nabla_{\rm (b)}$ being the covariant derivative constructed from $g^{\rm (b)}_{\mu\nu}$.
We claim that, as a consequence of the above assumptions, $g_{\mu\nu}(\Lambda)$ gives a $(d+1)-$dimensional bulk metric (\[FG\]) in the Fefferman-Graham gauge with $r = \Lambda^{-1}$, such that it solves the equations of a *pure* $(d+1)-$dimensional classical gravity theory with *full* $(d+1)-$dimensional diffeomorphism invariance and a negative cosmological constant determined by the asymptotic curvature radius $l$. Also $g^{\rm (b)}_{\mu\nu}$ is the boundary metric of this emergent asymptotically AdS spacetime.
This theorem ensures that a $(d+1)-$dimensional classical gravity with full diffeomorphism invariance can be rewritten as a *first order scale evolution* (\[schematic1\]) of an effective energy-momentum tensor operator.
Let us now go back and see how Einstein’s equation (\[Einstein\]) can be reformulated into such a form as (\[schematic1\]). Let us take the background metric of the dual $4-$dimensional field theory to be $\eta_{\mu\nu}$, in which the following RG flow equation [@Behr:2015yna] $$\begin{aligned}
\label{t-rg-example}
\frac{\partial t^\mu_{\phantom{\mu}\nu}(\Lambda)}{\partial \Lambda} &=& \frac{1}{\Lambda^3}\cdot\frac{1}{2} \Box t^{\mu}_{\phantom{\mu}\nu}(\Lambda)- \frac{1}{\Lambda^5}\cdot\left(\frac{1}{4}\, \delta^\mu_{\phantom{\mu}\nu}
{t^\alpha_{\phantom{\alpha}\beta}}(\Lambda)
{t^\beta_{\phantom{\beta}\alpha}}(\Lambda) - \frac{7}{128}\,\Box^2 {t^\mu_{\phantom{\mu}\nu}}(\Lambda)\right)
\nonumber\\&&+\frac{1}{\Lambda^5}\, \log\, \Lambda \cdot \frac{1}{32}\cdot \Box^2 {t^\mu_{\phantom{\mu}\nu}}(\Lambda)
+\mathcal{O}\left(\frac{1}{\Lambda^7}\, \log\, \Lambda\right)\end{aligned}$$ can be constructed. For the above RG flow, we can indeed construct the following unique $g_{\mu\nu}(\Lambda)$ as given by $$\begin{aligned}
\label{g-example}
g_{\mu\nu}(\Lambda) &=& \eta_{\mu\nu} +\, \frac{1}{\Lambda^4}\cdot\frac{1}{4} \eta_{\mu\alpha}{t^\alpha_{\phantom{\alpha}\nu}}(\Lambda)
+\,\frac{1}{\Lambda^6}\cdot \frac{1}{24}\eta_{\mu\alpha}\Box {t^\alpha_{\phantom{\alpha}\nu}}(\Lambda)+
\nonumber\\&&
+ \frac{1}{\Lambda^8} \cdot \Bigg(\frac{1}{32}\,\eta_{\mu\alpha} {t^\alpha_{\phantom{\alpha}\rho}}(\Lambda)
{t^\rho_{\phantom{\rho}\nu}}(\Lambda) -\frac{7}{384}\, \eta_{\mu\nu}{t^\alpha_{\phantom{\alpha}\beta}}(\Lambda)
{t^\beta_{\phantom{\beta}\alpha}}(\Lambda)\nonumber\\&&\qquad\quad +\frac{11}{1536}\,\eta_{\mu\alpha}\Box^2 {t^\alpha_{\phantom{\alpha}\nu}}(\Lambda)\Bigg) +
\nonumber\\&& + \frac{1}{\Lambda^8}\,\log \, \Lambda\cdot \frac{1}{516} \cdot \eta_{\mu\alpha}\Box^2 {t^\alpha_{\phantom{\alpha}\nu}}(\Lambda)+ \mathcal{O}\left(\frac{1}{\Lambda^{10}}\, \log\, \Lambda\right)\end{aligned}$$ as a functional of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ and $\Lambda$ in the flat Minkowski space background such that when it is considered as an effective background metric, the scale-dependent Ward identity (\[WILambda\]) is satisfied at each $\Lambda$ (given that at $\Lambda= \infty$, the usual Ward identities (\[WIinfty\]) hold). Furthermore, the $5-$dimensional bulk metric (\[FG\]) then satisfies Einstein’s equations (\[Einstein\]) with $r = \Lambda^{-1}$ and the cosmological constant set to $-6/l^2$. The *log* term in (\[t-rg-example\]) is related to the conformal anomaly.
It is to be noted that the Ward identity (\[WILambda\]) can also be recast as an effective operator equation, i.e. can be rewritten in a state-independent manner as an identity in flat Minkowski space $\eta_{\mu\nu}$. In the above example, (\[WILambda\]) can be readily unpacked into $$\begin{aligned}
\label{opform}
\partial_\mu t^\mu_{\phantom{\mu}\nu}(\Lambda) &=& \frac{1}{\Lambda^4}\cdot\left(\frac{1}{16}\partial_\nu \left(t^\alpha_{\phantom{\alpha}\beta}(\Lambda)t^\beta_{\phantom{\beta}\alpha}(\Lambda)\right)-\frac{1}{8}t^\mu_{\phantom{\mu}\nu}(\Lambda)\partial_\mu\, {\rm Tr}\,t(\Lambda) \right)+\nonumber\\&&
+\frac{1}{\Lambda^6}\cdot\left(\frac{1}{48}t^\alpha_{\phantom{\alpha}\beta}(\Lambda)\partial_\nu\Box t^\beta_{\phantom{\beta}\alpha}(\Lambda)-\frac{1}{48}t^\mu_{\phantom{\mu}\nu}(\Lambda)\partial_\mu\Box\, {\rm Tr}\,t(\Lambda) \right)+\nonumber\\&&
+\mathcal{O}\left(\frac{1}{\Lambda^8}\right).\end{aligned}$$ We then explicitly see that the scale-dependent effective background $g_{\mu\nu}(\Lambda)$ as given by (\[g-example\]) serves to absorb the multi-trace contributions that spoil the usual Ward identity for local energy-momentum conservation. As a result, the Ward identity preserves its form (\[WILambda\]) at each scale in the new scale-dependent background.
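The leading multi-trace terms above can be checked directly (our own verification sketch, assuming `sympy`): expanding $\nabla_{(\Lambda)\mu} t^\mu_{\phantom{\mu}\nu}(\Lambda) = 0$ with $g_{\mu\nu}(\Lambda) = \eta_{\mu\nu} + \frac{1}{4\Lambda^4}\eta_{\mu\alpha}t^\alpha_{\phantom{\alpha}\nu}(\Lambda)$ to first order in $\Lambda^{-4}$ reproduces the $\mathcal{O}(\Lambda^{-4})$ terms of (\[opform\]) for an arbitrary symmetric $t_{\mu\nu}$; below this is verified for a concrete polynomial choice with $\Lambda = 1$:

```python
import sympy as sp

# Check (our illustration): to first order, the Christoffel corrections of
# g = eta + (eps/4) t reproduce the O(Lambda^{-4}) multi-trace terms of (opform):
#   Gamma^mu_{mu lam} t^lam_nu - Gamma^lam_{mu nu} t^mu_lam
#     = eps [ (1/8) t^lam_nu d_lam Tr t - (1/16) d_nu Tr t^2 ] + O(eps^2),
# for an ARBITRARY symmetric t_{mu nu}(x) (concrete polynomials below, Lambda = 1).
x = sp.symbols('x0:4')
eps = sp.symbols('epsilon')
eta = sp.diag(-1, 1, 1, 1)

tl = sp.Matrix([[x[1]**2,  x[0]*x[1], 0,         x[3]],
                [x[0]*x[1], x[2]**2,  x[0],      0],
                [0,         x[0],     x[1]*x[3], x[2]],
                [x[3],      0,        x[2],      x[0]**2]])   # t_{mu nu}, symmetric
tm = eta.inv()*tl                                             # mixed t^mu_nu

h = tl/4
g = eta + eps*h
ginv = eta.inv() - eps*eta.inv()*h*eta.inv()                  # inverse to O(eps)

def Gam(lam, mu, nu):
    return sp.Rational(1, 2)*sum(ginv[lam, s]*(sp.diff(g[s, nu], x[mu])
           + sp.diff(g[s, mu], x[nu]) - sp.diff(g[mu, nu], x[s])) for s in range(4))

trt = tm.trace()
trt2 = (tm*tm).trace()
residuals = []
for nu in range(4):
    corr = sum(Gam(m, m, l)*tm[l, nu] for m in range(4) for l in range(4)) \
         - sum(Gam(l, m, nu)*tm[m, l] for m in range(4) for l in range(4))
    lhs = sp.expand(corr).coeff(eps, 1)
    rhs = sp.Rational(1, 8)*sum(tm[l, nu]*sp.diff(trt, x[l]) for l in range(4)) \
        - sp.Rational(1, 16)*sp.diff(trt2, x[nu])
    residuals.append(sp.simplify(lhs - rhs))

all_zero = all(rv == 0 for rv in residuals)
```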
It should be immediately noted that although the RG flow (\[t-rg-example\]) leads to the bulk metric in the Fefferman-Graham gauge, the classical gravity equations determining the latter should have underlying full diffeomorphism invariance. It can be readily argued that otherwise the RG flow (\[t-rg-example\]) would not be able to preserve a Ward identity of the form (\[WILambda\]). In particular, absence of diffeomorphism invariance in the dual bulk theory that gives the evolution of $g_{\mu\nu}(\Lambda)$ would imply that there are other propagating degrees of freedom in addition to $g_{\mu\nu}(\Lambda)$, in which case the Ward identity (\[WILambda\]) should be modified.
The RG flow reformulation (\[t-rg-example\]) of Einstein’s equations has been demonstrated so far only in the asymptotic (i.e. UV) expansion. The series (\[t-rg-example\]) has a *finite* radius of convergence related to the scale (radius) where the Fefferman-Graham coordinates have a coordinate singularity in the dual spacetime. In order to sum (\[t-rg-example\]) to all orders in $\Lambda^{-1}$, we need to assume a specific form of the energy-momentum tensor, such as the hydrodynamic form to be considered later. In the latter case, all orders in $\Lambda^{-1}$ can be summed at any given order in the derivative expansion. The radius of convergence is then the scale corresponding to the location of the horizon at late time and is related to the final temperature.
The immediate question is how to derive the RG flow reformulation of the classical gravity equations, such as (\[t-rg-example\]) corresponding to Einstein’s equations. In order to answer this, it is sufficient to understand what $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ corresponds to in the dual gravitational theory. To do this, a gauge-independent formulation of the map between RG flow and gravitational equations is helpful. We express the $(d+1)-$dimensional spacetime metric via ADM-like variables [@Arnowitt:1962hi]: $$\label{ADM}
{\rm d}s^2 = \alpha(r,x) {\rm d}r^2 + \gamma_{\mu\nu}(r,x)\left({\rm d}x^\mu + \beta^\mu(r,x) {\rm d}r\right)\left({\rm d}x^\nu + \beta^\nu(r,x) {\rm d}r\right).$$ Here $\alpha$ is the analogue of the lapse function and $\beta^\mu$ is the analogue of the shift vector. Specifying conditions determining these amounts to gauge-fixing the diffeomorphism symmetry. For the reasons (state independence and absence of explicit presence of $l$ in the evolution equations) mentioned before, the identification of $\Lambda$ and $g_{\mu\nu}(\Lambda)$ should take the following forms assuming that $r=0$ is the boundary [@Behr:2015yna]: $$r = \Lambda^{-1}, \quad g_{\mu\nu}(\Lambda = r^{-1}) = \frac{r^2}{l^2}\gamma_{\mu\nu}(r,x).$$ Note that the above holds not only for Einstein’s gravity but also for a general gravitational theory. In this case the form of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ can also be fixed up to an overall multiplicative constant by (i) requiring it to be state (solution) independent, (ii) demanding absence of explicit presence of $l$ in its scale evolution, and (iii) requiring that it satisfies the Ward identity (\[WILambda\]). In a general gravitational theory, these imply that $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ should take the form [@Behr:2015yna]: $$\label{tandT}
t^\mu_{\phantom{\mu}\nu}(\Lambda = r^{-1}) =\left(\frac{l}{r}\right)^d\cdot\left({T^\mu_{\phantom{\mu}\nu}}^{\rm ql} +{T^\mu_{\phantom{\mu}\nu}}^{\rm ct}\right),$$ up to an overall multiplicative constant, where ${T^\mu_{\phantom{\mu}\nu}}^{\rm ql}$ is the quasi-local stress tensor that is conserved via equations of motion [@Balcerzak:2007da] and ${T^\mu_{\phantom{\mu}\nu}}^{\rm ct}$ is a sum of gravitational counterterms built out of the Riemann curvature of $\gamma_{\mu\nu}$ and its covariant derivatives such that they satisfy (\[WILambda\]) via Bianchi-type identities. Up to second order in derivatives, ${T^\mu_{\phantom{\mu}\nu}}^{\rm ct}$ can be parametrised as: $$\begin{aligned}
\label{Tct}
{T^\mu_{\phantom{\mu}\nu}}^{\rm ct} &=&-\frac{1}{8 \pi G_N}\Bigg[C_{(0)}\cdot \frac{1}{l} \cdot \delta^\mu_{\phantom{\mu}\nu} + C_{(2)}\cdot l \cdot \left(R^\mu_{\phantom{\mu}\nu}[\gamma] - \frac{1}{2}R[\gamma]\delta^\mu_{\phantom{\mu}\nu}\right) +
\cdots\,\Bigg],\end{aligned}$$ with $C_{(n)}$s being dimensionless constants that depend on the gravitational theory and $G_N$ being the $(d+1)-$dimensional gravitational constant. Above, the indices have been lowered/raised by the induced metric $\gamma_{\mu\nu}$/its inverse. In the case of Einstein’s gravity, ${T^\mu_{\phantom{\mu}\nu}}^{\rm ql}$ is the Brown-York tensor: $$\label{Brown-York}
{T^\mu_{\phantom{\mu}\nu}}^{{\rm ql}} = -\frac{1}{8 \pi G_N} \, \gamma^{\mu\rho}\left(K_{\rho\nu}- K \gamma_{\rho\nu}\right).$$ Here $K_{\mu\nu}$ is the extrinsic curvature of the hypersurface $r = \textit{constant}$ given by $$\label{extrinsic-curvature}
K_{\mu\nu} = -\frac{1}{2\sqrt{\alpha}}\left(\frac{\partial \gamma_{\mu\nu}}{\partial r} - \nabla_{(\gamma)\mu} \beta_\nu -\nabla_{(\gamma)\nu} \beta_\mu \right),$$ with $\beta_\rho = \gamma_{\rho\mu} \beta^\mu$, and $K = K_{\mu\nu}\gamma^{\mu\nu}$ (the lapse is $\sqrt{\alpha}$, since $\alpha$ multiplies ${\rm d}r^2$ in (\[ADM\])). Therefore, in the Fefferman-Graham gauge, $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ should take the following form for Einstein’s gravity: $$\begin{aligned}
\label{tFG}
t^\mu_{\phantom{\mu}\nu}(\Lambda= r^{-1}) &=& \frac{l^{d-1}}{16\pi G_N}\Bigg[\frac{1}{r^{d-1}}\cdot \left(z^\mu_{\phantom{\mu}\nu} - ({\rm Tr}\, z) \,\delta^\mu_{\phantom{\mu}\nu}\right) +2\cdot \frac{1}{r^d}\cdot \left( d- 1 - C_{(0)}\right) \cdot \delta^\mu_{\phantom{\mu}\nu}-\nonumber\\&&
\quad\quad\quad\quad - 2\cdot \frac{1}{r^{d-2}}\cdot C_{(2)}\cdot \left(R^\mu_{\phantom{\mu}\nu}[g] - \frac{1}{2}R[g]\delta^\mu_{\phantom{\mu}\nu}\right)
+ \cdots \Bigg].\end{aligned}$$ The overall multiplicative constant $l^{d-1}/(16\pi G_N)$ has been chosen by us above and cannot be fixed by the arguments presented before. This overall factor is actually proportional to $N^2$ of the dual field theory (as mentioned before $l$ itself has no meaning in the dual QFT but the gravitational constant measured in units where $l=1$ does have one). This overall factor can be fixed by identifying the temperature in the field theory in a thermal state to that of the Hawking temperature of the dual black hole. This however requires taking into account quantum effects. For later convenience, we rescale $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ by this overall factor $(16\pi G_N)/l^{d-1}$ so that $N^2$ is now absorbed in the definition of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$. There is still a genuine ambiguity in the definition of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ which arises from the choice of the gravitational counterterm coefficients $C_{(n)}$s. Fixing this ambiguity leads us to a profound and surprising understanding of gravity itself as described below.
We first observe that the above ambiguity of choosing coefficients of gravitational counterterms has an immediate consequence for the map between gravity and RG flow. It implies that the equations of gravity can be reformulated into infinitely many RG flow equations of the form (\[schematic1\]) for any choice of gauge fixing of bulk diffeomorphisms. Each of these formulations corresponds to a specific choice of gravitational counterterms $C_{(n)}$s. Furthermore, each such RG flow will require the existence of the same (unique) $g_{\mu\nu}(\Lambda)$ taking the schematic form (\[schematic2\]) in which the effective Ward identity (\[WILambda\]) will be satisfied, and which will lead to the same bulk metric that satisfies the dual diffeomorphism invariant gravitational equations with a specific gauge fixing.
It is of course desirable that at the UV fixed point, i.e. at $\Lambda = \infty$, ${t^\mu_{\phantom{\mu}\nu}}^\infty$ is finite. This leads to fixing a finite number of leading counterterms, particularly [@Henningson:1998ey; @Balasubramanian:1999re; @deBoer:1999tgo] $$\label{C0C2}
C_{(0)} = d -1, \quad C_{(2)} = - \frac{1}{d-2}, \quad \text{etc.}$$ It is interesting to note that ${t^\mu_{\phantom{\mu}\nu}}^\infty$ is completely free of ambiguities when the boundary metric is $\eta_{\mu\nu}$, since all other counterterms, except a few leading ones, vanish in any asymptotically AdS space owing to the enhancement of symmetries of the geometry in the asymptotic limit. We thus recover the result for ${t^\mu_{\phantom{\mu}\nu}}^\infty$ of the traditional AdS/CFT correspondence. This procedure is however unsatisfactory for two reasons. Firstly, we still have infinite ambiguities in the form of unfixed coefficients of the infinite number of gravitational counterterms which vanish asymptotically. Secondly, if we can genuinely rewrite gravity as RG flow, the latter should be a first order evolution, so that we can specify conditions either at the UV or at the IR, but not both. It is more desirable to impose restrictions in the IR, as we need sensible IR behaviour of the RG flow even in cases where the UV completion is unknown. This is especially relevant for finding holographic duals of theories like QCD, where only the IR can be expected to be captured by a holographically dual classical gravity description at large $N$ [@Klebanov:2000hb] – in the UV the emergent geometry can have a singularity, implying the necessity of new degrees of freedom.
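The role of $C_{(0)}$ can be made concrete on the black brane example (our own check, assuming `sympy`; we set $l = 1$ and drop the overall $16\pi G_N$ normalisation): in (\[tFG\]) the $\delta^\mu_{\phantom{\mu}\nu}$ term diverges as $r^{-d}$ unless $C_{(0)} = d-1$, while with that choice $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ remains finite as $\Lambda \to \infty$ and approaches the traceless thermal value:

```python
import sympy as sp

# Illustration (l = 1, overall 16 pi G_N factor dropped): evaluate (tFG) for d = 4 on
# the FG-gauge AdS_5 black brane.  The flat-slice curvature terms vanish, and the
# delta^mu_nu term diverges as r^{-4} unless C_(0) = 3; with that choice,
# t^mu_nu(Lambda) is finite at r = 0 and proportional to diag(-3,1,1,1)/rh^4 there.
r, rh = sp.symbols('r r_h', positive=True)
u = r**4/rh**4
g = sp.diag(-(1 - u)**2/(1 + u), 1 + u, 1 + u, 1 + u)
zm = g.inv()*g.diff(r)
trz = zm.trace()
d, C0 = 4, 3

t_scale = (zm - trz*sp.eye(d))/r**(d - 1) + 2*(d - 1 - C0)*sp.eye(d)/r**d

t_uv = t_scale.applyfunc(lambda e: sp.limit(e, r, 0))   # Lambda -> infinity value
uv_ok = sp.simplify(t_uv - 4*sp.diag(-3, 1, 1, 1)/rh**4) == sp.zeros(d)
trace_runs = sp.simplify(t_scale.trace())               # nonzero at finite r, O(r^4)
```

Note that the trace of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ vanishes only at $\Lambda = \infty$; at finite scale it runs as $\Lambda^{-4}$.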
This ambiguity is fixed by the following theorem [@Kuperstein:2013hqa; @Behr:2015yna; @Behr:2015aat].
Up to an overall multiplicative constant for $t^\mu_{\phantom{\mu}\nu}(\Lambda)$, there is a unique choice of the functional $F^\mu_{\phantom{\mu}\nu}$ in (\[schematic1\]) that reformulates a pure holographic classical gravity theory as RG flow such that the endpoint of the RG flow at $\Lambda = \Lambda_{\rm IR}$ can be converted to a fixed point in the hydrodynamic limit corresponding to *non-relativistic incompressible Navier-Stokes fluid* under the universal rescaling: $$\label{rescale}
\Lambda_{\rm IR}^{-1} - \Lambda^{-1} = \xi \cdot \lambda^{-1}, \quad t = \frac{\tau}{\xi},$$ (corresponding to the near-horizon and long-time behaviour of the dual gravitational dynamics) where $\xi$ is taken to zero with $\lambda$ and $\tau$ kept finite. This also corresponds to fixing the gravitational counterterms in (\[Tct\]) uniquely, so that $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ is uniquely identified as a functional of the ADM variables in the dual pure gravitational theory. Even the counterterms which are necessary to cancel UV divergences are determined by the prescribed IR behaviour.
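The rescaling (\[rescale\]) is the familiar non-relativistic hydrodynamic scaling $v \to \xi\, v(\xi x, \xi^2 t)$, $p \to \xi^2 p(\xi x, \xi^2 t)$, under which the incompressible Navier-Stokes equations map solutions to solutions. A sketch of our own (using the two-dimensional Taylor-Green vortex as a convenient exact solution, and assuming `sympy`) verifying this invariance:

```python
import sympy as sp

# Our illustration of the non-relativistic scaling behind (rescale): if (v, p) solves
# the incompressible Navier-Stokes equations, so does v_xi = xi v(xi x, xi^2 t),
# p_xi = xi^2 p(xi x, xi^2 t).  Checked on the 2d Taylor-Green vortex.
x, y, t, nu, xi = sp.symbols('x y t nu xi', positive=True)

def ns_residuals(v1, v2, p):
    div = sp.diff(v1, x) + sp.diff(v2, y)
    mom = [sp.diff(v, t) + v1*sp.diff(v, x) + v2*sp.diff(v, y) + sp.diff(p, c)
           - nu*(sp.diff(v, x, 2) + sp.diff(v, y, 2)) for v, c in ((v1, x), (v2, y))]
    return [div] + mom

# Taylor-Green vortex: an exact decaying solution
F = sp.exp(-2*nu*t)
v1, v2 = sp.cos(x)*sp.sin(y)*F, -sp.sin(x)*sp.cos(y)*F
p = -sp.Rational(1, 4)*(sp.cos(2*x) + sp.cos(2*y))*F**2
sol_ok = all(sp.simplify(e) == 0 for e in ns_residuals(v1, v2, p))

# rescaled family: substitute x -> xi x, y -> xi y, t -> xi^2 t, then rescale fields
sub = {x: xi*x, y: xi*y, t: xi**2*t}
w1, w2, q = xi*v1.subs(sub), xi*v2.subs(sub), xi**2*p.subs(sub)
scaled_ok = all(sp.simplify(e) == 0 for e in ns_residuals(w1, w2, q))
```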
Remarkably, the hydrodynamic limit fixes all the ambiguities of the RG flow, which nevertheless has a state-independent formulation in terms of the evolution of the operator $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ with the scale and which is valid even beyond this limit. Thus long wavelength perturbations of black holes unsurprisingly play a very fundamental role in understanding the holographic correspondence as RG flow. We do not have a complete proof of this theorem, so strictly speaking it is still a conjecture. However, very non-trivial calculations, which will be sketched in the next section, provide solid supporting verifications.
It is also important that the end point of the RG flow is not really a fixed point, although it becomes one after the rescaling (\[rescale\]), which was first introduced in the context of gravitational dynamics in the hydrodynamic limit in the dual theory [@Bredberg:2010ky]. As we will see in the next section, it implies that all physical parameters in $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ should satisfy appropriate bounds regarding how they behave at the endpoint. These bounds determine all integration constants of the first order RG flow and thus determine the UV values of physical observables. Remarkably, these UV values are exactly the same as those for which the dual gravitational geometries are free of naked singularities. Since the hydrodynamic limit determines the RG flow uniquely, all physical observables beyond the hydrodynamic limit can also be obtained from the RG flow. *Therefore, not only can a holographic gravitational theory be reformulated as a unique RG flow for every choice of gauge-fixing of the diffeomorphism symmetry (up to an overall constant numerical factor for $t^\mu_{\phantom{\mu}\nu}(\Lambda)$), but the data which lead to regular horizons are also determined by this RG flow.* This IR criterion constitutes another crucial defining feature of a highly efficient RG flow, as exemplified by (\[t-rg-example\]) for Einstein’s gravity. In the following section, we will present more details on how this IR criterion fixes the ambiguity of gravitational counterterms, leading to a unique highly efficient RG flow for each choice of gauge-fixing of the diffeomorphism symmetry in the dual classical gravity equations.
Finally, we note that the choice of gauge fixing of the diffeomorphism symmetry is also encoded in the RG flow (which, in cases other than the Fefferman-Graham gauge, may contain auxiliary non-dynamical variables corresponding to the lapse function and the shift vector). This is due to the feature that any asymptotically AdS metric has a residual gauge symmetry which corresponds to conformal transformations of the dual theory at the boundary, under which the dual theory must be invariant (up to quantum anomalies that are related to logarithmic terms necessary for regulating divergences of the on-shell gravitational action [@Henningson:1998ey; @Balasubramanian:1999re]). Such diffeomorphisms which preserve the Fefferman-Graham gauge are called Penrose-Brown-Henneaux (PBH) transformations in the literature [@Penrose; @Brown:1986nw; @Schwimmer:2000cu], and these can be readily generalised to other choices of gauge fixing [@Behr:2015yna]. They turn out to lead to an automorphism symmetry of the dual RG flow equations (\[schematic1\]) when the latter are formulated in a general fixed conformally flat background metric [@Behr:2015yna]. We have called this *lifted Weyl symmetry*. Deciphering this symmetry for a given highly efficient RG flow readily determines the corresponding gauge fixing in the dual gravity theory, and thus also the choice of hypersurface foliation in the dual geometries used as holographic screens at various scales.
The field theory perspective and the hydrodynamic limit
=======================================================
In the previous section, we have discussed the reformulation of a holographic pure gravity theory as a highly efficient RG flow which can self-determine the microscopic UV data by an appropriate IR criterion, and reproduce the results of the traditional holographic correspondence, where these data are determined by explicitly solving the gravitational equations and demanding absence of naked singularities. In this section, following [@Behr:2015aat], we will show how such a RG flow can be constructed in the field theory, and even define it constructively in the strong interaction and large $N$ limits. We will illustrate the construction briefly in the hydrodynamic limit.
In the strong interaction and large $N$ limits, a handful of single-trace operators (dual to the fields in the gravitational theory) can define at least some sectors of the full theory in the sense mentioned in the previous section. Instead of using the elementary fields to define the QFT, it then makes sense to use collective variables which are directly measurable and which parametrise the expectation values of these single-trace operators in all states. Such collective variables include the hydrodynamic variables and can be extended to include the shear-stress tensor and other non-hydrodynamic parameters as well (see for instance [@Iyer:2009in; @Iyer:2011qc; @Heller:2014wfa]). At the very outset, it is clear that such an exercise of defining quantum operators via collective variables which parametrise their expectation values is futile except in the strong interaction and large $N$ limits. Unless we are in the large $N$ limit, the expectation values of the multi-trace operators do not factorise, so we need new collective variables for defining the multi-trace operators. If, moreover, we are not in the strong interaction limit, we will need to consider infinitely many single-trace operators. This would imply a proliferation in the number of collective variables required to describe the exact quantum dynamics.
The physical picture is as follows. Consider a set of microscopic single-trace operators $O_I^\infty$, such as the energy-momentum tensor, which can be parametrised by a set of collective variables $X_A^\infty$, such as the hydrodynamic variables. Furthermore, the spacetime evolution of the expectation values $\langle O_I^\infty \rangle$ can be captured by equations of motion for the collective variables $X_A^\infty$, such as the hydrodynamic equations with parameters $\eta_M^\infty$, such as the transport coefficients. It is to be noted that the hydrodynamics mentioned here does not refer to any kind of coarse-graining, but rather to an asymptotic series involving a perturbative derivative expansion (with an infinite number of transport coefficients) which captures the dynamics near thermal equilibrium [@Heller:2013fn; @Basar:2015ava]. Generally speaking, we can succinctly represent the quantum operators $O_I^\infty$ through their expectation values $\langle O_I^\infty \rangle[X_A^\infty,\eta_M^\infty ]$.
We can readily perform an appropriate coarse-graining of our measurements of $\langle O_I^\infty \rangle$ and proceed to define $\langle O_I (\Lambda) \rangle$. The latter definition can be achieved via appropriate coarse-grained collective variables $X_A(\Lambda)$, which by construction follow equations similar to those of $X_A^\infty$ but with new parameters $\eta_M(\Lambda)$. As in any RG flow, we expect that we need fewer parameters $\eta_M(\Lambda)$ to describe the spacetime evolution of $X_A(\Lambda)$ than the number of $\eta_M^\infty$ we need to describe that of $X_A^\infty$ to the same degree of approximation. In a highly efficient RG flow, we define the coarse-grained quantum operators $O_I(\Lambda)$ through their expectation values $\langle O_I(\Lambda) \rangle[X_A(\Lambda),\eta_M(\Lambda) ]$, assuming that the coarse-grained operators are the same functionals of the coarse-grained collective variables at each scale (as in the UV) but with new scale-dependent parameters. Note that there is no explicit dependence on $\Lambda$ in the functionals $\langle O_I(\Lambda) \rangle[X_A(\Lambda),\eta_M(\Lambda) ]$.
In order to complete the construction, we need to state the constructive principles for the coarse-graining that defines $X_A(\Lambda)$, which should follow similar equations at each scale but with new scale-dependent parameters $\eta_M(\Lambda)$. These three principles are listed below.
1. **High efficiency:**\
There should exist an appropriate background metric:\
$g_{\mu\nu}(\Lambda)[X_A(\Lambda), \eta_M(\Lambda), \Lambda]$\
and appropriate background sources:\
$J(\Lambda)[X_A(\Lambda), \eta_M(\Lambda), \Lambda]$\
at each $\Lambda$ such that the Ward identity $$\label{WILambdanew}
\nabla_{(\Lambda)\mu}t^\mu_{\phantom{\mu}\nu}(\Lambda) = {\sum}' O_I(\Lambda) \nabla_{(\Lambda)\nu}J_I(\Lambda)$$ is satisfied with $\nabla_{(\Lambda)}$ being the covariant derivative constructed from $g_{\mu\nu}(\Lambda)$ and $\sum'$ denoting summation over all effective single-trace operators except $t^\mu_{\phantom{\mu}\nu}(\Lambda)$.\
2. **Upliftability to operator dynamics:**\
The functionals $g_{\mu\nu}(\Lambda)[X_A(\Lambda), \eta_M(\Lambda), \Lambda]$ and $J(\Lambda)[X_A(\Lambda), \eta_M(\Lambda), \Lambda]$ can be uplifted to functionals of the single-trace operators. Therefore, they should assume the forms\
$g_{\mu\nu}(\Lambda)[O_I(\Lambda),\Lambda]$ and $J(\Lambda)[O_I(\Lambda),\Lambda]$\
so that the effective Ward identities (\[WILambdanew\]) can be promoted to operator equations such as (\[opform\]). As a consequence, the scale evolution equations for $O_I(\Lambda)$, such as (\[t-rg-example\]), become state-independent equations involving only single- and multi-trace operators and $\Lambda$, and thus do not involve the collective variables explicitly.\
3. **Good endpoint behaviour:**\
The IR end point of the RG flow, where most of the parameters $\eta_M(\Lambda)$ blow up and some collective variables $X_A(\Lambda)$ become singular, can be made regular under the universal rescaling (\[rescale\]) corresponding to the near horizon and long time limits of the dual spacetimes. In the hydrodynamic limit, the endpoint should be converted to a fixed point corresponding to the non-relativistic incompressible Navier-Stokes equations under the stated rescaling.
Our claim is that for every realisation of a highly efficient RG flow which satisfies the above three principles:
1. there corresponds a unique dual gravitational theory up to a choice of gauge-fixing of the bulk diffeomorphism symmetry that can have a dual holographic description as a strongly interacting large $N$ QFT, and
there is a unique set of UV data for (the infinitely many) $\eta_M(\Lambda)$ which can nevertheless be resummed in the IR so that the dynamics at the endpoint can be described by a finite number of parameters (such as the shear viscosity of the infrared non-relativistic incompressible Navier-Stokes fluid); moreover, these UV data (such as the UV values of the infinitely many transport coefficients) are the same as those which lead to the regularity of the future horizons in the dual gravitational theory corresponding to the RG flow.
The infrared end point typically corresponds to the location of the horizon at late time, and thus the highly efficient RG flow connects the AdS/CFT correspondence with the membrane paradigm. The highly efficient RG flow gives a constructive way to define strongly interacting large $N$ QFTs by reformulating the holographic correspondence. The first two principles in our list defining highly efficient RG flows utilise the first theorem on the reformulation of diffeomorphism invariant gravity, and the third principle utilises the second theorem discussed in the previous section. However, our list of principles also presents a generalisation which is valid not only for the reconstruction of holographic pure gravity but also when the latter is coupled to a finite number of matter fields. The utility of highly efficient RG flow is actually deeper. It shows that all such QFTs, and hence all holographic gravitational theories, are determined by a finite amount of data that governs the dynamics at the end point. Therefore, all holographic gravitational theories can be parametrised by a finite number of free parameters in any given dimension. How this parametrisation works has not been completely understood yet.
As an illustration, let us see how we construct highly efficient RG flows in the hydrodynamic limit [@Behr:2015aat]. Once again, for the sake of simplicity, let us return to the sector of states where $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ is the only single-trace operator with a non-vanishing expectation value. The expectation value of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ is parametrised by the (collective) hydrodynamic variables $u^\mu(\Lambda)$ and $T(\Lambda)$, which thus define the quantum operator. Furthermore, $u^\mu(\Lambda)$ can be assumed to satisfy the Landau-Lifshitz definition, in which case $u^\mu(\Lambda)$ is a timelike eigenvector of $t^\mu_{\phantom{\mu}\nu}(\Lambda)$ with unit norm with respect to the background metric $g_{\mu\nu}(\Lambda)$, so that $u^\mu(\Lambda) g_{\mu\nu}(\Lambda)u^\nu(\Lambda) =-1$. The hydrodynamic variables $u^\mu(\Lambda)$ and $T(\Lambda)$ should satisfy hydrodynamic equations in the effective background $g_{\mu\nu}(\Lambda)$ with scale-dependent energy density $\epsilon(\Lambda)$, pressure $P(\Lambda)$ and transport coefficients $\gamma^{(n,m)}(\Lambda)$, where $n$ denotes the order in the derivative expansion (running from zero to infinity) and $m$ labels the finite number of independent parameters at each order in the derivative expansion. At first order in the derivative expansion, there are only two independent transport coefficients, namely the shear and the bulk viscosities.
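The Landau-Lifshitz condition can be illustrated with a tiny numerical check. The sketch below is purely illustrative: it uses a perfect-fluid stress tensor at rest in flat Minkowski space with invented values of $\epsilon$ and $P$, not the scale-dependent quantities of the actual construction.

```python
# Toy check of the Landau-Lifshitz condition: u^mu is a unit-norm
# timelike eigenvector of the mixed stress tensor T^mu_nu.
# Illustrative values in flat Minkowski space only.

eps, P = 3.0, 1.0                      # hypothetical energy density, pressure
eta = [-1.0, 1.0, 1.0, 1.0]            # Minkowski metric, diag(g_mu_nu)
T = [[(-eps if mu == nu == 0 else (P if mu == nu else 0.0))
      for nu in range(4)] for mu in range(4)]   # T^mu_nu = diag(-eps, P, P, P)

u = [1.0, 0.0, 0.0, 0.0]               # fluid at rest

# Eigenvector check: T^mu_nu u^nu = -eps * u^mu
Tu = [sum(T[mu][nu] * u[nu] for nu in range(4)) for mu in range(4)]
assert all(abs(Tu[mu] + eps * u[mu]) < 1e-12 for mu in range(4))

# Unit-norm check: g_mu_nu u^mu u^nu = -1
norm = sum(eta[mu] * u[mu] ** 2 for mu in range(4))
assert abs(norm + 1.0) < 1e-12
print("Landau-Lifshitz conditions satisfied:", Tu, norm)
```

In a boosted frame the same checks hold with the boosted $u^\mu$; the rest frame is chosen here only to keep the arithmetic transparent.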
The coarse-graining of $u^\mu(\Lambda)$ and $T(\Lambda)$ can be expressed in either integral or differential form. The latter form is more useful and is shown below: $$\begin{aligned}
\label{uTevol}
:\frac{\partial u^{\mu} (\Lambda)}{\partial \Lambda}: &=& a^{(0)}(\Lambda) u^\mu(\Lambda) + \sum_{n=1}^\infty \sum_{m=1}^{n_{\rm s}}a^{(n,m)}_{\rm s}(\Lambda)\, \mathcal{S}^{(n,m)}(\Lambda)\, u^\mu(\Lambda) +\nonumber\\&&+\sum_{n=1}^\infty \sum_{m=1}^{n_{\rm v}}a^{(n,m)}_{\rm v}(\Lambda) \,{\mathcal{V}^\mu}^{(n,m)}(\Lambda)\, , \nonumber\\
:\frac{\partial T (\Lambda)}{\partial\Lambda}: &=& b^{(0)}(\Lambda) + \sum_{n=1}^\infty \sum_{m=1}^{n_{\rm s}}b^{(n,m)}_{\rm s}(\Lambda) \mathcal{S}^{(n,m)}(\Lambda).\end{aligned}$$ Above, $\mathcal{S}^{(n,m)}$ denotes the independent hydrodynamic scalars that can be constructed from derivatives of $u^\mu(\Lambda)$ and $T(\Lambda)$ at $n$-th order in derivatives (independent meaning that no linear sum of these scalars vanishes using lower order equations of motion). When $n=1$, there is only one such scalar, namely $(\partial\cdot u)$. Similarly, ${\mathcal{V}^\mu}^{(n,m)}(\Lambda)$ denotes hydrodynamic vectors which are not parallel to $u^\mu(\Lambda)$ (as otherwise they could be expressed as a scalar multiplying $u^\mu(\Lambda)$). When $n=1$, there is only one such vector, namely $(u(\Lambda)\cdot \partial)u^\mu(\Lambda)$. The symbols $:\cdots:$ stand for subtracting away non-hydrodynamic contributions. The coarse-graining actually arises from a truncation of the series (\[uTevol\]) at a given order in the derivative expansion. This is the most general way to coarse-grain hydrodynamic variables consistent with the hydrodynamic limit.
Furthermore, we assume that the flow of the energy density, the pressure and the transport coefficients takes the form of ordinary differential equations: $$\begin{aligned}
\label{transODE}
\frac{{\rm d}\epsilon(\Lambda)}{{\rm d}\Lambda} &=& K[\epsilon(\Lambda), P(\Lambda), \Lambda], \nonumber\\
\frac{{\rm d}P(\Lambda)}{{\rm d}\Lambda} &=& L[\epsilon(\Lambda), P(\Lambda), \Lambda],\nonumber\\
\frac{{\rm d}\gamma^{(n,m)}(\Lambda)}{{\rm d}\Lambda} &=& M^{(n,m)}[\epsilon(\Lambda), P(\Lambda), \gamma^{(k \leq n, p)}(\Lambda), \Lambda],\end{aligned}$$ in which the scale evolution of the transport coefficients at $n$-th order in the derivative expansion involves only those at the same or lower orders.
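As a toy illustration of a flow of the form (\[transODE\]), the sketch below integrates an invented ODE system with a forward-Euler step. The functionals $K$, $L$ and $M^{(n,m)}$ used here are hypothetical power-law choices (the actual ones are fixed by the gravitational construction described in the text), picked so that the flow has a closed-form solution to check against.

```python
# Minimal numerical sketch of a (transODE)-type flow. The right-hand
# sides below are invented toy functionals, NOT those of the actual
# construction; they give eps ~ 1/Lam, P ~ 1/Lam, gamma ~ 1/Lam^2.

def flow(eps, P, gamma, lam_uv=10.0, lam_ir=1.0, n=100000):
    """Forward-Euler integration from the UV scale down to the IR."""
    h = (lam_ir - lam_uv) / n          # negative step: we flow downward
    lam = lam_uv
    for _ in range(n):
        K = -eps / lam                 # toy choice for K
        L = -P / lam                   # toy choice for L
        M = -2.0 * gamma / lam         # toy: depends on same order only
        eps, P, gamma = eps + h * K, P + h * L, gamma + h * M
        lam += h
    return eps, P, gamma

eps, P, gamma = flow(1.0, 0.3, 0.5)
# analytic solution at Lam=1 (from Lam_uv=10): eps=10, P=3, gamma=50
print(eps, P, gamma)
assert abs(eps - 10.0) < 0.05 and abs(gamma - 50.0) < 0.5
```

The triangular structure of the real system (the $n$-th order coefficients sourced only by order $\leq n$) is what makes such a downward integration well-posed order by order.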
The mathematical problem of constructing highly efficient RG flows in the hydrodynamic limit now becomes well-defined. We simply need to solve for the parameters $a^{(0)}$, $b^{(0)}$, $a^{(n,m)}_{\rm s}$, $a^{(n,m)}_{\rm v}$, $b^{(n,m)}_{\rm s}$ in (\[uTevol\]) and the functionals $K$, $L$ and $M^{(n,m)}$ appearing in (\[transODE\]) such that the three principles listed before are satisfied. Unfortunately, we do not yet know how this mathematical problem can be solved directly. Fortunately, there is a concrete algorithmic method [@Kuperstein:2013hqa] (developed using some results of [@Gupta:2008th; @Kuperstein:2011fn]) to reformulate the classical gravitational equations in the forms (\[uTevol\]) and (\[transODE\]) which can be used to solve for these parameters indirectly so that we can satisfy the three principles and obtain all highly efficient RG flows.
The most subtle aspect of this procedure is how we satisfy the third principle of good endpoint behaviour. As discussed in the previous section, the reformulation of gravity as RG flow is subject to the ambiguities of undetermined counterterm coefficients as presented in (\[Tct\]) before. However, there are only a finite number of such terms at each order in the derivative expansion. The recipe is to proceed with these ambiguities, which lead to unknown numerical constants in (\[uTevol\]) and (\[transODE\]). In order for the endpoint to be governed by non-relativistic incompressible Navier-Stokes equations, $\epsilon(\Lambda)$ must be finite at the endpoint $\Lambda_{\rm IR}$, where $\gamma^{(n,m)}(\Lambda)$ should satisfy the bounds $\gamma^{(n,m)}(\Lambda)\leq (\Lambda -\Lambda_{\rm IR})^{-k(n,m)}$, with $k(n,m)$ being appropriate numerical constants which are independent of the RG flow or the dual gravitational theory [@Kuperstein:2013hqa]. It turns out that, when we actually solve for $\gamma^{(n,m)}(\Lambda)$, the number of terms which diverge worse than the prescribed bounds is typically larger than the number of available integration constants, unless the counterterm coefficients which have been left undetermined so far are precisely chosen at each order in the derivative expansion. Setting these counterterm coefficients to such values, we can fix all integration constants of the RG flow and thus determine the UV values of all transport coefficients *uniquely*.
This procedure has been explicitly implemented for Einstein’s gravity at zeroth, first and second orders in the derivative expansion. Remarkably, the UV values of the equation of state and the first and second order transport coefficients determined via this method match exactly the known values [@Policastro:2001yc; @Baier:2007ix; @Bhattacharyya:2008jc] required for the regularity of the future horizon. The methods of explicit construction of highly efficient RG flows can be generalised beyond the hydrodynamic limit by including non-hydrodynamic collective variables [@Iyer:2009in; @Iyer:2011qc; @Iyer:2011ak; @Heller:2013fn; @Heller:2014wfa; @Basar:2015ava], as discussed in the literature before.
We should understand how to solve for highly efficient RG flows independently, without using the theorems for the reformulation of dual gravitational theories, so that we can classify all gravitational theories that are holographic and also determine where a finite number of IR parameters can fix all microscopic UV data in the dual theories.
Outlook
=======
We have demonstrated that the reformulation of classical gravity as RG flow not only reveals how the holographic duality works but also gives us a deeper understanding of gravitational dynamics itself, in particular of what kind of data determining the spacetime metric lead to the absence of naked singularities.
An outstanding issue is to take another step to understand how to include quantum corrections in gravity while mapping it to a highly efficient RG flow whose notion also needs to be further generalised to go beyond the large $N$ limit. In order to proceed, it should be useful to understand better how the three principles which define highly efficient RG flows themselves originate from a simpler and more holistic principle. Such a direction seems possible as there is evidence that classical gravity emerges from features of quantum entanglement in dual quantum systems [@Faulkner:2013ica]. In particular, it is known that classical minimal surfaces in dual geometries encode entanglement entropies in dual field theories [@Ryu:2006bv]. It has also been argued elsewhere that efficient nonperturbative RG flows that coarse-grain quantum information efficiently such that they remove short range entanglement but preserve long range entanglement give rise to the holographic correspondence [@PhysRevD.86.065007]. It is natural to speculate that when quantum gravity corrections are included the infrared end point for the dual RG flow is not characterised necessarily by local order parameters, but rather by non-local quantum order parameters related to patterns of global long range entanglement. This point of view also has a potential for defining quantum geometry in the emergent gravity theory.
We hold the point of view that a breakthrough in this direction is likely to come from a reformulation of the classical gravity equations themselves which uses non-local geometric objects, such as geodesics and minimal surfaces, as the dynamical variables, and which also makes a tangible connection with the local RG flow perspective described in the present article. At present, how this can be realised seems somewhat mysterious; however, it is very likely that there are hidden treasures in classical gravity which are yet to be discovered. It will not be surprising if the surface terms [@Mukhopadhyay:2006vu; @Padmanabhan:2007en] introduced by T. Padmanabhan, and his novel variational principle involving these surface terms, which gives the classical gravitational equations in the bulk without using the metric as a dynamical variable, can shed some light in this direction. Another interesting reformulation [@deBoer:2016pqk] of the classical gravity equations, involving objects which are analogous to minimal surfaces but also sensitive to the operator content of the dual QFTs, has appeared recently.
Finally, I would like to mention that the reformulation of classical gravity equations as RG flows has also informed the development of a new approach for combining weak and strong coupling degrees of freedom of the quark-gluon plasma produced by heavy ion collisions self-consistently into a novel nonperturbative framework [@Iancu:2014ava; @Mukhopadhyay:2015smb]. Unravelling the holographic origin of gravity will surely revolutionise our understanding of nonperturbative aspects of quantum dynamics in the future.
Acknowledgments {#acknowledgments .unnumbered}
===============
The research of A.M. is supported by a Lise-Meitner fellowship of the Austrian Science Fund (FWF), project no. M 1893-N27. A part of this article has been used as a contribution for the festschrift in honour of the 60th anniversary celebration of Prof. T. Padmanabhan.
---
abstract: 'The electronic states near a surface or a domain wall in the $p_x$$\pm$i$p_y$-wave superconductor are studied. This state has recently been suggested as the superconducting state of Sr$_2$RuO$_4$. The $p_x$$\pm$i$p_y$-wave pairing state breaks time reversal symmetry and induces a magnetic field. The obtained temperature dependence of the magnetic field is consistent with the observed $\mu$SR data.'
address:
- 'Department of Physics, Faculty of Science, Shizuoka University, 836 Oya, Shizuoka 422-8529, Japan'
- 'Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan'
author:
- Masashige Matsumoto
- Manfred Sigrist
title: 'Quasiparticle States near the Surface and the Domain Wall in a $p_x$$\pm$i$p_y$-Wave Superconductor'
---
$p$-wave superconductor; quasi-classical theory; Sr$_2$RuO$_4$
Studying unconventional superconductors has become one of the most attractive problems in recent condensed matter research. They include the recently discovered Sr$_2$RuO$_4$ [@Maeno]. A triplet pairing state with d-vector ${\mbox{\boldmath$d$}}({\mbox{\boldmath$k$}})$=$(k_x$$\pm$${{\rm i}}k_y){\hat z}$ has been suggested [@Sigrist]. The tunneling conductance for such a pairing state has been examined, finding that the conductance peak features related to the bound states are very sensitive to the angle of incidence of the electron [@Honerkamp; @Yamashiro]. Recently we have studied quasiparticle properties at the surface or domain wall and reported that the local density of states at the surface is constant and does not show any peak-like or gap-like structure within the superconducting energy gap at low temperatures, while at the domain wall it is V-shaped and contains a small gap-like feature [@Matsumoto].
The intrinsic magnetism in the superconducting phase observed by the $\mu$SR experiment indicates a pairing state with broken time reversal symmetry [@Luke]. The magnetic field in the superconducting phase can be induced by a surface, a domain wall or an impurity [@Sigrist2]. In this paper we examine the temperature dependence of the magnetic field induced near the surface and the domain wall and compare them with the $\mu$SR experiment. For this purpose we use the same formulation as in our previous paper [@Matsumoto], which is based on the quasi-classical formulation developed by Schopohl $et$ $al$. [@Schopohl]. The spatial variation of the order parameter and vector potential can be determined self-consistently. For simplicity we assume a two-dimensional $p_x$+i$p_y$ state. In Fig. \[fig:1\] we show the magnetic field near a surface and a domain wall which is formed between the $p_x-$i$p_y$ and $p_x$+i$p_y$ states.
![Induced magnetic field. $B_c$=$\Phi_0/2\sqrt{2}\pi\xi_0\lambda_{\rm L}$, $\lambda_{\rm L}$ is the London penetration depth and $\Phi_0$=$h/2e$. (a) Spatial dependence at several temperatures. $x$ is the distance from the surface or domain wall scaled by $\xi_0$=${v_{\rm F}}/\pi\Delta(0)$, where $\Delta(0)$ is the magnitude of the bulk order parameter at $T$=0. We chose a cutoff energy $\omega_c$=20$T_c$ and $\kappa$=$\lambda_{\rm L}/\xi_0$=2.5. Temperatures scaled by $T_c$ are depicted. $B_z$ is antisymmetric under $x$$\leftrightarrow$$-x$ for the domain wall. (b) Temperature dependence of the maximum $\mid B_z/B_c\mid$.[]{data-label="fig:1"}](figure1a.eps "fig:"){width="0.56\linewidth"} ![Induced magnetic field. $B_c$=$\Phi_0/2\sqrt{2}\pi\xi_0\lambda_{\rm L}$, $\lambda_{\rm L}$ is the London penetration depth and $\Phi_0$=$h/2e$. (a) Spatial dependence at several temperatures. $x$ is the distance from the surface or domain wall scaled by $\xi_0$=${v_{\rm F}}/\pi\Delta(0)$, where $\Delta(0)$ is the magnitude of the bulk order parameter at $T$=0. We chose a cutoff energy $\omega_c$=20$T_c$ and $\kappa$=$\lambda_{\rm L}/\xi_0$=2.5. Temperatures scaled by $T_c$ are depicted. $B_z$ is antisymmetric under $x$$\leftrightarrow$$-x$ for the domain wall. (b) Temperature dependence of the maximum $\mid B_z/B_c\mid$.[]{data-label="fig:1"}](figure1b.eps "fig:"){width="0.43\linewidth"}
Near $T_c$ the field maximum increases linearly with decreasing temperature, and it saturates at low temperatures. This temperature dependence is qualitatively consistent with the $\mu$SR experiment. In the surface case the energy level of the bound state is estimated as ${\Delta_y}({\mbox{\boldmath$k_{\rm F}$}})$, where ${\Delta_y}({\mbox{\boldmath$k_{\rm F}$}})$ is the $p_y$-component of the order parameter with momentum ${\mbox{\boldmath$k_{\rm F}$}}$. Therefore, bound states in the region ${{k_{\rm F}}_y}$$<0$ are occupied, yielding a spontaneous magnetic field, as long as they satisfy the condition $T$$<$$\mid{\Delta_y}({\mbox{\boldmath$k_{\rm F}$}})\mid$.
An interesting magnetic property appears in the case of a $p_x$ state. It has been pointed out that the midgap state gives rise to a paramagnetic response [@Higashitani; @Fogelstrom; @Walter]. Let us demonstrate it in the $p_x$ state, which is suggested to be realized in the presence of a strong magnetic field in the $x$ direction [@Agterberg]. Figure \[fig:2\] shows the spatial dependence of the paramagnetic field in the $z$ direction.
![Spatial dependence of the magnetic field near the (1,0,0) surface for the $p_x$ state at several temperatures. A small external field $B_{\rm ext}$=0.01$B_c$ is applied in the $z$ direction. Temperatures scaled by $T_c$ are depicted.[]{data-label="fig:2"}](figure2.eps){width="0.5\linewidth"}
As studied in the $d$-wave case, the energy level of the midgap state shifts to $e{{v_{\rm F}}_y}A_y$, which splits the zero bias conductance peak [@Fogelstrom]. Here ${{v_{\rm F}}_y}$ and $A_y$ are the $y$-components of the Fermi velocity and vector potential, respectively. Note that $A_y$ has the opposite sign of $B_{\rm ext}$. Therefore, bound states in the ${{k_{\rm F}}_y}$$>$0 region are occupied for a positive $B_{\rm ext}$, generating a magnetic field parallel to $B_{\rm ext}$. Bound states satisfying $T$$<$$\mid e{{v_{\rm F}}_y}A_y\mid$ contribute to the effect, and the paramagnetic field rapidly decreases with increasing temperature. In the real case, a small $p_y$ part can be induced by $B_{\rm ext}$. The realized phase of the $p_y$ component is such that it generates a surface current which induces a field parallel to $B_{\rm ext}$. This also results in a paramagnetic response. Without the strong field in the $x$ direction, it is difficult to see this paramagnetic response, since the occupied bound states are already asymmetric under ${{k_{\rm F}}_y}$$\rightarrow$$-{{k_{\rm F}}_y}$ and the state is difficult to modify with a small external field in the $z$ direction.
This work was supported by Grant-in-Aid for Encouragement of Young Scientists from Japan Society for the Promotion of Science.
[9]{}
Y. Maeno $et$ $al$., Nature [**372**]{} (1994) 532.
M. Sigrist $et$ $al$., in: Physics and Chemistry of Transition-Metal Oxides, (Springer, 1999).
C. Honerkamp and M. Sigrist, J. Low Temp. Phys. [**111**]{} (1998) 895.
M. Yamashiro, Y. Tanaka, Y. Tanuma and S. Kashiwaya, J. Phys. Soc. Jpn. [**67**]{} (1998) 3224.
M. Matsumoto and M. Sigrist, J. Phys. Soc. Jpn. [**68**]{} (1999) 994; erratum, $ibid$. 3120.
G. M. Luke $et$ $al$., Nature [**394**]{} (1998) 558.
M. Sigrist and K. Ueda, Rev. Mod. Phys. [**63**]{} (1991) 239.
N. Schopohl and K. Maki, Phys. Rev. B [**52**]{} (1995) 490.
S. Higashitani, J. Phys. Soc. Jpn. [**66**]{} (1997) 2556.
M. Fogelström, D. Rainer and J. A. Sauls, Phys. Rev. Lett. [**79**]{} (1997) 281.
H. Walter $et$ $al$., Phys. Rev. Lett. [**80**]{} (1998) 3598.
D. F. Agterberg, Phys. Rev. Lett. [**80**]{} (1998) 5184.
---
abstract: 'We impose the perfect fluid concept along with a slow expansion approximation to derive new solutions which, for non-static spherically symmetric metrics, can be treated as Black Holes. We will refer to these solutions as Quasi Black Holes. Mathematical and physical features such as Killing vectors, singularities, and mass have been studied. Their horizons and thermodynamic properties have also been investigated. In addition, relationships with other related works (including mcVittie’s) are described.'
address: |
$^1$ Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), P.O. Box 55134-441, Maragha, Iran,\
$^2$ Physics Department, Shahid Beheshti University, Evin, Tehran 19839, Iran.
author:
- 'H. Moradpour$^1$[^1] and N. Riazi$^2$[^2]'
title: Spherically symmetric solutions in a FRW background
---
Introduction \[Introduction\]
=============================
The Universe expansion can be modeled by the so-called FRW metric $$\begin{aligned}
\label{FRW}
ds^2=-dt^2+a(t)^2[\frac{dr^2}{(1-kr^2)}+r^2d\theta^2+r^2sin(\theta)^2d\phi^2],\end{aligned}$$ where $k=0, +1, -1$ is the spatial curvature parameter, representing the flat, closed and open universes, respectively. The WMAP data confirm a flat ($k=0$) universe [@Roos]. $a(t)$ is the scale factor, and for a background filled by a perfect fluid with equation of state $p=\omega \rho$, there are three classes of expanding solutions. These three solutions are $$\begin{aligned}
\label{Scale factor1}
a(t)=a_0 t^{\frac{2}{3(\omega + 1)}}\end{aligned}$$ for $-1<\omega$,
\label{Scale factor2}
a(t)=a_0 e^{Ht}\end{aligned}$$ for $\omega=-1$ (dark energy), and for the Phantom regime ($\omega<-1$) is $$\begin{aligned}
\label{Scale factor3}
a(t)=a_0(t_0-t)^{\frac{2}{3(\omega + 1)}},\end{aligned}$$ where $t_0$ is the big rip singularity time, which occurs if the universe is in the phantom regime.
In Eq. (\[Scale factor2\]), $H(\equiv\frac{\dot{a}(t)}{a(t)})$ is the Hubble parameter, and the current estimate is $H=73^{+4}_{-3}\ {\rm km\, s^{-1} Mpc^{-1}}$ [@Roos].
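The three expanding solutions above can be evaluated numerically. The sketch below assumes illustrative units with $a_0=1$ and, for the $\omega=-1$ branch, a hypothetical Hubble constant $H=1$; it checks the familiar matter ($a\propto t^{2/3}$) and radiation ($a\propto t^{1/2}$) exponents.

```python
import math

# Sketch of the expanding solutions: for omega > -1 the scale factor
# grows as a power law with exponent 2/(3(omega+1)); for omega = -1 it
# grows exponentially. a0 = 1 and H = 1 are illustrative choices.

def scale_factor(t, omega, a0=1.0):
    if omega == -1.0:                  # dark-energy case: a ~ exp(H t)
        H = 1.0                        # hypothetical Hubble constant
        return a0 * math.exp(H * t)
    return a0 * t ** (2.0 / (3.0 * (omega + 1.0)))

# matter (omega = 0): a ~ t^(2/3); radiation (omega = 1/3): a ~ t^(1/2)
assert abs(scale_factor(8.0, 0.0) - 4.0) < 1e-9        # 8^(2/3) = 4
assert abs(scale_factor(9.0, 1.0 / 3.0) - 3.0) < 1e-9  # 9^(1/2) = 3
print(scale_factor(8.0, 0.0), scale_factor(9.0, 1.0 / 3.0))
```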
Note that, at the end of the Phantom regime, everything will decompose into its fundamental constituents [@Mukh]. In addition, this spacetime can be classified as a subgroup of the Gödel-type spacetime with $\sigma=m=0$ and $k^{\prime}=1$ [@godel].
A signal which was emitted at the time $t_0$ by a co-moving source and absorbed by a co-moving observer at a later time $t$ is affected by a redshift ($z$) as $$\begin{aligned}
1+z=\frac{a(t)}{a(t_0)}.\end{aligned}$$ The apparent horizon, as a marginally trapped surface, is defined as [@SWR] $$\begin{aligned}
\label{aph1}
g^{\mu \nu}\partial_{\mu}\xi \partial_{\nu}\xi=0,\end{aligned}$$ which, for the physical radius $\xi=a(t)r$, has the solution: $$\begin{aligned}
\label{aph2}
\xi=\frac{1}{\sqrt{H^2+\frac{k}{a(t)^2}}}.\end{aligned}$$ The surface gravity of the apparent horizon can be evaluated by: $$\begin{aligned}
\label{sg1}
\kappa=\frac{1}{2\sqrt{-h}}\partial_a(\sqrt{-h}h^{ab}\partial_b
\xi).\end{aligned}$$
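A quick numerical sketch of the apparent-horizon radius (\[aph2\]) may be helpful here; the values of $H$, $k$ and $a$ below are illustrative, not fitted to data.

```python
import math

# Sketch of the apparent-horizon radius xi = 1/sqrt(H^2 + k/a^2).
# H, k, a are illustrative inputs in arbitrary units.

def apparent_horizon(H, k=0, a=1.0):
    return 1.0 / math.sqrt(H ** 2 + k / a ** 2)

# Flat universe (k=0): xi = 1/H, i.e. the Hubble radius.
assert abs(apparent_horizon(2.0) - 0.5) < 1e-12
# Closed universe (k=+1): the horizon lies inside the Hubble radius.
assert apparent_horizon(2.0, k=1, a=1.0) < 0.5
print(apparent_horizon(2.0), apparent_horizon(2.0, k=1))
```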
where the two-dimensional induced metric in Eq. (\[sg1\]) is $h_{ab}={\rm diag}(-1,\frac{a(t)^2}{1-kr^2})$. It has been shown that the first law of thermodynamics is satisfied on the apparent horizon [@S0; @S1; @S2; @S3]. The special case of $\omega=-1$ is called dark energy, and by a suitable change of variables one can rewrite this case in the static form [@Poisson]: $$\begin{aligned}
\label{static dark}
ds^2=-(1-H^2r^2)dt^2+\frac{dr^2}{(1-H^2r^2)}+r^2d\Omega^2.\end{aligned}$$ This metric belongs to a more general class of spherically symmetric, static metrics. For these class of spherically symmetric static metrics, the line element can be written in the form of: $$\begin{aligned}
\label{SSM}
ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\Omega^2,\end{aligned}$$ where the general form of $f(r)$ is: $$\begin{aligned}
f(r)=1-2\frac{m}{r}+\frac{Q^2}{r^2}-H^2r^2.\end{aligned}$$ In the above expression, $m$ and $Q$ represent mass and charge, respectively. For this metric, one can evaluate redshift: $$\begin{aligned}
1+z=(\frac{1-2\frac{m}{r}+\frac{Q^2}{r^2}-H^2r^2}
{1-2\frac{m}{r_0}+\frac{Q^2}{r_0^2}-H^2r_0^2})^\frac{1}{2}.\end{aligned}$$ where $r_0$ and $r$ are the radial coordinates at the emission and absorption points, respectively. For the horizons, the radius and the surface gravity can be found using the equations $$\begin{aligned}
\label{SG}
g_{tt}&=&f(r)=0 \longrightarrow r_h \\ \nonumber \kappa
&=&\frac{f^\prime(r)}{2}|_{r_h},\end{aligned}$$ where $(^\prime)$ denotes the derivative with respect to the coordinate $r$ [@Poisson]. From the thermodynamic laws of Black Holes (BHs) we know $$\begin{aligned}
\label{Temp2}
T=\frac{\kappa}{2\pi},\end{aligned}$$ where $T$ is the temperature on the horizon [@Poisson]. The validity of the first law of thermodynamics on static horizons for static spherically symmetric spacetimes has been shown [@Cai1; @Padm1].
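The horizon radius and temperature of Eqs. (\[SG\]) and (\[Temp2\]) can be sketched numerically: locate $r_h$ from $f(r_h)=0$ by bisection and evaluate $T=f^\prime(r_h)/4\pi$. The sketch below uses geometrized units with illustrative $m$, $Q$ and $H$, and checks against the Schwarzschild case $r_h=2m$, $T=1/(8\pi m)$.

```python
import math

# Locate the horizon from f(r_h) = 0 by bisection and evaluate the
# temperature T = kappa/(2*pi) with kappa = f'(r_h)/2.
# Units with G = c = hbar = k_B = 1; m, Q, H are illustrative inputs.

def f(r, m=1.0, Q=0.0, H=0.0):
    return 1.0 - 2.0 * m / r + Q ** 2 / r ** 2 - H ** 2 * r ** 2

def horizon(m=1.0, Q=0.0, H=0.0, lo=1e-6, hi=100.0):
    for _ in range(200):               # bisection on f(r) = 0
        mid = 0.5 * (lo + hi)
        if f(lo, m, Q, H) * f(mid, m, Q, H) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def temperature(m=1.0, Q=0.0, H=0.0):
    rh = horizon(m, Q, H)
    dr = 1e-6                          # central-difference f'(r_h)
    kappa = (f(rh + dr, m, Q, H) - f(rh - dr, m, Q, H)) / (2 * dr) / 2.0
    return kappa / (2.0 * math.pi)

# Schwarzschild check (Q = H = 0): r_h = 2m and T = 1/(8*pi*m)
assert abs(horizon(m=1.0) - 2.0) < 1e-6
assert abs(temperature(m=1.0) - 1.0 / (8.0 * math.pi)) < 1e-5
print(horizon(1.0), temperature(1.0))
```

For nonzero $Q$ or $H$ the same routine finds the outermost root inside the chosen bracket; in those cases the bracket `[lo, hi]` must be chosen so that it contains exactly one sign change of $f$.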
BHs in a dynamic FRW background have motivated many investigations. The first approach, named Swiss Cheese, includes efforts to find the effects of the expansion of the Universe on the gravitational field of stars [@P1], introduced originally by Einstein and Straus $(1945)$ [@ES]. In these models, the authors tried to join the Schwarzschild metric to the FRW metric by satisfying the junction conditions on the boundary, which is an expanding timelike hypersurface. The inner spacetime is described by the Schwarzschild metric, while the FRW metric describes the outer spacetime. These models do not contain dynamical BHs, because the inner spacetime, written in Schwarzschild coordinates, is static [@saida]. In addition, the Swiss Cheese models can be classified as a subclass of inhomogeneous Lemaître-Tolman-Bondi models [@MD1; @CLure].
Looking for dynamical BHs, some authors used the conformal transformation of the Schwarzschild BH, where the conformal factor is the scale factor of the famous FRW model. Originally, Thakurta $(1981)$ used this technique and obtained a dynamical version of the Schwarzschild BH [@Thak]. Since the Thakurta spacetime is a conformal transformation of the Schwarzschild metric, it is now accepted that its redshift radius points to the co-moving radius of the event horizon of the BH [@MD1; @MR; @RMS]. By considering the asymptotic behavior of the gravitational Lagrangian (Ricci scalar), one can classify the Thakurta BH and its extension to the charged BH into the same class of solutions [@MR; @RMS]. The Thakurta spacetime sustains an inward flow, which leads to an increase in the mass of the BH [@MR; @RMS; @Gao1]. This ingoing flow comes from the back-reaction effect and can be neglected in a low density background [@Gao1]. In fact, for a low density background, the mass will decrease in the Phantom regime [@Bab]. Also, the radius of the event horizon increases with the scale factor, while its temperature decreases as the inverse of the scale factor [@MR; @RMS].
Using the Eddington-Finkelstein form of the Schwarzschild metric and the conformal transformation, Sultana and Dyer $(2005)$ constructed their metric and studied its properties [@SD]. In addition, unlike the Thakurta spacetime, the curvature scalars do not diverge at the redshift singularity radius (event horizon) of the Sultana and Dyer spacetime. Since the Sultana and Dyer spacetime is a conformal transformation of the Schwarzschild metric, it is now accepted that it includes a dynamic BH [@MD1]. Various examples can be found in [@MD1; @MD2; @FJ; @MN]. Among these conformal BHs, only the solutions by M$^{\textmd{c}}$Clure et al. and Thakurta can satisfy the energy conditions [@RMS; @MD1]. Static charged BHs confined in the FRW spacetime and dynamic, charged BHs were studied in [@O1; @O2; @O3; @O4; @O5; @O6; @O7; @O8]. Brane solutions can be found in [@BS1; @BS2; @BS3].
In another approach, mcVittie found new solutions including contracting BHs in coordinates co-moving with the universe’s expansion [@mcvittie]. Its generalization to arbitrary dimensions and to charged BHs can be found in [@Gao0; @Gao]. In these solutions, it is easy to check that the curvature scalars diverge at the redshift singularities. In this approach, the authors used the isotropic form of the FRW metric along with the perfect fluid concept and found solutions which can contain BHs [@Far]. The mass and the charge of their BHs seem to decrease with the scale factor. Also, it seems that the redshift singularities do not point to a dynamic event horizon [@nol1; @nol2; @SUS; @fri]. Unlike the Swiss Cheese models, the energy conditions are violated by these solutions [@MD1]. These solutions can be considered as models for cosmological inhomogeneities [@CLure].
This paper is organized as follows: in the next section, we consider the conformal transformation of a non-static spherically symmetric metric, where the conformal factor depends only on time. In addition, we derive the most general possible form of the metric by using the perfect fluid concept. In section $3$, the slow time varying approximation is used in order to find the physical meaning of the parameters of the metric. We then address the mcVittie-like solution and its thermodynamic properties. In section $4$, we generalize our discussion to the charged spacetime, when the effects of the dark energy are considerable. In section $5$, we summarize and conclude the results.
Metric, general properties and basic assumptions
================================================
Let us begin with this metric: $$\begin{aligned}
ds^2=a(\tau)^2[-f(\tau,r)d\tau^2+\frac{dr^2}{(1-kr^2)f(\tau,r)}+r^2d\theta^2+r^2sin(\theta)^2d\phi^2],\end{aligned}$$ where $a(\tau)$ is an arbitrary function of the time coordinate $\tau$. This metric has three Killing vectors $$\begin{aligned}
\label{symm}
\partial_{\phi},\ \ \sin\phi \ \partial_{\theta}+\cot\theta \ \cos\phi \
\partial_{\phi}\ \ \textmd{and}\ \ \cos\phi \ \partial_\theta - \cot\theta \ \sin\phi \ \partial_\phi .\end{aligned}$$ Now, if we define a new time coordinate as $$\begin{aligned}
\tau \rightarrow t=\int a(\tau)d\tau,\end{aligned}$$ we will get $$\begin{aligned}
\label{Metric1}
ds^2=-f(t,r)dt^2+a(t)^2[\frac{dr^2}{(1-kr^2)f(t,r)}+r^2d\theta^2+r^2sin(\theta)^2d\phi^2],\end{aligned}$$ which possesses the same symmetries as Eq. (\[symm\]). From now on, it is assumed that $a(t)$ is the cosmic scale factor, similar to the FRW's. For $f(t,r)=1$, Eq. (\[Metric1\]) reduces to the FRW metric (\[FRW\]). Also, conformal BHs can be obtained by choosing $f(t,r)=f(r)$, where the general form of $f(r)$ is [@MR]: $$\begin{aligned}
f(r)=1-\frac{2m}{r}+\frac{Q^2}{r^2}-\frac{\Lambda r^2}{3}.\end{aligned}$$ Therefore, conformal BHs can be classified as a special subclass of metric (\[Metric1\]). The vector $n_{\alpha}=\delta^r_{\alpha}$ is normal to the hypersurface $r=const$ and yields $$\begin{aligned}
\label{nhs}
n_{\alpha}n^{\alpha}=g^{rr}=\frac{(1-kr^2)f(t,r)}{a(t)^2},\end{aligned}$$ which is timelike when $(1-kr^2)f(t,r)<0$, null for $(1-kr^2)f(t,r)=0$ and spacelike if $(1-kr^2)f(t,r)>0$. For a signal emitted at coordinates $(t_0, r_0)$ and absorbed at coordinates $(t, r)$, simple calculations lead to $$\begin{aligned}
\label{redshift}
1+z=\frac{\lambda}{\lambda_0}=\frac{a(t)}{a(t_0)}(\frac{f(t,r)}{f(t_0,r_0)})^{\frac{1}{2}},\end{aligned}$$ as the redshift induced by the universe's expansion and by the factor $f(t,r)$. The redshift diverges ($1+z\longrightarrow \infty$) when $f(t_0 , r_0)$ goes to zero. This divergence, as the signature of a singularity, is independent of the background curvature parameter ($k$), unlike in mcVittie's solution and its various generalizations [@Gao0; @Gao], which shows that our solutions are compatible with the FRW background. As expected, the FRW result is recovered when $f(t_0 , r_0)=f(t,r)=1$. The only non-diagonal term of the Einstein tensor is $$\begin{aligned}
\label{off diagonal}
G^{t
r}=-\frac{1-kr^2}{f(t,r)a(t)^3r}(a(t)\dot{f}(t,r)-{f}^\prime(t,r)\dot{a}(t)r),\end{aligned}$$ where $(\dot{})$ and $(^\prime)$ denote derivatives with respect to time and radius, respectively. Using $\frac{\partial f}{\partial
t}=\dot{a}\frac{\partial f}{\partial a}$, one gets $$\begin{aligned}
\label{off diagonal2}
G^{t
r}=-\frac{(1-kr^2)\dot{a}(t)}{f(a(t),r)a(t)^3r}(a(t)\tilde{f}(a(t),r)-{f}^\prime(a(t),r)r),\end{aligned}$$ where $\tilde{f}(a(t),r)=\frac{\partial f}{\partial a}$. In order to get perfect fluid solutions, we impose condition $G^{tr}=0$ and reach to $$\begin{aligned}
\label{f}
f(t,r)=f(a(t)r)=\sum_n b_n (a(t)r)^n.\end{aligned}$$ Although Eq. (\[f\]) includes numerous terms, the slow expansion approximation helps us attribute physical meaning to certain coefficients $b_n$. Since $G_{tr}=0$, we stress that there is no radial flow and thus the backreaction effect is zero [@RMS; @Gao1], which means that there is no energy accretion in these solutions [@fj]. Briefly, we see that the perfect fluid concept is in line with the no energy accretion condition. The only solution which is independent of the rate of expansion is obtained for $b_n=\delta_{n0}$, which yields the FRW solution.
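The condition behind Eq. (\[f\]), namely that the bracket appearing in $G^{tr}$ vanishes whenever $f$ depends on $t$ and $r$ only through the product $a(t)r$, can be checked symbolically. A minimal sketch with sympy (the symbol names are illustrative):

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
a = sp.Function('a')(t)   # arbitrary scale factor
F = sp.Function('F')      # arbitrary function of one variable

# assume f depends on t and r only through the product a(t)*r
f = F(a * r)

# the bracket of the off-diagonal Einstein tensor component:
# a * df/dt - (df/dr) * (da/dt) * r
bracket = a * sp.diff(f, t) - sp.diff(f, r) * sp.diff(a, t) * r
print(sp.simplify(bracket))  # 0, so G^{tr} vanishes for any F
```

Since $F$ is left arbitrary, this covers every term $b_n (a(t)r)^n$ of the expansion at once.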
mcVittie-like solution in the FRW background
============================================
mcVittie's solution in the flat FRW background can be written as [@MD1] $$\begin{aligned}
\label{mc1}
ds^2=-(\frac{1-\frac{M}{2a(t)\tilde{r}}}{1+\frac{M}{2a(t)\tilde{r}}})^2dt^2+
a(t)^2 (1+\frac{M}{2a(t)\tilde{r}})^4[d\tilde{r}^2+
\tilde{r}^2d\Omega^2].\end{aligned}$$ This metric possesses the same symmetries as metric (\[Metric1\]). Here $\tilde{r}$ is the isotropic radius defined by: $$\begin{aligned}
\label{r1}
r=\tilde{r}(1+\frac{M}{2 \tilde{r}})^2.\end{aligned}$$ There is a redshift singularity at the radius $\tilde{r}_h=\frac{M}{2a(t)}$, which yields the radius $r_h=\frac{M}{2a(t)}(1+a(t))^2$ [@Fara]. In addition, $\tilde{r}_h$ is a spacelike hypersurface and cannot correspond to an event horizon [@fj].
Consider $f(a(t)r)=1-\frac{2b_{-1}}{a(t)r}$. This assumption satisfies condition (\[f\]) and leads to $$\begin{aligned}
\label{SCHWM}
ds^2=-(1-\frac{2b_{-1}}{a(t)r})dt^2+a(t)^2
[\frac{dr^2}{(1-kr^2)(1-\frac{2b_{-1}}{a(t)r})}+r^2d\Omega^2].\end{aligned}$$ For $b_{-1}\neq0$, this metric converges to the FRW metric when $r\longrightarrow \infty$. The Schwarzschild metric is obtained by putting $a(t)=1$, $b_{-1}=M$ and $k=0$. The metric suffers from three singularities: at $a(t)=0$ (the big bang), at $r=0$, and at $$\begin{aligned}
\label{SCHR}
f(a(t)r)=0\Rightarrow a(t)r_h=2b_{-1}.\end{aligned}$$ The third singularity exists if $b_{-1}>0$; in that case, Eq. (\[redshift\]) diverges at $r_0=r_h$. In contrast to Gao's solutions, the radius of the redshift singularity $(r_h)$ in our solutions is independent of the background curvature $(k)$, while in the flat case our radius is compatible with previous works [@mcvittie; @Gao; @MD1]. Also, the metric changes its sign at $r=r_h$, just as in the Schwarzschild spacetime. In addition, the curvature scalars diverge at this radius, as in the mcVittie spacetime. Accordingly, this singularity corresponds to a naked singularity, which can be considered as an alternative to BHs [@virb; @virb1]. Below, we point to some physical and mathematical properties of this singularity, which behaves like an event horizon when one considers the slow expansion approximation. The surface area integral at this radius leads to $$\begin{aligned}
\label{surface area1}
A=\int\sqrt{\sigma}d\theta d\phi=4\pi r^2_h a(t)^2=16\pi
(b_{-1})^{2}.\end{aligned}$$ The main questions that arise here are: what is the nature of $b_{-1}$, and can we better clarify the meaning of $r_h$? For these purposes, we consider the slow expansion approximation $(a(t)\approx c)$, define the new coordinate $\eta=cr$ and get $$\begin{aligned}
\label{apmetric}
ds^2\approx-(1-\frac{2b_{-1}}{\eta})dt^2+\frac{d\eta^2}{(1-k^{\prime}\eta^2)
(1-\frac{2b_{-1}}{\eta})}+\eta^2d\theta^2+\eta^2sin(\theta)^2d\phi^2,\end{aligned}$$ where $k^{\prime}=\frac{k}{c^2}$. In these new coordinates, $(t,\eta,\theta,\phi)$, it is apparent from Eq. (\[nhs\]) that for $b_{-1}>0$ the hypersurface $\eta=\eta_h=2b_{-1}$ is a null hypersurface. When our approximation breaks down, $\eta_h$ may not actually be a null hypersurface, despite its resemblance to one. We call this null hypersurface a quasi event horizon, signalling an object resembling a BH, which we refer to as a quasi BH. From now on, we assume $b_{-1}>0$; the reason for this choice will become clearer later, when we discuss the mass. Therefore, in the slow expansion approximation, $r_h$ ($=\frac{2b_{-1}}{c}$) plays the role of the co-moving radius of the event horizon, and it decreases with time. In order to answer the first question, about the physical meaning of $b_{-1}$, we use the Komar mass: $$\begin{aligned}
\label{mass1}
M=\frac{1}{4\pi}\int_S n^{\alpha} \sigma_{\beta}
\triangledown_\alpha \xi^\beta_t dA,\end{aligned}$$ where $\xi^\beta_t$ is the timelike Killing vector of the spacetime. Since the Komar mass is only definable for stationary, asymptotically flat spacetimes [@W1], one should consider the flat case ($k=0$) and then, keeping the spirit of the stationary limit in mind (the slow expansion approximation), evaluate Eq. (\[mass1\]).
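In the slow-expansion, flat ($k=0$) limit, the Komar integral can be evaluated in closed form. A symbolic sketch, with $b$ standing for $b_{-1}$ and assuming the Schwarzschild-like form of the metric:

```python
import sympy as sp

eta, b = sp.symbols('eta b', positive=True)
f = 1 - 2*b/eta                      # slow-expansion form of f; b plays the role of b_{-1}

# Christoffel symbol Gamma^eta_{tt} = (1/2) f f' for ds^2 = -f dt^2 + deta^2/f + eta^2 dOmega^2
Gamma_eta_tt = sp.Rational(1, 2) * f * sp.diff(f, eta)

# magnitude of the integrand n^t sigma_eta Gamma^eta_{tt},
# with n^t = 1/sqrt(f) and sigma_eta = 1/sqrt(f)
integrand = Gamma_eta_tt / f

# Komar integral over a sphere of coordinate radius eta (area 4*pi*eta^2)
M_komar = sp.simplify(integrand * 4 * sp.pi * eta**2 / (4 * sp.pi))
print(M_komar)  # b, independent of the radius of the integration sphere
```

The result is independent of $\eta$, consistent with evaluating the integral on any sphere enclosing the quasi BH.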
Consider $n_{\alpha}=\sqrt{1-\frac{2b_{-1}}{a(t)r}}\delta^t_\alpha$ and $\sigma_\beta=\frac{a(t)}{\sqrt{1-\frac{2b_{-1}}{a(t)r}}}\delta^r_\beta$ as the unit timelike and unit spacelike four-vectors, respectively. Now using Eq. (\[mass1\]) and bearing the spirit of the slow expansion approximation in mind, one gets $$\begin{aligned}
\label{mass}
M=\frac{1}{4\pi}\int_S n^{\alpha} \sigma_{\beta}
\Gamma^{\beta}_{\alpha t}dA=b_{-1},\end{aligned}$$ which is compatible with the no energy accretion condition ($G_{tr}=0$). In addition, we find the same result as Eq. (\[mass\]) if we consider the flat case ($k=0$) of metric (\[apmetric\]) and use $n_{\alpha}=\sqrt{1-\frac{2b_{-1}}{\eta}}\delta^t_\alpha$ and $\sigma_\beta=\frac{1}{\sqrt{1-\frac{2b_{-1}}{\eta}}}\delta^\eta_\beta$. Since the integrand is independent of the scale factor ($a(t)$), the slow expansion approximation does not change the result of the integral. However, the validity of the slow expansion approximation is necessary if one wants to evaluate the Komar mass for dynamical spacetimes [@W1]. Indeed, this situation is the same as in quasi-equilibrium thermodynamical systems, where the accessibility of the quasi-equilibrium condition lets us use the equilibrium formulation for a vast range of thermodynamical systems [@callen]. It is obvious that, to avoid a negative mass, we should have $b_{-1}>0$. The corresponding Komar mass of mcVittie's solution can be written as [@MD1; @Gao] $$\begin{aligned}
\label{komar}
M_{mcVittie}=\frac{M}{a(t)}.\end{aligned}$$ In addition, some studies show that the Komar mass is just a metric parameter in the mcVittie spacetime [@nol1; @nol2; @fj]. Indeed, the Hawking-Hayward quasi-local mass satisfies $\dot{M}=0$, which is compatible with $G_{r t}=0$ and indicates that there is no radial flow, and thus no backreaction effect, in mcVittie's solution [@RMS; @Gao1; @Bab; @fj]. In order to clarify the mass notion in the mcVittie spacetime, we consider its slow expansion approximation, which yields $$\begin{aligned}
\label{mse}
ds^2 \approx -(\frac{1-\frac{M}{2\eta}}{1+\frac{M}{2\eta}})^2dt^2+
(1+\frac{M}{2\eta})^4[d\eta^2+ \eta^2d\Omega^2].\end{aligned}$$ This metric signals that $M$ may play the role of the mass in the mcVittie spacetime. In addition, by defining the new radius $R$ as $$\begin{aligned}
R(t,r)=a(t)\tilde{r}(1+\frac{M}{2\tilde{r}})^2,\end{aligned}$$ one can rewrite the mcVittie spacetime in the form of $$\begin{aligned}
\label{mn}
ds^2=-(1-\frac{2M}{R}-H^2R^2)dt^2-\frac{2HR}{\sqrt{1-\frac{2M}{R}}}dtdR+
\frac{dR^2}{1-\frac{2M}{R}}+R^2d\Omega^2,\end{aligned}$$ where $H=\frac{\dot{a}}{a}$ [@ff]. This form of the mcVittie spacetime indicates that the Komar mass is a metric parameter, while $M$ is the physical mass in this spacetime [@ff]. Finally, we see that the results of the slow expansion approximation (Eq. (\[mse\])) and Eq. (\[mn\]) are in line with the study of the Hawking-Hayward quasi-local mass in the mcVittie spacetime [@nol1; @nol2; @fj; @ff]. For the flat case ($k=0$) of our spacetime (Eq. (\[SCHWM\])), by considering Eq. (\[komar\]) and following the slow expansion approximation, we arrive at $$\begin{aligned}
\label{nm1}
ds^2\approx-(1-\frac{2M}{\eta})dt^2+\frac{d\eta^2}{(1-\frac{2M}{\eta})}
+\eta^2d\theta^2+\eta^2sin(\theta)^2d\phi^2.\end{aligned}$$ Also, if we define the new radius $R$ as $$\begin{aligned}
r=\frac{R}{a}(1+\frac{M}{2R})^2,\end{aligned}$$ we obtain $$\begin{aligned}
\label{nm}
ds^2&=&
-(\frac{(1-\frac{M}{2R})^2}{(1+\frac{M}{2R})^2}-\frac{R^2H^2(1+\frac{M}{2R})^6}
{(1-\frac{M}{2R})^2})dt^2-
\frac{2RH(1+\frac{M}{2R})^5}{(1-\frac{M}{2R})}dtdR\\ \nonumber
&+&(1+\frac{M}{2R})^4[dR^2+R^2d\Omega^2].\end{aligned}$$ Both of Eqs. (\[nm1\]) and (\[nm\]), as well as the no energy accretion condition, suggest that, unlike in the mcVittie spacetime, the Komar mass may play the role of the mass in our solution. From Eq. (\[nm\]) it is apparent that $R=\frac{M}{2}$ corresponds to a spacelike hypersurface, whereas, in the metric (\[mn\]), $R=2M$ corresponds to a null hypersurface. In the next subsection, when we discuss thermodynamics, we will derive the same result for the mass notion in our spacetime. Only in the $a(t)=1$ limit (the Schwarzschild limit) are Eqs. (\[nm\]) and (\[mc1\]) compatible, which shows that our spacetime is different from mcVittie's. Let us note that the obtained metric (Eq. (\[nm\])) is consistent with Eq. (\[mn\]), provided we take $M=0$ (the FRW limit).
Horizons, energy and thermodynamics {#horizons-energy-and-thermodynamics .unnumbered}
-----------------------------------
There is an apparent horizon consistent with the FRW background, which can be evaluated from Eq. (\[aph1\]): $$\begin{aligned}
(1-kr_{ap}^2)(1-\frac{2M}{a(t)r_{ap}})^2-r_{ap}^2\dot{a}(t)^2=0.\end{aligned}$$ This equation covers the FRW results in the limit $M\longrightarrow0$ (see Eq. (\[aph2\])). In addition, one can obtain the Schwarzschild radius by considering $\dot{a}(t)=0$, which supports our previous definition of $b_{-1}$. Calculations for the flat case yield four solutions. The only solution which is in full agreement with the limiting situation of the FRW metric (in the limit of zero $M$) is $$\begin{aligned}
\label{hcr}
r_{ap}=\frac{1+\sqrt{1-8HM}}{2\dot{a}}.\end{aligned}$$ Therefore, the physical radius of apparent horizon $(\xi_{ap}=a(t)r_{ap})$ is $$\begin{aligned}
\xi_{ap}=\frac{1+\sqrt{1-8HM}}{2H},\end{aligned}$$ which is similar to the conformal BHs [@RMS]. It is obvious that in the limit $M\longrightarrow0$, the radius of the apparent horizon of the flat FRW metric is recovered. For the surface gravity of the apparent horizon, one can use Eq. (\[sg1\]) and obtain: $$\begin{aligned}
\label{fsg}
\kappa=\frac{\kappa_{FRW}}{(1-\frac{2M}{a(t)r_{ap}})^2}+\frac{M}{a(t)^2}
[\frac{1}{r_{ap}^2}+\frac{1}{(1-\frac{2M}{a(t)r_{ap}})^2}(\ddot{a}(t)+
2\frac{\dot{a}(t)^2}{a(t)})],\end{aligned}$$ where $h^{ab}=diag(-\frac{1}{1-\frac{2M}{a(t)r}},\frac{1-\frac{2M}{a(t)r}}{a(t)^2})$, $r_{ap}$ is the apparent horizon co-moving radius (\[hcr\]) and $\kappa_{FRW}$ is the surface gravity of the flat FRW manifold $$\begin{aligned}
\kappa_{FRW}=-\frac{\dot{a}(t)^2+a(t)\ddot{a}(t)}{2a(t)\dot{a}(t)}.\end{aligned}$$
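For the flat case, the apparent horizon radius of Eq. (\[hcr\]) can be verified directly from the horizon condition written in terms of the physical radius $\xi=a(t)r$. A symbolic sketch:

```python
import sympy as sp

xi, H, M = sp.symbols('xi H M', positive=True)

# flat (k=0) apparent horizon condition in terms of the physical radius xi = a(t)*r
condition = (1 - 2*M/xi)**2 - xi**2 * H**2

# the branch that recovers the flat-FRW horizon when M -> 0
xi_ap = (1 + sp.sqrt(1 - 8*H*M)) / (2*H)

assert sp.simplify(condition.subs(xi, xi_ap)) == 0   # it solves the condition
print(sp.limit(xi_ap, M, 0))  # 1/H, the flat-FRW apparent horizon radius
```

The Schwarzschild radius also follows from the same condition with $\dot{a}=0$, i.e. $H=0$, where the condition reduces to $\xi=2M$.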
The Schwarzschild limit ($\kappa=\frac{1}{4M}$) is obtained by inserting $a(t)=1$ in Eq. (\[fsg\]). In the limiting case $M\longrightarrow0$, Eq. (\[fsg\]) reduces to the surface gravity of the flat FRW spacetime, as desired. The Misner-Sharp mass inside radius $\xi$ for this spherically symmetric spacetime is defined as [@Ms]: $$\begin{aligned}
\label{MS}
M_{MS}=\frac{\xi}{2}(1-h^{ab}\partial_a \xi \partial_b \xi).\end{aligned}$$ Because this definition does not yield correct results in some theories, such as Brans-Dicke and scalar-tensor gravities, we also point to the Gong-Wang definition of mass [@Gong]: $$\begin{aligned}
\label{GW}
M_{GW}=\frac{\xi}{2}(1+h^{ab}\partial_a \xi \partial_b \xi).\end{aligned}$$ It is apparent that, for the apparent horizon, Eqs. (\[MS\]) and (\[GW\]) yield the same result, $M_{GW}=M_{MS}=\frac{\xi_{ap}}{2}$. In the limit $M\longrightarrow0$, the FRW results are recovered and we obtain $M_{GW}=M_{MS}=\rho V$, as desired [@Cai1]. Using Eqs. (\[MS\]) and (\[GW\]) and taking the slow expansion approximation into account, we obtain $M_{GW}=M_{MS}\simeq M$ as the mass of the quasi BH. This result also supports our previous guess about the Komar mass as the physical mass in our solution, and is in line with the result of Eqs. (\[nm1\]) and (\[nm\]). For the mcVittie metric, Eqs. (\[MS\]) and (\[GW\]) yield $M_{GW}=M_{MS}\simeq\frac{M}{4}$ as the mass confined to the radius $\xi_h=a(t)\tilde{r}_h=\frac{M}{2}$. Also, Eqs. (\[mass\]), (\[MS\]) and (\[GW\]) lead to the same result in the Schwarzschild limit ($\mathcal{M}=M_{GW}=M_{MS}=M$). For the flat background, using metric (\[apmetric\]) and Eq. (\[SG\]), and inserting the results into Eq. (\[Temp2\]), one gets $$\begin{aligned}
\label{Temp1}
T\simeq\frac{1}{8 \pi M},\end{aligned}$$ for the temperature on the surface of the quasi horizon. The same calculation yields a similar result for the temperature on the horizon of mcVittie's solution. For the conformal Schwarzschild BH, the same analysis leads to $$\begin{aligned}
T\simeq\frac{1}{8 \pi a(t)M},\end{aligned}$$ which shows that $a(t)M$ plays the role of the mass, and is compatible with the energy accretion in the conformal BHs [@RMS; @Gao1; @Bab; @Fara]. Again, we see that the temperature analysis supports our expectation of $M$ as the physical mass in our solutions. For the area of the quasi horizon, we have $$\begin{aligned}
\label{surface area}
A=\int\sqrt{\sigma}d\theta d\phi=4\pi a(t)^2 r_h^2=16\pi M^{2}.\end{aligned}$$ In the mcVittie spacetime, this integral also leads to $A=16 \pi M^2$. In order to be consistent with our approximation, we take $S=\frac{A}{4}$ for the entropy of the quasi BH. Then, from Eq. (\[Temp1\]), we get $$\begin{aligned}
TdS\simeq dM=dE.\end{aligned}$$
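With $S=A/4=4\pi M^2$ and the temperature of Eq. (\[Temp1\]), the relation $TdS\simeq dM$ reduces to a one-line symbolic check:

```python
import sympy as sp

M = sp.symbols('M', positive=True)

T = 1 / (8 * sp.pi * M)     # quasi-horizon temperature, Eq. (Temp1)
S = 4 * sp.pi * M**2        # entropy S = A/4 with A = 16*pi*M^2

# first law on the quasi horizon: T dS/dM should equal dE/dM = 1
print(sp.simplify(T * sp.diff(S, M)))  # 1
```

The factors of $M$ cancel exactly, so $TdS=dM$ holds identically in this approximation.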
In contrast, we obtain $TdS\simeq dM\neq dE$ for the mcVittie spacetime. In the coordinates $(t,\eta,\theta,\phi)$, we should note that, unlike in the mcVittie spacetime, $E=M_{GW}=M_{MS}\simeq M$ is valid for the quasi BH, and the work term can be neglected as a result of the slow expansion approximation ($dW\sim 0$) [@Fara]. Finally, and unlike on mcVittie's horizon, we see that $TdS\simeq dE$ is valid on the quasi event horizon. This result points to the fact that the first law of BH thermodynamics on the quasi event horizon is satisfied if we use either the Gong-Wang or the Misner-Sharp definition for the energy of the quasi BH. $TdS\simeq dE$ is valid for the conformal Schwarzschild BH, too [@Fara]. For the flat background, we see that the surface area at the redshift singularity in our spacetime is equal to that of the mcVittie metric, which in turn equals that of the Schwarzschild metric. Moreover, bearing the slow expansion approximation in mind, we saw that the temperature on the quasi horizon is like that of the Schwarzschild spacetime [@RMS]. In addition, we saw that the validity of the first law of BH thermodynamics on the quasi event horizon is like the conformal Schwarzschild BH's and differs from mcVittie's solution.
In another approach, for the mcVittie spacetime, if we use the Hawking-Hayward definition of mass as the total energy confined to the hypersurface $\tilde{r}=\frac{M}{2a(t)}$, we obtain $$\begin{aligned}
\label{Tempp}
TdS\simeq dM=dE,\end{aligned}$$ where we have considered the slow expansion approximation. In addition, Eq. (\[Tempp\]) is not valid if one uses the Komar mass (\[komar\]). Finally, we saw that the first law of thermodynamics is approximately valid in mcVittie's solution if one uses the Hawking-Hayward definition of energy. Also, none of the Komar, Misner-Sharp and Gong-Wang masses satisfies the first law of thermodynamics on mcVittie's horizon.
Other Possibilities
===================
According to what we have said, it is obvious that there are two other meaningful terms in the expansion (\[f\]). The first is due to $n=-2$ and corresponds to the charge, while the second comes from $n=2$ and is related to the cosmological constant. Therefore, a more general form of $f(t,r)$ can be written as: $$\begin{aligned}
\label{tf}
f(t,r)=1-\frac{2M}{a(t)r}+\frac{Q^2}{(a(t)r)^2}-\frac{1}{3}\Lambda
(a(t)r)^2,\end{aligned}$$ where we have considered the slow expansion approximation and used the definitions $b_{-2}\equiv Q^2$ and $b_2\equiv-\frac{1}{3}\Lambda$. Imaginary charge $(b_{-2}<0)$ and anti-de Sitter $(\Lambda<0)$ solutions are allowed by this scheme, but these possibilities are excluded on other physical grounds. Consider Eq. (\[tf\]) with $\Lambda=0$: there are two horizons, located at $r_+=\frac{M+\sqrt{M^2-Q^2}}{a(t)}$ and $r_-=\frac{M-\sqrt{M^2-Q^2}}{a(t)}$. These radii are the same as in Gao's flat case [@Gao]. In the slow expansion regime ($a(t)\sim c$), these radii correspond to the event and Cauchy horizons, as in the Reissner-Nordstr\"{o}m metric [@Poisson]. Hence, we refer to them as the quasi event and quasi Cauchy horizons. The case with $Q=0$, $M=0$ and $\Lambda>0$ has attractive properties, because in the slow expansion regime $(a(t)\simeq c)$ one can rewrite it as $$\begin{aligned}
ds^2\approx -(1-\frac{\Lambda}{3}\eta^2)dt^2+\frac{d\eta^2}
{(1-\frac{\Lambda}{3}\eta^2)}+\eta^2 d\Omega^2.\end{aligned}$$ This is nothing but the de Sitter spacetime with cosmological constant $\Lambda$, which points to the current acceleration era.
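The two horizon radii quoted above for $\Lambda=0$ follow from solving $f=0$. In terms of $\eta\simeq a(t)r$, a quick symbolic check (the roots are confirmed through their sum and product, Vieta's relations):

```python
import sympy as sp

eta, M, Q = sp.symbols('eta M Q', positive=True)

# slow-expansion form of f with Lambda = 0, written in eta = a(t)*r
f = 1 - 2*M/eta + Q**2/eta**2

roots = sp.solve(sp.Eq(f, 0), eta)
assert len(roots) == 2

# the two roots are M +/- sqrt(M^2 - Q^2): sum = 2M, product = Q^2
assert sp.simplify(roots[0] + roots[1] - 2*M) == 0
assert sp.simplify(roots[0] * roots[1] - Q**2) == 0
```

Dividing each root by $a(t)$ recovers the co-moving radii $r_\pm$ given above.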
Horizons and temperature {#horizons-and-temperature .unnumbered}
------------------------
Different $f(t,r)$ yield apparent horizons at different locations, and one can use Eqs. (\[aph1\]) and (\[sg1\]) in order to find the location and the temperature of the apparent horizon. For any $f(t,r)$, in the slow expansion regime, we get: $$\begin{aligned}
\label{Metricf}
ds^2\approx -f(\eta)dt^2+\frac{d\eta^2}{f(\eta)}+\eta^2 d\Omega^2.\end{aligned}$$ Now, the location of horizons and their surface gravity can be evaluated by using Eq. (\[SG\]). Their temperature is approximately equal to Eq. (\[Temp2\]), or briefly: $$\begin{aligned}
T_i \simeq \frac{f^{\prime}(\eta)}{4 \pi}|_{\eta_{hi}},\end{aligned}$$ where $(^\prime)$ denotes the derivative with respect to the radius $\eta$ and $\eta_{hi}$ is the radius of the i$^{\textmd{th}}$ horizon.
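As an illustration, applying $T_i\simeq f^{\prime}(\eta)/4\pi$ at the outer horizon of the charged case with $\Lambda=0$ reproduces the familiar Reissner-Nordstr\"{o}m-type temperature and the Schwarzschild limit. A sketch:

```python
import sympy as sp

eta, M, Q = sp.symbols('eta M Q', positive=True)
f = 1 - 2*M/eta + Q**2/eta**2

eta_p = M + sp.sqrt(M**2 - Q**2)                   # quasi event horizon radius
T_p = (sp.diff(f, eta) / (4 * sp.pi)).subs(eta, eta_p)

# matches the Reissner-Nordstrom form sqrt(M^2 - Q^2)/(2*pi*eta_+^2)
T_RN = sp.sqrt(M**2 - Q**2) / (2 * sp.pi * eta_p**2)
assert sp.simplify(T_p - T_RN) == 0

# Schwarzschild limit Q -> 0 recovers T = 1/(8*pi*M)
print(sp.simplify(T_p.subs(Q, 0)))  # 1/(8*pi*M)
```

The same recipe applies term by term to any $f(\eta)$ of the form (\[tf\]).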
Conclusions \[Conclusions\]
===========================
We considered the conformal form of a special group of non-static spherically symmetric metrics, where it was assumed that the time dependence of the conformal factor is the same as the FRW's. We saw that the conformal BHs can be classified as a special subgroup of these metrics. In order to derive new solutions of the Einstein equations, we imposed the perfect fluid concept and used the slow expansion approximation, which helps us clarify the physical meaning of the metric parameters. Since the Einstein tensor is diagonal, there is no energy accretion and thus the backreaction effect is zero. This implies that the energy (mass) should be constant in our solutions. These new solutions have similarities with earlier metrics presented by others [@mcvittie; @Gao; @Gao0]. A related metric, which is similar to a special class of our solutions, was introduced by mcVittie [@mcvittie; @Gao]. These similarities are explicit in the flat case (temperature and entropy at the redshift singularity), but the differences become clearer in the non-flat case ($k\neq0$); we pointed to one of them when we discussed the redshift. In addition, in the flat case, we tried to clarify some of the differences between our solution and mcVittie's. We did so by pointing to the behavior of the redshift singularity in various coordinates, the mass notion, and thermodynamics. Meanwhile, when the slow expansion approximation breaks down, there is no horizon in our solutions. Indeed, these objects can then be classified as naked singularities, which can be considered as alternatives to BHs [@virb; @virb1].
For our solutions, as in earlier works [@mcvittie; @Gao; @Gao0], the co-moving radii of the redshift singularities decrease with the expansion of the universe. Also, unlike in the previous works [@mcvittie; @Gao; @Gao0], the redshift singularities in our solutions are independent of the background curvature. By considering the slow expansion approximation, we were able to find the BH-like behavior of these singularities. We referred to these objects and their surfaces as quasi BHs and quasi horizons, respectively. We then introduced the apparent horizon for our spacetime, which should be evaluated by considering the FRW background.
In order to compare mcVittie's solution with our mcVittie-like solution, we have used three existing definitions of mass: the Komar mass, the Misner-Sharp mass ($M_{MS}$) and the Gong-Wang mass ($M_{GW}$). We saw that the notion of the Komar mass of the quasi BH differs from that of mcVittie's solution. Also, in our spacetime, we showed that the $M_{MS}$ and $M_{GW}$ masses yield the same result on the apparent horizon and recover the FRW result in the limiting situations. In addition, using the slow expansion approximation, we evaluated $M_{MS}$ and $M_{GW}$ on the quasi event horizon of our mcVittie-like solution, which leads to the same result as the Komar mass. Moreover, we should stress that, just as in the mcVittie spacetime, the energy conditions are not satisfied near the quasi horizon.
In addition, we have shown that, unlike for mcVittie's solution, the first law of thermodynamics may be satisfied on the quasi event horizon of our mcVittie-like solution, if we use the Komar mass, $M_{MS}$ or $M_{GW}$ as the confined mass and consider the slow expansion approximation. This result is consistent with previous studies of conformal BHs [@Fara], which shows that the thermodynamics of our solutions is similar to that of the conformal BHs. In order to fully clarify the mass notion, we think that a complete analysis of the Hawking-Hayward mass for our solution is needed; this is beyond the scope of this letter and should be considered in another work. Nevertheless, our analysis suggests that the predictions for the mass, obtained either from the slow expansion approximation or from suitable coordinates for describing the metric, may be in line with the Hawking-Hayward definition of energy, and are in reasonable accordance with the Komar, $M_{MS}$ and $M_{GW}$ masses of our solutions. Indeed, this final remark is supported by the thermodynamic considerations and by the no energy accretion condition ($G_{tr}=0$). Moreover, we think that, in dynamic spacetimes, thermodynamic considerations, along with the slowly varying approximation, can help us obtain reasonable assumptions for the energy and thus the mass. Finally, we saw that the first law of thermodynamics is approximately valid in mcVittie's solution and in ours, if we use the Hawking-Hayward definition of the mass in the mcVittie spacetime and the Komar mass as the physical mass in our solution, respectively. We also addressed more general solutions, such as charged quasi BHs, and some of their properties.
The results obtained in this paper may help achieve a better understanding of black holes in a dynamical background. From a phenomenological point of view, this issue is important since, after all, any local astrophysical object lives in an expanding cosmological background. Finally, we tried to explore the concepts of mass, entropy and temperature in a dynamic spacetime.
Acknowledgments
===============
We are grateful to the referee for valuable comments, which led to sensible improvements in this manuscript. This work has been supported financially by the Research Institute for Astronomy & Astrophysics of Maragha (RIAAM).
[99]{} M. Roos, *Introduction to Cosmology* (John Wiley and Sons, UK, 2003). V. Mukhanov, *Physical Foundations of Cosmology* (Cambridge University Press, Cambridge, 2005). S. C. Ulhoa, A. F. Santos and R. G. G. Amorim, Mod. Phys. Lett. A **28**, 1350039 (2013). A. Sheykhi, B. Wang and N. Riazi, Phys. Rev. D [**75**]{}, 123513 (2007). A. Sheykhi, B. Wang and R. G. Cai, Phys. Rev. D [**76**]{}, 023515 (2007). A. Sheykhi, B. Wang and R. G. Cai, Nucl. Phys. B [**779**]{}, 1 (2007). A. Sheykhi, J. Cosmol. Astropart. Phys. [**05**]{}, 019 (2009). A. Sheykhi, Class. Quant. Grav. [**27**]{}, 025007 (2010). E. Poisson, *A Relativist’s Toolkit* (Cambridge University Press, UK, 2004). M. Akbar and R. G. Cai, Phys. Lett. B [**648**]{}, 243 (2007). T. Padmanabhan, Class. Quant. Grav. [**19**]{}, 5387 (2002);\
A. Paranjape, S. Sarkar and T. Padmanabhan, arXiv:hep-th/0607240v2, and references therein. P. J. E. Peebles, *Principles of Physical Cosmology* (Princeton University Press, Princeton, NJ, 1993). A. Einstein and E. G. Straus, Rev. Mod. Phys. [**17**]{}, 120 (1945). H. Saida, Class. Quant. Grav. [**19**]{}, 3179 (2002). M. L. McClure, Ph.D. thesis, University of Toronto, 2006. M. L. McClure and C. C. Dyer, Class. Quant. Grav. [**23**]{}, 1971 (2006). S. N. G. Thakurta, Indian J. Phys. B [**55**]{}, 30410 (1981). H. Moradpour and N. Riazi, Under Review in Int. J. Mod. Phys. D. N. Riazi, H. Moradpour and A. Sheykhi, Int. J. Mod. Phys. D [**23**]{}, 5, 1450048 (2014). C. Gao, X. Chen, V. Faraoni and Y. G. Shen, Phys. Rev. D [**78**]{}, 024008 (2008). E. Babichev, V. Dokuchaev and Yu. Eroshenko, Phys. Rev. Lett. [**93**]{}, 021102 (2004). J. Sultana and C. C. Dyer, Gen. Rel. Grav. [**37**]{}, 1349 (2005). M. L. McClure and C. C. Dyer, Gen. Rel. Grav. [**38**]{}, 1347 (2006). V. Faraoni and A. Jacques, Phys. Rev. D [**76**]{}, 063510 (2007). K. Maeda and M. Nozawa, Phys. Rev. D [**81**]{}, 124038 (2010). B. Carter, *Black Holes*, eds. C. DeWitt, J. DeWitt (Gordon and Breach, New York, 1973). F. Kottler, Ann. Phys. [**56**]{}, 410 (1918). D. R. Brill and S. A. Hayward, Class. Quant. Grav. [**11**]{}, 359 (1994). D. Kastor and J. H. Traschen, Phys. Rev. D [**47**]{}, 5370 (1993). D. R. Brill, G. T. Horowitz, D. Kastor and J. H. Traschen, Phys. Rev. D [**49**]{}, 840 (1994). J. B. Hartle and S. W. Hawking, Commun. Math. Phys. [**26**]{}, 87 (1972). K. Behrndt and M. Cvetic, Class. Quant. Grav. [**20**]{}, 4177 (2003). T. Shiromizu, Prog. Theor. Phys. [**102**]{}, 1207 (1999). G. W. Gibbons and K. Maeda, Phys. Rev. Lett. [**104**]{}, 131101 (2010). K. Maeda, N. Ohta and K. Uzawa, J. High Energy Phys. [**0906**]{}, 051 (2009). K. Maeda and M. Nozawa, Phys. Rev. D [**81**]{}, 044017 (2010). G. C. McVittie, Mon. Not. R. Astron. Soc. [**93**]{}, 325 (1933). C. J. Gao, Class.
Quant. Grav. [**21**]{}, 4805 (2004). C. J. Gao and S. N. Zhang, Phys. Lett. B [**595**]{}, 28 (2004). V. Faraoni, F. Zambrano Moreno and A. Prain, Phys. Rev. D [**89**]{}, 103514 (2014). B. C. Nolan, Phys. Rev. D [**58**]{}, 064006 (1998). B. C. Nolan, Class. Quant. Grav. [**16**]{}, 1227 (1999). R. Sussman, Gen. Rel. Grav. [**17**]{}, 251 (1985). M. Ferraris, M. Francaviglia and A. Spallicci, Nuovo Cimento B [**111**]{}, 1031 (1996). V. Faraoni and A. Jacques, Phys. Rev. D [**76**]{}, 063510 (2007). K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D [**65**]{}, 103004 (2002). K. S. Virbhadra and C. R. Keeton, Phys. Rev. D [**77**]{}, 124014 (2008). R. M. Wald, *General Relativity* (The University of Chicago Press, 1984). H. B. Callen, *Thermodynamics and an Introduction to Thermostatistics* (John Wiley and Sons, New York, USA, 1985). V. Faraoni, Galaxies [**2013**]{}, 1 (2013). V. Faraoni, Phys. Rev. D [**76**]{}, 104042 (2007). C. M. Misner and D. H. Sharp, Phys. Rev. B [**136**]{}, 571 (1964). Y. Gong and A. Wang, Phys. Rev. Lett. [**99**]{}, 211301 (2007).
[^1]: h.moradpour@riaam.ac.ir
[^2]: n$\_$riazi@sbu.ac.ir
---
abstract: 'Contextuality has been identified as a potential resource responsible for the quantum advantage in several tasks. It is then necessary to develop a resource-theoretic framework for contextuality, both in its standard and generalized forms. Here we provide a formal resource-theoretic approach for generalized contextuality based on a physically motivated set of free operations with an explicit parametrization. Then, using an efficient linear programming characterization for the noncontextual set of prepared-and-measured statistics, we adapt known resource quantifiers for contextuality and nonlocality to obtain natural monotones for generalized contextuality in arbitrary prepare-and-measure experiments.'
author:
- Cristhiano Duarte
- Barbara Amaral
bibliography:
- 'biblio2.bib'
title: 'Resource theory of contextuality for arbitrary prepare-and-measure experiments'
---
Introduction
============
Prepare-and-measure experiments provide simple situations in which the differences between classical and nonclassical probabilistic theories can be explored. One such difference is related to the generalized notion of noncontextuality, a condition imposed on ontological models that asserts that operationally indistinguishable laboratory operations should be represented identically in the model [@RS05; @SSW17; @MPKRS16; @RK17]. Inconsistencies between observed data and the existence of such a model can be understood as a signature of nonclassicality.
Besides its importance for the foundations of physics [@Specker60; @KS67; @RS05], contextuality has been identified as a potential resource responsible for the quantum advantage in several tasks [@VFGE12; @Raussendorf13; @UZZWYDDK13; @HWVE14; @DGBR14; @BDBOR17; @SS17; @SHP17]. Hence, it is important to investigate contextuality in arbitrary prepare-and-measure experiments from the perspective of resource theories, which provide powerful frameworks for the formal treatment of a physical property as an operational resource [@SOT16; @BG15; @CMH16; @TF17; @CFS16; @ACTA17; @GL15; @BHORS13].
It is commonly understood, see Refs. [@BG15; @ACTA17; @TF17; @CFS16] for instance, that such theories consist of three main components: $i)$ a class ${\mathcal{O}}$ of *objects*, representing the entities one aims to manipulate in search of some gain or benefit, and which may possess the resource under consideration; $ii)$ a special class ${\mathcal{F}}$ of transformations, called the *free operations*, that fulfills the essential requirement of mapping every resourceless object of the theory into another resourceless object, *i.e.* a set of transformations that cannot create the resource from a resourceless object; and $iii)$ a *measure* or *quantifier* that outputs the amount of resource a given object contains. The fundamental consistency requirement for a function to be a valid quantifier is monotonicity with respect to the considered resource: every quantifier is non-increasing under the corresponding free operations.
Resource-theoretic approaches for quantum nonlocality are highly developed [@Barrett05b; @Allcock09; @GWAN12; @Joshi13; @Vicente14; @LVN14; @GA15; @GA17] and the operational framework of the standard notion of contextuality as a resource has received much attention lately [@HGJKL15; @GHHHHJKW14; @ACTA17; @ABM17]. Nonetheless, a proper treatment of the generalized framework of prepare-and-measure experiments considered in Refs. [@RS05; @SSW17; @MPKRS16; @RK17] as a resource is still missing. Here, using the novel generalized-noncontextual polytope, an efficient linear programming [@LexSchrivjer99] characterization for the noncontextual set of prepared-and-measured statistics presented in Ref. [@SSW17], we present a mathematically well-structured resource-theoretic approach for generalized contextuality based on a physically motivated set of free operations with an explicit parametrization. We then adapt known resource quantifiers for contextuality and nonlocality [@Barrett05b; @Allcock09; @GWAN12; @Joshi13; @Vicente14; @LVN14; @GHHHHJKW14; @GA15; @HGJKL15; @GA17; @ACTA17; @BAC17; @ABM17] to obtain natural monotones for generalized contextuality in arbitrary prepare-and-measure experiments.
This work is organized as follows: in Sec. \[sec: GenContextuality\] we review the definition of generalized non-contextuality and the linear programming characterization of the noncontextual set; in Sec. \[sec:RTofContextuality\] we introduce the three important components of the resource theory: in Subsec. \[subsec:Objects\] we define the objects of the theory, in Subsec. \[subsec:FreeOperations\] we provide a set of physically motivated free operations for generalized contextuality in prepare-and-measure experiments, and in Subsec. \[subsec:Quantifiers\] we list several contextuality quantifiers and we explicitly prove that they are monotones with respect to the set of free operations defined in Subsec. \[subsec:FreeOperations\]; we finish with discussion and open questions in Sec. \[sec:conc\].
Generalized Contextuality {#sec: GenContextuality}
=========================
A glimpse on the theory {#sec:glimpse}
-----------------------
We consider a prepare-and-measure experiment with a set of possible preparations $\mathcal{P}=\left\{P_1,P_2, \cdots , P_I\right\}$, a set of possible measurements $\mathcal{M}=\left\{M_1,M_2, \cdots ,
M_J\right\}$, each measurement with possible outcomes $\mathcal{D}=\left\{d_1, d_2, \ldots, d_K\right\}$. An operational probabilistic theory that describes this prepare-and-measure experiment specifies, for each measurement $M_j$, a probability distribution $p(k\vert j,i)$ over $\mathcal{D}$, which gives the probability of obtaining outcome $d_k$ when performing measurement $M_j$, conditioned on the preparation $P_i$. We denote the measurement event of measuring $M_j$ and obtaining outcome $d_k$ as $k \vert j$.
Two preparations $P_i$ and $P_{i'}$ are *operationally equivalent* if $$p\left(k\vert j,i\right)= p\left(k\vert j,i'\right) \quad \forall \ d_k \in \mathcal{D}, \ \forall \ M_j \in \mathcal{M}.$$
In other words, $P_i$ and $P_{i'}$ are said to be operationally equivalent if they give the same statistics for every measurement. Operational equivalence between $P_i$ and $P_{i'}$ will be denoted by $P_i \simeq P_{i'}$.
Two measurement events $k\vert j$ and $k'\vert j'$ are *operationally equivalent* if $$p\left(k\vert j,i\right)= p\left(k'\vert j',i\right) \quad \forall \ P_i \in \mathcal{P}.$$
In other words, $k\vert j$ and $k'\vert j'$ are said to be operationally equivalent whenever they have the same statistics for every preparation in $\mathcal{P}$. Operational equivalence between $k\vert j$ and $k' \vert j'$ will be denoted by $k\vert j \simeq k'\vert j'$.
We then specify a set $\mathcal{E}_P$ of operational equivalences for the preparations $$\sum_i \alpha_i^s P_i \simeq \sum_i \beta_i^s P_i, \quad s=1, \ldots, \left|\mathcal{E}_P\right|,$$ where $\sum_i \alpha_i^sP_i$ and $\sum_i \beta_i^sP_i$ represent convex combinations of the preparations $P_i$, and a set $\mathcal{E}_M$ of operational equivalences for the measurement effects $$\sum_{k,j} \alpha_{k\vert j}^r\left[k\vert j\right] \simeq \sum_{k,j} \beta_{k\vert j}^r\left[k\vert j\right], \quad r=1, \ldots, \left|\mathcal{E}_M\right|,$$ where $\sum_{k,j} \alpha_{k\vert j}^r\left[k\vert j\right]$ and $\sum_{k,j} \beta_{k\vert j}^r\left[k\vert j\right]$ represent convex combinations of measurement events.
A *prepare-and-measure scenario* $$\mathcal{S}=\left\{\mathcal{P}, \mathcal{M}, \mathcal{D}, \mathcal{E}_P, \mathcal{E}_M\right\}$$ consists of a set of preparations $\mathcal{P}$, a set of measurements $\mathcal{M}$, a set of outcomes $\mathcal{D}$, a set of operational equivalences for the preparations $\mathcal{E}_P$ and a set of operational equivalences for the measurements $\mathcal{E}_M$. A prepare-and-measure statistics (more commonly known as a behaviour or black-box correlation [@BCPSW13; @NGHA15; @Slofstra17]) is a set of conditional probability distributions $$\label{def:behaviour}
\boldsymbol{B} \coloneqq \left\{p\left(k \vert
j,i\right)\right\}_{j \in [J], i \in [I], k \in [K]}$$ that gives the probability of outcome $d_k$ for each measurement $M_j$ given the preparation $P_i$.
A schematic representation of a prepare-and-measure scenario is shown in Fig. \[Figure\_prepare\_and\_measure\].
![ \[Figure\_prepare\_and\_measure\]](Figure_prepare_and_measure.pdf)
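Concretely, a behaviour and the two notions of operational equivalence can be sketched in a few lines of code. The snippet below is illustrative only: the array convention $B[k,j,i]=p(k\vert j,i)$, the function names, and the toy numbers are our own choices, not part of the paper.

```python
import numpy as np

# Toy behaviour B[k, j, i] = p(k | j, i): K=2 outcomes, J=2 measurements,
# I=2 preparations (hypothetical numbers, for illustration only).
B = np.array([[[0.8, 0.5],
               [0.6, 0.5]],
              [[0.2, 0.5],
               [0.4, 0.5]]])

# For every measurement j and preparation i the outcome probabilities sum to one.
assert np.allclose(B.sum(axis=0), 1.0)

def preparations_equivalent(B, i1, i2, tol=1e-9):
    """P_{i1} ~ P_{i2}: identical statistics for every measurement event k|j."""
    return np.allclose(B[:, :, i1], B[:, :, i2], atol=tol)

def events_equivalent(B, k1, j1, k2, j2, tol=1e-9):
    """k1|j1 ~ k2|j2: identical statistics for every preparation."""
    return np.allclose(B[k1, j1, :], B[k2, j2, :], atol=tol)
```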
### Ontological models
An *ontological model* for a prepare-and-measure statistics $B=\left\{p\left(k\vert j,i\right)\right\}$ is a specification of a set of *ontic states* $\Lambda$, for each preparation $P_i$ a probability space $\left(\Lambda, \Sigma, \mu_i\right)$ and for each $\lambda \in \Lambda$ and each $M_j \in \mathcal{M}$ a probability distribution $\left\{\xi_{k\vert j}\left(\lambda\right)\right\}$ over $\mathcal{D}$, such that $$\label{eq:model}
p\left(k\vert j,i\right) = \int_{\Lambda} \xi_{k\vert j}\left(\lambda\right)\mu_i\left(\lambda\right).$$
![ \[Figure\_geometrical\_meaning\]](Figure_geometrical_meaning.pdf)
The interpretation of an ontological model is shown in Fig. \[Figure\_geometrical\_meaning\]. The ontic state $\lambda$ is understood as a variable that describes the behavior of the system that may not be accessible experimentally. If preparation $P_i$ is implemented, the ontic state $\lambda$ is sampled according to the associated probability distribution $\mu_{i}$. On the other hand, given $\lambda$, for every measurement $M_j$ the outcome $d_k$ is a probabilistic function of $\lambda$, described by the response functions $\xi_{k\vert
j}\left(\lambda\right)$. The variable $\lambda$ mediates the correlations between measurements and preparations. From the perspective of causal models [@Pearl00], Eq. implies that the prepare-and-measure statistics is consistent with the causal structure shown in Fig. \[fig:causal\].
(Fig. \[fig:causal\]: causal diagram with nodes $i$, $j$, $\lambda$, $k$ and arrows $i \to \lambda$, $\lambda \to k$, $j \to k$.)
### Noncontextual models
The generalized notion of noncontextuality introduced in Ref. [@RS05] requires that preparations and measurement events that can not be distinguished operationally are identically represented in the model. This implies that the operational equivalences valid for $\mathcal{P}$ and $\mathcal{M}$ should also be valid for the functions $\mu_i$ and $\xi_{k\vert j}$, respectively. In terms of the previous definitions, we have:
An ontological model satisfies *preparation noncontextuality* if $\mu_{i}=\mu_{i'}$ whenever $P_i$ and $P_{i'}$ are operationally equivalent. An ontological model satisfies *measurement noncontextuality* if $\xi_{k\vert j}=\xi_{k' \vert j'}$ whenever $k\vert j$ and $k'\vert j'$ are operationally equivalent. An ontological model is *universally noncontextual*, or simply *noncontextual*, if it satisfies both preparation and measurement noncontextuality.
The non-existence of a noncontextual ontological model for the prepare-and-measure statistics $B$ can be interpreted as a signature of the nonclassicality of $B$. It is a known fact that some prepare-and-measure statistics obtained with quantum systems do not admit a noncontextual ontological model [@RS05].
The prepare-and-measure statistics $B$ is called *noncontextual* if it has a noncontextual ontological model. The set of all noncontextual prepare-and-measure statistics for the scenario $\mathcal{S}$ will be denoted by $\mathsf{NC}\left(\mathcal{S}\right)$.
Linear Characterization {#sec:linear_characterization}
-----------------------
It was shown in Ref. [@SSW17] that if a prepare-and-measure statistics $B$ has a noncontextual ontological model, then it also has a noncontextual ontological model with an ontic state space $\Lambda$ of finite cardinality. This implies that membership in $\mathsf{NC}\left(\mathcal{S}\right)$ can be formulated in terms of linear programming.
Given an ontic state $\lambda$, the values of the response functions $\xi_{k\vert j}$ can be arranged in a vector $$\boldsymbol{\xi}\left(\lambda\right)\coloneqq\left(\xi_{1\vert 1}\left(\lambda\right), \ldots, \xi_{K\vert 1}\left(\lambda\right), \ldots, \xi_{1\vert J}\left(\lambda\right), \ldots, \xi_{K\vert J}\left(\lambda\right) \right).$$
For fixed $\lambda \in \Lambda$, the vectors $\boldsymbol{\xi}\left(\lambda\right)$ defined by different choices of response functions $\xi_{k\vert j}$ satisfying measurement noncontextuality are called *noncontextual measurement assignments*. The set of all noncontextual measurement assignments is called the *noncontextual measurement-assignment polytope*.
As shown in Ref. [@SSW17], for fixed $\lambda \in \Lambda$, the set of all noncontextual measurement assignments is indeed a polytope, since it is characterized by the following linear restrictions: $$\begin{aligned}
\xi_{k\vert j}\left(\lambda\right)&\geq 0\\
\sum_k \xi_{k\vert j}\left(\lambda\right)&=1\\
\sum_{k,j}\left(\alpha_{k\vert j}^r -\beta_{k\vert j}^r\right)\xi_{k\vert
j}\left(\lambda\right)&=0.\end{aligned}$$ Notice that since these constraints do not depend on $\lambda$, the noncontextual measurement-assignment polytope is the same for every $\lambda$. We denote by $\tilde{\boldsymbol{\xi}}\left(\kappa\right)$ the extremal points of this polytope, with $\kappa$ a discrete variable ranging over some enumeration of these extremal points.
A prepare-and-measure statistics $B=\left\{p\left(k\vert
j,i\right)\right\}$ in the scenario $\mathcal{S}$ has a noncontextual ontological model if, and only if, there is a set of probability distributions $\left\{\mu_i\left(\kappa\right)\right\}$ over $\kappa$ such that $$\begin{aligned}
\sum_i\left(\alpha^s_i - \beta^s_i\right)\mu_i\left(\kappa\right)&=0 \\
\sum_{\kappa} \tilde{\xi}_{k\vert j}\left(\kappa\right)\mu_i\left(\kappa\right)&=p\left(k\vert j,i\right),\end{aligned}$$ where $\kappa$ ranges over the discrete set of vertices of the measurement-assignment polytope.
This proposition implies that membership in $\mathsf{NC}\left(\mathcal{S}\right)$ can be efficiently tested using linear programming, which in turn implies that some of the quantifiers proposed in Sec.\[subsec:Quantifiers\] can also be computed efficiently using linear programming.
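To make the proposition concrete, the feasibility test can be written as a small linear program. The sketch below assumes `numpy` and `scipy` are available; the array shapes, function name, and the explicit vertex enumeration passed in are our own illustrative conventions, not the authors' code.

```python
import numpy as np
from scipy.optimize import linprog

def is_noncontextual(p, vertices, prep_equivs):
    """Feasibility LP of the proposition: find distributions mu_i(kappa) >= 0
    with sum_kappa xi~_{k|j}(kappa) mu_i(kappa) = p(k|j,i) and the preparation
    operational equivalences.  `p` has shape (K*J, I) (measurement events
    flattened), `vertices` has shape (n_v, K*J) listing the extremal
    noncontextual measurement assignments, `prep_equivs` is a list of
    coefficient vectors (alpha^s_i - beta^s_i) of length I."""
    n_v, n_ev = vertices.shape
    n_ev2, I = p.shape
    assert n_ev == n_ev2
    n_var = I * n_v                         # variables mu_i(kappa)
    A_eq, b_eq = [], []
    # reproduce the observed statistics for every preparation i and event e
    for i in range(I):
        for e in range(n_ev):
            row = np.zeros(n_var)
            row[i * n_v:(i + 1) * n_v] = vertices[:, e]
            A_eq.append(row); b_eq.append(p[e, i])
    # normalization of each mu_i
    for i in range(I):
        row = np.zeros(n_var); row[i * n_v:(i + 1) * n_v] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    # preparation operational equivalences, imposed vertex by vertex
    for coeff in prep_equivs:
        for kap in range(n_v):
            row = np.zeros(n_var)
            for i in range(I):
                row[i * n_v + kap] = coeff[i]
            A_eq.append(row); b_eq.append(0.0)
    res = linprog(np.zeros(n_var), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n_var, method="highs")
    return res.status == 0                  # 0 = feasible, i.e. noncontextual
```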
Resource Theory of Generalized Contextuality {#sec:RTofContextuality}
============================================
Objects {#subsec:Objects}
-------
We define the set ${\mathcal{O}}$ of *objects* as the collection of all allowed prepare-and-measure statistics: $$\boldsymbol{B} = \left\{p\left(k \vert
j,i\right)\right\}_{j \in J, i \in I, k \in K}$$ for a prepare-and-measure scenario $\mathcal{S}$. The *free objects*, or *resourceless* ones, correspond to those behaviors $\boldsymbol{B}\in \mathsf{NC}\left(\mathcal{S}\right)$ with a universally noncontextual model.
Given two objects $\boldsymbol{B}_{1}$ and $\boldsymbol{B}_{2}$ we also allow for a combination of them in order to obtain a third new object, denoted by $\boldsymbol{B}_{1} \otimes \boldsymbol{B}_{2}$. One may think of such an object $\boldsymbol{B}_{1} \otimes \boldsymbol{B}_{2}$ as representing full access to both $\boldsymbol{B}_{1}$ and $\boldsymbol{B}_{2}$ together and at the same time. For our purposes it will be enough to consider that combination as the juxtaposition of two independent behaviours:
Given two behaviors $B_1$ and $B_2$ (not necessarily in the same scenario), the juxtaposition of $B_1$ and $B_2$, denoted by $B_1 \otimes B_2$, is the behavior obtained by independently choosing a preparation and a measurement for $B_1$ and $B_2$. That is, the preparations in $B_1\otimes B_2$ correspond to a pair of preparations, $i_1$ for $B_1$ and $i_2$ for $B_2$, and analogously for the measurements. The corresponding probability distributions are given by $$p\left(k_1 k_2\vert j_1 j_2, i_1 i_2\right)=p\left(k_1\vert j_1, i_1\right)p\left(k_2\vert j_2, i_2\right).$$
As expected, the juxtaposition of two noncontextual behaviors is a noncontextual behavior.
$B_1$ and $B_2$ are noncontextual if and only if $B_1 \otimes B_2$ is noncontextual.
Let $\left(\Lambda_1, \Sigma_1, \mu_{i_1}\right)$ and $\left\{\xi_{k_1\vert j_1}
\right\}$ be an ontological model for $B_1$ and $\left(\Lambda_2, \Sigma_2, \mu_{i_2}\right)$ and $\left\{\xi_{k_2\vert j_2}\right\}$ be an ontological model for $B_2$. Then $\left(\Lambda_1 \times \Lambda_2, \Sigma_1 \times \Sigma_2, \mu_{i_1}\times
\mu_{i_2}\right)$ and $\left\{\xi_{k_1\vert j_1}\times \xi_{k_2\vert j_2} \right\}$ is an ontological model for $B_1 \otimes B_2$. Conversely, if an ontological model for $B_1 \otimes B_2$ is given, an ontological model for $B_1$ can be obtained by marginalizing over $B_2$ and an ontological model for $B_2$ can be obtained by marginalizing over $B_1$.
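Numerically, the juxtaposition is just an outer product of the two arrays of conditional probabilities. A minimal sketch, with our own array convention $B[k,j,i]$:

```python
import numpy as np

def juxtapose(B1, B2):
    """B1 (x) B2: p(k1 k2 | j1 j2, i1 i2) = p(k1|j1,i1) * p(k2|j2,i2).
    Inputs are arrays B[k, j, i]; output is indexed [k1, k2, j1, j2, i1, i2]."""
    return np.einsum('abc,def->adbecf', B1, B2)
```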
Free Operations {#subsec:FreeOperations}
---------------
Given a prepare-and-measure scenario ${\mathcal{S}}$, we define the set ${\mathcal{F}}$ of free operations in analogy with the simulation of communication channels [@TF17]: the image $\boldsymbol{\tilde{B}}=T(\boldsymbol{B})$ of $B$ through every mapping $T:{\mathcal{O}} \longrightarrow {\mathcal{O}}$ in ${\mathcal{F}}$ should be viewed as a simulation of a new scenario using one preprocessing for the preparation box $\mathcal{P}$, another preprocessing for the measurement box $\mathcal{M}$, and finally a post-processing for the outcomes $\mathcal{D}$ of each measurement $M_j$ (see Fig. \[Figure\_free\_operations\_boxes\]). More formally, each free operation is a map: $$\begin{aligned}
\label{def:free_operations}
T: {\mathcal{O}} &\longrightarrow {\mathcal{O}} \\
\{p(k \vert j,i)\} &\mapsto \{p({\tilde{k}} \vert
\tilde{j},\tilde{i}) \} \nonumber\end{aligned}$$ where for each ${\tilde{k}} \in {\tilde{K}},{\tilde{i}} \in {\tilde{I}},{\tilde{j}} \in {\tilde{J}}$: $$\label{def:free_operations2}
p({\tilde{k}} \vert \tilde{j},\tilde{i}) = \sum_{k,j,i}q_{O}({\tilde{k}} \vert {k})p(k \vert j,i)q_{P}(i
\vert {{\tilde{i}}})q_{M}(j \vert
{{\tilde{j}}}),$$ with $q_{P}:\tilde{I} \longrightarrow I$, $q_{M}:\tilde{J} \longrightarrow J$, and $q_O:
K \longrightarrow \tilde{K}$ stochastic maps [@NC00] from input alphabets to output alphabets. In what follows, the stochastic map $q_O$ may also depend on the measurement $\tilde{j}$, that is, different post-processings of the outcomes can be applied to different measurements. Hence, it would be more appropriate to write $q^{\tilde{j}}_O$, but we avoid this heavy notation. Eq. shows that, after a suitable relabeling of the indexes, the overall effect of each free operation is a right-multiplication by one stochastic matrix and a left-multiplication by another stochastic matrix on the prepare-and-measure statistics $\{p(k \vert j,i)\}$; thus each $T$ in ${\mathcal{F}}$ acts as a linear map on the set of objects. We have proved, therefore, the following results:
![ \[Figure\_free\_operations\_boxes\]](Figure_free_operations_boxes.pdf)
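The action of a free operation amounts to contracting the statistics with three stochastic matrices. A minimal sketch, under our own conventions (array $B[k,j,i]$ and column-stochastic matrices):

```python
import numpy as np

def apply_free_operation(B, qO, qM, qP):
    """p~(k~|j~,i~) = sum_{k,j,i} qO(k~|k) p(k|j,i) qM(j|j~) qP(i|i~).
    B has shape (K, J, I); qO is (K~, K), qM is (J, J~), qP is (I, I~),
    each with columns summing to one (one conditional distribution per column)."""
    return np.einsum('ak,kji,jb,ic->abc', qO, B, qM, qP)
```

With identity matrices for all three maps, the operation reduces to the identity on $B$, and stochasticity of $q_O$ guarantees the output is again normalized.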
\[lemma:T\_is\_convex\] Let ${\mathcal{F}}$ be the above defined set of free operations. Given two prepare-and-measure statistics $B,B^{\prime}$ in ${\mathcal{O}}$, and given $\pi \in [0,1]$, one has: $$\label{lemma:eq_T_is_convex}
T(\pi B+(1-\pi)B^{\prime})=\pi T(B)+(1-\pi)T(B^{\prime}),$$ where the sum and multiplication on ${\mathcal{O}}$ are defined component-wise.
The set ${\mathcal{F}}$ of free operations is closed under composition.
It remains to show that the transformations belonging to $\mathcal{F}$ really fulfill the requirement of being free operations, *i.e.* we must show that any element $T$ in ${\mathcal{F}}$ does not create a resource from a resourceless object – it preserves the set of non-contextual prepare-and-measure statistics. More formally:
Given a free operation $T \in {\mathcal{F}}$ and a prepare-and-measure statistics $B=\{p(k \vert j,i)\}$, if $B$ admits a universally noncontextual model, then $\tilde{B}=T(B)$ also admits a universally noncontextual model.
Since $B$ admits a universally noncontextual model (w.r.t. the sets ${\mathcal{E}}_{M}$ and ${\mathcal{E}}_{P}$ of operational equivalences for measurements and preparations), there exists a family of probability spaces $\{\Lambda,\Sigma_{i},\mu_{i}\}_{i}$, one for each preparation $P_i$, and a set of response functions $\{\xi_{k \vert j}(\lambda)\}$ such that: $$\begin{aligned}
\label{eq:thm_T_preservesNC_noncontextualboxes_begin}
\forall& \,\, \lambda, j, k: &\xi_{k \vert j}(\lambda) \geq 0; \\
\forall& \,\, \lambda,j: &\sum_{k \in K}\xi_{k \vert j}(\lambda)=1; \\
\forall& \,\, \lambda,r: &\sum_{j \in J}\sum_{k \in
K}\left(\alpha^{r}_{k \vert j} - \beta^{r}_{k \vert j}\right)\xi_{k \vert
j}(\lambda)=0; \\
\forall& \,\, \lambda, i: &\mu_{i}(\lambda) \geq 0; \\
\forall& \,\, i: &\int_{\Lambda}\mu_{i}(\lambda)=1; \\
\forall& \,\, \lambda,s: &\sum_{i \in I}\left(\alpha^{s}_{i} - \beta^{s}_{i}\right)\mu_{i}(\lambda)=0; \\
\forall& \,\, i,j,k: &\int_{\Lambda}\xi_{k \vert
j}(\lambda)\mu_{i}(\lambda) = p(k \vert j,i).
\label{eq:thm_T_preservesNC_noncontextualboxes_end}\end{aligned}$$ Therefore: $$\begin{aligned}
&{p(\tilde{k} \vert \tilde{j},\tilde{i})} := T({p({k} \vert {j},{i})}) \\
&= {\sum_{k,j,i}q_{O}(\tilde{k} \vert k)p(k \vert
j,i)q_{P}(i \vert \tilde{i} \,\, )q_{M} (j \vert \tilde{j} \,\,)}\\
&=\sum_{k,j,i}q_{O}(\tilde{k} \vert k)\left( \int_{\Lambda}\xi_{k
\vert
j}(\lambda)\mu_{i}(\lambda)\right)q_{P}(i
\vert {\tilde{i}} \,\, )q_{M} (j \vert {\tilde{j}} \,\,) \\
&=\int_{\Lambda}\underbrace{\left(\sum_{k,j}q_{O}({\tilde{k}} \vert
k) \xi_{k \vert j}(\lambda)q_{M}(j \vert {\tilde{j}})
\right)}_{:=\xi_{{\tilde{k}} \vert {\tilde{j}}}(\lambda)} \underbrace{\left(
\sum_{i}\mu_{i}(\lambda)q_{P}(i \vert {\tilde{i}})
\right)}_{:=\mu_{{\tilde{i}}}(\lambda)} \\
&= \int_{\Lambda}\xi_{{\tilde{k}} \vert
{\tilde{j}}}(\lambda)\mu_{{\tilde{i}}}(\lambda).
\end{aligned}$$ With this new set of response functions $\{\xi_{{\tilde{k}} \vert {\tilde{j}}}(\lambda)\}$ and probability measures $\{\mu_{{\tilde{i}}}\}$ over $\Lambda$, we now prove that $\tilde{B}$ admits a universally noncontextual model (w.r.t. the transformed operational equivalences). For clarity we break the remainder of the proof into three statements:
- The family $\{\xi_{{\tilde{k}} \vert {\tilde{j}}}(\lambda)\}$ constitutes an admissible set of response functions. On one hand: $$\begin{aligned}
\forall {\tilde{k}},{\tilde{j}},\lambda:& \\
\xi_{{\tilde{k}} \vert
{\tilde{j}}}(\lambda)&:=\sum_{k,j}q_{O}({\tilde{k}} \vert
k) \xi_{k \vert j}(\lambda)q_{M}(j \vert {\tilde{j}}) \\
&\geq 0.\end{aligned}$$ On the other hand (see Fig. \[Figure\_free\_operations\_boxes\_rhs\_composition\]), for all $\lambda$ belonging to $\Lambda$: $$\begin{aligned}
\sum_{{\tilde{k}}}&\xi_{{\tilde{k}} \vert {\tilde{j}}}(\lambda) =
\sum_{{\tilde{k}}}\sum_{k,j}q_{O}({\tilde{k}} \vert
k) \xi_{k \vert j}(\lambda)q_{M}(j \vert {\tilde{j}}) \\
&=\sum_{{\tilde{k}}}\sum_{j} \underbrace{\left[\sum_{k}q_{O}({\tilde{k}}\vert
k)\xi_{k \vert j}(\lambda) \right]}_{:=q_{\lambda}({\tilde{k}}\vert j)}q_{M}(j
\vert {\tilde{j}}) \\
&= \sum_{{\tilde{k}}} \underbrace{\sum_{j}q_{\lambda}({\tilde{k}}\vert j)q_{M}(j
\vert {\tilde{j}})}_{:=q_{\lambda}({\tilde{k}} \vert {\tilde{j}})} =
\sum_{{\tilde{k}}}q_{\lambda}({\tilde{k}} \vert {\tilde{j}})=1.\end{aligned}$$
![ \[Figure\_free\_operations\_boxes\_rhs\_composition\]](Figure_free_operations_boxes_rhs_composition.pdf)
- The family $\{\mu_{{\tilde{i}}}\}$ constitutes a set of probability measures over $\Lambda$.
It is clear from the definition that each $\mu_{{\tilde{i}}}$ is a non-negative function over $\Lambda$. In addition, for each ${\tilde{i}} \in {\tilde{I}}$, one has: $$\begin{aligned}
\int_{\lambda}\mu_{{\tilde{i}}}(\lambda)&=\int_{\Lambda} \sum_{i}q_{P}(i \vert
{\tilde{i}})\mu_{i}(\lambda) \\
&=\sum_{i}q_{P}(i \vert {\tilde{i}})
\underbrace{\int_{\lambda}\mu_{i}(\lambda)}_{=1} \\
&=\sum_{i}q_{P}(i \vert {\tilde{i}}) =1.\end{aligned}$$
- The operational equivalences ${\mathcal{E}}_{P}$ and ${\mathcal{E}}_{M}$ are lifted, through the action of free operations in ${\mathcal{F}}$, onto other sets of operational equivalences ${\mathcal{E}}_{{\tilde{P}}}$ and ${\mathcal{E}}_{{\tilde{M}}}$, which in turn also respect the principles of preparation noncontextuality and measurement noncontextuality [@RS05; @SSW17], respectively.
We prove the result for operational equivalences among preparations, and the result will follow in complete analogy for operational equivalences among measurements. Given one operational equivalence among preparations for the ordinary prepare-and-measure scenario, labelled by $s$, *i.e.* an equality $$\begin{aligned}
&\sum_{i \in I}\left(\alpha^{s}_{i} - \beta^{s}_{i}\right)\mu_{i}(\lambda)=0, \forall
\,\, \lambda \in \Lambda,\end{aligned}$$ we define a novel set of coefficients $\{\alpha^{s}_{{\tilde{i}}}\}_{{\tilde{i}} \in
{\tilde{I}}}$ and $\{\beta^{s}_{{\tilde{i}}}\}_{{\tilde{i}} \in
{\tilde{I}}}$ satisfying $$\begin{aligned}
\label{eq:proof_equivalences_td_alpha}
\forall \,\, i \in [I]: \sum_{{\tilde{i}}}\alpha^{s}_{{\tilde{i}}}q_{P}(i \vert
{\tilde{i}})=\alpha^{s}_{i},\end{aligned}$$ and $$\begin{aligned}
\label{eq:proof_equivalences_td_beta}
\forall \,\, i \in [I]: \sum_{{\tilde{i}}}\beta^{s}_{{\tilde{i}}}q_{P}(i \vert
{\tilde{i}})=\beta^{s}_{i}.\end{aligned}$$ Now, if one defines the new set of operational equivalences for preparations ${\mathcal{E}}_{{\tilde{P}}}$ using Eqs. and , it is straightforward to check that $$\forall \,\, s, \lambda:
\sum_{{\tilde{i}}}\alpha^{s}_{{\tilde{i}}}\mu_{{\tilde{i}}}(\lambda)=
\sum_{{\tilde{i}}}\beta^{s}_{{\tilde{i}}}\mu_{{\tilde{i}}}(\lambda),$$ since these novel operational equivalences are lifted from operational equivalences that hold in the ordinary (non-transformed) scenario.
Monotones {#subsec:Quantifiers}
---------
Now that we have defined the sets of objects and free operations, we are in position to verify if the monotones introduced in Refs. [@Barrett05b; @Allcock09; @GWAN12; @Joshi13; @Vicente14; @LVN14; @GHHHHJKW14; @GA15; @HGJKL15; @GA17; @ACTA17; @BAC17; @ABM17] can be adapted to the framework of generalized contextuality.
### Contextual Fraction {#subsubsec:ConetxtualFract}
The *contextual fraction* is a contextuality quantifier based on the intuitive notion of what fraction of a given prepare-and-measure statistics admits a noncontextual description. Formally, it is defined as follows [@AB11; @ADLPBC12; @ABM17; @AT17]: $$\begin{aligned}
{\mathcal{C}}: {\mathcal{O}} &\longrightarrow [0,1] \\ \nonumber
B=\{p(k \vert j,i)\} & \mapsto {\mathcal{C}}(B)\end{aligned}$$ where $$\begin{aligned}
1-{\mathcal{C}}(B):= \mbox{max } \lambda \nonumber \\
\qquad \textup{s.t.}\quad &B=\lambda B^{NC}+(1-\lambda)B^{\prime}
\nonumber\\
& B^{NC} \in \mathsf{NC}\left(\mathcal{S}\right) \nonumber \\
& B^{\prime} \in {\mathcal{O}}. \label{def:cf2}\end{aligned}$$
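Since membership in $\mathsf{NC}\left(\mathcal{S}\right)$ is a linear-programming problem, so is this maximization. Below is a minimal numerical sketch, assuming an explicit vertex enumeration of the noncontextual measurement-assignment polytope is supplied (shapes and names are our own conventions, not the authors' code): the nonnegative variables $\nu_i(\kappa)$ encode the subnormalized noncontextual part $\lambda B^{NC}$.

```python
import numpy as np
from scipy.optimize import linprog

def contextual_fraction(p, vertices, prep_equivs):
    """Sketch of 1 - C(B) = max lambda s.t. B = lambda*B_NC + (1-lambda)*B'.
    `p` has shape (K*J, I); `vertices` has shape (n_v, K*J); `prep_equivs`
    lists coefficient vectors (alpha^s_i - beta^s_i) of length I."""
    n_v, n_ev = vertices.shape
    I = p.shape[1]
    n_var = I * n_v + 1                     # nu_i(kappa) and lambda
    c = np.zeros(n_var); c[-1] = -1.0       # linprog minimizes, so maximize lambda
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    # entrywise: sum_kappa xi~(kappa) nu_i(kappa) <= p(.|.,i)
    for i in range(I):
        for e in range(n_ev):
            row = np.zeros(n_var)
            row[i * n_v:(i + 1) * n_v] = vertices[:, e]
            A_ub.append(row); b_ub.append(p[e, i])
    # each nu_i carries the same total weight lambda
    for i in range(I):
        row = np.zeros(n_var)
        row[i * n_v:(i + 1) * n_v] = 1.0; row[-1] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    # preparation equivalences on the noncontextual part, vertex by vertex
    for coeff in prep_equivs:
        for kap in range(n_v):
            row = np.zeros(n_var)
            for i in range(I):
                row[i * n_v + kap] = coeff[i]
            A_eq.append(row); b_eq.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (n_var - 1) + [(0.0, 1.0)],
                  method="highs")
    assert res.status == 0
    return 1.0 - res.x[-1]
```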
\[thm:cf\_is\_monotone\] The contextual fraction is a resource monotone with respect to ${\mathcal{F}}$.
Let $B \in {\mathcal{O}}$ be a given prepare-and-measure statistics, and let $1-{\mathcal{C}}(B)$ be equal to $\lambda_{\max}$, attained by the decomposition: $$B=\lambda_{\max}B^{NC}_{\max}+(1-\lambda_{\max})B^{\prime}_{\max}.$$ Then: $$\begin{aligned}
T(B)&=T(\lambda_{\max}B^{NC}_{\max}+(1-\lambda_{\max})B^{\prime}_{\max}) \\
&=\lambda_{\max}T(B^{NC}_{\max})+(1-\lambda_{\max})T(B^{\prime}_{\max}) \\
&= \lambda_{\max}B^{NC}+(1-\lambda_{\max})B^{\prime},\end{aligned}$$ with $B^{NC}=T(B^{NC}_{\max})$ a noncontextual box, since $T$ preserves the noncontextual set, and $B^{\prime}$ a valid object. Then $\lambda_{\max}(T(B)) \geq \lambda_{\max}$. Therefore: $$1-\mathcal{C}(T(B)) \geq 1-\mathcal{C}(B) \implies \mathcal{C}(T(B)) \leq \mathcal{C}(B).$$
The contextual fraction is subadditive under independent juxtapositions.
\[thm:cf\_ad\] Given two behaviors $B_1$ and $B_2$, we have that $$\mathcal{C}\left(B_1 \otimes B_2\right) \leq \mathcal{C}\left(B_1 \right)+ \mathcal{C}\left(B_2\right)-\mathcal{C}\left(B_1 \right)\mathcal{C}\left(B_2\right) \leq \mathcal{C}\left(B_1 \right)+ \mathcal{C}\left(B_2\right).$$
Let $B_1^*$ and $B_2^*$ be the noncontextual behaviors achieving the maximum in Eq. for $B_1$ and $B_2$, respectively, with $1-\mathcal{C}\left(B_1 \right)=\lambda_1$ and $1-\mathcal{C}\left(B_2 \right)=\lambda_2$. The decomposition in Eq. implies that $$\begin{aligned}
p\left(k_1 \vert j_1,i_1\right) &\geq \lambda_1 p^*\left(k_1 \vert j_1,i_1\right), \ \forall \ i_1,j_1,k_1,\\
p\left(k_2 \vert j_2,i_2\right) &\geq \lambda_2 p^*\left(k_2 \vert j_2,i_2\right), \ \forall \ i_2,j_2,k_2,
\end{aligned}$$ which in turn imply that $$\label{eq:cf_ad}
p\left(k_1 k_2 \vert j_1j_2,i_1i_2\right) \geq \lambda_1\lambda_2\, p^*\left(k_1 k_2 \vert j_1j_2,i_1i_2\right), \ \forall \ i_1,j_1,k_1,i_2,j_2,k_2,$$ where $p^*\left(k_1 k_2 \vert j_1j_2,i_1i_2\right)= p^*\left(k_1 \vert j_1,i_1\right)p^*\left(k_2 \vert j_2,i_2\right)$ are the probabilities given by $B_1^* \otimes B_2^*$. From Eq. it follows that there is a behavior $B'$ such that $$B_1 \otimes B_2 = \lambda_1 \lambda_2\, B_1^* \otimes B_2^* + (1-\lambda_1\lambda_2) B'.$$ Hence, we have $$\begin{aligned}
1-\mathcal{C}\left(B_1 \otimes B_2 \right)& \geq \lambda_1 \lambda_2\\
&= \left(1-\mathcal{C}\left(B_1\right)\right)\left(1-\mathcal{C}\left( B_2 \right)\right),\end{aligned}$$ which in turn implies the desired result.
### Robustness Measures
Robustness of contextuality is a quantifier based on the intuitive notion of how much noncontextual noise a given prepare-and-measure statistics can sustain before becoming noncontextual. Given a scenario ${\mathcal{S}}$ one defines the *robustness measure* [@ACTA17] as follows:
$$\begin{aligned}
\label{def:eq_robustness}
{\mathcal{R}}: {\mathcal{O}} &\longrightarrow [0,1] \\
B &\mapsto {\mathcal{R}}(B) \nonumber\end{aligned}$$
where $$\begin{aligned}
\label{def:eq_minimization_for_robustness}
{\mathcal{R}}(B):= \mbox{min } \lambda \nonumber \\
\textup{s.t.}\quad &(\lambda B^{NC}+(1-\lambda)B)\in \mathsf{NC}\left(\mathcal{S}\right) \nonumber\\
& B^{NC} \in \mathsf{NC}\left(\mathcal{S}\right) .\end{aligned}$$
\[thm:robustness\_is\_monotone\] The robustness measure ${\mathcal{R}}$ is a resource monotone with respect to the free-operations in ${\mathcal{F}}$.
Given $B \in {\mathcal{O}}$, let $\lambda_{\min}$ be the minimum of the optimization problem above, for which $$\lambda_{\min}B^{NC}_{\min}+(1-\lambda_{\min})B = B^{\ast},$$ with $B^{\ast}$ noncontextual. Then: $$\begin{aligned}
T(B^{\ast})&=T(\lambda_{\min}B^{NC}_{\min}+(1-\lambda_{\min})B) \\
&=\lambda_{\min}T(B^{NC}_{\min})+(1-\lambda_{\min})T(B) \\
&=\lambda_{\min}{\tilde{B}}^{NC}_{\min}+(1-\lambda_{\min})T(B).\end{aligned}$$ Since $T$ preserves the noncontextual set, $T(B^{\ast})$ is noncontextual, as is ${\tilde{B}}^{NC}_{\min}$, and therefore $\lambda_{\min}(T(B))\leq \lambda_{\min}$.
The robustness is subadditive under independent juxtapositions.
Given two behaviors $B_1$ and $B_2$, we have that $$\mathcal{R}\left(B_1 \otimes B_2\right) \leq \mathcal{R}\left(B_1 \right)+ \mathcal{R}\left(B_2\right)-\mathcal{R}\left(B_1 \right)\mathcal{R}\left(B_2\right) \leq \mathcal{R}\left(B_1 \right)+ \mathcal{R}\left(B_2\right).$$
The proof follows the same lines of the proof of Thm. \[thm:cf\_ad\].
The proof of Thm. \[thm:robustness\_is\_monotone\] above suggests that, for a more restrictive class of free operations, one can relax the constraints of the optimization problem in Eq. and, instead of optimizing over all noncontextual prepare-and-measure statistics, fix a given reference noncontextual prepare-and-measure statistics, say $B_{ref}$, and then define a new robustness measure with respect to that fixed prepare-and-measure statistics:
$$\begin{aligned}
\label{def:eq_robustness_ref}
{\mathcal{R}}_{ref}: {\mathcal{O}} &\longrightarrow [0,1] \\
B &\mapsto {\mathcal{R}}_{ref}(B) \nonumber\end{aligned}$$
where $$\begin{aligned}
\label{def:eq_minimization_for_robustness_ref}
{\mathcal{R}}_{ref}(B):= \mbox{min } \lambda \nonumber \\
\textup{s.t.}\quad &(\lambda B_{ref}+(1-\lambda)B) \in \mathsf{NC}\left(\mathcal{S}\right) . \end{aligned}$$ This new measure is a monotone whenever $B_{ref}$ is preserved under the action of (a more restrictive subset of) free operations. With this at hand we can state the following result:
Let ${\mathcal{F}}_{ref} \subseteq {\mathcal{F}}$ be the set of all free operations which preserve $B_{ref} \in {\mathcal{O}}$, *i.e.* $$\mathcal{F}_{ref}:=\left\{T \in \mathcal{F} \,;\, T(B_{ref})=B_{ref}\right\}.$$ Under this new set of free operations, ${\mathcal{R}}_{ref}$ is a resource monotone.
The proof follows the same lines as the proof of Thm. \[thm:robustness\_is\_monotone\].
Notice that the proofs of Thms. \[thm:cf\_is\_monotone\] and \[thm:robustness\_is\_monotone\] rely only on the fact that the operations in $\mathcal{F}$ are linear and preserve $\mathsf{NC}\left(\mathcal{S}\right)$. The results in Sec. \[sec: GenContextuality\] imply that ${\mathcal{C}}, {\mathcal{R}}$ and ${\mathcal{R}}_{ref}$ can be computed efficiently using linear programming [@SSW17].
Kullback-Leibler divergence
---------------------------
Given two probability distributions $p$ and $q$ on a sample space $\Omega$, the Kullback-Leibler divergence or relative entropy between $p$ and $q$ $$D_{KL}\left(p\|q\right)= \sum_{i} p_i \log\left(\frac{p_i}{q_i}\right)$$ is a measure of the difference between the two probability distributions $p$ and $q$ [@KL51]. With this, one can define the relative entropy $D_{KL}\left(B\|B'\right)$ between two prepare-and-measure statistics $B=\left\{p\left(\cdot \vert j,i\right)\right\}$ and $B'=\left\{p'\left(\cdot \vert j,i\right)\right\}$ as the relative entropy between the output distributions obtained from $B$ and $B'$ for the optimal choice of preparation and measurement: $$D_{KL}\left(B\|B'\right) \coloneqq \max_{i,j} D_{KL}\left(p\left(\cdot\vert j,i\right)\|p'\left(\cdot\vert j,i\right)\right).$$ This quantity measures the distinguishability of $B$ from $B'$. We can now define the relative entropy of contextuality [@DGG05; @GHHHHJKW14; @HGJKL15] $$\label{eq:defKL}
\mathcal{KL}\left(B\right)\coloneqq\min_{B' \in \mathsf{NC}\left(\mathcal{S}\right)}D_{KL}\left(B\|B'\right),$$ which quantifies the distinguishability of $B$ from its closest, with respect to $D_{KL}$, noncontextual prepare-and-measure statistics.
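The two divergences above admit a direct numerical transcription (the minimization over $\mathsf{NC}\left(\mathcal{S}\right)$ defining $\mathcal{KL}$ is a separate convex program and is not attempted here; array conventions are our own):

```python
import numpy as np

def dkl(p, q, eps=1e-300):
    """Kullback-Leibler divergence sum_i p_i log(p_i / q_i), with 0 log 0 := 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / np.maximum(q[m], eps))))

def dkl_behaviours(B, Bp):
    """D_KL(B || B') = max over (i, j) of D_KL(p(.|j,i) || p'(.|j,i)),
    for behaviour arrays indexed B[k, j, i]."""
    K, J, I = B.shape
    return max(dkl(B[:, j, i], Bp[:, j, i]) for j in range(J) for i in range(I))
```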
The relative entropy of contextuality $\mathcal{KL}$ is a resource monotone with respect to ${\mathcal{F}}$.
Given $B \in \mathcal{O}$, let $B^*$ be the noncontextual prepare-and-measure statistics achieving the minimum in Eq. . Given $T \in \mathcal{F}$, we have
$$\begin{aligned}
{\mathcal{KL}}\left(T\left(B\right)\right) &\leq \max_{\tilde{i},\tilde{j}} D_{KL}\left(p\left(\cdot\vert \tilde{j},\tilde{i}\right) \| p^*\left(\cdot\vert \tilde{j},\tilde{i}\right)\right)\label{eq:KLmon1}\\
& = \max_{\tilde{i},\tilde{j}} \sum_{\tilde{k}} p\left(\tilde{k}\vert \tilde{j},\tilde{i}\right) \log\left[\frac{p\left(\tilde{k}\vert \tilde{j},\tilde{i}\right)}{p^*\left(\tilde{k}\vert \tilde{j},\tilde{i}\right)}\right]\label{eq:KLmon2}\\
& = \max_{\tilde{i},\tilde{j}} \sum_{\tilde{k}} \sum_{i,j,k}q_O\left(\tilde{k}\vert k\right)p\left(k\vert j,i\right) q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right)\log\left[\frac{\sum_{i,j,k}q_O\left(\tilde{k}\vert k\right)p\left(k\vert j,i\right) q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right)}{\sum_{i,j,k}q_O\left(\tilde{k}\vert k\right)p^*\left(k\vert j,i\right) q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right)}\right]\label{eq:KLmon3}\\
& \leq \max_{\tilde{i},\tilde{j}} \sum_{\tilde{k}} \sum_{i,j,k} q_O\left(\tilde{k}\vert k\right)p\left(k\vert j,i\right) q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right)\log\left[\frac{p\left(k\vert j,i\right) }{p^*\left(k\vert j,i\right) }\right]\label{eq:KLmon4}\\
&= \max_{\tilde{i},\tilde{j}} \sum_{i,j,k} p\left(k\vert j,i\right) q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right)\log\left[\frac{p\left(k\vert j,i\right) }{p^*\left(k\vert j,i\right) }\right]\label{eq:KLmon5}\\
&= \max_{\tilde{i},\tilde{j}} \sum_{i,j}q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right) \left(\sum_k p\left(k\vert j,i\right) \log\left[\frac{p\left(k\vert j,i\right) }{p^*\left(k\vert j,i\right) }\right]\right)\label{eq:KLmon6}\\
&\leq \max_{i,j} \sum_k p\left(k\vert j,i\right) \log\left[\frac{p\left(k\vert j,i\right) }{p^*\left(k\vert j,i\right) }\right]\label{eq:KLmon7}\\
&=\mathcal{KL}\left(B\right).\end{aligned}$$
Eq. \[eq:KLmon1\] follows from the definition of $\mathcal{KL}\left(T(B)\right)$ and the fact that $T\left(B^*\right) \in \mathsf{NC}\left(\mathcal{S}\right)$, Eq. \[eq:KLmon2\] follows from the definition of $D_{KL}$, Eq. \[eq:KLmon3\] follows from the definition of $T\in \mathcal{F}$, Eq. \[eq:KLmon4\] follows from the log sum inequality, Eq. \[eq:KLmon5\] follows from $\sum_{\tilde{k}} q_{O}\left(\tilde{k}\vert k\right)=1$, Eq. \[eq:KLmon6\] follows from basic algebra, and Eq. \[eq:KLmon7\] follows from the fact that the average is smaller than the largest value.
The relative entropy of contextuality is subadditive under independent juxtapositions.
Given two behaviors $B_1$ and $B_2$, we have that $$\mathcal{KL}\left(B_1 \otimes B_2\right) \leq \mathcal{KL}\left(B_1 \right)+ \mathcal{KL}\left(B_2\right).$$
Let $B_1^*$ and $B_2^*$ be behaviors achieving the minimum in Eq. \[eq:defKL\] for $B_1$ and $B_2$, respectively. Then, we have $$\begin{aligned}
\mathcal{KL}\left(B_1 \otimes B_2\right)&\leq D_{KL}\left(B_1\otimes B_2\|B_1^*\otimes B_2^*\right) \label{eq:kl_ad1}\\
&= \max_{i_1,i_2,j_1,j_2} \sum_{k_1,k_2} p\left(k_1k_2\vert j_1j_2, i_1i_2\right)\log\left(\frac{p\left(k_1k_2\vert j_1j_2, i_1i_2\right)}{p^*\left(k_1k_2\vert j_1j_2, i_1i_2\right)}\right)\label{eq:kl_ad2}\\
&=\max_{i_1,i_2,j_1,j_2} \sum_{k_1,k_2} p\left(k_1\vert j_1, i_1\right)p\left(k_2\vert j_2, i_2\right)\log\left(\frac{p\left(k_1\vert j_1, i_1\right)p\left(k_2\vert j_2, i_2\right)}{p^*\left(k_1\vert j_1, i_1\right)p^*\left(k_2\vert j_2, i_2\right)}\right)\label{eq:kl_ad3}\\
&= \max_{i_1,i_2,j_1,j_2} \left[\sum_{k_1} p\left(k_1\vert j_1,
i_1\right)\log\left(\frac{ p\left(k_1\vert j_1, i_1\right)}{ p^*\left(k_1\vert j_1,
i_1\right)}\right) + \sum_{k_2} p\left(k_2\vert j_2,
i_2\right)\log\left(\frac{p\left(k_2\vert j_2, i_2\right)}{p^*\left(k_2\vert j_2,
i_2\right)}\right)\right]\label{eq:kl_ad4}\\
& \leq \max_{i_1,j_1} \sum_{k_1} p\left(k_1\vert j_1, i_1\right)\log\left(\frac{
p\left(k_1\vert j_1, i_1\right)}{ p^*\left(k_1\vert j_1, i_1\right)}\right) +
\max_{i_2,j_2}\sum_{k_2} p\left(k_2\vert j_2, i_2\right)\log\left(\frac{p\left(k_2\vert
j_2, i_2\right)}{p^*\left(k_2\vert j_2, i_2\right)}\right)\label{eq:kl_ad5}\\
&=\mathcal{KL}\left(B_1\right)+\mathcal{KL}\left(B_2\right). \label{eq:kl_ad6}
\end{aligned}$$ Eq. \[eq:kl\_ad1\] follows from the definition of $\mathcal{KL}\left(B_1 \otimes B_2\right)$, Eqs. \[eq:kl\_ad2\] and \[eq:kl\_ad3\] follow from the definitions of $D_{KL}$ and $B_1\otimes B_2$, Eq. \[eq:kl\_ad4\] follows from additivity of the relative entropy for independent distributions, Eq. \[eq:kl\_ad5\] follows from basic algebra, and Eq. \[eq:kl\_ad6\] follows from the fact that $B_1^*$ and $B_2^*$ are behaviors achieving the minimum in Eq. \[eq:defKL\] for $B_1$ and $B_2$, respectively.
Distance based monotones
------------------------
We now introduce contextuality monotones based on geometric distances, in contrast with the previously defined quantifier, which is based on entropic distances, replacing the relative entropy in Eq. \[eq:defKL\] by some geometric distance defined over real vector spaces [@BAC17; @AT17]. Let $D$ be any distance defined on real vector spaces $\mathds{R}^{K}$. We define the distance between two prepare-and-measure statistics $B$ and $B'$ as $$D(B, B') \coloneqq \max_{i,j} D\left(p\left(\cdot\vert j,i\right),p'\left(\cdot\vert j,i\right)\right).\label{eq:defD_box}$$ We can now define the $D$-contextuality distance $$\mathcal{D}\left(B\right)\coloneqq\min_{B'\in \mathsf{NC}\left(\mathcal{S}\right)}D\left(B,B'\right), \label{eq:defD}$$ which quantifies the distance, with respect to $D$, from $B$ to the set of noncontextual prepare-and-measure statistics. We focus here on the contextuality quantifier obtained when we use the $\ell_1$ norm, $$D_1(x,y)=\sum_i |x_i-y_i|.$$
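The $\ell_1$ distance between two prepare-and-measure statistics is a one-liner to evaluate. As in the previous sketch, the following illustrative snippet assumes behaviors stored as arrays of shape $(I,J,K)$:

```python
import numpy as np

def l1_distance_behaviors(B, Bp):
    """D(B, B') = max_{i,j} sum_k |p(k|j,i) - p'(k|j,i)|.

    B and Bp are arrays of shape (I, J, K), with the last axis indexing
    outcomes; the sum runs over outcomes, the max over contexts (i, j).
    """
    diff = np.abs(np.asarray(B, float) - np.asarray(Bp, float))
    return float(diff.sum(axis=2).max())
```

For a single context with distributions $(1,0)$ and $(1/2,1/2)$ this evaluates to $1$, the maximal $\ell_1$ discrepancy being $2$ for disjointly supported distributions.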
The $\ell_1$-contextuality distance is a resource monotone with respect to the free-operations in ${\mathcal{F}}$.
Given $B \in \mathcal{O}$, let $B^*$ be the noncontextual prepare-and-measure statistics achieving the minimum in Eq. \[eq:defD\]. Given $T \in \mathcal{F}$, we have
$$\begin{aligned}
\mathcal{D}\left(T\left(B\right)\right)&\leq \max_{\tilde{i},\tilde{j}}\sum_{\tilde{k}} \left|p\left(\tilde{k}\vert \tilde{j},\tilde{i}\right)-p^*\left(\tilde{k}\vert \tilde{j}, \tilde{i}\right)\right| \label{eq:dist1}\\
&= \max_{\tilde{i},\tilde{j}} \sum_{\tilde{k}} \left| \sum_{i,j,k} q_O\left(\tilde{k}\vert k\right)\left(p\left(k\vert j,i\right)-p^*\left(k\vert j,i\right)\right)q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right)\right|\label{eq:dist2}\\
&\leq \max_{\tilde{i},\tilde{j}} \sum_{\tilde{k}} \sum_{i,j,k} q_O\left(\tilde{k}\vert k\right)q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right)\left| \left(p\left(k\vert j,i\right)-p^*\left(k\vert j,i\right)\right)\right|\label{eq:dist3}\\
&= \max_{\tilde{i},\tilde{j}} \sum_{i,j,k} q_M\left(j\vert \tilde{j}\right)q_P\left(i\vert \tilde{i}\right)\left| \left(p\left(k\vert j,i\right)-p^*\left(k\vert j,i\right)\right)\right|\label{eq:dist4}\\
&= \max_{i,j} \sum_k \left|p\left(k\vert j,i\right) - p^*\left(k\vert j,i\right)\right|\label{eq:dist5}\\
&=\mathcal{D}\left(B\right).
\end{aligned}$$
Eq. \[eq:dist1\] follows from the definition of $\mathcal{D}\left(T(B)\right)$ and the fact that $T\left(B^*\right) \in \mathsf{NC}\left(\mathcal{S}\right)$, Eq. \[eq:dist2\] follows from the definition of $T\in \mathcal{F}$, Eq. \[eq:dist3\] follows from the triangle inequality for the $\ell_1$ norm, Eq. \[eq:dist4\] follows from $\sum_{\tilde{k}} q_{O}\left(\tilde{k}\vert k\right)=1$, and Eq. \[eq:dist5\] follows from the fact that the average is smaller than the largest value.
The $\ell_1$-contextuality distance is subadditive under independent juxtapositions.
\[thm:D\_add\] Given two behaviors $B_1$ and $B_2$, we have that $$\mathcal{D}\left(B_1 \otimes B_2\right) \leq \mathcal{D}\left(B_1 \right)+ \mathcal{D}\left(B_2\right).$$
Let $B_1^*$ and $B_2^*$ be the behaviors achieving the minimum in Eq. \[eq:defD\] for $B_1$ and $B_2$, respectively. Then, we have $$\begin{aligned}
\mathcal{D}\left(B_1 \otimes B_2\right)&\leq D\left(B_1\otimes B_2,B_1^*\otimes B_2^*\right) \label{eq:D_ad1}\\
&= \max_{i_1,i_2,j_1,j_2} \sum_{k_1,k_2} \left|p\left(k_1k_2\vert j_1j_2, i_1i_2\right)-p^*\left(k_1k_2\vert j_1j_2, i_1i_2\right)\right|\label{eq:D_ad2}\\
&=\max_{i_1,i_2,j_1,j_2} \sum_{k_1,k_2} \left|p\left(k_1\vert j_1, i_1\right)p\left(k_2\vert j_2, i_2\right)-p^*\left(k_1\vert j_1, i_1\right)p^*\left(k_2\vert j_2, i_2\right)\right|\label{eq:D_ad3}\\
&\leq \max_{i_1,i_2,j_1,j_2} \sum_{k_1,k_2} \left[\left|p\left(k_1\vert j_1, i_1\right)p\left(k_2\vert j_2, i_2\right)-p^*\left(k_1\vert j_1, i_1\right)p\left(k_2\vert j_2, i_2\right)\right|\right.\nonumber \\
& \hspace{8em} + \left.\left|p^*\left(k_1\vert j_1, i_1\right)p\left(k_2\vert j_2, i_2\right)-p^*\left(k_1\vert j_1, i_1\right)p^*\left(k_2\vert j_2, i_2\right)\right|\right]\label{eq:D_ad4}\\
&\leq \max_{i_1,i_2,j_1,j_2} \left[\sum_{k_1, k_2} p\left(k_2\vert j_2, i_2\right)\left|p\left(k_1\vert j_1, i_1\right)-p^*\left(k_1\vert j_1, i_1\right)\right|\right.\nonumber \\
& \hspace{8em} + \left.\sum_{k_1,k_2}p^*\left(k_1\vert j_1, i_1\right)\left|p\left(k_2\vert j_2, i_2\right)-p^*\left(k_2\vert j_2, i_2\right)\right|\right]\label{eq:D_ad5} \\
&= \max_{i_1,i_2,j_1,j_2} \left[\sum_{k_1}\left|p\left(k_1\vert j_1, i_1\right)-p^*\left(k_1\vert j_1, i_1\right)\right|
+ \sum_{k_2}\left|p\left(k_2\vert j_2, i_2\right)-p^*\left(k_2\vert j_2, i_2\right)\right|\right]\label{eq:D_ad6} \\
&=\mathcal{D}\left(B_1 \right)+ \mathcal{D}\left(B_2\right)
\end{aligned}$$ Eq. \[eq:D\_ad1\] follows from the definition of $\mathcal{D}\left(B_1 \otimes B_2\right)$ and the fact that $B_1^*\otimes B_2^* \in \mathsf{NC}\left(\mathcal{S}\right)$, Eqs. \[eq:D\_ad2\] and \[eq:D\_ad3\] follow from the definitions of $D$ and $B_1\otimes B_2$, Eq. \[eq:D\_ad4\] follows from the triangle inequality, Eq. \[eq:D\_ad5\] follows from basic algebra, Eq. \[eq:D\_ad6\] follows from the normalization of the distributions, and the final equality follows from the fact that $B_1^*$ and $B_2^*$ are the behaviors achieving the minimum in Eq. \[eq:defD\] for $B_1$ and $B_2$, respectively.
Trace Distance
--------------
Instead of taking the maximum over preparations and measurements in Eq. \[eq:defD\_box\], we can take the average value to define the uniform $D$-contextuality distance $$\label{eq:def_trace_distance}
\mathcal{D}_u(B):= \frac{1}{2 I J}\min_{B^{\prime} \in \mathsf{NC}\left({\mathcal{S}}\right)}
D\left( B,B^{\prime} \right) $$ with $B^{\prime}$ taken over all noncontextual prepare-and-measure statistics, and $D$ being some distance defined over real vector spaces. Of special importance is the uniform contextuality distance defined by the trace norm $\ell_1$ [@NC00; @BAC17; @AT17].
Again, the trace distance $\mathcal{D}_u$ is subadditive under independent juxtapositions.
Given two behaviors $B_1$ and $B_2$, we have that $$\mathcal{D}_u\left(B_1 \otimes B_2\right) \leq \mathcal{D}_u\left(B_1 \right)+ \mathcal{D}_u\left(B_2\right).$$
The proof is analogous to the proof of Thm. \[thm:D\_add\].
Although $\mathcal{D}_u$ is not a monotone under the entire class of free operations $\mathcal{F}$, it is a suitable contextuality quantifier when the sets of preparations and measurements are fixed, with the advantage that, unlike $\mathcal{KL}$ and $\mathcal{D}$, the uniform contextuality distance $\mathcal{D}_u$ defined with the $\ell_1$ norm can be computed efficiently using linear programming.
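The linear program behind this computation can be sketched as follows. Assuming the noncontextual set is supplied as a list of its vertices (an input of this illustrative example, not constructed here), minimizing the $\ell_1$ distance from a flattened behavior $b$ to the convex hull of those vertices is a standard LP with slack variables $s_m \geq |b_m - (V^T\lambda)_m|$; the $1/(2IJ)$ prefactor follows the normalization of Eq. \[eq:def\_trace\_distance\]:

```python
import numpy as np
from scipy.optimize import linprog

def uniform_l1_distance(b, vertices, n_preps, n_meas):
    """Minimum l1 distance from the flattened behavior b to the convex hull
    of `vertices` (array of shape (n_v, m)), scaled by 1/(2 I J)."""
    V = np.asarray(vertices, float)
    b = np.asarray(b, float)
    n_v, m = V.shape
    # Variables: hull weights lambda (n_v entries) and slacks s (m entries).
    c = np.concatenate([np.zeros(n_v), np.ones(m)])  # minimize sum of slacks
    # Encode -s <= b - V^T lambda <= s as two blocks of inequalities.
    A_ub = np.block([[-V.T, -np.eye(m)],
                     [V.T, -np.eye(m)]])
    b_ub = np.concatenate([-b, b])
    A_eq = np.concatenate([np.ones(n_v), np.zeros(m)])[None, :]  # sum lambda = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return res.fun / (2 * n_preps * n_meas)
```

As a sanity check, for a one-dimensional "polytope" with vertices $\{0,1\}$, the point $b=2$ lies at $\ell_1$ distance $1$ from the hull, while $b=1/4$ lies inside it.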
Conclusion {#sec:conc}
==========
Motivated by the recognition of contextuality as a potential resource for computation and information processing, we develop a resource theory for generalized contextuality that can be applied to arbitrary prepare-and-measure experiments. We introduce a minimal set of free operations (minimal in the sense that any other possible, and physically meaningful, set of physical operations should contain ours as a subset) with a clear operational interpretation and explicit analytical parametrization, and show that several natural contextuality quantifiers are indeed monotones under this class of free operations. With the recognition that membership testing in the set of noncontextual prepare-and-measure statistics can be done efficiently using linear programming, many of these quantifiers can also be computed efficiently in the same way. This framework is useful to classify, quantify, and manipulate contextuality as a formal resource. It would be interesting to investigate whether there is a maximally contextual single prepare-and-measure statistics that serves as a contextuality bit for all scenarios, or to identify the simplest scenario admitting inequivalent (not freely interconvertible) classes of contextuality. Another important issue is to investigate protocols for contextuality distillation relying only on the set of free operations. This framework provides a new interpretation of generalized contextuality, now considered a useful resource rather than an odd feature exhibited by quantum physics [@CSW10; @CSW14; @ATC14]. Indeed, as has occurred with entanglement [@BG15; @HHHH09; @RK17; @SHN16] over the years, we expect that works like the present one can shed new light on the phenomenon, providing new insights and making it easier to understand.
The authors thank Marcelo Terra Cunha for all his suggestions on the manuscript, and the International Institute of Physics for its support and hospitality. BA and CD also acknowledge financial support from the Brazilian ministries MEC and MCTIC, and CNPq.
---
abstract: 'We discuss the most general formulation of the Two-Higgs doublet model, which incorporates flavor changing neutral scalar interactions (FCNSI) and CP violation (CPV) from several sources. CP violation can arise either from Yukawa terms or from the Higgs potential, be it explicit or spontaneous. We show how the model, which is denoted as 2HDM-X, reduces to some versions known in the literature (2HDM-I,II,III), as well as some of their variants (top, lepton, dark), denoted here as 2HDM-IV. We also discuss another limit that includes CPV and Yukawa four-textures to control FCNSI, which we denote as 2HDM-V. We evaluate the CPV asymmetry for the decay $h\to bcW$, which may allow one to test the patterns of FCNSI and CPV that arise in these models.'
author:
- 'J.L. Díaz-Cruz'
- 'A. Díaz-Furlong'
- 'J.H. Montes de Oca Y.'
title: 'The general Two-Higgs doublet eXtensions of the SM: a saucerful of secrets'
---
Introduction
============
Despite the success of the Standard Model (SM) in the gauge and fermion sectors, the Higgs sector remains the least tested aspect of the model [@Gunion:1989we], which leaves the puzzles associated with the mechanism of electroweak symmetry breaking (EWSB) still unsolved. On one hand, the analysis of radiative corrections within the SM [@Erler:2010wa; @Flacher:2008zq; @DiazCruz:2003qs; @Baur:2002gp] points towards the existence of a Higgs boson, with a mass of the order of the EW scale, which in turn could be detected at the LHC [@Carena:2002es; @Nath:2010zj]. On the other hand, the SM is often considered as an effective theory, valid up to an energy scale of $O(TeV)$, that eventually will be replaced by a more fundamental theory [@Bustamante:2009us], which will explain, among other things, the physics behind EWSB and perhaps even the origin of flavor. Many examples of candidate theories, which range from supersymmetry [@Ellis:2010wx; @susyrev; @Haber:2000jh] to strongly interacting models [@ArkaniHamed:2001nc; @Aranda:2007tg] as well as some extra dimensional scenarios [@Aranda:2002dz; @Chang:2010et; @Iltan:2007zz], include a multi-scalar Higgs sector. In particular, models with two scalar doublets have been studied extensively [@Haber:1978jt; @Liu:1987ng; @Wu:1994ja], as they include a rich structure with interesting phenomenology [@Carena:2000yx; @Ginzburg:2002wt; @Ginzburg:2004vp].
Several versions of the 2HDM have been studied in the literature [@Accomando:2006ga]. Some models (known as 2HDM-I and 2HDM-II) involve natural flavor conservation [@Glashow:1976nt], while other models (known as 2HDM-III) [@Accomando:2006ga], allow for the presence of flavor changing scalar interactions (FCNSI) at a level consistent with low-energy constraints [@DiazCruz:2004tr]. There are also some variants (known as top, lepton, neutrino), where one Higgs doublet couples predominantly to one type of fermion [@Atwood:2005bf], while in other models it is even possible to identify a candidate for dark matter [@DiazCruz:2007be]. The definition of all these models, depends on the Yukawa structure and symmetries of the Higgs sector [@DiazCruz:2004ss; @Aranda:2005st; @DiazCruz:2006ki; @Barbieri:2005kf; @Froggatt:2007qp], whose origin is still not known. The possible appearance of new sources of CP violation is another characteristic of these models [@Ginzburg:2005yw].
In this paper we aim to discuss the most general version of the Two-Higgs doublet model (2HDM), which incorporates flavor or CP violation from all possible sources [@Maniatis:2007vn; @Gerard:2007kn; @ElKaffas:2007rq]. We also discuss how the general model, denoted here as 2HDM-X, reduces in certain limits to the versions known as 2HDM-I,II,III, as well as some of their variants, which we shall name 2HDM-IV and 2HDM-V. The logic of the naming scheme that we adopt here consists in identifying distinctive physical characteristics that can be associated with the models and have sufficient merit to single them out.
Within model I (2HDM-I), only one Higgs doublet generates all gauge and fermion masses [@Carena:2002es], while the second doublet only knows about this through mixing; thus the Higgs phenomenology will share some similarities with the SM, although the SM Higgs couplings will now be shared among the neutral scalar spectrum. The presence of a charged Higgs boson is clearly a signal beyond the SM. Within the 2HDM-II one also has natural flavor conservation [@Glashow:1976nt], and its phenomenology will be similar to that of the 2HDM-I, although in this case the SM couplings are shared not only because of mixing, but also because of the Yukawa structure. On the other hand, the distinctive characteristic of the 2HDM-III is the presence of FCNSI, which require a certain mechanism in order to suppress them; for instance, one can impose a certain texture on the Yukawa couplings [@Fritzsch:1977za], which will then predict a pattern of FCNSI Higgs couplings [@Cheng:1987rs]. Within all these models (2HDM I, II, III) [@Carcamo:2006dp; @Zhou:2003kd; @Aoki:2009ha], the Higgs doublets couple, in principle, with all fermion families, with a strength proportional to the fermion masses, modulo other parameters.
There are also other models, in which the Higgs doublets couple non-universally to the fermion families, that have been discussed in the literature [@Atwood:2005bf; @Logan:2010ag; @Logan:2009uf]; we shall denote this class of family non-universal models as 2HDM-IV. In principle, the general model includes CPV, which could arise from the same CPV phase that appears in the CKM matrix, as in the SM, from some other extra phase coming from the Yukawa sector, or from the Higgs potential [@Gunion:2005ja]. However, in order to discuss which type of CP violation can appear in each case, we shall use the label 2HDM-V to denote the class of models which, besides containing a generic pattern of FCNSI, modulated by a certain texture, will include new sources of CPV as well.
Our formulation of the 2HDM-X, which is discussed in detail in section 2, relies on the Higgs mass-eigenstate basis [@DiazCruz:1992uw]. It seems to us that this is more appropriate in order to relate the low-energy constraints on the parameters of the models with the predicted high-energy signatures to be searched for at future colliders. The different models can be characterized by a set of invariants signaling the possible appearance of CP violation [@Ginzburg:2005yw], either from the bosonic or fermionic sectors. Section 3 contains the discussion of general Yukawa couplings, with spontaneous or explicit CPV parameters for the Higgs bosons. We then show in section 4 our evaluation of the CPV asymmetry for the decay $h\to bcW$, which may allow one to test these patterns of FCNSI and CPV at future colliders. Finally, our conclusions are included in section 5, while some technical details are included in the appendices.
A General formulation of the 2HDM and its limiting cases.
=========================================================
The Two-Higgs doublet extension of the SM includes two scalar doublets of equal hypercharge, denoted by: $\Phi_{1,2}=(\phi^+_{1,2}, \phi^0_{1,2})^T$. Depending on the Yukawa matrices $Y^q_{1,2}$ ($q=u,$ $d$) that are allowed, one defines the particular versions of the 2HDM. FCNSI appear at tree level when more than one Higgs doublet couples to both types of quarks ($u$ and $d$) [@susyrev], and a certain mechanism should be invoked in order to bring them under control. The CP properties of the Higgs boson depend on the symmetries of the potential [@Gunion:2005ja].
In order to clarify the discussion of the many models that have been presented in the literature, we shall present in the next subsections a classification scheme for these models, which have different patterns of FCNSI and CP properties. We shall first discuss in this paper the most general formulation of the 2HDM-X, and will consider the 2HDM versions usually discussed in the literature, which are known as 2HDM-I,II,III, as well as some variants (2HDM-IV). We shall then discuss in detail some cases that have not been considered before, which we denote as 2HDM-V. Although the 2HDM-X suffers from the FCNSI problem, we shall discuss it first in general terms, without referring to the specific mechanism that is used to address the problem, which will be done later in this section.
A classification of models
--------------------------
We shall define here the different types of models according to their Yukawa structure, the Hermiticity of the Yukawa matrices and the CP properties of the bosonic Higgs sector. Thus, the most general version of the 2HDM is defined through the following assumptions:
i) In principle we allow each Higgs doublet to couple to both type of fermions, expecting that some particular structure of the Yukawa matrices is responsible for the suppression of flavor changing neutral scalar interactions (FCNSI).
ii) The Yukawa matrices are allowed in general to be non-Hermitian, i.e., $Y_{fi}\neq Y_{fi}^\dagger$ ($f=$u, d, l, $i=1,2$). The limit in which the Yukawa matrices are Hermitian defines a particular version of the models.
iii) The Higgs potential admits in principle both spontaneous or explicit CPV.
Then the known limiting models (2HDM I,II,III), are obtained by relaxing some of those assumptions, namely:
1. The 2HDM-I [@Lee:1973iz; @Haber:1978jt; @Hall:1981bc] is defined by considering that only one Higgs doublet generates the masses of all types of fermions, as it happens in the SM. This type of model can be obtained by assuming an additional $Z_2$ discrete symmetry. Under a variant of this model, where the second doublet does not mix with the first doublet, it is possible to identify a neutral scalar as a dark matter candidate [@Accomando:2006ga], which makes it very attractive.
2. For the so-called 2HDM-II [@Donoghue:1978cj; @Haber:1978jt; @Hall:1981bc], each Higgs doublet couples only to one type of quark, and then FCNSI do not appear at tree level. A variant of the $Z_2$ discrete symmetry is considered here, similarly to the case of the 2HDM-I. Two limiting cases can be considered, namely: the 2HDM-IIa, with a CP-conserving Higgs sector, and the 2HDM-IIb, where the Higgs sector is CP violating [@Osland:2008aw]. This model is also attractive because it corresponds to the Higgs sector of the Minimal Supersymmetric Standard Model (MSSM) at tree level [@Carena:2002es].
3. Within the 2HDM-III [@Cheng:1987rs], one considers all possible couplings among the Higgs doublets and fermions in the Yukawa sector; thus, it is possible to have FCNSI in this case. According to the extended classification that we try to motivate here, we shall also assume that within the 2HDM-III the Yukawa matrices are Hermitian, whereas the Higgs potential is CP conserving. Thus, CP violation only arises from the CKM phase. A particular version of this model, widely studied in the literature, assumes Hermitian Yukawa matrices with 4-textures [@Fritzsch:1977za], which keeps the FCNSI problem under control [@Cheng:1987rs]. It also happens that when one considers loop effects within the MSSM, its Higgs sector also becomes of type III [@DiazCruz:2002er], which again makes this version of the 2HDM attractive. Although it is not often explicitly stated, we shall consider that within the 2HDM-III the Higgs doublets couple in principle with all three families of quarks and leptons.
4. Family non-universal assignments are also possible, for instance we can have models where one doublet couples to all types of quarks, but a second doublet only couples to the 3rd family [@Atwood:2005bf]. Several possibilities have been considered in the literature, which we denote as 2HDM-IV, depending on whether the second doublet couples to the whole third family or only to the top (2HDM-IV-t) [@Logan:2010ag; @Logan:2009uf]. We shall also include within this category, those models where one doublet couples only to charged leptons or to neutrinos [@Logan:2010ag; @Logan:2009uf].
The properties of these models are summarized in tables \[table-0\], \[table-1\]. Table \[table-1\] shows the assumptions for the different types of the 2HDM which are considered in this work. Besides the cases that have been discussed in the literature, we can define yet another class of models, which we denote as 2HDM-V, which, besides having FCNSI at tree level, also includes extra sources of CP violation, either from the Yukawa or the Higgs sectors. Within this class, we shall consider the following sub-cases:
(a) The 2HDM-Va has Hermitian Yukawa matrices, but the Higgs sector is CP violating. To work out a concrete example we shall also consider a four-texture for the Yukawa matrices.
(b) For the 2HDM-Vb we will not assume Hermiticity of the Yukawa matrices, while the Higgs sector is CP conserving. Again, to work out a specific example we shall consider the four-texture case for the Yukawa matrices.
Model type Up quarks Down quarks Charged leptons Neutral leptons
------------ ----------- ------------- ----------------- -----------------
2HDM-I $H_1$ $H_1$ $H_1$ $H_1$
2HDM-II $H_2$ $H_1$ $H_1$ $H_2$
2HDM-III $H_{1,2}$ $H_{1,2}$ $H_{1,2}$ $H_{1,2}$
2HDM-IV $H_1$ $H_1$ $H_1$ $H_2$
: Higgs interaction with fermions for 2HDM types.[]{data-label="table-0"}
Model type FCNC Hermiticity Higgs sector CP
------------ -------------- -------------- --------------------
I x – –
II x – –
III $\checkmark$ $\checkmark$ $\checkmark$ (CKM)
Va $\checkmark$ $\checkmark$ x
Vb $\checkmark$ x $\checkmark$
: Symmetries under the different types of 2HDM’s[]{data-label="table-1"}
Model type $1^{\textrm{st}}$ family $2^{\textrm{nd}}$ family $3^{\textrm{rd}}$ family
--------------------- --------------------------------------- --------------------------------------- ---------------------------------------
I $H_1$ $H_1$ $H_1$
II $H_2\rightarrow u$ quarks $H_2\rightarrow u$ quarks $H_2\rightarrow u$ quarks
$H_1\rightarrow d$ quarks and leptons $H_1\rightarrow d$ quarks and leptons $H_1\rightarrow d$ quarks and leptons
III $H_1$ and $H_2$ $H_1$ and $H_2$ $H_1$ and $H_2$
IV-$t$, $b$, $\tau$ $H_1$ $H_1$ $H_2$
V $H_1$ and $H_2$ $H_1$ and $H_2$ $H_1$ and $H_2$
: Higgs couplings to the fermion families.[]{data-label="table-xtra"}
Solutions to the FCNC problem
-----------------------------
When both Higgs doublets couple to up- and down-type fermions, FCNSI are allowed [@susyrev]. An acceptable suppression for FCNSI can be achieved with the following mechanisms:
- *Universal Yukawa textures.* Suppression of FCNC can be achieved when a certain form of the Yukawa matrices that reproduces the observed fermion masses and mixing angles is implemented in the model. This could be done either by implementing the Froggatt-Nielsen mechanism to generate the fermion mass hierarchies [@FN], or by studying a certain ansatz for the fermion mass matrices [@Fritzsch:1977za]. The first proposal for the Higgs boson couplings [@Cheng:1987rs], the so-called Cheng-Sher ansatz, was based on the Fritzsch six-texture form of the mass matrices, namely: $$M_{q}=\left(
\begin{array}{ccc}
0 & C_{q} & 0 \\
C_{q}^{\ast } & 0 & B_{q} \\
0 & B_{q}^{\ast } & A_{q}\end{array}\right) .$$Then, by assuming that each Yukawa matrix $Y_{1,2}^{q}$ has the same hierarchy, one finds: $A_{q}\simeq m_{q_{3}}$, $B_{q}\simeq \sqrt{m_{q_{2}}m_{q_{3}}}$ and $C_{q}\simeq \sqrt{m_{q_{1}}m_{q_{2}}}$. Then, the fermion-fermion$^{\prime }$-Higgs boson couplings obey the following pattern: $Hf_{i}f_{j} \sim
\sqrt{m_{f_i}m_{f_j}} / m_{W}$, which is also known as the Cheng-Sher ansatz. This brings the FCNC problem under control, and it has been extensively studied in the literature to search for flavor-violating signals in the Higgs sector. In our previous work we considered in detail the case of universal four-texture Yukawa matrices [@DiazCruz:2004tr], and derived the scalar-fermion interactions, showing that it was possible to satisfy current constraints from LFV and FCNC [@ourwork1; @ourwork2]. Predictions for Higgs phenomenology at the LHC were also studied in Refs. [@ourwork3; @otherswork1]. We can consider this a universal model, in the sense that it was assumed that each Yukawa matrix $Y^q_{1,2}$ has the same hierarchy.
- *Radiative Suppression of FCNC.* One could keep FCNC under control if there exists a hierarchy between $Y^{u,d}_1$ and $Y^{u,d}_2$. Namely, a given set of Yukawa matrices is present at tree-level, but the other ones arise only as a radiative effect. This occurs for instance in the MSSM, where the type-II 2HDM structure is not protected by any symmetry, and is transformed into a type-III 2HDM, through the loop effects of sfermions and gauginos. Namely, the Yukawa couplings that are already present at tree-level in the MSSM ($Y^d_1, Y^u_2$) receive radiative corrections, while the terms ($Y^d_2, Y^u_1$) are induced at one-loop level.
- *Alignment of the Yukawa matrices.* Another solution to the FCNC problem that has been discussed recently assumes that the Yukawa matrices could be aligned [@Pich:2009sp; @Jung:2010ik]. However, it seems that if such an assumption holds at a high energy scale (much above the EW scale), it no longer holds at a low-energy scale [@Braeuninger:2010td].
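The hierarchy $A_{q}\simeq m_{q_{3}}$, $B_{q}\simeq \sqrt{m_{q_{2}}m_{q_{3}}}$, $C_{q}\simeq \sqrt{m_{q_{1}}m_{q_{2}}}$ quoted above for the six-texture ansatz can be checked numerically. The following sketch (illustrative only; the rounded up-type quark masses in GeV are inputs of this example, not a fit) diagonalizes a real six-texture matrix and verifies that the absolute eigenvalues reproduce $(m_1, m_2, m_3)$ to within a few per cent:

```python
import numpy as np

# Illustrative target masses (GeV), roughly (m_u, m_c, m_t).
m1, m2, m3 = 0.002, 1.27, 173.0

# Six-texture ansatz: A ~ m3, B ~ sqrt(m2 m3), C ~ sqrt(m1 m2).
A, B, C = m3, np.sqrt(m2 * m3), np.sqrt(m1 * m2)
M = np.array([[0.0, C, 0.0],
              [C, 0.0, B],
              [0.0, B, A]])

# Physical masses are the absolute eigenvalues, sorted by magnitude.
masses = np.sort(np.abs(np.linalg.eigvalsh(M)))
print(masses)  # close to (m1, m2, m3), to within a few per cent
```

The residual deviations (below the per-cent level for the lightest mass) reflect the fact that the texture relations for $A$, $B$, $C$ are only leading-order in the mass ratios.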
The lagrangian for the 2HDM
===========================
The most general structure of the Yukawa lagrangian for the quark fields, can be written as follows:
$$\mathcal{L}_{Y}^{quarks}=\overline{q}_{L}^{0}Y_{1}^{D}\phi _{1}d_{R}^{0}+\overline{q}_{L}^{0}Y_{2}^{D}\phi _{2}d_{R}^{0}+\overline{q}_{L}^{0}Y_{1}^{U}\widetilde{\phi }_{1}u_{R}^{0}+\overline{q}_{L}^{0}Y_{2}^{U}\widetilde{\phi }_{2}u_{R}^{0}+h.c., \label{yukawa}$$
where $Y_{1,2}^{U,D}$ are the $3\times 3$ Yukawa matrices, $q_{L}$ denotes the left-handed quark doublets and $u_{R}$, $d_{R}$ correspond to the right-handed singlets. Here $\widetilde{\phi
}_{1,2}=i\sigma _{2}\phi _{1,2}^{\ast }$. The superscript zero means that the quarks are weak eigenstates. After obtaining a correct SSB vacuum [@PortugalMinHix; @Ivanov:2006yq; @Maniatis:2006fs; @Ma:2010ya], the Higgs doublets are decomposed as follows:
$$\phi _{1}=\left(
\begin{array}{c}
\varphi _{1}^{+} \\
\frac{v_{1}+\varphi _{1}+i\chi _{1}}{\sqrt{2}}\end{array}\right) , \label{doblete1}$$
$$\phi _{2}=\left(
\begin{array}{c}
\varphi _{2}^{+} \\
\frac{e^{i\xi }v_{2}+\varphi _{2}+i\chi _{2}}{\sqrt{2}}\end{array}\right) . \label{doblete2}$$
where the v.e.v.’s $v_{1}$ and $v_{2}$ are real and positive, while the phase $\xi $ introduces spontaneous CP violation. Now, we transform the quarks to the mass eigenstate basis through the rotations: $u_{L,R}=U_{L,R}u_{L,R}^{0}$ , $d_{L,R}=D_{L,R}d_{L,R}^{0}$, to obtain:
$$\begin{aligned}
\mathcal{L}_{Y}^{quarks} &=&\overline{u}_{L}U_{L}Y_{1}^{D}\varphi
_{1}^{+}D_{R}^{\dagger
}d_{R}+\overline{d}_{L}D_{L}Y_{1}^{D}\frac{\varphi
_{1}+i\chi _{1}}{\sqrt{2}}D_{R}^{\dagger }d_{R} \nonumber \\
&&+\overline{u}_{L}U_{L}Y_{2}^{D}\varphi _{2}^{+}D_{R}^{\dagger }d_{R}+\overline{d}_{L}D_{L}Y_{2}^{D}\frac{\varphi _{2}+i\chi _{2}}{\sqrt{2}}D_{R}^{\dagger }d_{R} \nonumber\\
&&+\overline{u}_{L}U_{L}Y_{1}^{U}\frac{\varphi _{1}-i\chi _{1}}{\sqrt{2}}U_{R}^{\dagger }u_{R}-\overline{d}_{L}D_{L}^{\dagger
}Y_{1}^{U}\varphi
_{1}^{-}U_{R}^{\dagger }u_{R} \nonumber \\
&&+\overline{u}_{L}U_{L}Y_{2}^{U}\frac{\varphi _{2}-i\chi _{2}}{\sqrt{2}}U_{R}u_{R}-\overline{d}_{L}D_{L}^{\dagger }Y_{2}^{U}\varphi
_{2}^{-}U_{R}^{\dagger }u_{R} \nonumber\\
&&+\overline{u}_{L}M^{U}u_{R}+\overline{d}_{L}M^{D}d_{R}+h.c.,\end{aligned}$$
Then, the (diagonal) mass matrices are given as follows:
$$M^{U}=\frac{v_{1}}{\sqrt{2}}\widetilde{Y}_{1}^{U}+e^{-i\xi }\frac{v_{2}}{\sqrt{2}}\widetilde{Y}_{2}^{U} \label{mu}$$
and $$M^{D}=\frac{v_{1}}{\sqrt{2}}\widetilde{Y}_{1}^{D}+e^{i\xi }\frac{v_{2}}{\sqrt{2}}\widetilde{Y}_{2}^{D}, \label{md}$$
where $\widetilde{Y}_{1,2}^U=U_LY_{1,2}^UU_R^\dagger$ and $\widetilde{Y}_{1,2}^D=D_LY_{1,2}^D D_R^\dagger$. Then, one can split the Yukawa couplings into the neutral and charged terms, both for the up and down sectors. The neutral couplings for the up sector are given in terms of (four-component) Dirac spinors as follows:
$$\begin{aligned}
\mathcal{L}_{up}^{neutral}
&=&\overline{u}\widetilde{Y}_{1}^{U}\frac{\varphi
_{1}-i\chi _{1}}{\sqrt{2}}P_{R}u+\overline{u}\widetilde{Y}_{2}^{U}\frac{\varphi _{2}-i\chi _{2}}{\sqrt{2}}P_{R}u \nonumber \\
&&+\overline{u}\widetilde{Y}_{1}^{U\dagger }\frac{\varphi _{1}+i\chi _{1}}{\sqrt{2}}P_{L}u+\overline{u}\widetilde{Y}_{2}^{U\dagger
}\frac{\varphi
_{2}+i\chi _{2}}{\sqrt{2}}P_{L}u \nonumber \\
&&+\overline{u}M^{U}u,\end{aligned}$$
where $P_{L,R}=\frac{\mathbb{I}\mp \gamma ^{5}}{2}$ are the chiral projection operators. In order to arrive at the final form of the Yukawa Lagrangian we need to introduce the Higgs mass eigenstates. When one allows for the possibility of CP violation in the Higgs potential, the CP-even and CP-odd components mix [@Haber:2006ue]. This CPV Higgs mixing is included as follows,
$$\left(
\begin{array}{c}
\varphi _{1} \\
\varphi _{2} \\
\chi _{1} \\
\chi_{2}\end{array}
\right) =R\left(
\begin{array}{c}
H_{1} \\
H_{2} \\
H_{3} \\
H_{4}\end{array}\right) . \label{higgs3}$$
where $H_{4}$ is a Goldstone boson. The matrix $R$ can be obtained by relating equations (\[doblete1\]) and (\[doblete2\]) with the physical Higgs mass eigenstates $$\Phi _{a}=\left(
\begin{array}{c}
\begin{array}{c}
G^{+}\hat{v}_{a}+H^{+}\hat{w}_{a} \\
\frac{v}{\sqrt{2}}\hat{v}_{a}+\frac{1}{\sqrt{2}}\sum_{r=1}^4 \left(
q_{r1}\hat{v}_{a}+q_{r2}e^{-i\theta _{23}}\hat{w}_{a}\right) H_{r}\end{array}\right) , \label{phys-eigen}$$
where $a=1,2$, $r=1,...,4$ and $\hat{v}_{a}$, $\hat{w}_{a}$ are the components of the orthogonal eigenvectors of unit norm[^1]
$$\widehat{v}=\left(
\begin{array}{cc}
\hat{v}_{1}, & \hat{v}_{2}\end{array}\right) =\left(
\begin{array}{cc}
\cos \beta , & e^{i\xi }\sin \beta
\end{array}\right) \label{v}$$
and$$\widehat{w}=\left(
\begin{array}{cc}
\hat{w}_{1}, & \hat{w}_{2}\end{array}\right) =\left(
\begin{array}{cc}
-e^{-i\xi }\sin \beta , & \cos \beta
\end{array}\right) . \label{w}$$The values of $q_{ra}$ are written as combinations of the $\theta
_{ij}$, which are the mixing angles appearing in the rotation matrix that diagonalizes the neutral Higgs mass matrix; table \[table-2\] shows the different values of the $q_{ra}$’s.
$r$ $q_{r1}$ $q_{r2}$
----- -------------------------------------- -------------------------------------------------------
1     $\cos \theta_{12}\cos \theta _{13}$    $-\sin \theta _{12}-i\cos \theta _{12}\sin\theta_{13}$
2     $\sin \theta _{12}\cos \theta _{13}$   $\cos \theta _{12}-i\sin \theta _{12}\sin\theta_{13}$
3 $\sin \theta _{13}$ $i\cos \theta _{13}$
4 $i$ 0
: Mixing angles for Higgs bosons which consider spontaneous and explicit CPV [@Haber:2006ue].[]{data-label="table-2"}
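Since the $q_{ra}$ descend from the orthogonal rotation $R$, the entries of table \[table-2\] obey the sum rules $\sum_r |q_{r1}|^2=\sum_r |q_{r2}|^2=2$ and $\sum_r q_{r1}q_{r2}^{\ast}=0$. A minimal numerical sketch of this cross-check (the angle values below are arbitrary test inputs, not fits):

```python
import numpy as np

# Arbitrary test angles for the neutral-Higgs mixing.
t12, t13 = 0.7, 0.3
c12, s12 = np.cos(t12), np.sin(t12)
c13, s13 = np.cos(t13), np.sin(t13)

# q_{r1} and q_{r2} as listed in table 2 (r = 1,...,4).
q1 = np.array([c12 * c13, s12 * c13, s13, 1j])
q2 = np.array([-s12 - 1j * c12 * s13,
               c12 - 1j * s12 * s13,
               1j * c13, 0.0])

# Orthonormality of R implies these sum rules over r:
print(np.sum(np.abs(q1) ** 2))         # -> 2
print(np.sum(np.abs(q2) ** 2))         # -> 2
print(abs(np.sum(q1 * q2.conj())))     # -> 0
```

The same identities hold for any choice of $\theta_{12}$, $\theta_{13}$.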
It is convenient to write the following relations: $$\varphi _{1}+i\chi _{1}=\sum_r \left( q_{r1}\cos \beta
-q_{r2}e^{-i\left( \theta _{23}+\xi \right) }\sin \beta \right)
H_{r} \label{n1}$$ and $$\varphi _{2}+i\chi _{2}=\sum_r \left( q_{r1}e^{i\xi }\sin \beta
+q_{r2}e^{-i\theta _{23}}\cos \beta \right) H_{r}. \label{n2}$$
Then, we arrive at the final general form of the neutral Higgs boson couplings to the up-type quarks:
$$\mathcal{L}_{up}^{neutral}=\overline{u}_{i}\left(S_{ijr}^{u}+\gamma^{5}P_{ijr}^{u}\right)
u_{j}H_{r}+\overline{u}_{i}M_{ij}^{U}u_{j}, \label{ygral}$$
with
$$\begin{aligned}
S_{ijr}^{u} &=&\frac{1}{2v}M_{ij}^{U}\left( q_{r1}^{\ast
}+q_{r1}-\tan \beta \left( q_{r2}^{\ast }e^{i\left( \theta _{23}+\xi
\right) }+q_{r2}e^{-i\left(
\theta _{23}+\xi \right) }\right) \right) \nonumber \\
&&+\frac{1}{2\sqrt{2}\cos \beta }\left( q_{r2}^{\ast }e^{i\theta _{23}}\widetilde{Y}_{2ij}^{U}+q_{r2}e^{-i\theta _{23}}\widetilde{Y}_{2ij}^{U\dagger }\right) \label{sugral}\end{aligned}$$
and
$$\begin{aligned}
P_{ijr}^{u} &=&\frac{1}{2v}M_{ij}^{U}\left( q_{r1}^{\ast
}-q_{r1}-\tan \beta \left( q_{r2}^{\ast }e^{i\left( \theta _{23}+\xi
\right) }-q_{r2}e^{-i\left(
\theta _{23}+\xi \right) }\right) \right) \nonumber \\
&&+\frac{1}{2\sqrt{2}\cos \beta }\left( q_{r2}^{\ast }e^{i\theta _{23}}\widetilde{Y}_{2ij}^{U}-q_{r2}e^{-i\theta _{23}}\widetilde{Y}_{2ij}^{U\dagger }\right). \label{pugral}\end{aligned}$$
Similarly, for the down-type quarks we find: $$\mathcal{L}_{down}^{neutral}=\overline{d}_{i}\left(
S_{ijr}^{d}+\gamma ^{5}P_{ijr}^{d}\right)
d_{j}H_{r}+\overline{d}_{i}M_{ij}^{D}d_{j},$$
with $$\begin{aligned}
S_{ijr}^{d} &=&\frac{1}{2v}M_{ij}^{D}\left[ q_{r1}+q_{r1}^{\ast
}-\tan \beta \left( q_{r2}^{\ast }e^{i\left( \theta _{23}+\xi
\right) }+q_{r2}e^{-i\left(
\theta _{23}+\xi \right) }\right) \right] \nonumber \\
&&+\frac{1}{2\sqrt{2}\cos \beta }\left( q_{r2}e^{-i\theta
_{23}}\widetilde{Y}_{2ij}^{D}+q_{r2}^{\ast }e^{i\theta
_{23}}\widetilde{Y}_{2ij}^{D\dagger }\right) \label{sdgral}\end{aligned}$$
and $$\begin{aligned}
P_{ijr}^{d} &=&\frac{1}{2v}M_{ij}^{D}\left[ q_{r1}-q_{r1}^{\ast
}+\tan \beta \left( q_{r2}^{\ast }e^{i\left( \theta _{23}+\xi
\right) }-q_{r2}e^{-i\left(
\theta _{23}+\xi \right) }\right) \right] \nonumber \\
&&+\frac{1}{2\sqrt{2}\cos \beta }\left( q_{r2}e^{-i\theta _{23}}\widetilde{Y}_{2ij}^{D}-q_{r2}^{\ast }e^{i\theta _{23}}\widetilde{Y}_{2ij}^{D\dagger
}\right). \label{pdgral}\end{aligned}$$ On the other hand, the Yukawa couplings for charged states are given by: $$\begin{aligned}
\mathcal{L}_{Y}^{H^+} &=&\overline{u}\left[ \varphi _{1}^{+}V\left(
\frac{\sqrt{2}}{v_{1}}M^{D}-e^{i\xi }\tan{\beta}\widetilde{Y}_{2}^{D}\right) \frac{\mathbb{I}+\gamma ^{5}}{2}+\varphi _{2}^{+}V\widetilde{Y}_{2}^{D}\frac{\mathbb{I}+\gamma ^{5}}{2}\right. \nonumber \\
&&\left. -\varphi _{1}^{+}\frac{\mathbb{I}-\gamma ^{5}}{2}\left( \frac{\sqrt{2}}{v_{1}}M^{U}-e^{i\xi }\tan{\beta}\widetilde{Y}_{2}^{U\dag
}\right) V-\varphi _{2}^{+}\frac{\mathbb{I}-\gamma ^{5}}{2}\widetilde{Y}_{2}^{U\dag }V\right] d \nonumber \\
&&+h.c.\end{aligned}$$
where $V$ denotes the CKM matrix. The physical eigenstates for the charged Higgs boson $(H^+)$ can be obtained through the following rotation:
$$\left(
\begin{array}{c}
\varphi _{1}^{\pm } \\
\varphi _{2}^{\pm }\end{array}\right) =\left(
\begin{array}{cc}
\cos \beta & -e^{\mp i\xi }\sin \beta \\
e^{\pm i\xi }\sin \beta & \cos \beta\end{array}\right) \left(
\begin{array}{c}
G^{\pm } \\
H^{\pm }\end{array}\right)$$
Therefore, the Yukawa couplings for charged Higgs are $$\begin{aligned}
\mathcal{L}_{Y}^{H^+} &=&\overline{u}\left[ H^{+}e^{-i\xi
}M^{U}V\frac{\mathbb{I} -\gamma ^{5}}{\sqrt{2}}-H^{+}e^{-i\xi
}VM^{D}\frac{
\mathbb{I}+\gamma ^{5}}{\sqrt{2}}\right. \nonumber \\
&&\left. +\frac{1}{\cos \beta }H^{+}\left( V\widetilde{Y}_{2}^{D}\frac{\mathbb{I}+\gamma ^{5}}{2}-\widetilde{Y}_{2}^{U\dag }V\frac{\mathbb{I}-\gamma ^{5}}{2}\right) \right] d \nonumber \\
&&+h.c. \label{lch}\end{aligned}$$
Some limiting cases
===================
The 2HDM-V with explicit CP violation (2HDM-Va)
-----------------------------------------------
In this case we assume the hermiticity condition for the Yukawa matrices, while CP is violated explicitly in the Higgs sector. For simplicity we shall consider that the Yukawa matrices obey a four-texture form.
As discussed in appendix \[app1\], the assumption of universal four-textures for the Yukawa matrices allows one to express one Yukawa matrix in terms of the quark masses and to parametrize the FCNSI in terms of the unknown coefficients $\chi _{ij}$, namely $\widetilde{Y}_{2ij}^{U}=\chi _{ij}\frac{\sqrt{m_{i}m_{j}}}{v}$, where the hermiticity condition reads $\chi _{ij}=\chi _{ji}^{\ast }$. These parameters can be constrained by considering all types of low-energy FCNC transitions. Although these constraints are quite strong for transitions involving the first and second families, as well as for the $b$ quark, it turns out that they are rather mild for the top quark. Then, from (\[sugral\]) and (\[pugral\]), one obtains within the 2HDM-Va the following expressions for the couplings of the neutral Higgs bosons with up-type quarks:
$$S_{ijr}^{u}=\frac{1}{2v}M_{ij}^{U}\left[ q_{r1}^{\ast }+q_{r1}-\tan
\beta
\left( q_{r2}^{\ast }+q_{r2}\right) \right] +\frac{\sqrt{m_{i}m_{j}}}{2\sqrt{2}v\cos \beta }\chi _{ij}\left( q_{r2}^{\ast }+q_{r2}\right)
\label{sva}$$
and $$P_{ijr}^{u}=\frac{1}{2v}M_{ij}^{U}\left[ q_{r1}^{\ast }-q_{r1}-\tan
\beta
\left( q_{r2}^{\ast }-q_{r2}\right) \right] +\frac{\sqrt{m_{i}m_{j}}}{2\sqrt{2}v\cos \beta }\chi _{ij}\left( q_{r2}^{\ast }-q_{r2}\right),
\label{pva}$$
similar expressions can be obtained for the down-type quarks and leptons, as well as for the charged Higgs couplings.
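The hierarchy of the resulting FCNSI couplings can be illustrated numerically; the sketch below evaluates $|\widetilde{Y}_{2ij}^{U}|=\sqrt{m_i m_j}/v$ for $|\chi_{ij}|=1$ (the quark masses and $v=246$ GeV below are indicative inputs, not fits), showing why only elements involving the top quark escape strong suppression:

```python
import itertools
import math

# Indicative up-type quark masses (GeV) and the electroweak vev.
m = {"u": 2.2e-3, "c": 1.27, "t": 171.2}
v = 246.0

# |Y~_2ij| = |chi_ij| sqrt(m_i m_j) / v, shown here for |chi_ij| = 1;
# e.g. the (c,t) element is ~0.06 while (u,c) is ~2e-4.
for qi, qj in itertools.combinations_with_replacement("uct", 2):
    print(qi, qj, round(math.sqrt(m[qi] * m[qj]) / v, 4))
```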
The Yukawa Lagrangian for the 2HDM-Vb
-------------------------------------
In this case we shall consider that the Higgs sector is CP conserving, while the Yukawa matrices could be non-hermitian. Without loss of generality, we can assume that $H_{3}$ is CP-odd, while $H_{1}$ and $H_{2}$ are CP-even. Then: $\cos \theta _{12}=\sin
\left( \beta -\alpha \right)$, $\sin \theta _{12}=\cos \left( \beta
-\alpha \right)$, $\sin \theta _{13}=0$, and $e^{-i\theta
_{23}}=1$. The mixing angles $\alpha $ and $\beta $ that appear in the neutral Higgs mixing correspond to the standard notation. The expressions (\[phys-eigen\]) for the neutral Higgs mass eigenstates can now be written in terms of the angles $\alpha $ and $\beta $:
$$\begin{aligned}
\Phi _{a}^{0} &=&\frac{1}{\sqrt{2}}\left( v+h^{0}\sin \left( \beta
-\alpha
\right) +H^{0}\cos \left( \beta -\alpha \right) +iG^{0}\right) \widehat{v}_{a} \nonumber \\
&&+\frac{1}{\sqrt{2}}\left( h^{0}\cos \left( \beta -\alpha \right)
-H^{0}\sin \left( \beta -\alpha \right) +iA^{0}\right)
\widehat{w}_{a}, \label{higgscpc}\end{aligned}$$
where $a=1,2$, and in the CP-conserving limit $\widehat{v}_{a}$ and $\widehat{w}_{a}$ have a vanishing phase, $\xi =0$. Additionally, when one assumes a four-texture for the Yukawa matrices, the Higgs-fermion couplings further simplify as $\widetilde{Y}_{2ij}^{U}=\chi
_{ij}\frac{\sqrt{m_{i}m_{j}}}{v}$. Then, the corresponding coefficients for the up sector and $h^0$ ($r=1$) are
$$S_{ij1}^{u}=\frac{1}{v}M_{ij}^{U}\left[ \sin (\beta -\alpha )-\tan
\beta \cos (\beta -\alpha )\right]
+\frac{\sqrt{m_{i}m_{j}}}{2\sqrt{2}v}\frac{\cos (\beta -\alpha
)}{\cos \beta }\left( \chi _{ij}+\chi _{ij}^{\dag }\right)
\label{svb}$$
$$P_{ij1}^{u}=\frac{\sqrt{m_{i}m_{j}}}{2\sqrt{2}v}\frac{\cos (\beta -\alpha )}{\cos \beta }\left( \chi _{ij}-\chi^{\dag }_{ij}\right) \label{pvb}$$
For $H^0$ $(r=2)$ one finds:
$$S_{ij2}^{u}=\frac{1}{v}M_{ij}^{U}\left[ \cos (\beta -\alpha )-\tan
\beta \sin (\beta -\alpha )\right]
+\frac{\sqrt{m_{i}m_{j}}}{2\sqrt{2}v}\frac{\sin (\beta -\alpha
)}{\cos \beta }\left( \chi _{ij}+\chi _{ij}^{\dag }\right) ,$$
$$P_{ij2}^{u}=\frac{\sqrt{m_{i}m_{j}}}{2\sqrt{2}v}\frac{\sin (\beta -\alpha )}{\cos \beta }\left( \chi _{ij}-\chi _{ij}^{\dag }\right)$$
Finally, for $A^0$ $(r=3)$ one obtains:
$$S_{ij3}^{u}=i\frac{\sqrt{m_{i}m_{j}}}{2\sqrt{2}v\cos \beta }\left(
\chi _{ij}-\chi _{ij}^{\dag }\right) , \label{svb3}$$
$$P_{ij3}^{u}=\frac{i}{2v}M_{ij}^{U}\tan \beta -i\frac{\sqrt{m_{i}m_{j}}}{2\sqrt{2}v\cos \beta }\left( \chi _{ij}+\chi _{ij}^{\dag }\right)
\label{pvb3}$$
The 2HDM of type I, II and III
------------------------------
It is interesting, and illustrative, to consider the limit in which the general model reduces to the 2HDM-III. Within the 2HDM-III the Yukawa matrices obey a four-texture form and satisfy $Y_{f}=Y_{f}^{\dagger }$, namely: $$\widetilde{Y}_{2ij}^{U}=\chi _{ij}\frac{\sqrt{m_{i}m_{j}}}{v}.$$
The hermiticity condition then implies $\chi _{ij}=\chi _{ji}^{\ast
}$. Within the 2HDM-III we shall consider that the Higgs sector is CP conserving. The Yukawa couplings then take the following form. For $h^0$ one gets,
$$\begin{aligned}
S_{ij1}^{u} &=&\frac{1}{v}M_{ij}^{U}\left( \sin (\beta -\alpha
)+\tan \beta
\cos (\beta -\alpha )\right) \nonumber \\
&&-\frac{\chi _{ij}\sqrt{m_{i}m_{j}}}{\sqrt{2}v}\frac{\cos (\beta -\alpha )}{\cos \beta },\end{aligned}$$
while $$P_{ij1}^{u}=0.$$Then for $H^0$,
$$S_{ij2}^{u}=-\frac{1}{v}M_{ij}^{U}\frac{\sin \alpha }{\cos \beta }+\frac{\chi _{ij}\sqrt{m_{i}m_{j}}}{\sqrt{2}v}\frac{\sin (\beta -\alpha
)}{\cos \beta },$$
and $$P_{ij2}^{u}=0.$$Then for $A^0$,
$$S_{ij3}^{u}=0,$$
and $$P_{ij3}^{u}=-i\frac{\chi _{ij}\sqrt{m_{i}m_{j}}}{\sqrt{2}v\cos \beta
}.$$
Model type $H_1$ $H_2$ $H_3$
------------ ----------------- ----------------- -----------------
2HDM-I $S\neq0$, $P=0$ $S\neq0$, $P=0$ $S=0$, $P\neq0$
2HDM-II $S\neq0$, $P=0$ $S\neq0$, $P=0$ $S=0$, $P\neq0$
2HDM-III $S\neq0$, $P=0$ $S\neq0$, $P=0$ $S=0$, $P\neq0$
2HDM-V $S$, $P\neq0$ $S$, $P\neq0$ $S$, $P\neq0$
: Yukawa couplings for the neutral Higgs bosons. The couplings $S$ and $P$ refer to both the up and down sectors.[]{data-label="table3"}
One can also reduce the general model to the 2HDM of types I and II by setting $Y_2^{U,D}=0$ or $Y_1^U=Y_2^D=0$, respectively. The tables \[table3\] and \[table4\] summarize the corresponding results; they include the expressions for the neutral Higgs couplings with up- and down-type quarks, and similar results hold for the leptons.
r $S^u_{ijr}$ $P^u_{ijr}$ $S^d_{ijr}$ $P^d_{ijr}$
--- ------------------------------------------ -------------------------------- ------------------------------------------ ---------------------------------
1 $-\frac{\cos\alpha}{v\sin\beta}M_{ij}^U$ $0$ $-\frac{\cos\alpha}{v\sin\beta}M_{ij}^D$ $0$
2 $-\frac{\sin\alpha}{v\sin\beta}M_{ij}^U$ $0$ $-\frac{\sin\alpha}{v\sin\beta}M_{ij}^D$ $0$
3 $0$ $\frac{i\cot\beta}{v}M_{ij}^U$ $0$ $-\frac{i\cot\beta}{v}M_{ij}^D$
: Explicit values of the Yukawa couplings for neutral Higgs in 2HDM-I.[]{data-label="table4"}
r $S^u_{ijr}$ $P^u_{ijr}$ $S^d_{ijr}$ $P^d_{ijr}$
--- ------------------------------------------ -------------------------------- ------------------------------------------ --------------------------------
1 $-\frac{\cos\alpha}{v\sin\beta}M_{ij}^U$ $0$ $\frac{\sin\alpha}{v\cos\beta}M_{ij}^D$ $0$
2 $-\frac{\sin\alpha}{v\sin\beta}M_{ij}^U$ $0$ $-\frac{\cos\alpha}{v\cos\beta}M_{ij}^D$ $0$
3 $0$ $\frac{i\cot\beta}{v}M_{ij}^U$ $0$ $\frac{i\tan\beta}{v}M_{ij}^D$
: Explicit values of the Yukawa couplings for neutral Higgs in 2HDM-II.[]{data-label="table5"}
Probing the CP-violating Higgs couplings through the decay $h\to
c\bar{b}W $
================================================================
In this section we shall evaluate the asymmetry coefficient for the decay $h\to c\bar{b}W$ in order to analyze the presence of both FCNSI and CPV within the 2HDM-X. In the SM the FCNC are suppressed, but in 2HDM extensions these processes appear even at tree level. We consider the neutral Higgs boson decay $h\longrightarrow
W\overline{b}c$ at tree level. Two diagrams contribute to this decay: the first one proceeds through the FCNC coupling, $h\longrightarrow
\overline{t}^{\ast }c\longrightarrow W^{-}\overline{b}c$; its Feynman diagram is shown on the left of figure \[f1\]. The other one proceeds through $h\longrightarrow W^{+\ast }W^{-}\longrightarrow
W^{-}\overline{b}c$, also shown in figure \[f1\].
The couplings of the neutral Higgs boson to the quarks and of the $W$ boson to the neutral Higgs are written as $i\left( S_{231}^{u}+\gamma ^{5}P_{231}^{u}\right) $ and $igM_{W}q_{11}g^{\mu \nu }$, respectively. The other vertices are the usual SM ones. The averaged squared amplitude for these diagrams is thus
$$\overline{\vert \mathcal{M}\vert }^{2}=\overline{\vert
\mathcal{M}_1\vert}^{2}+\overline{\vert
\mathcal{M}_2\vert}^{2}+\overline{ \mathcal{M}_{1}^{\dagger }\mathcal{M}_{2}}+\overline{\mathcal{M}_{2}^{\dagger}\mathcal{M}_{1}}$$
![Tree-level Feynman diagrams for the decay. The left diagram corresponds to the FCNC contribution $h\longrightarrow \overline{t}^{\ast}c\longrightarrow W^{-}\overline{b}c$, while the right diagram corresponds to $h\longrightarrow W^{+\ast}W^{-}\longrightarrow W^{-}\overline{b}c$.[]{data-label="f1"}](diagram1a.eps "fig:") ![Tree-level Feynman diagrams for the decay. The left diagram corresponds to the FCNC contribution $h\longrightarrow \overline{t}^{\ast}c\longrightarrow W^{-}\overline{b}c$, while the right diagram corresponds to $h\longrightarrow W^{+\ast}W^{-}\longrightarrow W^{-}\overline{b}c$.[]{data-label="f1"}](diagram1b.eps "fig:")
An approximation is obtained by neglecting the terms proportional to the charm and bottom masses. The expressions for the squared amplitudes are then $$\begin{aligned}
\overline{\left\vert \mathcal{M}_{1}\right\vert }^{2} &=&\frac{g^{2}}{4M_{W}^{2}}\left\vert P_{t}(q)\right\vert ^{2}[4\left\vert
S_{231}^{u}-P_{231}^{u}\right\vert ^{2}p_{1}\cdot p_{2}p_{1}\cdot
qp_{3}\cdot q+2\left\vert S_{231}^{u}-P_{231}^{u}\right\vert
^{2}M_{W}^{2}p_{2}\cdot qp_{3}\cdot q \nonumber \\
&&\left. +\left( \left\vert S_{231}^{u}+P_{231}^{u}\right\vert
^{2}m_{t}^{2}-\left\vert S_{231}^{u}-P_{231}^{u}\right\vert
^{2}q^{2}\right) \left( 2p_{1}\cdot p_{2}p_{1}\cdot
p_{3}+M_{W}^{2}p_{2}\cdot p_{3}\right) \right] , \label{m11f}\end{aligned}$$
$$\overline{\left\vert \mathcal{M}_{2}\right\vert }^{2}=g^{4}\left(
q_{11}\right) ^{2}\left\vert V_{cb}\right\vert
^{2}|P_{W}(k)|^{2}\left( M_{W}^{2}p_{2}\cdot p_{3}+2p_{1}\cdot
p_{2}p_{1}\cdot p_{3}\right) , \label{m22f}$$
$$\overline{\mathcal{M}_{1}^{\dagger
}\mathcal{M}_{2}}=\frac{g^{4}m_{t}}{M_{W}}\left( S_{231}^{u\ast
}+P_{231}^{u\ast }\right) q_{11}V_{cb}P_{t}^{\ast }(q)P_{W}(k)\left(
M_{W}^{2}p_{2}\cdot p_{3}+2p_{1}\cdot p_{2}p_{1}\cdot p_{3}\right)
\label{m1m2f}$$
and $$\overline{\mathcal{M}_{2}^{\dagger
}\mathcal{M}_{1}}=\frac{g^{4}m_{t}}{M_{W}}\left(
S_{231}^{u}+P_{231}^{u}\right) q_{11}V_{cb}P_{W}^*(k)P_{t}(q)\left(
M_{W}^{2}p_{2}\cdot p_{3}+2p_{1}\cdot p_{2}p_{1}\cdot p_{3}\right)
.\label{m2m1f}$$
where the $W$ boson propagator is written in the Feynman-’t Hooft gauge, $P_{W}(k)=\left( k^{2}-M_{W}^{2}+iM_{W}\Gamma _{W}\right)^{-1}$, and $P_{t}\left( q\right) =\left( q^{2}-m_{t}^{2}+im_{t}\Gamma
_{t}\right) ^{-1}$. In order to find the asymmetry coefficient we also need to calculate the conjugate decay, that is, $h\longrightarrow W^{+}b\overline{c}$. We denote its averaged squared amplitude as $$\overline{\vert \widetilde{\mathcal{M}}\vert }^{2} =\overline{\vert \widetilde{\mathcal{M}_{1}}\vert}^{2} +\overline{\vert\widetilde{\mathcal{M}_{2}}\vert}^{2}+\overline{\widetilde{\mathcal{M}_{1}}^{\dagger}\widetilde{\mathcal{M}_{2}}} +\overline{\widetilde{\mathcal{M}_{2}}^{\dagger }\widetilde{\mathcal{M}_{1}}}.$$
The squared terms are the same as above, $\overline{\left\vert \widetilde{\mathcal{M}}_{1,2}\right\vert }^{2}=\overline{\left\vert
\mathcal{M}_{1,2}\right\vert }^{2}$, while for the interference terms we have
$$\overline{\widetilde{\mathcal{M}}_{1}^{\dagger }\widetilde{\mathcal{M}}_{2}}=\frac{g^{4}m_{t}}{M_{W}}\left( S_{231}^{u}+P_{231}^{u}\right)
q_{11}V_{cb}P_{t}(q)P_{W}^{\ast}(k)\left( M_{W}^{2}p_{2}\cdot
p_{3}+2p_{1}\cdot p_{2}p_{1}\cdot p_{3}\right) \nonumber$$
and
$$\overline{\widetilde{\mathcal{M}}_{2}^{\dagger }\widetilde{\mathcal{M}}_{1}}=\frac{g^{4}m_{t}}{M_{W}}\left( S_{231}^{u\ast }+P_{231}^{u\ast }\right)
q_{11}V_{cb}P_{t}^{\ast}(q)P_{W}(k)\left( M_{W}^{2}p_{2}\cdot
p_{3}+2p_{1}\cdot p_{2}p_{1}\cdot p_{3}\right) .$$
Then, the width for the decay is $$\Gamma _{h\longrightarrow W\overline{b}c}=\frac{m_{h}}{256\pi
^{3}}\int
\int_{R_{xy}}\left(\overline{\left\vert \mathcal{M}_{1}\right\vert }^{2}+\overline{\left\vert \mathcal{M}_{2}\right\vert }^{2}+\overline{\mathcal{M}_{1}^{\dagger}\mathcal{M}_{2}}+\overline{\mathcal{M}_{2}^{\dagger }\mathcal{M}_{1}}\right)dxdy,$$ where the dimensionless variables are defined as $x=\frac{2E_{1}}{m_{h}}$ and $y=\frac{2E_{2}}{m_{h}}$. All details of the decay kinematics are given in appendix \[app2\]. The asymmetry coefficient is defined as $$A_{CPV}=\frac{\Gamma _{h\longrightarrow W^-\overline{b}c}-\Gamma
_{h\longrightarrow W^+b\overline{c}}}{\Gamma _{h\longrightarrow W^-\overline{b}c}+\Gamma _{h\longrightarrow W^+b\overline{c}}}. \label{asy}$$ The final result for the decay asymmetry is given by: $$A_{CPV}\left( S_{ijk}^{u},P_{ijk}^{u},\textrm{Re}\left(
q_{k1}\right) ,m_{h}\right) =\frac{2V_{cb}\textrm{Re}\left(
q_{k1}\right) \textrm{Im}\left( S_{ijk}^{u}+P_{ijk}^{u}\right)
\left( J_{10}+J_{12}\right) }{f\left(
S_{ijk}^{u},P_{ijk}^{u},\textrm{Re}\left( q_{k1}\right)
,m_{h}\right) }, \label{asymmetry}$$where$$\begin{aligned}
f\left( S_{ijk}^{u},P_{ijk}^{u},\textrm{Re}\left( q_{k1}\right)
,m_{h}\right) &=&\frac{1}{4g}\left[ \left\vert
S_{ijk}^{u}-P_{ijk}^{u}\right\vert ^{2}\left(
J_{1}+J_{2}-J_{4}-J_{6}\right) +\left\vert
S_{ijk}^{u}+P_{ijk}^{u}\right\vert ^{2}\left( J_{3}+J_{5}\right)
\right]
\nonumber \\
&&+g\textrm{Re}\left( q_{k1}\right) ^{2}\left\vert V_{cb}\right\vert
^{2}\left( J_{7}+J_{8}\right) +2\textrm{Re}\left( q_{k1}\right)
\textrm{Re} \left( S_{ijk}^{u}+P_{ijk}^{u}\right) V_{cb}\left(
J_{9}+J_{11}\right) . \label{f-aux}\end{aligned}$$ The $J$’s are integrals obtained from the decay kinematics; they are given in appendix \[app2\], together with the other parameters defined in previous sections.
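The structure of (\[asymmetry\]) can be made explicit with a short numerical sketch. The $J_i$ values below are purely illustrative placeholders (the actual integrals are defined in appendix \[app2\]), and $g$, $V_{cb}$ are indicative inputs; the point is only that $A_{CPV}$ is driven by $\textrm{Im}(S_{ijk}^{u}+P_{ijk}^{u})$ and vanishes when that combination is real:

```python
# Sketch of eq. (asymmetry). The J_i below are illustrative
# placeholders, NOT the actual phase-space integrals of the appendix.
J1, J2, J3, J4, J5, J6 = 1.0, 1.0, 1.0, 0.5, 1.0, 0.5
J7, J8, J9, J10, J11, J12 = 1.0, 1.0, 0.1, 0.05, 0.1, 0.05
g, Vcb = 0.653, 0.041  # indicative SU(2) coupling and CKM element

def acp(S, P, req1):
    """CP asymmetry for couplings S, P and Re(q_k1), following eq. (asymmetry)."""
    f = (abs(S - P) ** 2 * (J1 + J2 - J4 - J6)
         + abs(S + P) ** 2 * (J3 + J5)) / (4 * g) \
        + g * req1 ** 2 * Vcb ** 2 * (J7 + J8) \
        + 2 * req1 * (S + P).real * Vcb * (J9 + J11)
    return 2 * Vcb * req1 * (S + P).imag * (J10 + J12) / f

print(acp(0.02 + 0.01j, 0.005j, 0.9))  # nonzero: Im(S+P) != 0
print(acp(0.02, 0.005, 0.9))           # -> 0.0, a real S+P gives no asymmetry
```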
Asymmetry in 2HDM-Va
--------------------
Let us now discuss the resulting expression for $A_{CPV}$ for two subcases within the 2HDM-V. We fix $i=2$ and $j=3$ in equations (\[sva\]) and (\[pva\]) in order to obtain the appropriate parameters within the 2HDM-Va; we then find: $$S_{231}^{u}=\frac{\sqrt{m_{c}m_{t}}}{2\sqrt{2}v\cos \beta }\chi
_{23}\left( q_{12}^{\ast }+q_{12}\right)$$
and $$P_{231}^{u}=\frac{\sqrt{m_{c}m_{t}}}{2\sqrt{2}v\cos \beta }\chi
_{23}\left( q_{12}^{\ast }-q_{12}\right) .$$Then, the asymmetry coefficient is
$$A_{2HDM-Va}=\frac{\sqrt{m_{c}m_{t}}V_{cb}\chi _{23}\left(
J_{10}+J_{12}\right)
\cos^2 \theta _{12}\cos \theta _{13}\sin \theta _{13}}{\sqrt{2}M_{W}f\left( \beta ,\theta _{12},\theta _{13},\chi
_{23},m_{h}\right) \cos \beta },$$
where $$\begin{aligned}
f\left( \beta ,\theta _{12},\theta _{13},\chi _{23},m_{h}\right) &=&\frac{m_{c}m_{t}\chi _{23}^{2}}{32M_{W}^{2}\cos ^{2}\beta }\left( \sin
^{2}\theta _{12}+\cos ^{2}\theta _{12}\sin ^{2}\theta _{13}\right)
\left(
J_{1}+J_{2}+J_{3}-J_{4}+J_{5}-J_{6}\right) \nonumber \\
&&+\left\vert V_{cb}\right\vert ^{2}\cos ^{2}\theta _{12}\cos
^{2}\theta
_{13}\left( J_{7}+J_{8}\right) -\frac{\sqrt{m_{c}m_{t}}V_{cb}\chi _{23}}{\sqrt{2}M_{W}\cos \beta }\cos \theta _{12}\cos \theta _{13}\sin
\theta _{12}\left( J_{9}+J_{11}\right) .\end{aligned}$$
Asymmetry in 2HDM-Vb
--------------------
Appendix \[app1\] shows the four-texture structure of the Yukawa matrices; for the practical evaluation of the asymmetry we write the texture parameter in polar form as $\chi_{23}=|\chi_{23}|e^{i\nu_{23}}$. Then, we evaluate equations (\[svb\]) and (\[pvb\]) at $i=2$ and $j=3$ in order to obtain the required elements for the 2HDM-Vb, $$S_{231}^{u}=\frac{\cos (\beta -\alpha
)\sqrt{m_{c}m_{t}}}{2\sqrt{2}v\cos \beta }\left( \chi _{23}+\chi
_{23}^{\ast }\right) \label{s321b}$$ and $$P_{231}^{u}=\frac{\cos (\beta -\alpha
)\sqrt{m_{c}m_{t}}}{2\sqrt{2}v\cos \beta }\left( \chi _{23}-\chi
_{23}^{\ast }\right). \label{p231b}$$ Then, for this case the asymmetry coefficient is given by: $$A_{2HDM-Vb}=\frac{gV_{cb}\sqrt{m_{c}m_{t}}\left\vert \chi
_{23}\right\vert \sin
\nu _{23}\cos (\beta -\alpha )\sin \left( \beta -\alpha \right) }{\sqrt{2}M_{W}\cos \beta f\left( \alpha ,\beta ,\chi _{23},m_{h}\right)
}\left( J_{10}+J_{12}\right), \label{asymmetryb}$$ where $$\begin{aligned}
f\left( \alpha ,\beta ,\chi _{23},m_{h}\right) &=&\frac{gm_{c}m_{t}}{32M_{W}^{2}}\frac{\cos ^{2}(\beta -\alpha )}{\cos ^{2}\beta
}\left\vert \chi _{23}\right\vert ^{2}\left(
J_{1}+J_{2}-J_{4}-J_{6}+J_{3}+J_{5}\right)
\nonumber \\
&&+\frac{g\sqrt{m_{c}m_{t}}V_{cb}}{\sqrt{2}M_{w}}\left\vert \chi
_{23}\right\vert \frac{\cos \nu _{23}\sin \left( \beta -\alpha
\right) \cos
(\beta -\alpha )}{\cos \beta }\left( J_{9}+J_{11}\right) \nonumber \\
&&+g\sin ^{2}\left( \beta -\alpha \right) \left\vert
V_{cb}\right\vert ^{2}\left( J_{7}+J_{8}\right). \label{f-auxb}\end{aligned}$$
Numerical results
-----------------
We shall discuss in detail the results for the 2HDM-Vb. The asymmetry depends on five free parameters. One of them is the Higgs boson mass, which appears in the $J$ integrals; the others are the mixing angles $\alpha$ and $\beta$ and the complex parameter $\chi_{23}$. The mixing angle $\beta$ is taken within the range $1<\tan\beta<50$ [@pdg]. For the mixing angle $\alpha$ we study three scenarios: $\alpha < \beta$, $\alpha \approx \beta$ and $\alpha
> \beta$. The phase $\nu_{23}$ is fixed to the value $0.1$, comparable to the phase of the CKM matrix. For each scenario we take two possible values of $|\chi_{23}|$. Therefore, the scenarios studied here are:
i) $\alpha < \beta$ for $|\chi_{23}|=0.9$ and $|\chi_{23}|=0.1$, figure \[scenario1\]
ii) $\alpha \approx \beta$ for $|\chi_{23}|=0.9$ and $|\chi_{23}|=0.1$, figure \[scenario2\].
iii) $\alpha > \beta$ for $|\chi_{23}|=0.9$ and $|\chi_{23}|=0.1$, figure \[scenario3\].
![The asymmetry as function of $\tan\beta$ for scenario 1, on left for $|\chi_{23}|=0.1$ and on right for $|\chi_{23}|=0.9$ []{data-label="scenario1"}](a1a.EPS "fig:") ![The asymmetry as function of $\tan\beta$ for scenario 1, on left for $|\chi_{23}|=0.1$ and on right for $|\chi_{23}|=0.9$ []{data-label="scenario1"}](a1b.EPS "fig:")\
![The asymmetry as function of $\tan\beta$ for scenario 2, on left for $|\chi_{23}|=0.1$ and on right for $|\chi_{23}|=0.9$ []{data-label="scenario2"}](a2a.EPS "fig:") ![The asymmetry as function of $\tan\beta$ for scenario 2, on left for $|\chi_{23}|=0.1$ and on right for $|\chi_{23}|=0.9$ []{data-label="scenario2"}](a2b.EPS "fig:")\
![The asymmetry as function of $\tan\beta$ for scenario 3, on left for $|\chi_{23}|=0.1$ and on right for $|\chi_{23}|=0.9$ []{data-label="scenario3"}](a3a.EPS "fig:") ![The asymmetry as function of $\tan\beta$ for scenario 3, on left for $|\chi_{23}|=0.1$ and on right for $|\chi_{23}|=0.9$ []{data-label="scenario3"}](a3b.EPS "fig:")\
We use the reported values $m_t=171.2$ GeV, $m_b=4.2$ GeV, $m_c=1.27$ GeV, $M_W=80.39$ GeV and $\sin\theta_W=0.231$ [@pdg].
From figures \[scenario1\], \[scenario2\] and \[scenario3\] we obtain asymmetry values of order $10^{-3}$ to $10^{-2}$, $10^{-4}$ to $10^{-3}$, and $10^{-3}$ for scenarios i), ii) and iii), respectively, within the 2HDM-Vb. We have also analyzed the numerical results for the CP asymmetry in the 2HDM-Va case, and we find that the size of this asymmetry depends strongly on the phases.
Conclusions
===========
In this paper we have presented a broad discussion of the most general formulation of the two-Higgs doublet extension of the SM, which we name the 2HDM-X. We have then defined a model named 2HDM-V, which includes the possibility of both FCNC and CPV, and have presented the corresponding Lagrangian for both the neutral and charged Higgs sectors.
The limits in which the 2HDM-X reduces to one of the known versions (2HDM-I, II, III) have also been discussed; in these cases each pattern of Higgs-Yukawa couplings holds for all families. To identify the class of family non-universal models we have used the label 2HDM-IV, which includes models where one Higgs doublet couples only to a certain type of fermion, for instance to the top quark, to the third family, or to neutrinos only.
Finally, we have also evaluated the CPV asymmetry for the decay $h\rightarrow c\bar{b}W$, which allows one to test the presence of both the FCNC and CPV associated with model V. We found that for a certain optimal range of parameters the decay asymmetry could be of $O(10^{-2})$ to $O(10^{-4})$. These asymmetry values were obtained for three scenarios in the case of the 2HDM-Vb; similar results arise within the 2HDM-Va. The asymmetry is proportional to the complex mixing parameter $\chi_{23}$, while the mixing angles $\alpha$ and $\beta$ control the shape of the curves. The asymmetry keeps the same shape for Higgs boson masses between $115$ GeV and $160$ GeV.
In order to detect this asymmetry one may have to resort to a linear collider, since the final state seems difficult to reconstruct at a hadron collider. A final conclusion, however, would require a detailed simulation study, which we plan to address in a future publication [@working-progress].
**Acknowledgments.**
We would like to thank Sistema Nacional de Investigadores (Mexico) and CONACYT (Mexico).
2HDM-III with four-Textures {#app1}
===========================
Here we shall summarize the results for the 2HDM-III; namely, we assume that both Yukawa matrices $Y^q_1$ and $Y^q_2$ have the four-texture form and are Hermitian. Following the conventions of [@DiazCruz:2004tr], the quark mass matrix is then written as:
$$M_q= \left( \begin{array}{ccc}
0 & C_{q} & 0 \\
C_{q}^{*} & \tilde{B}_{q} & B_{q} \\
0 & B_{q}^{*} & A_{q}
\end{array}\right).$$
When $\tilde{B}_{q}\to 0$ one recovers the six-texture form. We also consider the hierarchy:\
$\mid A_{q}\mid \, \gg \, \mid \tilde{B}_{q}\mid,\mid B_{q}\mid
,\mid C_{q}\mid$, which is supported by the observed fermion masses.
Because of the hermiticity condition, both $\tilde{B}_{q}$ and $A_{q}$ are real parameters, while the phases of $C_q$ and $B_q$, $\Phi_{B_q,C_q}$, can be removed from the mass matrix $M_q$ by defining: $M_q=P_q^\dagger \tilde{M}_q P_q$, where $P_q=diag[1,
e^{i\Phi_{C_q}}, e^{i(\Phi_{B_q}+\Phi_{C_q})}]$, and the mass matrix $\tilde{M}_q$ includes only the real parts of $M_q$. The diagonalization of $\tilde{M}_q$ is then obtained by an orthogonal matrix $O_q$, such that the diagonal mass matrix is: $\bar{M}_{q} =
O_q^{T}\tilde{M}_{q}O_q$.
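This diagonalization can be cross-checked numerically: fixing the eigenvalues $\lambda^q_i$ and $A_q$, the remaining four-texture entries of $\tilde{M}_q$ follow from its trace, determinant and sum of $2\times 2$ principal minors. A minimal sketch, taking one negative eigenvalue as in Fritzsch-type textures (the numbers used are illustrative, not physical quark masses):

```python
import numpy as np

# Illustrative eigenvalue set (lam1 < 0) and the parameter A_q.
lam1, lam2, lam3, A = -1.0, 5.0, 100.0, 95.0

Bt = lam1 + lam2 + lam3 - A                 # B~_q from the trace
C = np.sqrt(-lam1 * lam2 * lam3 / A)        # from det M~ = -C^2 A
B = np.sqrt(Bt * A - C**2
            - (lam1*lam2 + lam1*lam3 + lam2*lam3))  # from the 2x2 minors

Mq = np.array([[0.0, C, 0.0],
               [C, Bt, B],
               [0.0, B, A]])

eig = np.sort(np.linalg.eigvalsh(Mq))
print(eig)  # -> approximately [lam1, lam2, lam3]
```

By construction the three invariants of `Mq` match those of the chosen spectrum, so its eigenvalues reproduce $\lambda^q_{1,2,3}$ up to numerical precision.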
The Lagrangian (2) can be expanded in terms of the mass eigenstates of the neutral ($h^0,H^0,A^0$) and charged Higgs bosons ($H^\pm$). The interactions of the neutral Higgs bosons with the d-type and u-type quarks are given by ($u,u'=u,c,t$ and $d,d\,'=d,s,b$),
$$\begin{aligned}
{\cal{L}}_Y^{q} & = & \frac{g}{2}\left(\frac{m_d}{m_W}\right)
\bar{d}\left[\frac{ \, \cos\alpha}{\cos\beta}\delta_{dd'}+
\frac{\sqrt{2} \, \sin(\alpha - \beta)}{g \, \cos\beta}
\left(\frac{m_W}{m_d}\right)(\tilde{Y}_2^d)_{dd'}\right]d\,'H^{0}
\nonumber \\
& &+ \frac{g}{2}\left(\frac{m_d}{m_W}\right)\bar{d}
\left[-\frac{\sin\alpha}{\cos\beta} \delta_{dd'}+ \frac{\sqrt{2} \,
\cos(\alpha - \beta)}{g \, \cos\beta}
\left(\frac{m_W}{m_d}\right)(\tilde{Y}_2^d)_{dd'}\right]d\,' h^{0}
\nonumber \\
& &+ \frac{ig}{2}\left(\frac{m_d}{m_W}\right)\bar{d}
\left[-\tan\beta \delta_{dd'}+ \frac{\sqrt{2} }{g \, \cos\beta}
\left(\frac{m_W}{m_d}\right)(\tilde{Y}_2^d)_{dd'}\right]
\gamma^{5}d\,' A^{0} \nonumber \\
& &+ \frac{g}{2}\left(\frac{m_u}{m_W}\right)
\bar{u}\left[\frac{ \, \sin\alpha}{\sin\beta}\delta_{uu'}-
\frac{\sqrt{2} \, \sin(\alpha - \beta)}{g \, \sin\beta}
\left(\frac{m_W}{m_u}\right)(\tilde{Y}_2^u)_{uu'}\right]u'H^{0}
\nonumber \\
& &+ \frac{g}{2}\left(\frac{m_u}{m_W}\right)\bar{u}
\left[\frac{\cos\alpha}{\sin\beta} \delta_{uu'}- \frac{\sqrt{2} \,
\cos(\alpha - \beta)}{g \, \sin\beta}
\left(\frac{m_W}{m_u}\right)(\tilde{Y}_2^u)_{uu'}\right]u' h^{0}
\nonumber \\
& &+ \frac{ig}{2}\left(\frac{m_u}{m_W}\right)\bar{u}
\left[-\cot\beta \delta_{uu'} + \frac{\sqrt{2} }{g \, \sin\beta}
\left(\frac{m_W}{m_u}\right)(\tilde{Y}_2^u)_{uu'}\right]
\gamma^{5}u' A^{0}.\end{aligned}$$
The first term, proportional to $\delta_{qq'}$, corresponds to the modification of the 2HDM-II with respect to the SM result, while the term proportional to $\tilde{Y}_2^q$ denotes the new contribution from the 2HDM-III. Thus, the $f f' \phi^0$ couplings respect CP invariance, despite the fact that the Yukawa matrices include complex phases; this follows from the Hermiticity conditions imposed on both $Y_1^q$ and $Y_2^q$.
The corrections to the quark flavor conserving (FC) and flavor violating (FV) couplings, depend on the rotated matrix: $\tilde{Y}_{2}^{q} = O_q^{T}P_qY_{2}^{q}P_q^\dagger O_q$. We will evaluate $\tilde{Y}_{2}^{q}$ assuming that $Y_2^q$ has a four-texture form, namely:
$$Y_{2}^{q} = \left( \begin{array}{ccc}
0 & C_2^q & 0 \\
C_2^{q*} & \tilde{B}_2^q & B_2^q \\
0 & B_2^{q*} & A_2^q
\end{array}\right), \qquad
\mid A_2^q\mid \, \gg \, \mid \tilde{B}_2^q\mid,\mid B_2^q\mid ,\mid
C_2^q\mid.$$
The matrix that diagonalizes the real matrix $\tilde{M}_{q}$ with the four-texture form is given by:
$$O_q = \left( \begin{array}{ccc}
\sqrt{\frac{\lambda^q_{2}\lambda^q_{3}(A_q-\lambda^q_{1})}{A_q(\lambda^q_{2}-\lambda^q_{1})
(\lambda^q_{3}-\lambda^q_{1})}}& \eta_q
\sqrt{\frac{\lambda^q_{1}\lambda^q_{3}
(\lambda^q_{2}-A_q)}{A_q(\lambda^q_{2}-\lambda^q_{1})(\lambda^q_{3}-\lambda^q_{2})}}
&
\sqrt{\frac{\lambda^q_{1}\lambda^q_{2}(A_q-\lambda^q_{3})}{A_q(\lambda^q_{3}-
\lambda^q_{1})(\lambda^q_{3}-\lambda^q_{2})}} \\
-\eta_q
\sqrt{\frac{\lambda^q_{1}(\lambda^q_{1}-A_q)}{(\lambda^q_{2}-\lambda^q_{1})
(\lambda^q_{3}-\lambda^q_{1})}} &
\sqrt{\frac{\lambda^q_{2}(A_q-\lambda^q_{2})}
{(\lambda^q_{2}-\lambda^q_{1})(\lambda^q_{3}-\lambda^q_{2})}} &
\sqrt{
\frac{\lambda^q_{3}(\lambda^q_{3}-A_q)}{(\lambda^q_{3}-\lambda^q_{1})(\lambda^q_{3}-
\lambda^q_{2})}} \\
\eta_q
\sqrt{\frac{\lambda^q_{1}(A_q-\lambda^q_{2})(A_q-\lambda^q_{3})}{A_q(\lambda^q_{2}
-\lambda^q_{1})(\lambda^q_{3}-\lambda^q_{1})}} &
-\sqrt{\frac{\lambda^q_{2}(A_q
-\lambda^q_{1})(\lambda^q_{3}-A_q)}{A_q(\lambda^q_{2}-\lambda^q_{1})(\lambda^q_{3}
-\lambda^q_{2})}} &
\sqrt{\frac{\lambda^q_{3}(A_q-\lambda^q_{1})(A_q-\lambda^q_{2})}
{A_q(\lambda^q_{3}-\lambda^q_{1})(\lambda^q_{3}-\lambda^q_{2})}}
\end{array}\right),$$
where $m^q_1 = \mid \lambda^q _1\mid$, $m^q_2 = \mid \lambda^q
_2\mid$, $m^q_3 = \mid \lambda^q _3\mid$, and $\eta_q = \lambda^q_2/
m^q_2$ $(q=u,d)$. The physical masses are identified as $m_u= m^u_1$, $m_c= m^u_2$, and $m_t= m^u_3$; $m_d= m^d_1$, $m_s= m^d_2$, and $m_b= m^d_3$.
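As a numerical sanity check (not part of the original derivation), one can build $O_q$ for illustrative eigenvalues with a Fritzsch-like sign pattern, verify that it is orthogonal, and confirm that rotating the eigenvalues back, $O_q\,\mathrm{diag}(\lambda^q_1,\lambda^q_2,\lambda^q_3)\,O_q^T$, reproduces the four-texture zeros. The eigenvalues and the value of $A_q$ below are illustrative choices, not fitted quark data:

```python
import math

# Illustrative eigenvalues with a Fritzsch-like sign pattern
# (lambda_2 < 0, so eta = lambda_2/m_2 = -1); not fitted quark masses.
l1, l2, l3 = 0.005, -0.1, 1.0
eta = l2 / abs(l2)
A = 0.95                      # illustrative A_q, chosen close to m_3

# The orthogonal matrix O_q from the text, entry by entry.
O = [
    [math.sqrt(l2*l3*(A-l1)/(A*(l2-l1)*(l3-l1))),
     eta*math.sqrt(l1*l3*(l2-A)/(A*(l2-l1)*(l3-l2))),
     math.sqrt(l1*l2*(A-l3)/(A*(l3-l1)*(l3-l2)))],
    [-eta*math.sqrt(l1*(l1-A)/((l2-l1)*(l3-l1))),
     math.sqrt(l2*(A-l2)/((l2-l1)*(l3-l2))),
     math.sqrt(l3*(l3-A)/((l3-l1)*(l3-l2)))],
    [eta*math.sqrt(l1*(A-l2)*(A-l3)/(A*(l2-l1)*(l3-l1))),
     -math.sqrt(l2*(A-l1)*(l3-A)/(A*(l2-l1)*(l3-l2))),
     math.sqrt(l3*(A-l1)*(A-l2)/(A*(l3-l1)*(l3-l2)))],
]

lam = (l1, l2, l3)
# Rotate the eigenvalues back: M = O diag(lambda) O^T should reproduce
# the four-texture matrix, i.e. M[0][0] = M[0][2] = 0.
M = [[sum(O[i][k] * lam[k] * O[j][k] for k in range(3))
      for j in range(3)] for i in range(3)]
```

With these inputs the reconstructed matrix indeed has vanishing (1,1) and (1,3) entries to machine precision, which is the defining property of the four-texture ansatz.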
Then the rotated matrix $\tilde {Y}_2^q$ takes the general form,
$$\begin{aligned}
\tilde {Y}_2^q & = & O_q^TP_qY_{2}^qP_q^{\dagger}O_q \nonumber \\
& = &\left( \begin{array}{ccc}
(\tilde {Y}_2^q)_{11} & (\tilde {Y}_2^q)_{12} & (\tilde {Y}_2^q)_{13} \\
(\tilde {Y}_2^q)_{21} & (\tilde {Y}_2^q)_{22} & (\tilde {Y}_2^q)_{23} \\
(\tilde {Y}_2^q)_{31} & (\tilde {Y}_2^q)_{32} & (\tilde
{Y}_2^q)_{33}
\end{array}\right).\end{aligned}$$
However, the full expressions for the resulting elements have a complicated form, as can be appreciated, for instance, by looking at the element $(\tilde{Y}_{2}^q)_{22}$, which is displayed here:
$$\begin{aligned}
(\tilde{Y}_2^q)_{22} &=& \eta_q [C^{q*}_2 e^{i\Phi_{C_q}} +C^q_2
e^{-i\Phi_{C_q}}] \frac{(A_q-\lambda^q_{2})}{m^q_3-\lambda^q_2 }
\sqrt{\frac{m^q_1 m^q_3 }{A_q m^q_2}} +
\tilde{B}^q_2 \frac{A_q-\lambda^q_2}{ m^q_3-\lambda^q_2 }\nonumber \\
& & + A^q_2 \frac{A_q-\lambda^q_2}{ m^q_3-\lambda^q_2 } - [B^{q*}_2
e^{i\Phi_{B_q}} + B^q_2 e^{-i\Phi_{B_q}}]
\sqrt{\frac{(A_q-\lambda^q_{2})(m^q_3-A_q) } {m^q_3- \lambda^q_2}}\end{aligned}$$
where we have taken the limit $|A_q|, m^q_3, m^q_2 \gg m^q_1$. The free parameters are $\tilde{B}^q_{2}$, $B^q_{2}$, $A^q_{2}$, and $A_q$.
To derive a better-suited approximation, we will consider the elements of the Yukawa matrix $Y_2^q$ as having the same hierarchy as the full mass matrix, namely:
$$\begin{aligned}
C^q_{2} & = & c^q_{2}\sqrt{\frac{m^q_{1}m^q_{2}m^q_{3}}{A_q}} \\
B^q_{2} & = & b^q_{2}\sqrt{(A_q - \lambda^q_{2})(m^q_{3}-A_q)} \\
\tilde{B}^q_{2} & = & \tilde{b}^q_{2}(m^q_{3}-A_q + \lambda^q_{2}) \\
A^q_{2} & = & a^q_{2}A_q.\end{aligned}$$
Then, in order to keep the same hierarchy for the elements of the mass matrix, we find that $A_q$ must fall within the interval $
(m^q_3- m^q_2) \leq A_q \leq m^q_3$. Thus, we propose the following relation for $A_q$:
$$A_q = m^q_{3}(1 -\beta_q z_q),$$
where $z_q = m^q_{2}/m^q_{3} \ll 1$ and $0 \leq \beta_q \leq 1$.
Then, we introduce the matrix $\tilde{\chi}^q$ as follows:
$$\begin{aligned}
\left( \tilde {Y}_2^q \right)_{ij}
&=& \frac{\sqrt{m^q_i m^q_j}}{v} \, \tilde{\chi}^q_{ij} \nonumber\\
&=&\frac{\sqrt{m^q_i m^q_j}}{v}\, {\chi}^q_{ij} \, e^{i
\vartheta^q_{ij}}\end{aligned}$$
which differs from the usual Cheng-Sher ansatz not only because of the appearance of the complex phases, but also in the form of the real parts ${\chi}^q_{ij} = |\tilde{\chi}^q_{ij}|$.
Expanding in powers of $z_q$, one finds that the elements of the matrix $\tilde{\chi}^q$ have the following general expressions:
$$\begin{aligned}
\tilde{\chi}^q_{11} & = & [\tilde{b}^q_2-(c^{q*}_2e^{i\Phi_{C_q}}
+c^q_2e^{-i\Phi_{C_q}} )]\eta_q
+[a^q_2+\tilde{b}^q_2-(b^{q*}_2e^{i\Phi_{B_q}} + b^q_2e^{-i\Phi_{B_q}} )]
\beta_q \nonumber \\
\tilde{\chi}^q_{12} & = & (c^q_2e^{-i\Phi_{C_q}}-\tilde{b}^q_2)
-\eta_q[a^q_2+ \tilde{b}^q_2-(b^{q*}_2e^{i\Phi_{B_q}} +
b^q_2e^{-i\Phi_{B_q}} )] \beta_q
\nonumber \\
\tilde{\chi}^q_{13} & = & (a^q_2-b^q_2e^{-i\Phi_{B_q}}) \eta_q
\sqrt{\beta_q}
\nonumber \\
\tilde{\chi}^q_{22} & = & \tilde{b}^q_2\eta_q
+[a^q_2+\tilde{b}^q_2-(b^{q*}_2e^{i\Phi_{B_q}}
+b^q_2e^{-i\Phi_{B_q}} )]
\beta_q \nonumber \\
\tilde{\chi}^q_{23} & = & (b^q_2e^{-i\Phi_{B_q}}-a^q_2)
\sqrt{\beta_q} \nonumber \\
\tilde{\chi}^q_{33} & = & a^q_2\end{aligned}$$
While the diagonal elements $\tilde{\chi}^q_{ii}$ are real, we notice in Eqs. (14) the appearance of phases in the off-diagonal elements, which are essentially unconstrained by present low-energy phenomena. As we will see next, these phases modify the pattern of flavor violation in the Higgs sector. For instance, while the Cheng-Sher ansatz predicts that the FCNC couplings $(\tilde{Y}_2^q)_{13}$ and $(\tilde{Y}_2^q)_{23}$ vanish when $a_2^q = b_2^q$, in our case this no longer holds for $\cos\Phi_{B_q} \neq 1$. Furthermore, the FCNC couplings satisfy several relations, such as $|\tilde{\chi}^q_{23}| = |\tilde{\chi}^q_{13}|$, which simplifies the parameter analysis. Similar expressions can be obtained for the lepton sector.
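Both properties just quoted can be verified directly from the expressions for $\tilde{\chi}^q_{13}$ and $\tilde{\chi}^q_{23}$; the sketch below uses arbitrary illustrative parameter values (not fits) to check that $|\tilde{\chi}^q_{13}| = |\tilde{\chi}^q_{23}|$ holds identically, and that for $a_2^q = b_2^q$ the couplings no longer vanish once $\cos\Phi_{B_q} \neq 1$:

```python
import cmath
import math

# Arbitrary illustrative inputs (not fitted): a2, b2 real, eta = -1,
# beta_q in (0, 1], and a nontrivial phase Phi_B.
a2, b2, eta, beta_q = 1.3, 0.7, -1.0, 0.4
Phi_B = 1.1

phase = cmath.exp(-1j * Phi_B)
# chi_13 = (a2 - b2 e^{-i Phi_B}) eta sqrt(beta_q)
chi13 = (a2 - b2 * phase) * eta * math.sqrt(beta_q)
# chi_23 = (b2 e^{-i Phi_B} - a2) sqrt(beta_q)
chi23 = (b2 * phase - a2) * math.sqrt(beta_q)

# For a2 = b2 the Cheng-Sher-type cancellation fails when cos(Phi_B) != 1:
chi23_equal = (a2 * phase - a2) * math.sqrt(beta_q)
```

Since $\eta_q = \pm1$, the two moduli agree for any parameter choice, while `chi23_equal` stays away from zero as long as the phase is nontrivial.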
Decay kinematics for $h\rightarrow c\bar{b}W$ {#app2}
=============================================
For the sake of simplicity we introduce the dimensionless scaled variables
$$\mu _{i}=\frac{m_{i}^{2}}{m_{h}^{2}} \label{mu}$$
and
$$\left(
\begin{array}{ccc}
x, & y, & z\end{array}\right) =\left(
\begin{array}{ccc}
\frac{2E_{1}}{m_{h}}, & \frac{2E_{2}}{m_{h}}, & \frac{2E_{3}}{m_{h}}\end{array}\right) . \label{xyz}$$
With this notation we can write the energy conservation as
$$x+y+z=2. \label{ce2}$$
In the Higgs rest frame we keep only the contribution of $\mu _{1}$, since $\mu _{1}\gg \mu _{2},\mu _{3}$, and find the momentum products
$$\begin{aligned}
p_{1}\cdot p_{2} &=& \frac{m_{h}^{2}}{2}\left( x+y-\mu _{1}-1\right) , \\
p_{1}\cdot p_{3} &=& \frac{m_{h}^{2}}{2}\left( 1-y-\mu _{1}\right) , \\
p_{2}\cdot p_{3} &=& \frac{m_{h}^{2}}{2}\left( 1-x+\mu _{1}\right) , \\
p_{1}\cdot q &=& \frac{m_{h}^{2}}{2}\left( x+y+\mu _{1}-1\right) , \\
p_{2}\cdot q &=& \frac{m_{h}^{2}}{2}\left( x+y-\mu _{1}-1\right) , \\
p_{3}\cdot q &=& \frac{m_{h}^{2}}{2}\left( 2-x-y\right) , \\
k^{2} &=& m_{h}^{2}\left( 1+\mu _{1}-x\right) , \\
q^{2} &=& m_{h}^{2}\left(x+y-1\right). \label{ma}\end{aligned}$$
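These invariants can be cross-checked numerically by building an explicit three-body configuration in the Higgs rest frame. In the sketch below, $m_h = 125$ GeV and $m_W = 80.4$ GeV are illustrative inputs, and particle 1 is taken as the $W$ (the only mass retained):

```python
import math

mh, m1 = 125.0, 80.4           # illustrative masses (GeV); particles 2,3 massless
mu1 = (m1 / mh) ** 2

# Pick a point inside R_xy: x in [2 sqrt(mu1), 1 + mu1], and y at the
# centre of its band, which is (2 - x)/2.
x = 1.35
y = 0.5 * (2.0 - x)
z = 2.0 - x - y

E1, E2, E3 = 0.5 * mh * x, 0.5 * mh * y, 0.5 * mh * z
P1 = math.sqrt(E1 * E1 - m1 * m1)          # |p1|; |p2| = E2, |p3| = E3
# Momentum conservation fixes the 1-2 opening angle:
c12 = (E3 * E3 - P1 * P1 - E2 * E2) / (2.0 * P1 * E2)
p1 = (P1, 0.0)
p2 = (E2 * c12, E2 * math.sqrt(1.0 - c12 * c12))
p3 = (-p1[0] - p2[0], -p1[1] - p2[1])

def mdot(Ea, pa, Eb, pb):                  # Minkowski product a.b
    return Ea * Eb - pa[0] * pb[0] - pa[1] * pb[1]

q_vec = (p1[0] + p2[0], p1[1] + p2[1])
q2 = mdot(E1 + E2, q_vec, E1 + E2, q_vec)  # q = p1 + p2
k_vec = (p2[0] + p3[0], p2[1] + p3[1])
k2 = mdot(E2 + E3, k_vec, E2 + E3, k_vec)  # k = p2 + p3
```

The explicit dot products then agree with $p_1\cdot p_3$, $p_2\cdot p_3$, $q^2$ and $k^2$ as written above.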
Now, the functions $\left\vert P_{t}(q)\right\vert ^{2}$, $\left\vert P_{W}(k)\right\vert ^{2}$, $P_{t}^{\ast }(q)P_{W}(k)$ and $P_{t}(q)P_{W}^{\ast}(k)$, written in terms of the dimensionless variables, can be expressed as
$$\left\vert P_{t}(q)\right\vert
^{2}=\frac{1}{m_{h}^{4}}\frac{1}{\left( x+y-1-\mu \right) ^{2}+\mu
\Gamma ^{2}}, \label{pt2dls}$$
$$\left\vert P_{W}(k)\right\vert
^{2}=\frac{1}{m_{h}^{4}}\frac{1}{\left( 1-x\right) ^{2}+\mu
_{1}\gamma ^{2}}, \label{pw2}$$
$$P_{t}^{\ast }(q)P_{W}(k)=\frac{\left( x+y-1-\mu \right) \left( 1-x\right) +\sqrt{\mu \mu _{1}}\gamma \Gamma +i\left[ \sqrt{\mu }\Gamma \left(
1-x\right) -\sqrt{\mu _{1}}\gamma \left( x+y-1-\mu \right) \right] }{m_{h}^{4}\left[ \left( x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}\right]
\left[ \left( 1-x\right) ^{2}+\mu _{1}\gamma ^{2}\right] },
\label{pt*pw}$$
and $$P_{t}(q)P_{W}^{\ast }(k)=\frac{\left( x+y-1-\mu \right) \left( 1-x\right) +\sqrt{\mu \mu _{1}}\gamma \Gamma -i\left[ \sqrt{\mu }\Gamma \left(
1-x\right) -\sqrt{\mu _{1}}\gamma \left( x+y-1-\mu \right) \right] }{m_{h}^{4}\left[ \left( x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}\right]
\left[ \left( 1-x\right) ^{2}+\mu _{1}\gamma ^{2}\right] },
\label{ptpw*}$$ where $\mu =\frac{m_{t}^{2}}{m_{h}^{2}}$, $\Gamma ^{2}=\frac{\Gamma
_{t}^{2}}{m_{h}^{2}}$ and $\gamma ^{2}=\frac{\Gamma
_{W}^{2}}{m_{h}^{2}}$, with $\Gamma_t\approx1.28$ GeV and $\Gamma_W\approx2.14$ GeV the SM total decay widths of the top quark and the $W$ boson, respectively [@pdg]. The three-body decay rate is given by the formula
$$d\Gamma _{h\longrightarrow
W\overline{b}c}=\frac{\overline{\left\vert \mathcal{M}\right\vert
}^{2}}{2m_{h}}\left[ \frac{d^{3}\overrightarrow{p_{1}}}{\left(
2\pi \right) ^{3}2E_{1}}\right] \left[ \frac{d^{3}\overrightarrow{p_{2}}}{\left( 2\pi \right) ^{3}2E_{2}}\right] \left[ \frac{d^{3}\overrightarrow{p_{3}}}{\left( 2\pi \right) ^{3}2E_{3}}\right] \left( 2\pi \right)
^{4}\delta ^{4}\left( p-p_{1}-p_{2}-p_{3}\right). \label{golden}$$
Using the delta function to perform the $\overrightarrow{p_{3}}$ integral and setting the polar axis along $\overrightarrow{p_{1}}$, we have $$\Gamma _{h\longrightarrow W\overline{b}c}=\Gamma_{11}+\Gamma_{22}+\Gamma_{12}, \label{gtot}$$ where we define $$\Gamma _{11}=\frac{m_{h}}{256\pi ^{3}}\int
\int_{R_{xy}}\overline{\left\vert \mathcal{M}_{1}\right\vert
}^{2}dxdy, \label{g11}$$ $$\Gamma _{22}=\frac{m_{h}}{256\pi ^{3}}\int
\int_{R_{xy}}\overline{\left\vert \mathcal{M}_{2}\right\vert
}^{2}dxdy, \label{g22}$$ and $$\Gamma _{12}=\frac{m_{h}}{256\pi ^{3}}\int \int_{R_{xy}}\left( \overline{\mathcal{M}_{1}^{\dagger}\mathcal{M}_{2}}+\overline{\mathcal{M}_{2}^{\dagger
}\mathcal{M}_{1}}\right)dxdy, \label{g12}$$ where the region $R_{xy}$ is defined by $$\frac{1}{2}\left( 2-x-\sqrt{x^{2}-4\mu _{1}}\right) \leq y\leq \frac{1}{2}\left( 2-x+\sqrt{x^{2}-4\mu _{1}}\right)$$ and $$2\sqrt{\mu _{1}}\leq x\leq 1+\mu _{1}.$$ Writing equations (\[m11f\]), (\[m22f\]), (\[m1m2f\]) and (\[m2m1f\]) in terms of the dimensionless parameters and substituting them into equations (\[g11\]), (\[g22\]) and (\[g12\]), we obtain $$\Gamma _{11}=\frac{2g^{2}m_{h}}{\left( 16\pi \right) ^{3}}\left[
\left\vert S_{231}^{u}-P_{231}^{u}\right\vert ^{2}\left(
J_{1}+J_{2}-J_{4}-J_{6}\right)
+\left\vert S_{231}^{u}+P_{231}^{u}\right\vert ^{2}\left( J_{3}+J_{5}\right) \right] ,$$$$\Gamma _{22}=\frac{g^{4}q_{11}^{2}\left\vert V_{cb}\right\vert ^{2}m_{h}}{512\pi ^{3}}\left[ J_{7}+J_{8}\right] ,$$$$\Gamma _{12}=\frac{\left\vert S_{231}^{u}+P_{231}^{u}\right\vert
g^{3}q_{11}V_{cb}m_{h}}{256\pi ^{3}}\left( J_{9}\sin \theta
+J_{10}\cos \theta +J_{11}\sin \theta +J_{12}\cos \theta \right) ,$$and$$\widetilde{\Gamma }_{12}=\frac{\left\vert
S_{231}^{u}+P_{231}^{u}\right\vert g^{3}q_{11}V_{cb}m_{h}}{256\pi
^{3}}\left( J_{9}\sin \theta -J_{10}\cos \theta +J_{11}\sin \theta
-J_{12}\cos \theta \right) .$$The $J_{i}$ integrals, for $i=1,...,12$, are given by
![Plots of the integrals entering $\Gamma_{11}$.[]{data-label="j1to6"}](j1to6.EPS "fig:")\
![Plots of the integrals entering $\Gamma_{22}$ and $\Gamma_{12}$.[]{data-label="j7to12"}](j7to12.EPS "fig:")\
$$J_{1}=\frac{1}{\mu _{1}}\int \int_{R_{xy}} \frac{\left( x+y-\mu _{1}-1\right) \left( x+y+\mu _{1}-1\right) \left(
2-x-y\right) }{\left( x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}}dxdy,
\label{j1}$$
$$J_{2}=\int \int_{R_{xy}} \frac{\left( x+y-\mu _{1}-1\right) \left(
2-x-y\right) }{\left( x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}}dxdy
\label{j2}$$
$$J_{3}=\frac{\mu }{\mu _{1}}\int \int_{R_{xy}} \frac{\left( x+y-\mu _{1}-1\right) \left( 1-y-\mu _{1}\right) }{\left(
x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}}dxdy, \label{j3}$$
$$J_{4}=\frac{1}{\mu _{1}}\int \int_{R_{xy}} \frac{\left( x+y-1\right) \left( x+y-\mu _{1}-1\right) \left( 1-y-\mu
_{1}\right) }{\left( x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}}dxdy,
\label{j4}$$
$$J_{5}=\mu \int \int_{R_{xy}} \frac{1-x+\mu _{1}}{\left( x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}}dxdy, \label{j5}$$
$$J_{6}=\int \int_{R_{xy}} \frac{\left( x+y-1\right) \left( 1-x+\mu _{1}\right) }{\left( x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}}dxdy, \label{j6}$$
$$J_{7}=\mu _{1}\int \int_{R_{xy}} \frac{\left( 1-x+\mu _{1}\right)
}{\left( 1-x\right) ^{2}+\mu _{1}\gamma ^{2}}dxdy, \label{j7}$$
$$J_{8}=\int \int_{R_{xy}} \frac{\left( x+y+\mu _{1}-1\right) \left(
1-y-\mu _{1}\right) }{\left( 1-x\right) ^{2}+\mu _{1}\gamma
^{2}}dxdy, \label{j8}$$
$$J_{9}=\sqrt{\mu \mu _{1}}\int \int_{R_{xy}} \frac{\left( 1-x+\mu _{1}\right) \left[ \left( x+y-1-\mu \right) \left(
1-x\right) +\sqrt{\mu \mu _{1}}\gamma \Gamma \right] }{\left[ \left(
x+y-1-\mu \right)
^{2}+\mu \Gamma ^{2}\right] \left[ \left( 1-x\right) ^{2}+\mu _{1}\gamma ^{2}\right] }dxdy, \label{j9}$$
$$J_{10}=\sqrt{\mu \mu _{1}}\int \int_{R_{xy}} \frac{\left( 1-x+\mu _{1}\right) \left[ \sqrt{\mu }\Gamma \left( 1-x\right) -\sqrt{\mu _{1}}\gamma \left( x+y-1-\mu \right) \right] }{\left[ \left(
x+y-1-\mu \right) ^{2}+\mu \Gamma ^{2}\right] \left[ \left(
1-x\right) ^{2}+\mu _{1}\gamma ^{2}\right] }dxdy, \label{j10}$$
$$J_{11}=\sqrt{\frac{\mu }{\mu _{1}}}\int \int_{R_{xy}} \frac{\left( x+y+\mu _{1}-1\right) \left( 1-y-\mu _{1}\right) \left[
\left( x+y-1-\mu \right) \left( 1-x\right) +\sqrt{\mu \mu
_{1}}\gamma \Gamma \right] }{\left[ \left( x+y-1-\mu \right)
^{2}+\mu \Gamma ^{2}\right] \left[ \left( 1-x\right) ^{2}+\mu
_{1}\gamma ^{2}\right] }dxdy, \label{j11}$$
$$J_{12}=\sqrt{\frac{\mu }{\mu _{1}}}\int \int_{R_{xy}} \frac{\left( x+y+\mu _{1}-1\right) \left( 1-y-\mu _{1}\right) \left[ \sqrt{\mu }\Gamma \left( 1-x\right) -\sqrt{\mu _{1}}\gamma \left( x+y-1-\mu \right) \right] }{\left[ \left( x+y-1-\mu \right) ^{2}+\mu \Gamma
^{2}\right] \left[ \left( 1-x\right) ^{2}+\mu _{1}\gamma ^{2}\right]
}dxdy. \label{j12}$$
Plots of the $J_i$, $i=1,\ldots,12$, are shown in Figs. \[j1to6\] and \[j7to12\].
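For orientation, any of the $J_i$ can be evaluated with a simple midpoint rule over $R_{xy}$; the sketch below does this for $J_7$, whose integrand does not depend on $y$, so the inner integral reduces to the $y$-width of the region. Here $m_h = 125$ GeV, $m_W = 80.4$ GeV and $\Gamma_W = 2.14$ GeV are illustrative inputs (only $\mu_1$ and $\gamma$ enter):

```python
import math

mh, mW, GammaW = 125.0, 80.4, 2.14   # illustrative inputs (GeV)
mu1 = (mW / mh) ** 2
gam2 = (GammaW / mh) ** 2            # gamma^2 = Gamma_W^2 / m_h^2

def J7(n=2000):
    """Midpoint-rule evaluation of J7 over the region R_xy."""
    xlo, xhi = 2.0 * math.sqrt(mu1), 1.0 + mu1
    hx = (xhi - xlo) / n
    total = 0.0
    for i in range(n):
        x = xlo + (i + 0.5) * hx
        # y-extent of R_xy at fixed x (the J7 integrand is y-independent)
        wy = math.sqrt(max(x * x - 4.0 * mu1, 0.0))
        total += (1.0 - x + mu1) / ((1.0 - x) ** 2 + mu1 * gam2) * wy * hx
    return mu1 * total
```

The same pattern (with the $y$ integral done explicitly) applies to the $J_i$ whose integrands depend on $y$.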
J. F. Gunion, H. E. Haber, G. L. Kane and S. Dawson
J. Erler, Phys. Rev. D [**81**]{}, 051301 (2010) arXiv:1002.1320 \[hep-ph\].
H. Flacher, M. Goebel, J. Haller, A. Hocker, K. Moenig and J. Stelzer, Eur. Phys. J. C [**60**]{}, 543 (2009) \[arXiv:0811.0009 \[hep-ph\]\].
J. L. Diaz-Cruz and D. A. Lopez-Falcon, Phys. Lett. B [**568**]{}, 245 (2003) \[arXiv:hep-ph/0304212\]. U. Baur,eConfC010630:P1WG1,2001. hep-ph/0202001
M. S. Carena and H. E. Haber, Prog. Part. Nucl. Phys. [**50**]{}, 63 (2003) \[arXiv:hep-ph/0208209\]. P. Nath [*et al.*]{}, Nucl. Phys. Proc. Suppl. [**200-202**]{}, 185 (2010) \[arXiv:1001.2693 \[hep-ph\]\]. M. Bustamante, L. Cieri and J. Ellis, arXiv:0911.4409 \[hep-ph\]. J. Ellis, Int. J. Mod. Phys. A [**25**]{}, 2409 (2010) \[arXiv:1004.0648 \[hep-ph\]\]. See, for instance, recent reviews in “Perspectives on Supersymmetry”, ed. G.L. Kane, World Scientific Publishing Co., 1998; H.E. Haber, Nucl. Phys. Proc. Suppl. [**101**]{}, 217 (2001), \[hep-ph/0103095.\]
H. E. Haber, Nucl. Phys. Proc. Suppl. **101** 217 (2001), hep-ph/0103095.
N. Arkani-Hamed, A. Cohen and H. Georgi, Phys. Lett. B[**513**]{} (2001) 232 \[arXiv:hep-ph/0105239\]. A. Aranda, J. L. Diaz-Cruz, J. Hernandez-Sanchez and R. Noriega-Papaqui, Phys. Lett. B [**658**]{}, 57 (2007) \[arXiv:0708.3821 \[hep-ph\]\]. A. Aranda, C. Balazs and J. L. Diaz-Cruz, Nucl. Phys. B [**670**]{}, 90 (2003) \[arXiv:hep-ph/0212133\]. W. F. Chang, J. N. Ng and A. P. Spray, arXiv:1004.2953 \[hep-ph\]. E. O. Iltan, Eur. Phys. J. C [**51**]{}, 689 (2007) \[arXiv:hep-ph/0511241\]. H. E. Haber, G. L. Kane and T. Sterling, Nucl. Phys. B [**161**]{}, 493 (1979). J. Liu and L. Wolfenstein, Nucl. Phys. B [**289**]{}, 1 (1987). Y. L. Wu and L. Wolfenstein, Phys. Rev. Lett. [**73**]{}, 1762 (1994) \[arXiv:hep-ph/9409421\]. M. Carena and others, Report of the Tevatron Higgs working group (2000), hep-ph/0010338.
I. F. Ginzburg, M. Krawczyk and P. Osland, arXiv:hep-ph/0211371. I. F. Ginzburg and M. Krawczyk, Phys. Rev. D [**72**]{}, 115013 (2005) \[arXiv:hep-ph/0408011\]. E. Accomando [*et al.*]{}, arXiv:hep-ph/0608079. S. L. Glashow and S. Weinberg, Phys. Rev. D [**15**]{}, 1958 (1977). J. L. Diaz-Cruz, R. Noriega-Papaqui and A. Rosado, Phys. Rev. D [**69**]{}, 095002 (2004) \[arXiv:hep-ph/0401194\]. J. L. Diaz-Cruz, Phys. Rev. Lett. [**100**]{}, 221802 (2008) \[arXiv:0711.0488 \[hep-ph\]\].
D. Atwood, S. Bar-Shalom and A. Soni, Phys. Lett. B [**635**]{}, 112 (2006) \[arXiv:hep-ph/0502234\]. J. L. Diaz-Cruz, Mod. Phys. Lett. A [**20**]{}, 2397 (2005) \[arXiv:hep-ph/0409216\]. A. Aranda, J. L. Diaz-Cruz and A. Rosado, Int. J. Mod. Phys. A [**22**]{}, 1417 (2007) \[arXiv:hep-ph/0507230\]. J. L. Diaz-Cruz and A. Rosado, Rev. Mex. Fis. [**53**]{}, 396 (2007) \[arXiv:hep-ph/0610167\]. R. Barbieri and L. J. Hall, arXiv:hep-ph/0510243. C. D. Froggatt, R. Nevzorov, H. B. Nielsen and D. Thompson, Phys. Lett. B [**657**]{}, 95 (2007) \[arXiv:0708.2903 \[hep-ph\]\]. I. F. Ginzburg, Acta Phys. Polon. B [**37**]{}, 1161 (2006) \[arXiv:hep-ph/0512102\]. M. Maniatis, A. von Manteuffel and O. Nachtmann, Eur. Phys. J. C [**57**]{}, 719 (2008) \[arXiv:0707.3344 \[hep-ph\]\]. J. M. Gerard and M. Herquet, Phys. Rev. Lett. [**98**]{}, 251802 (2007) \[arXiv:hep-ph/0703051\]. A. W. El Kaffas, P. Osland and O. M. Ogreid, Nonlin. Phenom. Complex Syst. [**10**]{}, 347 (2007) \[arXiv:hep-ph/0702097\]. H. Fritzsch, Phys. Lett. **B**70 (1977) 436.
T. P. Cheng and M. Sher, Phys. Rev. D [**35**]{}, 3484 (1987). A. E. Carcamo Hernandez, R. Martinez and J. A. Rodriguez, Eur. Phys. J. C [**50**]{}, 935 (2007) \[arXiv:hep-ph/0606190\]. Y. F. Zhou, J. Phys. G [**30**]{}, 783 (2004) \[arXiv:hep-ph/0307240\]. M. Aoki, S. Kanemura, K. Tsumura and K. Yagyu, Phys. Rev. D [**80**]{}, 015017 (2009) \[arXiv:0902.4665 \[hep-ph\]\].
H. E. Logan and D. MacLennan, Phys. Rev. D [**81**]{}, 075016 (2010) \[arXiv:1002.4916 \[hep-ph\]\]. H. E. Logan and D. MacLennan, Phys. Rev. D [**79**]{}, 115022 (2009) \[arXiv:0903.2246 \[hep-ph\]\]. J. F. Gunion and H. E. Haber, Phys. Rev. D [**72**]{}, 095002 (2005) \[arXiv:hep-ph/0506227\]. J. L. Diaz-Cruz and A. Mendez, Nucl. Phys. B [**380**]{}, 39 (1992). T.D. Lee, Phys. Rev. D8, 1226 (1973).
L. J. Hall and M. B. Wise, Nucl. Phys. **B**187, 397, (1981).
J.F. Donoghue and L. F. Li, Phys. Rev. **D**19, 945 (1979).
P. Osland, P. N. Pandita and L. Selbuz, Phys. Rev. D [**78**]{}, 015003 (2008) \[arXiv:0802.0060 \[hep-ph\]\]. J. L. Diaz-Cruz, JHEP [**0305**]{}, 036 (2003) \[arXiv:hep-ph/0207030\]. C. D. Frogatt and H. B. Nielsen, Nucl. Phys. B[**147**]{}, 277 (1979).
J. L. Diaz-Cruz and G. Lopez Castro, Phys. Lett. B [**301**]{}, 405 (1993); J. L. Diaz-Cruz, R. Noriega-Papaqui and A. Rosado, Phys. Rev. D [**71**]{}, 015014 (2005) \[arXiv:hep-ph/0410391\]; A. Aranda, J. Lorenzo Diaz-Cruz, J. Hernandez-Sanchez and E. Ma, Phys. Rev. D [**81**]{}, 075010 (2010) \[arXiv:1001.4057 \[hep-ph\]\]; J. L. Diaz-Cruz and J. J. Toscano, Phys. Rev. D [**62**]{}, 116005 (2000) \[arXiv:hep-ph/9910233\]; U. Cotti, J. L. Diaz-Cruz, R. Gaitan, H. Gonzales and A. Hernandez-Galeana, Phys. Rev. D [**66**]{}, 015004 (2002) \[arXiv:hep-ph/0205170\]; J. L. Diaz-Cruz, D. K. Ghosh and S. Moretti, Phys. Lett. B [**679**]{}, 376 (2009) \[arXiv:0809.5158 \[hep-ph\]\]; J. L. Diaz-Cruz, J. Hernandez–Sanchez, S. Moretti, R. Noriega-Papaqui and A. Rosado, Phys. Rev. D [**79**]{}, 095025 (2009) \[arXiv:0902.4490 \[hep-ph\]\]; C. Balazs, J. L. Diaz-Cruz, H. J. He, T. M. P. Tait and C. P. Yuan, Phys. Rev. D [**59**]{}, 055016 (1999) \[arXiv:hep-ph/9807349\]; J. L. Diaz-Cruz and O. A. Sampayo, Int. J. Mod. Phys. A [**8**]{}, 4339 (1993); J. L. Diaz-Cruz and O. A. Sampayo, Phys. Rev. D [**50**]{}, 6820 (1994).
A. Arhrib, M. Capdequi Peyranere, W. Hollik and S. Penaranda, arXiv:hep-ph/0307391; L. Randall, JHEP [**0802**]{}, 084 (2008) \[arXiv:0711.4360 \[hep-ph\]\]; S. Bejar, J. Guasch and J. Sola, Nucl. Phys. B [**675**]{}, 270 (2003) \[arXiv:hep-ph/0307144\]; A. Arhrib, Phys. Rev. D [**72**]{}, 075016 (2005) \[arXiv:hep-ph/0510107\]. A. Pich and P. Tuzon, Phys. Rev. **D**80 091702 (2009), hep-ph/09081554.
M. Jung, A. Pich and P. Tuzon, arXiv:1006.0470 \[hep-ph\].
C. B. Braeuninger, A. Ibarra and C. Simonetto, Phys. Lett. **B**692 189 (2010), arXiv:1005.5706 \[hep-ph\].
N. Barros e Sa, A. Barroso, P. Ferreira and R. Santos, PoS C [**HARGED2008**]{}, 014 (2008) \[arXiv:0906.5453 \[hep-ph\]\]; A. Barroso, P. M. Ferreira and R. Santos, Phys. Lett. B [**652**]{}, 181 (2007) \[arXiv:hep-ph/0702098\]; A. Barroso, P. M. Ferreira and R. Santos, Afr. J. Math. Phys. [**3**]{}, 103 (2006) \[arXiv:hep-ph/0507329\]; A. Barroso, P. M. Ferreira and R. Santos, Phys. Lett. B [**632**]{}, 684 (2006) \[arXiv:hep-ph/0507224\].
I. P. Ivanov, Phys. Rev. D [**75**]{}, 035001 (2007) \[Erratum-ibid. D [**76**]{}, 039902 (2007)\] \[arXiv:hep-ph/0609018\].
M. Maniatis, A. von Manteuffel, O. Nachtmann and F. Nagel, Eur. Phys. J. C [**48**]{}, 805 (2006) \[arXiv:hep-ph/0605184\]. E. Ma and M. Maniatis, arXiv:1005.3305 \[hep-ph\]. H. E. Haber and D. O’Neil, Phys. Rev. D [**74**]{}, 015018 (2006) \[arXiv:hep-ph/0602242\]. K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010)
J. L. Díaz-Cruz, J. H. Montes de Oca Y., work in progress.
[^1]: Here we shall follow closely the notation of Haber and O’Neil [@Haber:2006ue].
---
abstract: '[We report on the generation of continuous-wave squeezed vacuum states of light at the telecommunication wavelength of 1550nm. The squeezed vacuum states were produced by type I optical parametric amplification (OPA) in a standing-wave cavity built around a periodically poled potassium titanyl phosphate (PPKTP) crystal. A non-classical noise reduction of 5.3dB below the shot noise was observed by means of balanced homodyne detection.]{}'
author:
- Moritz Mehmet
- Sebastian Steinlechner
- Tobias Eberle
- Henning Vahlbruch
- André Thüring
- Karsten Danzmann
- Roman Schnabel
title: 'Observation of continuous-wave squeezed light at 1550nm'
---
Squeezed states of light were proposed to improve the sensitivity of laser interferometers for the detection of gravitational waves (GW) [@Cav81], and to establish quantum communication channels [@YHa86], e.g. for quantum key distribution [@Ralph99; @Hillery00]. For any application of squeezed states of light, a low decoherence level is required, i.e. optical loss and thermally driven noise sources need to be minimized. In this respect the laser wavelength of 1550nm has emerged as a very interesting option. Firstly, at this wavelength conventional silica-based telecom glass fibers show low optical loss and can be used for the transmission of squeezed states. Losses as low as 0.2dB/km were already measured in the late 1970s [@Miya79], and ultra-low-loss (ULL) fibers with an attenuation of 0.17-0.18dB/km are commercially available today [@Li08]. Secondly, at this wavelength, crystalline silicon constitutes an excellent test mass material for interferometric applications with low optical loss and high mechanical quality [@McGuigan].
GW detectors require the generation of squeezed states in a single spatio-temporal mode of continuous-wave light, whereas quantum channels can also be established in the pulsed laser regime. In the past years, squeezed states at wavelengths beyond 1.5$\mu$m were mainly generated in the latter regime. Noise powers of 6.8dB below vacuum noise at 1.5$\mu$m[@Dong08], 3.2dB at 1.535$\mu$m[@ETZH07], and 1.7 dB at 1.55$\mu$m[@NSHMYG02] were observed. Very recently, continuous-wave squeezed vacuum states at 1560nm were generated by an optical parametric oscillator based on periodically poled LiNbO$_3$ (PPLN), and a nonclassical noise suppression of 2.3dB was observed [@Feng08].
Here, we report on the generation of continuous-wave squeezed vacuum states at a wavelength of 1550nm based on periodically poled potassium titanyl phosphate (PPKTP). Squeezing of 5.3dB was observed by balanced homodyne detection. The visibility of the mode-matching between the squeezed field and a spatially filtered local oscillator beam was measured to be 99%, thereby proving high spatial mode quality of the squeezed states.
The light source in our setup, as schematically depicted in Fig. \[setup\], was a high-power erbium micro-fiber laser providing about 1.6W of continuous-wave radiation at 1550nm. The laser beam was first sent through a ring mode cleaner (MC) cavity with a finesse of 350 and a line width of 1.2MHz for p-polarized light. This reduced mode distortions of the laser’s TEM$_{00}$ spatial mode profile, as well as its phase and amplitude fluctuations at frequencies above the MC linewidth.
![Schematic of the setup. After being sent through a mode cleaner (MC) cavity, one part of the light is used as a control beam for the OPA and the local oscillator for balanced homodyne detection. The other part is frequency doubled in a SHG cavity to provide the 775nm field to pump the OPA. The squeezed field leaves the OPA in the counter direction to the pump, and is measured with the homodyne detector. PBS: polarizing beam splitter; DBS: dichroic beam splitter; HBS: homodyne beam splitter; MC: mode cleaner cavity; PD: photo diode; EOM: electro-optical modulator.[]{data-label="setup"}](Fig1.eps)
Approximately 10mW of the transmitted light served as a local oscillator (LO) for balanced homodyne detection, while the remaining power of about 1W was used for second harmonic generation (SHG) to provide the frequency-doubled pump field for the OPA. Both SHG and OPA were realized as single-ended standing-wave cavities formed by two mirrors and the non-linear crystal in between. In both cavities we employed a PPKTP crystal of dimension $10\times$2$\times$1mm$^3$ with flat, anti-reflection (AR) coated front and end faces. Inside a polyoxymethylene (POM) housing, each crystal was embedded in a copper fixture mounted on a Peltier element. Together with an integrated thermistor, this enabled us to actively fine-tune the crystal temperature for efficient nonlinear coupling. A highly reflective (HR) mirror with a power reflectivity r$>$99.98% for both the fundamental and the second-harmonic field faced one AR side of the crystal, and a piezo-driven out-coupling mirror was mounted on the opposite side. The OPA out-coupling mirror had 90% and 20% power reflectivity for 1550nm and 775nm, respectively. For the SHG we also used 90% reflectivity for the fundamental but only a marginal reflectivity for the second harmonic. The mirrors and the ring-piezo were mounted inside aluminum blocks that were rigidly attached to the POM housing. Considering the refractive index of $n$=1.816 for PPKTP at 1550nm and the spacing of 20mm between crystal end faces and mirrors, the cavity waist size $w_0$, free spectral range FSR, and line width (FWHM) were calculated to be $w_0$=60$\mu$m, FSR=2.6GHz, and FWHM=43MHz, respectively. When the SHG cavity was locked on resonance it produced up to 800mW at 775nm, which was separated from the fundamental by a dichroic beam splitter (DBS).
The harmonic beam passed a combination of a half waveplate and a polarizing beam splitter for pump power adjustment, a Faraday isolator to prevent the SHG from retro-reflected light, and an electro-optical modulator (EOM), and was mode matched to the TEM$_{00}$-mode of another MC cavity (MC$_{775}$) with characteristics equal to those of MC$_{1550}$. The transmitted beam was then carefully aligned to match the OPA-cavity TEM$_{00}$ mode. The length control of the cavities in our setup was accomplished by means of a modulation/demodulation (Pound-Drever-Hall, PDH) scheme utilizing custom-made EOMs and matched photo detectors. Details on the particular implementation can be found in Fig. \[setup\]. The squeezed states left the OPA in the counter direction to the second-harmonic pump, where another DBS separated the two of them. The measurement of the field quadrature variances was accomplished by means of balanced homodyne detection, for which the squeezed field was made to interfere with the LO on a 50/50-beam splitter. A piezo-actuated steering mirror was employed to shift the LO phase relative to the squeezed field. To adjust the visibility we injected a control beam through the HR back side of the OPA. This control beam was matched to the OPA TEM$_{00}$ mode. The transmitted light propagated congruently with the mode to be squeezed and, by locking the OPA cavity length, could be used to overlap with the LO on the homodyne beam splitter (HBS). We reached a fringe visibility of 99.0%. The two outputs of the 50/50-beam splitter were each focused down and detected by a pair of Epitaxx ETX-500 photodiodes. The difference current was fed to a spectrum analyzer.
To verify our detector’s linearity we took measurements of the vacuum noise power against the incident LO power at a sideband frequency of 5MHz, as depicted in Fig. \[fig1\]. Changing the LO power by a factor of two entailed a 3dB shift of the corresponding noise trace, showing that the detector was quantum noise limited and operated linearly in the measurement regime.
![Noise power levels of the homodyne detector were measured at different LO powers at a centre frequency of 5MHz with the signal port blocked. Box sizes indicate the standard deviation of the fit and an estimated $\pm$5% uncertainty of the power meter used. The graph shows that our homodyne detector was quantum noise limited and operated linearly within our measurement regime.[]{data-label="fig1"}](Fig2.eps)
We found the optimum pump power for our OPA to be 300mW, yielding a noise reduction of 5.3dB in the squeezed quadrature. This entailed an increase of 9.8dB in the anti-squeezed quadrature. To switch between the two, a piezo-actuated mirror was used to phase shift the LO with respect to the squeezed field. The measured noise curves are depicted in Fig. \[fig2\]. Trace (a) is the measured vacuum noise when the signal port of the HBS is blocked. The associated power of the incident LO was approximately 4mW. Upon opening the signal port and injecting the squeezed field of the resonant OPA, trace (d) was recorded by linearly sweeping the LO-phase, thereby changing the measured quadrature from anti-squeezed to squeezed values. By holding the homodyne angle fixed, continuous traces of the squeezing (b) and anti-squeezing (c) were recorded. All traces were recorded at a sideband frequency of 5MHz and are, apart from (d), averaged twice. The contribution of electronic dark noise of our detector was negligible (18dB below the shot noise) and was not subtracted from the measured data.
![Noise powers of the squeezed light emitted by the OPA at a sideband frequency of 5MHz normalized to the shot-noise level (trace (a)). All traces were recorded with a resolution bandwidth of 300kHz and a video bandwidth of 300Hz. Squeezing (b) and anti-squeezing (c) curves were averaged twice. Curve (d) was recorded by linearly sweeping the LO-phase which continuously rotated the measured quadrature from anti-squeezing to squeezing.[]{data-label="fig2"}](Fig3.eps)
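A minimal model of a swept-phase trace like (d): for a phase-noise-free squeezed state, the homodyne variance at LO phase $\theta$, normalized to shot noise, is $V(\theta)=V_{\rm sq}\cos^2\theta+V_{\rm asq}\sin^2\theta$. The sketch below uses the observed $-5.3$dB and $+9.8$dB values as inputs:

```python
import math

# Observed quadrature variances, relative to shot noise (= 1).
v_sq  = 10.0 ** (-5.3 / 10.0)   # squeezed quadrature, -5.3 dB
v_asq = 10.0 ** ( 9.8 / 10.0)   # anti-squeezed quadrature, +9.8 dB

def noise_db(theta):
    """Noise power (dB relative to shot noise) at LO phase theta (radians)."""
    v = v_sq * math.cos(theta) ** 2 + v_asq * math.sin(theta) ** 2
    return 10.0 * math.log10(v)
```

Sweeping `theta` linearly reproduces the continuous rotation between anti-squeezed and squeezed values seen in trace (d).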
The observed squeezed noise power was 5.3dB below shot noise; however, the observed anti-squeezing was about 10dB above shot noise, revealing an uncertainty product of about a factor of three above the minimum uncertainty. With an increased pump power we observed further increased anti-squeezing, but a constant squeezing level. Following the argumentation in [@Vahlb08] this observation implies that our measurement was not limited by phase noise [@Taken07; @Franzen06] but by optical losses. With 0.25% residual reflectance of our crystal AR coatings and 0.1%/cm absorption loss within the crystal we estimate the escape efficiency of the OPA cavity to be 90%. Together with a propagation loss of approximately 3%, we estimate the quantum efficiency of our photo detectors to be 90%$\pm4$%. We therefore expect that higher levels of squeezing from PPKTP could be observed in the future by utilizing better photo diodes and an OPA optimized for better escape efficiency. We note that PPKTP has already been successfully applied for the generation of squeezed and entangled states at wavelengths between 532nm and 1064nm [@Hetet07; @Aoki06; @Taken07; @Goda08NatPhys; @Gross08], with the maximum squeezing strength of 9dB observed at 860nm in [@Taken07]. The strongest squeezing to date was reported in [@Vahlb08], where a MgO:LiNbO$_{3}$ crystal enabled the observation of a noise reduction of 10dB below shot noise at 1064nm. However, at 1550nm the phase-matching temperature of this material is uncomfortably high, and temperature gradients would significantly complicate the stable operation of a squeezed light source. This makes PPKTP the preferable material for the generation of squeezed light at 1550nm.
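The quoted loss figures can be folded into a simple budget: with total detection efficiency $\eta$, an input variance $V$ is observed as $V_{\rm obs}=\eta V+(1-\eta)$, so the squeezing before losses can be inferred from the measured values. The sketch below is only a consistency check using the numbers given in the text:

```python
import math

# Quoted efficiencies: OPA escape 90%, propagation 97%, photodiode QE 90%.
eta = 0.90 * 0.97 * 0.90

def db_to_var(db):
    return 10.0 ** (db / 10.0)

def var_to_db(v):
    return 10.0 * math.log10(v)

# Invert V_obs = eta * V + (1 - eta) for both observed quadratures.
v_sq_in  = (db_to_var(-5.3) - (1.0 - eta)) / eta   # inferred squeezed variance
v_asq_in = (db_to_var(+9.8) - (1.0 - eta)) / eta   # inferred anti-squeezed variance
```

With these inputs $\eta \approx 0.79$, and the inferred squeezing before losses comes out near $-10$dB, comparable to the observed anti-squeezing, with an uncertainty product slightly above the minimum.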
In conclusion, we have demonstrated strong squeezing at the telecommunication wavelength of 1550nm. Our experiment proved that PPKTP is an effective material for the generation of squeezed states at this wavelength. The spatio-temporal mode of the squeezed field had a high purity, ensuring compatibility with quantum memories and quantum repeaters. By implementing a control scheme according to [@Vahlb06], squeezing in the detection band of current GW detectors can be realized. These detectors are operated at 1064nm [@Goda08NatPhys]; however, future detector designs might consider silicon as the test mass material and a laser wavelength of 1550nm in order to reduce the thermal noise floor.
The authors thank the German Research Foundation and the Centre for Quantum Engineering and Space-Time Research QUEST for financial support.
[99]{}
C. M. Caves, “Quantum-mechanical noise in an interferometer”, Phys. Rev. D [**23**]{}, 1693 (1981).
Y. Yamamoto and H. A. Haus, “Preparation, measurement and information capacity of optical quantum states”, Rev. Mod. Phys. [**58**]{}, 1001 (1986).
T. C. Ralph, “Continuous variable quantum cryptography”, Phys. Rev. A [**61**]{}, 010303 (1999).
M. Hillery, “Quantum cryptography with squeezed states”, Phys. Rev. A [**61**]{}, 022309 (2000).
T. Miya, Y. Terunuma, T. Hosaka, and T. Miyashita, “Ultimate low-loss single-mode fiber at 1.55$\mu$m”, Electron. Lett. [**15**]{}, 106-108 (1979).
M. Li and D. A. Nolan, “Optical Transmission Fiber Design Evolution”, Journal of Lightwave Technology [**26**]{}, 1079 (2008).
D. F. McGuigan, C. C. Lam, R. Q. Gram, A. W. Hoffman, D. H. Douglass, and H. W. Gutche, “Measurements of the Mechanical Q of Single-Crystal Silicon at Low Temperatures”, J. Low Temp. Phys. [**30**]{}, 621 (1978).
R. Dong, J. Heersink, J. Corney, P. Drummond, U. Andersen, and G. Leuchs, “Experimental evidence for Raman-induced limits to efficient squeezing in optical fibers”, Opt. Lett. [**33**]{}, 116-118 (2008).
Y. Eto, T. Tajima, Y. Zhang, and T. Hirano, “Observation of squeezed light at 1.535$\mu$m using a pulsed homodyne detector”, Opt. Lett. [**32**]{}, 1698 (2007).
N. Nishizawa, K. Sone, J. Higuchi, M. Mori, K. Yamane, and T. Goto, “Squeezed Vacuum Generation Using Symmetric Nonlinear Polarization Interferometer”, Jpn. J. Appl. Phys. [**41**]{}, L130 (2002).
J. Feng, X. Tian, Y. Li, and K. Zhang, “Generation of a squeezing vacuum at a telecommunication wavelength with periodically poled LiNbO$_3$”, Appl. Phys. Lett. [**92**]{}, 221102 (2008).
H. Vahlbruch, M. Mehmet, S. Chelkowski, B. Hage, A. Franzen, N. Lastzka, S. Go[ß]{}ler, K. Danzmann, and R. Schnabel, “Observation of Squeezed Light with 10-dB Quantum-Noise Reduction”, Phys. Rev. Lett. [**100**]{}, 033602 (2008).
A. Franzen, B. Hage, J. DiGuglielmo, J. Fiurášek, and R. Schnabel, “Experimental Demonstration of Continuous Variable Purification of Squeezed States”, Phys. Rev. Lett. [**97**]{}, 150505 (2006).
Y. Takeno, M. Yukawa, H. Yonezawa, and A. Furusawa, “Observation of -9 dB quadrature squeezing with improvement of phase stability in homodyne measurement”, Opt. Express [**15**]{}, 4321-4327 (2007).
G. Hétet, O. Glöckl, K. A. Pilypas, C. C. Harb, B. C. Buchler, H.-A. Bachor, and P. K. Lam, “Squeezed light for bandwidth-limited atom optics experiments at the rubidium D1 line”, J. Phys. B [**40**]{}, 221-226 (2007).
T. Aoki, G. Takahashi, and A. Furusawa, “Squeezing at 946nm with periodically poled KTiOPO$_4$”, Opt. Express [**14**]{}, 6930-6935 (2006).
K. Goda, O. Miyakawa, E. E. Mikhailov, S. Saraf, R. Adhikari, K. McKenzie, R. Ward, S. Vass, A. J. Weinstein, and N. Mavalvala, “A quantum-enhanced prototype gravitational-wave detector”, Nat. Phys. [**4**]{}, 472-476 (2008).
N. Grosse, S. Assad, M. Mehmet, R. Schnabel, T. Symul, and P. K. Lam, “Observation of Entanglement between Two Light Beams Spanning an Octave in Optical Frequency”, Phys. Rev. Lett. [**100**]{}, 243601 (2008).
H. Vahlbruch, S. Chelkowski, B. Hage, A. Franzen, K. Danzmann, and R. Schnabel, “Coherent Control of Vacuum Squeezing in the Gravitational-Wave Detection Band”, Phys. Rev. Lett. [**97**]{}, 011101 (2006).
---
abstract: 'Since the identification of these stars by Morgan et al. in 1943, various definitions have been proposed for the stars of the Lambda Bootis group. We present here the various definitions which have been given to these objects in order to induce a general discussion on this topic.'
author:
- 'R. Faraggiana'
- 'M. Gerbaldi'
title: Definitions of the Lambda Boo stars
---
Introduction {#intr}
============
The criteria to detect this class of peculiar A-type stars rely upon the choices made by various authors in the last 50 years. Therefore several definitions of lambda Boo stars are found in the literature.
Both photometric and spectroscopic criteria have been used. Some of the definitions proposed so far concern only stars of spectral type near A0, while others include A and F stars; no restriction appears on the luminosity class, and therefore on the evolutionary stage, of the lambda Boo stars.
The common character of these stars, according to the various definitions, is the weakness of the metallic lines; however, requirements such as high $v\sin i$ and deficiency of specific elements are introduced by some, but not all, authors; the same remark applies to their kinematic properties.
We shall present below the criteria used by various authors in order to understand the differences between the different lists of such stars published up to now, and to open a discussion for the future.
The discovery of the Lambda Boo Stars
=====================================
[**... a definition based on Spectral Classification criteria ...**]{}
Morgan et al. (1943) gave in fact implicitly the first definition of this group describing the peculiarities of Lambda Boo itself.
“The spectral type of Lambda Boo is near A0, as far as it can be determined. The spectral lines, while not unusually broad, are very weak, so that the only features easily visible are a weak K line and the Balmer series of hydrogen”. However, these authors did not define a group of stars; this was done later, from spectroscopy, by Burbidge and Burbidge (1956). Stars similar to Lambda Boo were occasionally discovered later by Slettebak (1952, 1954).
[**... and the first abundance analysis ...**]{}
Burbidge and Burbidge (1956) have analyzed Lambda Boo and 29 Cyg and they found a metal deficiency for the elements Mg, Ca, Fe, Sc, Ti and Sr. Later on, Baschek and Searle suggested that the oxygen abundance should be normal (quoted in the Annual Report of the Director, Mount Wilson and Palomar Observatories, 1962-63 page 12). This was shown by Kodaira (1967) with infra-red spectra.
Very soon photometry was used to detect Lambda Boo stars (Parenago 1958).
In 1965 Sargent (1965) showed that these stars can be distinguished from other weak lined stars such as horizontal branch stars by the fact that their space velocities are those of Population I stars and that they have moderately large rotational velocities.
In 1968, Slettebak et al. introduced the first spectroscopic definition of the lambda Boo class: “These objects are defined spectroscopically as A-type stars (as classified from the Ca[ii]{} K line to Balmer line ratio) with weakened metallic lines. They may be distinguished from other stars with the same characteristics (such as horizontal branch stars) by the fact that all show moderately large rotational velocities and small space velocities”.
From an abundance analysis of 5 so-defined Lambda Boo stars, Baschek and Searle (1969) found that only 3 of them (Lambda Boo, 29 Cyg and $\pi^1$ Ori) form a group from the composition point of view; these authors suggested that these stars constitute a type of peculiar A stars.
We recall that at that time, a list of 7 lambda Boo stars was available (see for example Sargent, 1965), detected by spectroscopy or photometry. Occasionally a star with low abundances was discovered and often related to the lambda Bootis stars (see for example ADS 3910B in Sargent, 1966).
So from the beginning some confusion exists in the literature about which objects should be considered as “lambda Boo stars”. It is clear that insufficient credit has been given to the true definition of this class found in Baschek & Searle (1969); “...we define the Lambda Boo stars as stars whose composition resembles that of Lambda Boo itself...”.
The eighties
============
The lambda Boo stars were forgotten until the 1980s, when both photometric and spectroscopic research underwent a revival of interest, starting with the paper by Hauck and Slettebak (1983) in which the spectroscopic definition was expanded to the A-F stars. Since then, different candidates have been selected by different criteria, either photometric (for example Hauck 1986) or spectroscopic (for example Abt 1984a).
At that time it became clear that the peculiarities of the Lambda Boo spectrum in the UV were easily detectable (Cucchiaro et al. 1980) even at the low TD1 resolution. Baschek et al. (1984) pointed out that characteristic features of the lambda Boo stars can be easily seen in low resolution IUE spectra. Faraggiana et al. (1990) extended this research and defined the UV criteria useful to detect the lambda Boo stars in the range 120-200 nm.
An extensive spectroscopic classification
=========================================
An extensive spectroscopic classification has been made by Gray (1988) in the classical wavelength range. Gray (1991) described in detail the spectroscopic peculiarities detectable in the photographic domain at moderate resolution, providing a precise working definition of the lambda Boo stars: “Spectra of these stars are characterized by a weak Mg[ii]{} 4481 line, a K line type of A0 or slightly later and a hydrogen-line type between A0 and F0. For their hydrogen line type, the metallic-line spectrum is weak”. Moreover, their space velocities are those of Population I stars, and their rotational velocities are moderately high. The shape of the hydrogen-line profiles, peculiar in some lambda Boo stars, is introduced as a further criterion to separate two classes of these objects.
In 1990, Renson et al. collected all the stars that have been called lambda Boo or candidate lambda Boo at least once in the literature, as well as objects called “weak-line” stars that may be lambda Boo candidates. This catalogue contains 101 stars.
The confusing situation is well illustrated by the two lists of lambda Boo candidates extracted from the same sample of stars (the Bright Star Catalogue) and based on similar classification by Abt (1984a), Abt and Morrell (1995) and by Gray (1988). Few stars are in common between these authors and some stars classified as Lambda Boo by Abt are considered normal by Gray and Garrison (1987).
On the basis of the selection by Gray (1988 and 1991) and the UV criteria of Faraggiana et al. (1990), a list of stars fulfilling the visible and/or UV properties of lambda Boo itself has been established by Gerbaldi and Faraggiana (1993). We consider all these stars to be reliable Lambda Boo candidates on the basis of the fact that all the stars classified as Lambda Boo by Gray and observed by IUE belong to the same class according to the UV criteria, and, conversely, not one star among those rejected on the basis of the UV criteria appears as lambda Boo in Gray’s list.
The nineties
============
The abundance determination by Venn and Lambert (1990), their interpretation of the abundance pattern of the lambda Boo stars, and the IR excess detected by IRAS around some lambda Boo stars (Sadakane and Nishida 1986) were the starting points for observations in new directions:
- detection of lambda Boo stars in young clusters by Gray and Corbally (1993) and by Levato et al.(1994).
- detection of gas or dust shells around the Lambda Boo stars (Holweger & Rentzsch-Holm 1995; Grady et al. 1996; King & Patten 1992; Holweger & Stürenburg 1991; Bohlender & Walker 1994; King 1994; Hauck et al. 1995; Hauck et al. 1997)
- new abundance analysis (Stürenburg 1993)
- discussion on the position in the HR diagram (Gerbaldi et al. 1993; Iliev & Barzova 1995)
- observations of Lambda Boo candidates with asteroseismic techniques (Paunzen et al. 1997) and detection of pulsating objects among them.
Charbonneau (1993) and Turcotte and Charbonneau (1993) computed diffusion effects in the atmosphere of a lambda Boo star, giving time scales for the duration of this phenomenon in the context of material accreted onto the surface.
A new hypothesis on the origin of the Lambda Boo stars in the framework of the evolution of a close binary system has been recently developed by Andrievsky (1997).
The revival of “theoretical” interest in these stars prompted the search for new candidates and led Gray (1997) to reiterate and expand upon what he feels to be the best optical spectroscopic definition for the class of lambda Boo stars.
At the same time a new definition of the lambda Boo stars was given by Paunzen et al. (1997): “Pop I hydrogen burning metal poor (except for C, N, O and S) A-type stars”; a list of 45 stars is given in their catalogue.
Only 8 stars among the 26 proposed by Abt (1984a) are present in this catalogue. We notice also that Abt and Morrell (1995) classified 46 lambda Boo stars, but only 9 are in common with the catalogue by Paunzen et al. and for 9 other stars the classification by Paunzen et al. as lambda Boo stars is not shared by Abt and Morrell.
Moreover, in the framework of spectral analysis of binary stars, some authors have classified the nearly unseen component as a lambda Boo star (Griffin et al. 1992; Hack et al. 1997).
[**What will be the group of Lambda Boo stars in the next millennium ?**]{}
Abt, H.A.: 1984a, in: the [*MK Process and Stellar Classification*]{}, ed. R.F. Garrison, David Dunlap Obs. Toronto, p. 340
Abt, H.A.: 1984b, [*Astrophys. J.*]{}, [**285**]{}, 247
Abt, H.A., Morrell, N.: 1995, [*Astrophys. J. Suppl.*]{}, [**99**]{}, 135
Andrievsky, S.M.: 1997, [*Astron. Astrophys.*]{}, [**321**]{}, 838
Baschek, B. and Searle, L.: 1969, [*Astrophys. J.*]{}, [**155**]{}, 537
Baschek, B., Heck, A., Jaschek, C., Jaschek, M., Kopper, J., Scholtz, M., Wehrse, R.: 1984, [*Astron. Astrophys.*]{}, [**131**]{}, 378
Burbidge, R.M. and Burbidge, G.R.: 1956, [*Astrophys. J.*]{}, [**124**]{}, 116
Bohlender, D.A., Walker, G.A.H.: 1994, [*Mon. Not. R. Astron. Soc.*]{}, [**266**]{}, 891
Charbonneau, P.: 1993, [*Astrophys. J.*]{}, [**405**]{}, 720
Cucchiaro, A., Jaschek, M., Jaschek, C., Macau-Hercot, D.: 1980, [*Astron. Astrophys. Suppl.*]{}, [**40**]{}, 207
Faraggiana, R., Gerbaldi, M., Boehm, C.: 1990, [*Astron. Astrophys.*]{}, [**235**]{}, 311
Gerbaldi, M., Faraggiana, R.: 1993, [*ASP Conf. Series*]{}, [**44**]{}, 368
Gerbaldi, M., Zorec, J., Castelli, F., Faraggiana, R.: 1993, [*ASP Conf. Series*]{}, [**44**]{}, 413
Grady, C.A., McCollom, B., Ramley, L.A., England, M.N., Groebner, A., Schlegel, M.: 1996, [*Astrophys. J.*]{}, [**464**]{}, L183
Gray, R.O.: 1988, [*Astron. J.*]{}, [**95**]{}, 220
Gray, R.O.: 1991, in [*Precision Photometry*]{}, eds. A.G. Davis Philip, A.R. Upgren, K.A. Janes, L. Davis Press, p. 309
Gray, R.O.: 1997, [*Third Colloquium on Faint Blue Stars*]{}, to be published
Gray, R.O. and Garrison, R.F.: 1987, [*Astrophys. J. Suppl.*]{}, [**65**]{}, 581
Gray, R.O. and Corbally, C.J.: 1993, [*Astron. J.*]{}, [**106**]{}, 632
Griffin, R.E.M., Schroder, K.P., Mish, A., Griffin, R.F.: 1992, [*Astron. Astrophys.*]{}, [**254**]{}, 289
Hack, M., Polosukhina, N.S., Malanushenko, V.P., Castelli, F.: 1997, [*Astron. Astrophys.*]{}, [**319**]{}, 637
Hauck, B.: 1986, [*Astron. Astrophys.*]{}, [**154**]{}, 349
Hauck, B., Ballereau, D., Chauville, J.: 1995, [*Astron. Astrophys. Sup.*]{}, [**109**]{}, 505
Hauck, B., Ballereau, D., Chauville, J.: 1997, [*Astron. Astrophys. Sup.*]{}, to be published
Hauck, B. and Slettebak, A.: 1983, [*Astron. Astrophys.*]{}, [**127**]{}, 231
Holweger, H., Stürenburg, S.: 1991, [*Astron. Astrophys.*]{}, [**252**]{}, 255
Holweger, H. and Rentzsch-Holm, I.: 1995, [*Astron. Astrophys.*]{}, [**303**]{}, 819
Iliev, I.Kh., Barzova, I.S.: 1995, [*Astron. Astrophys.*]{}, [**302**]{}, 735
King, J.R.: 1994, [*Mon. Not. R. Astron. Soc.*]{}, [**269**]{}, 209
King, J.R., Patten, B.M.: 1992, [*Mon. Not. R. Astron. Soc.*]{}, [**256**]{}, 571
Kodaira, K.: 1967, [*Publ. Astr. Soc. Japan*]{}, [**19**]{}, 556
Levato, H., Malaroda, S., Grosso, M., Morrell, N.I.: 1994, [*ASP Conf. Series*]{}, [**60**]{}, 93
Morgan, W.W., Keenan, P.C., Kellman, E.: 1943, [*An Atlas of Stellar Spectra*]{}, University of Chicago Press
Parenago, P.P.: 1958, [*Sov. Astron.*]{}, [**2**]{}, 151
Paunzen, E., Kuschnig, R., Handler, G., Gelbmann, M., Weiss, W.W.: 1997, [*Astron. Astrophys.*]{}, [**124**]{}, 23
Paunzen, E., Weiss, W.W., Heiter, U., North, P.: 1997, [*Astron. Astrophys. Sup.*]{}, [**123**]{}, 93
Renson, P., Faraggiana, R., Boehm, C.: 1990, [*Bull. Inf. CDS*]{}, [**38**]{}, 137
Sadakane, K. and Nishida, M.: 1986, [*Pub. Astron. Soc. Pac.*]{}, [**98**]{}, 685
Sargent, W.L.W.: 1965, in [*The Magnetic and Related Stars*]{}, AAS-NASA Symposium, Greenbelt, ed. R.C. Cameron, p. 329
Sargent, W.L.W.: 1965, [*Astrophys. J.*]{}, [**142**]{}, 787
Sargent, W.L.W.: 1966, [*Astrophys. J.*]{}, [**144**]{}, 1128
Slettebak, A.: 1952, [*Astrophys. J.*]{}, [**115**]{}, 575
Slettebak, A.: 1954, [*Astrophys. J.*]{}, [**119**]{}, 146
Slettebak, A., Wright, R.R. and Graham, J.A.: 1968, [*Astron. J.*]{}, [**73**]{}, 152
Stürenburg, S.: 1993, [*Astron. Astrophys.*]{}, [**277**]{}, 139
Turcotte, S., Charbonneau, P.: 1993, [*Astrophys. J.*]{}, [**413**]{}, 376
Venn, K.A., Lambert, D.L.: 1990, [*Astrophys. J.*]{}, [**363**]{}, 234
---
abstract: 'In the calculation of cross sections for infrared-safe observables in high energy collisions at next-to-leading order, one approach is to perform all of the integrations, including the virtual loop integration, numerically. One would use a subtraction scheme that removes infrared and collinear divergences from the integrand in a style similar to that used for real emission graphs. Then one would perform the loop integration by Monte Carlo integration along with the integrations over final state momenta. In this paper, we explore how one can perform the numerical integration. We study the $N$-photon scattering amplitude with a massless electron loop in order to have a case with a singular integrand that is not, however, so singular as to require the subtractions. We report results for $N = 4$, $N = 5$ with left-handed couplings, and $N=6$.'
author:
- Zoltán Nagy
- 'Davison E. Soper'
date: 10 November 2006
title: 'Numerical integration of one-loop Feynman diagrams for $N$-photon amplitudes '
---
Introduction {#sec:introduction}
============
The calculation of cross sections in the Standard Model and its extensions at next-to-leading order (NLO) in perturbation theory inevitably involves computing virtual loop Feynman diagrams. The standard method for this involves computing the loop integrals analytically. Once the one loop amplitude is known analytically, the result can be inserted into a calculation of the cross section in which integrals over the momenta of final state particles are performed numerically. This is the method that was introduced in Ref. [@ERT] and is used, for example, in the packages [MCFM]{} [@MCFM] and [NLOJet++]{} [@NLOJet].
This approach is powerful and has been successfully applied to a number of processes of experimental interest. There has been considerable progress [@progress] in expanding the range of processes for which an analytical answer is known.[^1] One may hope that the analytical approach may develop into a completely automatic way of generating scattering amplitudes for a wide class of processes. However, the complexity of the results produced by known analytical methods grows rapidly with the number of partons involved in the scattering. For this reason, there may be limits to the range of processes for which analytical methods are useful.
One wonders whether a wider range of processes might be amenable to calculation if one were, instead, to use numerical integration for the virtual loop integrals. In a calculation of a cross section, the numerical integration would be performed along with the integrations over the momenta of final state particles, so that there would be a single integration over a large number of variables, with the integration performed by Monte Carlo style numerical integration. The purely numerical approach will inevitably have its limitations, just as the analytical approach does. However the nature of the limitations will be different. For this reason, we believe that one should try to develop the numerical method as far as possible and see how far back the limitations can be pushed. Eventually, this should involve trying several variations on the basic theme of performing the integrations numerically.
There are already some methods available for doing the virtual loop integrals numerically. In one method [@beowulfPRL; @beowulfPRD], one performs the integral over the energy flowing around the loop analytically by closing the integration contour in the upper or lower half plane and evaluating the residues of the poles in the complex energy plane that come from the propagator denominators. This is a purely algebraic step. Then the integral over the space momentum is performed numerically. There are infrared divergences, but these cancel inside the integrals between real and virtual graphs that make up a NLO cross section. This method has been applied to $e^+e^- \to 3\ {\rm jets}$ and is completely practical in that application. For more complicated processes, we do not know how to arrange a calculation in this style without subtractions. One could add subtractions to the method of Refs. [@beowulfPRL; @beowulfPRD], but for this paper we have chosen a different approach.
Another method [@Binoth] involves transforming the loop integral into the standard Feynman parameter representation that one uses for analytically evaluating such integrals. Then the integral over the Feynman parameters is to be performed numerically. This method shows promise, but is limited by the complexity introduced by expanding the numerator functions involved. The method introduced in this paper makes use of the Feynman parameter representation while avoiding the complexities introduced by the numerator function.
This paper represents the second step of a program for calculating virtual loop integrals numerically. In the first step [@NSsubtractions], we attacked the problem of infrared divergences. Typically, the integrals that one wants to evaluate have infrared divergences associated with the momenta of particles in the loop becoming collinear with the momenta of massless external particles or becoming soft. In Ref. [@NSsubtractions], we proposed a subtraction scheme in which one subtracts certain counter terms from the integrand, then adds these same counter terms back. After summing over graphs and performing the integrals analytically, the counter terms that we added back have a simple form that is easily included in the calculation of a cross section. Meanwhile, the main integrand minus the counter term integrands combine to make an integrand that is free of singularities strong enough to make the integral divergent. Thus one can numerically integrate the main integrand minus the counter-term integrands.
Despite the beauty of this approach, it is one thing to say that one can numerically integrate the combined integrand and it is another thing to do it. One needs a practical method for doing it. That is what we propose in this paper.
In order to keep our discussion reasonably simple, we attack a simple problem in which the counter terms are not present because the original integral is infrared finite. The problem is to compute the amplitude in quantum electrodynamics for scattering of two photons to produce $N-2$ photons by means of an electron loop. Our formulas include the possibility of a non-zero electron mass, but in order to face up to the problem of infrared singularities that appear when the electron mass vanishes, we concentrate on the mass zero case.
The process is illustrated in Fig. \[fig:Nphoton\]. Electron line $n$ in the loop carries momentum $l - Q_n$, where $Q_n$ is fixed and we integrate over $l$. The momentum carried out of the graph by external photon $n$ is[^2] $$%
P_n = Q_{n+1} - Q_n \;\;,
\label{eq:Qndef}
%$$ with $P_n^2 = 0$. The propagator denominators provide factors that would lead to logarithmic divergences after integration over the soft and collinear regions. However, these divergences are cancelled. For each electron line there is a factor $(\s l - \s Q_n)$. Thus the numerator provides a factor that removes the soft divergence from the integration region $(l - Q_n)\to 0$. Similarly at each vertex there is a factor $(\s l - \s Q_{n+1})\ \s\epsilon_n(P_n)\ (\s l - \s Q_{n})$, where $\epsilon_n(P_n)$ is the polarization vector of the photon. In the collinear limit $(l - Q_n) \to x P$, this gives a factor $-x(1-x) \s P_n \s\epsilon_n(P_n) \s P_n = - 2x(1-x) \s P_n\ \epsilon_n(P_n)\cdot P_n$. This vanishes because $\epsilon_n(P_n)\cdot P_n = 0$. Thus the numerator also provides a factor that removes each collinear divergence. The loop integral is also finite in the ultraviolet as long as $N>4$. (For $N=4$ the integral is divergent by power counting, so a special treatment, discussed later in this paper, is needed.) Thus we can present an algorithm that is uncluttered by the counter terms by means of using the scattering of two photons to produce $N-2$ photons. We reserve the full case of massless quantum chromodynamics for a future paper.
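The collinear argument above can be checked numerically. The following sketch (our own illustration, not part of the paper) builds Dirac matrices in the Weyl representation and verifies the identity $\s P\, \s\epsilon\, \s P = 2(\epsilon\cdot P)\,\s P - \s\epsilon\, P^2$, which vanishes for a lightlike $P$ and a transverse polarization, killing the collinear singularity.

```python
import numpy as np

# Weyl-representation Dirac matrices, metric (+,-,-,-).
g = np.diag([1.0, -1.0, -1.0, -1.0])
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)
Z = np.zeros((2, 2))
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

def slash(p):
    """p_mu gamma^mu for a contravariant 4-vector p."""
    return sum(p[mu] * g[mu, mu] * gamma[mu] for mu in range(4))

P   = np.array([1.0, 0.0, 0.0, 1.0])   # lightlike: P^2 = 0
eps = np.array([0.0, 1.0, 0.0, 0.0])   # transverse: eps.P = 0

# Pslash eps_slash Pslash = 2 (eps.P) Pslash - eps_slash P^2 = 0 here.
lhs = slash(P) @ slash(eps) @ slash(P)
assert np.allclose(lhs, 0.0)           # the collinear factor vanishes
```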
The amplitude {#sec:amplitude}
=============
We wish to calculate the amplitude for scattering of two photons to produce $N-2$ photons by means of a (massless) electron loop. However, we formulate the problem in a more general fashion. The amplitude for any one loop graph can be represented as $$%
{\cal M} = \int\! \frac{d^4 l}{(2\pi)^4}\ e^N N(l)
\prod_{i=1}^N\frac{1}{ (l - Q_i)^2 - m_i^2 + \mi 0}
\;\;.
\label{eq:lspace0}
%$$ Here there is a loop with $N$ propagators as illustrated in Fig. \[fig:Nphoton\]. The $n$th propagator carries momentum $l - Q_n$ and represents a particle with mass $m_n$. At the $n$th vertex, momentum $P_n = Q_{n+1} - Q_n$ leaves the graph. In the case to be considered, all the $m_n$ and $P_n^2$ vanish, but we leave the masses and external leg virtualities open in the general formulas. There is a coupling $e$ for each vertex, where $e$ is the charge of the fermion. There is a numerator factor that, for the photon scattering case with zero electron mass, has the form $$%
N(l) =
{\rm Tr}\left\{
\s\epsilon_N(P_N)\ (\s l - \s Q_{N})\cdots
\s\epsilon_1(P_1)\ (\s l - \s Q_{1})
\right\}
\;\;,
\label{eq:numeratordef}
%$$ where $\epsilon_i(P_i)$ is the polarization vector of photon $i$ and $e$ is the electromagnetic coupling.[^3] In other examples, one would have a different numerator function. The only property that we really need is that $N(l)$ is a polynomial in $l$.
It will prove convenient to modify this by inserting factors $\mi m_0^2$ in the numerator and the denominator, where $m_0^2$ is an arbitrary parameter that we can take to be of the order of a typical dot product $Q_i\cdot Q_j$. This factor is not absolutely needed for the purposes of this paper, but it is quite useful in the case of the subtraction terms to be considered in future papers and is at least mildly helpful in the analysis of this paper. With this extra factor, we write $$%
{\cal M} = \int\! \frac{d^4 l}{(2\pi)^4}\ \frac{\mi m_0^2\, e^N N(l)}{\mi m_0^2}
\prod_{i=1}^N\frac{1}{ (l - Q_i)^2 - m_i^2 + \mi 0}
\;\;.
\label{eq:lspace}
%$$
Representation with Feynman parameters {#sec:feynman}
======================================
In principle, it is possible to perform the integration represented in Eq. (\[eq:lspace\]) directly by Monte Carlo numerical integration on a suitably deformed integration contour. We have looked into this and conclude that it may be a practical method. However, one has to pay attention to the singularities on the surfaces $(l - Q_i)^2 = m_i^2$. The geometry of these surfaces and of their intersections is somewhat complicated. There is a standard method for simplifying the singularity structure: changing to a Feynman parameter representation. One way of using this method has been emphasized in the context of numerical integrations in Ref. [@Binoth]. It is the Feynman parameter method that we explore in this paper.
The Feynman parameter representation of Eq. (\[eq:lspace\]) is $$%
\label{eq:Feynman1}
{\cal M} = \Gamma(N+1)
\int_0^1\! dx^0\int_0^1\! dx^1\cdots \int _0^1\! dx^N\
\delta\!\left(\sum_{i=0}^N x^i - 1\right)\
\int\! \frac{d^4 l}{(2\pi)^4}\
\frac{\mi m_0^2\,e^N N(l)}{ [D(l)+ \mi 0]^{N+1}}
\;\;.
%$$ The denominator here is $$%
D(l) =
\sum_{i=1}^N x^i [(l - Q_i)^2 - m_i^2 ]
+\mi x^0 m_0^2
\;\;.
%$$ The denominator comes with a “$+\mi 0$” prescription for avoiding the possibility that the integrand has a pole on the integration contour and the $+\mi x^0 m_0^2$ term serves the same purpose. With repeated use of $\sum_{i=1}^N x^i = 1 - x^0$, the denominator can be simplified to $$%
D(l) = \frac{1}{1-x^0}
\left\{
\tilde l^2 + \Lambda^2(x)
\right\}
\;\;.
%$$ Here $$%
\tilde l = \sum_{i=1}^N x^i (l - Q_i)
\label{eq:tildeldef}
%$$ and $$%
\Lambda^2(x) = \frac{1}{2} \sum_{i,j = 1}^N x^i x^j S_{ij}
+ \mi \sum_{j=1}^N x^0 x^j m_0^2
\;\;,
\label{eq:Lambdasqdef}
%$$ where we have defined $$\label{eq:Sijdef}
\begin{split}
%
S_{ij} ={}&
(Q_i - Q_j)^2 - m_i^2 - m_j^2
\;\;.
%
\end{split}$$ We change integration variables from $l$ to $\tilde l$ as given in Eq. (\[eq:tildeldef\]). The inverse relation is $$%
\label{eq:tildelinverse}
l = l(\tilde l,x)
\equiv
\frac{1}{1-x^0}\,\left(\tilde l + \sum_{i = 1}^N x^i Q_i\right)
\;\;.
%$$ With these results, we have $$\label{eq:lxspace}
\begin{split}
%
{\cal M} ={}&
\mi m_0^2 e^N\Gamma(N+1)
\int_0^1\! dx^1\cdots \int _0^1\! dx^N\
\theta\!\left(\sum_{i=1}^N x^i < 1\right)
\left(\sum_{i=1}^N x^i\right)^{N-3}
\\
& \times \int\! \frac{d^4 \tilde l}{(2\pi)^4}\
\frac{ N\!\left(l(\tilde l,x)\right)}
{ \left[\tilde l^2 + \Lambda^{\!2}(x) + \mi 0\right]^{N+1}}
\;\;.
%
\end{split}$$ Here we understand that the Feynman parameter $x^0$ is given by $x^0 = 1-
\sum_{i=1}^N x^i$.
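As a concrete illustration of Eqs. (\[eq:Lambdasqdef\]) and (\[eq:Sijdef\]), the sketch below (with toy kinematics of our own choosing, not taken from the paper) samples points of the Feynman simplex and evaluates $\Lambda^2(x)$ for massless internal lines. Its imaginary part is positive wherever $x^0 > 0$, which is what keeps the denominator away from the singular surface in the interior of the simplex.

```python
import numpy as np

rng = np.random.default_rng(1)

def minkowski_sq(p):
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

# Toy reference momenta Q_i for an N = 4 loop (assumed values).
Q = np.array([[0., 0., 0., 0.],
              [1., 0., 0., 1.],
              [2., 1., 0., 1.],
              [1., 1., 0., 0.]])
N = len(Q)
# Massless internal lines: S_ij = (Q_i - Q_j)^2.
S = np.array([[minkowski_sq(Q[i] - Q[j]) for j in range(N)]
              for i in range(N)])

def sample_simplex(n):
    """n uniform points on the simplex x^0 + ... + x^N = 1, x^i >= 0."""
    y = rng.dirichlet(np.ones(N + 1), size=n)
    return y[:, 0], y[:, 1:]            # (x^0, x^1..x^N)

def lambda_sq(x0, x, m0sq=1.0):
    """Lambda^2(x) = (1/2) sum_ij x^i x^j S_ij + i m0^2 x^0 sum_j x^j."""
    quad = 0.5 * np.einsum('ki,ij,kj->k', x, S, x)
    return quad + 1j * m0sq * x0 * x.sum(axis=1)

x0, x = sample_simplex(5)
print(lambda_sq(x0, x))   # complex values; Im > 0 since x^0 > 0
```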
If we wished to perform the integration analytically, the next step would be to carry out the integration over $\tilde l$. However, for a numerical integration, such a step would be a step in the wrong direction. Performing the $\tilde l$ integration analytically would require expanding the complicated numerator function in powers of $\tilde l$. For this reason, we leave the $\tilde l$ integration to be carried out numerically after a little simplification.
The simplification is to change variables from $\tilde l$ to a momentum $\ell$ that has been scaled by a factor $\Lambda(x)$ and rotated in the complex plane: $$\begin{split}
%
\tilde l^{\mu}(x,\ell) ={}&
{\textstyle{\frac{1}{2}}}
\Lambda(x)
\left\{
(1 - \mi)\ell^{ \mu}
+
(1 + \mi)P^\mu_\nu \ell^{\nu}
\right\}
\;\;.
\label{eq:elldef}
%
\end{split}$$ Here $\hat \ell^\mu = P^\mu_\nu \ell^{\nu}$ is the parity transform of $\ell$: $\hat \ell^0 = \ell^0$, $\hat \ell^j = - \ell^j$ for $j \in \{1,2,3\}$. We have defined $\Lambda(x)$ for real $x$ and $m_0^2 \to 0$ to be $\sqrt{\Lambda^2(x)}$ if $\Lambda^2(x)$ is positive and $\mi\sqrt{-\Lambda^2(x)}$ if $\Lambda^2(x)$ is negative. The square of $\tilde l$ is $$\begin{split}
%
\tilde l^2
={}&
\Lambda^2(x)\,
\ell^{\mu} P_{\mu\nu}\ell^{\nu}
\;\;.
%
\end{split}$$ Note that $\ell^{\mu} P_{\mu\nu}\ell^{\nu}$ is the square of $\ell$ with a euclidian inner product and is thus strictly positive.
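The change of variables in Eq. (\[eq:elldef\]) can be verified numerically. The following sketch (ours, with arbitrary assumed values for $\ell$ and $\Lambda$) confirms that the Minkowski square of $\tilde l$ equals $\Lambda^2(x)$ times the Euclidean square $\ell^{\mu} P_{\mu\nu}\ell^{\nu}$.

```python
import numpy as np

P = np.diag([1.0, -1.0, -1.0, -1.0])   # parity: flips spatial components
g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

def rotate(ell, Lam):
    """tilde_l = (Lambda/2) [ (1 - i) ell + (1 + i) P ell ]."""
    return 0.5 * Lam * ((1 - 1j) * ell + (1 + 1j) * (P @ ell))

rng = np.random.default_rng(0)
ell = rng.normal(size=4)               # real Euclidean loop momentum
Lam = 0.7 + 0.2j                       # some complex Lambda(x) (assumed)

tl = rotate(ell, Lam)
lhs = tl @ g @ tl                      # Minkowski square of tilde_l
rhs = Lam**2 * (ell @ ell)             # Lambda^2 times Euclidean square
assert np.allclose(lhs, rhs)
```

In components the rotation simply gives $\tilde l^0 = \Lambda \ell^0$ and $\tilde l^j = -\mi\Lambda \ell^j$, from which the identity follows directly.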
Our integral now is $$\begin{split}
%
{\cal M} ={}&
- m_0^2 e^N \Gamma(N+1)
\int\! \frac{d^4 \ell}{(2\pi)^4}\
\frac{1}{[1 + \ell^{\mu} P_{\mu\nu}\ell^{\nu}]^{N+1}}
\\
&
\times
\int_0^1\! dx^1\cdots \int _0^1\! dx^N\
\theta\!\left(\sum_{i=1}^N x^i < 1\right)\,
\left(\sum_{i=1}^N x^i\right)^{N-3}\
\frac{N(l(x,\ell))}
{[\Lambda^{\!2}(x) + \mi 0]^{N-1}}
\;\;.
\label{eq:lxspacemod}
%
\end{split}$$ The function $l(x,\ell)$ in the numerator function is obtained by combining Eqs. (\[eq:tildelinverse\]) and (\[eq:elldef\]): $$%
l^\mu(x,\ell) =
\frac{1}{1-x^0}
\left[
{\textstyle{\frac{1}{2}}}
\Lambda(x)
\left\{
(1 - \mi)\ell^{ \mu}
+
(1 + \mi)P^\mu_\nu \ell^{\nu}
\right\}
+\sum_{j=1}^N x^j Q^\mu_j
\right]
\;\;.
\label{eq:elltol}
%$$ It is a somewhat subtle matter to verify that the complex rotations involved in defining $\ell$ are consistent with the $+\mi0$ prescription in the original denominator. We examine this issue in Appendix \[app:wick\].
Notice that in the numerator function the momentum on line $n$ is $$%
l^\mu(x,\ell) - Q_n^\mu
=
\frac{1}{1-x^0}
\left[
{\textstyle{\frac{1}{2}}}
\Lambda(x)
\left\{
(1 - \mi)\ell^{ \mu}
+
(1 + \mi)P^\mu_\nu \ell^{\nu}
\right\}
+K_n^\mu(x)
\right]
\;\;,
\label{eq:numeratorn}
%$$ where $$%
K_n^\mu(x) = \sum_{j=1}^N x^j (Q^\mu_j - Q^\mu_n)
\;\;.
\label{eq:Kndef}
%$$ We shall meet $K_n(x)$ later in Sec. \[sec:pinch\] when we study pinch singularities. For the moment, we note simply that in the final formula (\[eq:lxspacemod\]) both the numerator and the denominator are invariant under shifts $Q_i \to Q_i + \Delta Q$ of the reference momenta $Q_i$.
Contour deformation in Feynman parameter space {#deformation}
==============================================
The integral in Eq. (\[eq:lxspacemod\]) is not yet directly suitable for Monte Carlo integration. The problem is that the quadratic function $\Lambda^2(x)$ vanishes on a surface in the space of the Feynman parameters. Evidently, the integrand is singular on this surface. For $x^0 > 0$, $\Lambda^2(x)$ does not vanish for real $x^j$, but on the plane $x^0 = 0$, $\Lambda^2(x)$ vanishes for certain real values of the other $x^j$. If we don’t do something about this singularity, the numerical integral will diverge. The something that we should do is deform the integration contour in the direction indicated by the $+\mi 0$ prescription. That is, we write the integral as $$\begin{split}
%
{\cal M} ={}&
- m_0^2 e^N \Gamma(N+1)
\int\! \frac{d^4 \ell}{(2\pi)^4}\
\frac{1}{[1 + \ell^{\mu} P_{\mu\nu}\ell^{\nu}]^{N+1}}
\int_{C}d z\,
\left(\sum_{i=1}^N z^i\right)^{\!\!N-3}\
\frac{N(l(z,\ell))}{\big[\Lambda^{2}(z)\big]^{N-1}}
\\
\equiv{}&
- m_0^2 e^N \Gamma(N+1)
\int\! \frac{d^4 \ell}{(2\pi)^4}
\frac{1}{[1 + \ell^{\mu} P_{\mu\nu}\ell^{\nu}]^{N+1}}
\\ & \times
\int_{0}^{1}\!d\xi^{1}\cdots\int_{0}^{1}\!d\xi^{N}\
\theta\left(\sum_{i = 1}^N \xi^i < 1\right)\,
\det\!\left(\frac{dz}{d\xi}\right)\,
\left(\sum_{i=1}^N z^i\right)^{\!\!N-3}\
\frac{N(l(z(\xi),\ell))}
{\big[\Lambda^{2}(z(\xi))\big]^{N-1}}
\;\;.
\label{eq:lxspacedeformed}
%
\end{split}$$ Here we integrate over real parameters $\xi^i$ for $i \in \{0,1,\dots,N\}$ with $\sum_{i=0}^N \xi^i = 1$, so that we have displayed the integral as an integral over $N$ parameters $\xi^1,\dots,\xi^N$ with $\xi^0 \equiv 1 - \sum_{i = 1}^N \xi^i$. The integration range is $0 < \xi^i$ for $i \in \{0,1,\dots,N\}$. The original integral was over real parameters $x^i$ with $\sum_{i=0}^N x^i = 1$ with this same range, $0 < x^i$. The contour is defined by specifying complex functions $z^i(\xi)$ for $i \in \{0,1,\dots,N\}$ with $\sum_{i=0}^N z^i = 1$.
In moving the integration contour we make use of the multidimensional version of the widely used one dimensional contour integration formula. A simple proof is given in Ref. [@beowulfPRD]. The essence of the theorem is that we can move the integration contour as long as we start in the direction indicated by the $+\mi 0$ prescription and do not encounter any singularities of the integrand along the way. In addition, the boundary surfaces of the contour have to remain fixed. Since the surfaces $z^i = 0$ are boundary surfaces of the contour before deformation, they should remain boundary surfaces after the deformation. The original integral covers the region $0 < x^i$ for $i \in \{1,\dots,N\}$ and the $\xi^i$ cover this same range, $0 < \xi^i$. Thus we demand $$%
\label{eq:endpoints1}
z^i(\xi) \to 0 \hskip 1 cm {\rm as}\ \xi^i \to 0
%$$ for $i \in \{0,1,\dots, N\}$.
We adopt a simple ansatz for the contour in the complex $z$-space:[^4] $$%
\label{eq:contour}
z^{i}(\xi) = \frac{\xi^{i}+\mi \eta^{i}(\xi)}
{1+\mi\sum_{j=0}^N\eta^{j}(\xi)}
\;\;.
%$$ Here the $\eta^{i}$ variables are functions of the integration parameters $\xi^{i}$. With this ansatz, the constraint that $\sum_i z^i = 1$ is automatically satisfied: $$%
\sum_{i=0}^N \xi^{i} = 1\qquad \Longrightarrow\qquad
\sum_{i=0}^N z^{i} = 1
\;\;.
%$$ In order to satisfy Eq. (\[eq:endpoints1\]), we require $$%
\label{eq:endpoints2}
\eta^i(\xi) \to 0 \hskip 1 cm {\rm as}\ \xi^i \to 0
%$$ for $i \in \{0,1,\dots, N\}$.
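As a small numerical illustration (a sketch with an arbitrary choice of the $\eta^i(\xi)$, not the deformation actually adopted later), one can verify that the ansatz of Eq. (\[eq:contour\]) automatically preserves the constraint $\sum_i z^i = 1$ and satisfies the endpoint condition of Eq. (\[eq:endpoints1\]):

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)

def contour(xi, eta):
    """The ansatz of Eq. (contour): z^i = (xi^i + i eta^i) / (1 + i sum_j eta^j)."""
    return (xi + 1j * eta) / (1.0 + 1j * eta.sum())

def eta_of_xi(xi):
    # an illustrative deformation with eta^0 = 0 and eta^i -> 0 as xi^i -> 0
    eta = 0.3 * xi * (1.0 - xi)
    eta[0] = 0.0
    return eta

xi = rng.dirichlet(np.ones(N + 1))          # xi^0, ..., xi^N on the simplex
z = contour(xi, eta_of_xi(xi))
assert np.isclose(z.sum(), 1.0)             # sum_i z^i = 1 holds automatically

xi_edge = xi.copy()
xi_edge[0] += xi_edge[2]; xi_edge[2] = 0.0  # move to the face xi^2 = 0
z_edge = contour(xi_edge, eta_of_xi(xi_edge))
assert z_edge[2] == 0.0                     # z^i -> 0 as xi^i -> 0
```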
There are certain conditions to be imposed on the contour choice in order to be consistent with the “$+\mi 0$” prescription in the original integral. Note first that $$%
\label{eq:LambdaFactors}
\Lambda^2(z) = \frac{\Lambda^2(\xi + \mi \eta(\xi))}
{(1 + \mi \sum_{j=0}^N \eta^j(\xi))^2}
\;\;.
%$$ Next, note that $\Lambda^2$ with argument $\xi + \mi \eta$ appears in the numerator. In order to analyze Eq. (\[eq:LambdaFactors\]), it is convenient to give a special name ${\cal S}(x)$ to the quadratic function that forms the first part of $\Lambda^2$ in Eq. (\[eq:Lambdasqdef\]), $$%
\Lambda^2(x) = {\cal S}(x) + \mi\sum_{j=1}^N x^0 x^j \ m_0^2
\;\;,
%$$ where $$%
{\cal S}(x) = \frac{1}{2} \sum_{i,j = 1}^N x^i x^j S_{ij}
\;\;.
%$$ A sufficient condition for the choice of the $\eta^i(\xi)$ is as follows. First, we choose $$%
\label{eq:eta0}
\eta^0 = 0
\;\;.
%$$ This is the simplest way to satisfy Eq. (\[eq:endpoints2\]) for $\eta^0$. With this choice for $\eta^0$, we have $$%
\label{eq:Lambdaexpansion}
\Lambda^2(\xi + \mi \eta) =
{\cal S}(\xi) - {\cal S}(\eta)
- m_0^2\,\xi^0 \sum_{j=1}^N \eta^j
+ \mi \sum_{i=1}^N \eta^i(\xi)\, w_{i}(\xi)
+ \mi m_0^2\,
\xi^0(1-\xi^0)
\;\;,
%$$ where $$%
w_i(\xi) \equiv \frac{\partial {\cal S}(\xi)}{\partial \xi^i}
= \sum_{j=1}^N S_{ij}\xi^j
\;\;.
%$$ Our condition for the choice of the $\eta^i$ for $i \in \{1,\dots,N\}$ is that $$%
\label{eq:posimag}
\sum_{i=1}^N \eta^i(\xi) w_i(\xi) \ge 0
\;\;,
%$$ with $\sum \eta^i w_i > 0$ except at a point on the boundary of the integration region.
Suppose, now, that the condition (\[eq:posimag\]) is satisfied. Do we then have an allowed contour deformation? Consider the family of contour deformations $\eta^i(\xi;\lambda) = \lambda\, \eta^i(\xi)$ with $0 < \lambda \le 1$.
We first consider infinitesimal values of $\lambda$. We have, to first order in $\lambda$, $$\begin{split}
%
\Lambda^2(z) = {}& \left[
{\cal S}(\xi)
- \lambda m_0^2\,\xi^0 \sum_{j=1}^N \eta^j
+ \mi \lambda \sum_{i=1}^N \eta^i(\xi)\, w_{i}(\xi)
+\mi m_0^2\, \xi^0(1-\xi^0)\right]
\\ & \times
\left[
1 - 2\mi \lambda \sum_{j=1}^N \eta^j(\xi)
\right]
+ {\cal O}(\lambda^2)
\\
= {} &
{\cal S}(\xi)
- \lambda m_0^2\,\xi^0 \sum_{j=1}^N \eta^j
+ 2 \lambda \xi^0 (1 - \xi^0) m_0^2 \sum_{j=1}^N \eta^j
\\&
+ \mi \lambda \sum_{i=1}^N \eta^i(\xi)\, w_{i}(\xi)
- 2\mi \lambda {\cal S}(\xi) \sum_{j=1}^N \eta^j(\xi)
+ \mi m_0^2\, \xi^0(1-\xi^0)
+ {\cal O}(\lambda^2)
\;\;.
%
\end{split}$$ In the neighborhood of any point $\xi$ with $\xi^0 >0$, $\Lambda^2(z)$ has a positive imaginary part even with $\lambda = 0$. For $\xi^0 = 0$, the contour deformation gives $\Lambda^2(z)$ a positive imaginary part in a neighborhood of any point $\xi$ where the real part, ${\cal S}(\xi)$, vanishes. This is the meaning of the “$+ \mi 0$” prescription. We may consider that we start with a value of $\lambda$ that is just infinitesimally greater than zero, so that the contour does not actually pass through any poles of the integrand in the interior of the integration region.
Now we turn to larger values of $\lambda$. We have $$%
\Lambda^2(z) = \frac{\Lambda^2(\xi + \mi \lambda \eta(\xi))}
{(1 + \mi \lambda \sum_j \eta^j(\xi))^2}
\;\;.
%$$ Assuming that the $\eta^i(\xi)$ are smooth functions, this is a smooth function of $\xi$. (Note here that $1 + \mi \lambda \sum_j \eta^j(\xi)$ cannot vanish because its real part is 1). Furthermore, when $\lambda > 0$, $\Lambda^2(z)$ is never zero in the interior of the integration region. This is because, according to Eq. (\[eq:Lambdaexpansion\]), the imaginary part of $\Lambda^2(\xi + \mi \lambda \eta(\xi))$ is positive. Thus $1/[\Lambda^2(z)]^{N-1}$ is an analytic function of $z$ in the interior of the entire region covered by the family of deformations. For the boundary of the integration region, there are some issues of convergence that one should check. We do so in Appendix \[app:DeformCheck\]. Anticipating the result of this check, we conclude that the integral is independent of the amount of deformation and we can set $\lambda = 1$.
It is remarkable that the imaginary part of $\Lambda^2(z)$ is not necessarily positive on all of the deformed contour. What is crucial is that the deformation starts in the right direction and that, as the contour is deformed, it does not cross any poles.
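The logic of this section can be illustrated with a hypothetical one dimensional toy integral, $\int_0^1 d\xi/(\xi - a + \mi 0)$ with $0 < a < 1$, whose exact value is $\ln[(1-a)/a] - \mi\pi$. The deformation $z(\xi) = \xi + \mi\lambda\,\xi(1-\xi)$ keeps the endpoints fixed, starts in the direction indicated by the $+\mi 0$ prescription, and gives a result independent of the amount of deformation $\lambda$:

```python
import numpy as np

def toy_integral(a, lam, n=200001):
    """Integrate 1/(z - a) over the contour z = xi + i*lam*xi*(1-xi), xi in [0,1]."""
    xi = np.linspace(0.0, 1.0, n)
    z = xi + 1j * lam * xi * (1.0 - xi)
    jac = 1.0 + 1j * lam * (1.0 - 2.0 * xi)       # dz/dxi
    f = jac / (z - a)
    h = xi[1] - xi[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule

a = 0.37
exact = np.log((1.0 - a) / a) - 1j * np.pi        # principal value minus i*pi
for lam in (0.5, 1.0, 2.0):
    assert abs(toy_integral(a, lam) - exact) < 1e-3
```

On the real axis the integrand has a pole at $\xi = a$; the deformed contour passes above it, consistent with a pole displaced to $a - \mi 0$, so no singularity is crossed as $\lambda$ grows.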
A standard contour deformation {#standarddeformation}
==============================
A convenient choice for the deformation function $\eta^i(\xi)$ for $i \in \{1,\dots,N\}$ is $$%
\label{eq:etadef}
\eta^i(\xi) = (\lambda/m^2)\, \xi^i w_i(\xi)
\;\;.
%$$ Here $\lambda$ is an adjustable dimensionless constant and $m^2$ is a parameter with the dimension of squared mass that we insert because $S_{ij}$ and thus $w_i$ has dimension of squared mass. Note that with this choice the requirement (\[eq:endpoints2\]) that $\eta^i(\xi)$ vanish when $\xi^i$ vanishes is automatically met. This deformation gives $$%
{\cal S}(\xi + \mi\eta) =
{\cal S}(\xi) - {\cal S}(\eta)
+ \mi (\lambda/m^2)\sum_{i=1}^N \xi^i\, [w_i(\xi)]^2
\;\;.
%$$ Evidently the imaginary part of $\Lambda^2(\xi + \mi\eta)$ has the right sign.
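This sign property can be checked numerically for a generic symmetric matrix $S_{ij}$ (a sketch in Python; the actual $S_{ij}$ is of course built from the external momenta and masses):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
S = rng.normal(size=(N, N))
S = 0.5 * (S + S.T)                      # a generic symmetric S_ij

def S_quad(x):
    """S(x) = (1/2) sum_ij x^i x^j S_ij, for real or complex x."""
    return 0.5 * x @ S @ x

lam_over_m2 = 0.2                        # the constant lambda/m^2 of Eq. (etadef)
xi = rng.dirichlet(np.ones(N + 1))[1:]   # xi^1..xi^N, with xi^0 = 1 - sum implicit
w = S @ xi                               # w_i(xi) = sum_j S_ij xi^j
eta = lam_over_m2 * xi * w               # the standard deformation, Eq. (etadef)

val = S_quad(xi + 1j * eta)
assert np.isclose(val.real, S_quad(xi) - S_quad(eta))
assert np.isclose(val.imag, lam_over_m2 * np.sum(xi * w**2))
assert val.imag >= 0.0                   # the imaginary part has the required sign
```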
Eq. (\[eq:etadef\]) can be thought of as specifying a basic deformation. We can add other deformations to this. In our numerical work for this paper we have added one more deformation, as specified in Appendix \[app:extradeform\].
Pinch singularities {#sec:pinch}
===================
The integrand is singular for $\xi^0 \to 0$ at any real point $\xi$ with ${\cal S}(\xi) = 0$. We have seen in the previous section that the standard contour deformation keeps the contour away from this singularity as long as there is some index $i \in \{1,\dots,N\}$ such that $\xi^i > 0$ and $w_i(\xi) \ne 0$.
What about a point $\xi$ with ${\cal S}(\xi) = 0$ such that there is [*no*]{} index $i \in \{1,\dots,N\}$ such that $\xi^i > 0$ and $w_i(\xi) \ne 0$? In this case, the integration contour is pinched in the sense that there is no allowed contour deformation that can give ${\cal S}(\xi+ \mi \eta)$ a positive imaginary part at this point $\xi$. To see this, recall from Eq. (\[eq:Lambdaexpansion\]) that, when $\xi^0 = 0$, $$%
\label{eq:ImLambda}
{\rm Im}\,{\cal S}(\xi + \mi \eta) = \sum_{i=1}^N \eta^i w_i(\xi)
\;\;.
%$$ Consider a point $\xi$ such that ${\cal S}(\xi) = 0 $ and such that for each index $i \in \{1,\dots,N\}$, $w_i(\xi) \ne 0$ implies $\xi^i = 0$. For any allowed deformation $\eta(\xi)$, we must have $\eta^i = 0$ for all $i \in \{1,\dots,N\}$ such that $\xi^i = 0$. Thus for each index $i \in \{1,\dots,N\}$, $w_i(\xi) \ne 0$ implies $\eta^i = 0$. From Eq. (\[eq:ImLambda\]) we conclude that ${\rm Im}\,{\cal S}(\xi + \mi \eta)$ must vanish at the point in question for any allowed choice of the $\eta^i$. We conclude that a real point $\xi$ with $\xi^0 = 0$ and with ${\cal S}(\xi) = 0$ is a pinch singular point if, and only if, $$%
\label{eq:pinchcondition1}
\xi^i w_i(\xi) = 0\hskip 1 cm {\rm for\ every}\ i \in \{1,\dots,N\}
\;\;.
%$$ We also note that the point $\xi^0 = 1$, $\xi^i = 0$ for $i \in \{1,\dots,N\}$, is a pinch singular point. This singularity corresponds to the ultraviolet region of the original loop integration.
With a little algebra, one can translate the condition for a pinch singularity with $\xi^0 = 0$. At one of these points, we have $$%
\label{eq:pinchcondition2}
{\rm either}\quad \xi^i = 0\quad{\rm or}\quad K_i^2 - m_i^2 = 0
for each $i \in \{1,\dots,N\}$, where $K^\mu_i(\xi)$ was given earlier in Eq. (\[eq:Kndef\]). When $\xi^0 = 0$, these vectors have the properties that $K_i - K_{i+1} = P_i$ and $\sum_i \xi^i K_i = 0$. Thus Eq. (\[eq:pinchcondition2\]) is the well known condition for a pinch singularity (see Bjorken and Drell [@bjorkenanddrell]). It says that for each propagator $i$ around the loop there is a momentum $K_i$ such that momentum conservation is obeyed at the vertices, that each $K_i$ around the loop is either on shell or else has the corresponding $\xi^i$ equal to zero, and that the space-time separations $\Delta x_i^\mu = \xi^i K_i^\mu$ around the loop sum to zero.
Notice that the momenta $K^\mu_i(\xi)$ appear in the numerator function. According to Eq. (\[eq:numeratorn\]), the momentum for line $i$ in the numerator function in the case that $\xi$ is at a contour pinch (so $\Lambda(\xi) = 0$) is $K^\mu_i(\xi)$.
There are two types of pinch singular points that are always present if we have massless kinematics (with no external momenta collinear to each other) and one more that can be present.
Soft singularity {#sec:soft}
----------------
The first kind of pinch singular point that is always present if we have massless kinematics is the one corresponding to a loop propagator momentum that vanishes. If $m_n = 0$ for some $n$ then $S_{nn} = 0$. This means that $\Lambda^2(\xi) = 0$ when all of the $\xi^i$ vanish except for $\xi^n$, which is then $\xi^n = 1$. This is a pinch singular point because all of the $z^i(\xi)$ are fixed: $z^i = 0$ for $i \ne n$, $z^n = 1$. This point corresponds to the momentum of line $n$ in the momentum space representation vanishing. In our photon scattering example, there is a singularity at this point but because of the zero from the numerator function it is not strong enough to produce a divergence.
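A sketch of how one might test the condition (\[eq:pinchcondition1\]) in code: with a generic symmetric $S_{ij}$ in which line $n$ is massless, so that $S_{nn} = 0$, the soft point $\xi^n = 1$ satisfies the pinch condition, while a generic interior point does not.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
S = rng.normal(size=(N, N))
S = 0.5 * (S + S.T)
n = 2
S[n, n] = 0.0                    # a massless line n gives S_nn = 0

def is_pinch(xi, tol=1e-12):
    """Test Eq. (pinchcondition1) at a real point xi = (xi^1..xi^N) with xi^0 = 0."""
    w = S @ xi                   # w_i(xi) = sum_j S_ij xi^j
    return abs(0.5 * xi @ S @ xi) < tol and np.all(np.abs(xi * w) < tol)

xi_soft = np.zeros(N)
xi_soft[n] = 1.0                 # the soft configuration: only xi^n nonzero
assert is_pinch(xi_soft)         # S(xi) = S_nn/2 = 0 and xi^n w_n = S_nn = 0

xi_generic = rng.dirichlet(np.ones(N))   # a generic interior point
assert not is_pinch(xi_generic)
```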
Collinear singularity {#sec:collinear}
---------------------
The second kind of pinch singular point that is always present if we have massless kinematics is the one corresponding to two loop propagator momenta becoming collinear to an external momentum. If $m_n = m_{n+1} = 0$ for some $n$ and if $P_n^2 = (Q_{n+1} - Q_n)^2 = 0$, then $S_{nn} = S_{n+1,n+1} = S_{n+1,n} = 0$. This means that $\Lambda^2(\xi) = 0$ when all of the $\xi^i$ vanish except for $\xi^n$ and $\xi^{n+1}$. It also means that $w_n(\xi) = w_{n+1}(\xi) = 0$, so that this is a pinch singular point according to the condition (\[eq:pinchcondition1\]). This point corresponds to the momentum of lines $n$ and $n+1$ in the momentum space representation being collinear with $P_n$. In our photon scattering example, there is a singularity along this line but because of the zero from the numerator function it is not strong enough to produce a divergence.
Double parton scattering singularity {#sec:dps}
------------------------------------
A third type of pinch singular point can be present if a special condition holds for the external momenta. This singularity corresponds to double parton scattering and is illustrated in Fig. \[fig:dps\]. Imagine that incoming parton $A$ splits into two collinear partons. Imagine also that incoming parton $B$ splits into two collinear partons. One of the partons from $A$ and one from $B$ could meet and produce a group of final state partons. The other parton from $A$ could meet the other parton from $B$ and produce a second group of final state partons. For this to happen, each group of outgoing partons produced must contain at least two external lines, so we need at least four outgoing external particles and hence $N \ge 6$.
This picture satisfies the criteria of Eq. (\[eq:pinchcondition2\]) for a pinch singularity. In the Feynman parameter space, the singularity occurs along a one dimensional line in the interior of the space. We work out where this line is in Appendix \[sec:appendixdps\].
Now, the pinch singularity conditions hold only for certain special choices of the external momenta. However, if $N$ is large, it is usual that the kinematics is close to a pinch singularity condition for some of the graphs. For this reason, in a numerical program, one should check for each graph if such a nearly pinched contour occurs. In the event that it does, one should put a high density of integration points near the almost singular line.
Ultraviolet subtraction {#sec:UV}
=======================
Some graphs are ultraviolet divergent. For instance, in the photon scattering case, there is an ultraviolet divergence for $N = 4$ (and for $N = 2$, but we do not consider that case). In the representation (\[eq:lxspacedeformed\]), the divergence appears as a divergence from the integration over Feynman parameters near $\xi^0 = 1$, with all of the other $\xi^i$ near zero. The reader can check that with a numerator function proportional to $N$ powers of the loop momentum, this region does give a logarithmic divergence for $N = 4$. If the graph considered is ultraviolet divergent, it needs an ultraviolet subtraction, so that we calculate $$%
{\cal M}_{\rm net} = {\cal M} - {\cal M}_{\rm uv}
\;\;.
%$$ In a numerical integration, we subtract the integrand of ${\cal M}_{\rm uv}$ from the integrand of ${\cal M}$, then integrate. We arrange that the singularities of the integrand cancel to a degree sufficient to remove the divergence. The subtraction term is defined in Ref. [@NSsubtractions] so that it reproduces the result of $\overline{\rm MS}$ subtraction (which we use because it is gauge invariant). In the photon scattering case, the sum over graphs of the subtraction term vanishes, corresponding to the fact that there is no elementary four photon vertex. Thus the result after summing over graphs does not depend on the $\overline{\rm MS}$ renormalization scale.
The subtraction term from Eq. (A.37) of Ref. [@NSsubtractions] is $$\begin{split}
%
{\cal M}_{\rm uv} ={}&
\int\! \frac{d^4 l}{(2\pi)^4}\
\frac{\mi m_0^2\,e^4 }{\mi m_0^2}
\Bigg\{
\frac{N(l,l,l,l)- 32 \prod_{j=1}^4 l\cdot \epsilon_j(P_j)}
{ [l^2 - \mu^2 e^{-4/3}+ \mi 0]^4}
+\frac{32 \prod_{j=1}^4 l\cdot \epsilon_j(P_j)}
{ [l^2 - \mu^2 e^{-3/2}+ \mi 0]^4}
\Bigg\}
\;\;.
\label{eq:UVsubtraction}
%
\end{split}$$ Here $\mu^2$ is the $\overline{\rm MS}$ renormalization scale, which can be anything we like since the net counter-term is zero. For the numerator function in the first term, we have adopted the notation that the ordinary numerator function $N(l)$ in Eq. (\[eq:numeratordef\]) is written $N(k_4,k_3,k_2,k_1)$ where $k_n = l - Q_n$. In this notation, $N(l,l,l,l)$ is the standard numerator function with each propagator momentum set equal to $l$.
We can now apply the same transformations as for the starting graph to obtain the representation $$\begin{split}
%
{\cal M}_{\rm uv} ={}&
- m_0^2 e^4 \Gamma(5)
\int\! \frac{d^4 \ell}{(2\pi)^4}\
\frac{1}{[1 + \ell^{\mu} P_{\mu\nu}\ell^{\nu}]^{5}}
\\
& \times
\int_0^1\! dx^1\cdots \int _0^1\! dx^4\
\theta\!\left(\sum_{i=1}^4 x^i < 1\right)\,
\left(\sum_{i=1}^4 x^i\right)
\\
& \times
\Bigg\{
\frac{N(l,l,l,l)- 32 \prod_{j=1}^4 l\cdot \epsilon_j(P_j)}
{[\Lambda_{4/3}^{\!2}(x) + \mi 0]^{3}}
+
\frac{32 \prod_{j=1}^4 \tilde l\cdot \epsilon_j(P_j)}
{[\Lambda_{3/2}^{\!2}(x) + \mi 0]^{3}}
\Bigg\}
\;\;.
\label{eq:lxspaceUV}
%
\end{split}$$ Here $$\begin{split}
%
\Lambda_{4/3}^{\!2}(x) ={}&
- (1-x^0)^2 \mu^2 e^{-4/3} + \mi x^0(1-x^0) m_0^2
\;\;,
\\
\Lambda_{3/2}^{\!2}(x) ={}&
- (1-x^0)^2 \mu^2 e^{-3/2} + \mi x^0(1-x^0) m_0^2
\;\;,
%
\end{split}$$ where we have used $x^0 = 1 - \sum_{j=1}^4 x^j$. In the numerator of the first term, $l$ is a function $l(x,\ell)$, $$%
l^\mu(x,\ell) = \frac{1}{1-x^0}
\left[
{\textstyle{\frac{1}{2}}}
\Lambda_{4/3}(x)
\left\{
(1 - \mi)\ell^{ \mu}
+
(1 + \mi)P^\mu_\nu \ell^{\nu}
\right\}
\right]
\;\;.
\label{eq:elltolUV1}
%$$ In the second term, $\tilde l$ in the numerator is a function $\tilde l(x,\ell)$, $$%
\tilde l^\mu(x,\ell) = \frac{1}{1-x^0}
\left[
{\textstyle{\frac{1}{2}}}
\Lambda_{3/2}(x)
\left\{
(1 - \mi)\ell^{ \mu}
+
(1 + \mi)P^\mu_\nu \ell^{\nu}
\right\}
\right]
\;\;.
\label{eq:elltolUV2}
%$$ The reader can check that for the photon scattering case with $N=4$ the integrand for ultraviolet subtraction matches that of the starting graph in the region $x^0 \to 1$, so that if we subtract the integrand of the counter-term graph from the integrand of the starting graph, the resulting integral will be convergent.
The Monte Carlo Integration {#sec:MonteCarlo}
===========================
We have implemented the integration in Eq. (\[eq:lxspacedeformed\]) as computer code [@whereiscode]. The integration is performed by the Monte Carlo method. This is a standard method, but it may be good to indicate what is involved. First, we note that we do not simply feed the integrand to a program that can integrate “any” function. There are many reasons for this, but the most important is that we do not have just [*any*]{} function but a function with a known singularity structure, a structure that is generic to loop diagrams in quantum field theory with massless kinematics. We can take advantage of our knowledge of how the integrand behaves.
To proceed, we note that we have an integral of the form $$%
{\cal M} = \int\! d^4 \ell\
\int_0^1 \!d\xi^0 \int_0^1\!d\xi^1 \cdots \int_0^1\!d\xi^N
\delta\!\left(\sum_{i=0}^{N} \xi^i - 1\right)\
f(\ell,\xi)\;\;.
\label{integralform}
%$$ In a Monte Carlo integration, we choose $N_{\rm pts}$ points $\{\ell_j,\xi_j\}$ at random with a density $\rho(\ell,\xi)$ and evaluate the integrand $f(\ell,\xi)$ at these points. Then the integral is $$%
{\cal M} = \lim_{N_{\rm pts} \to \infty} \frac{1}{N_{\rm pts}}
\sum_{j=1}^{N_{\rm pts}}
\frac{f(\ell_j,\xi_j)}{\rho(\ell_j,\xi_j)}
\;\;.
%$$ The integration error with a finite number of points is proportional to $1/\sqrt{N_{\rm pts}}$. The coefficient of $1/\sqrt{N_{\rm pts}}$ in the error is smallest if $$%
\rho(\ell,\xi) \approx {\it const.}\times |f(\ell,\xi)|
\;\;.
%$$ That is the ideal, but one cannot really achieve it to the degree of obtaining a one part per mille error with one million points. However, one would certainly like to keep $|f(\ell,\xi)|/\rho(\ell,\xi)$ from being very large. In particular, $f(\ell,\xi)$ is singular along certain lines in the space of the $\xi$ (the collinear singularities) and at certain points (the soft singularities). We need to arrange that $\rho$ is singular at the same places that $f$ is singular, so that ${f(\ell,\xi)}/{\rho(\ell,\xi)}$ is [*not*]{} singular anywhere. Since $|f(\ell,\xi)|$ can be very large near other lines associated with double parton scattering, we also need to arrange that $\rho(\ell,\xi)$ is similarly large near these lines.
We construct the desired density in the form $$%
\rho(\ell,\xi) = \rho_\ell(\ell)
\sum_{J=1}^{N_{\rm alg}} \alpha_J\,\rho_J(\xi)
\;\;.
%$$ Here $\int\! d^4\ell\, \rho_\ell(\ell) = 1$, the sum of the $\alpha_J$ is 1, and there are several density functions $\rho_J$ with $\int\! d\xi\, \rho_J(\xi) = 1$. Each $\rho_J$ corresponds to a certain algorithm for choosing a point $\xi$. For each new integration point, the computer chooses which algorithm to use with probability $\alpha_J$. The various sampling algorithms are designed to put points into regions in which the denominator is small, based on the coefficients $S_{ij}$. We omit describing the details of the sampling methods since these are likely to change in future implementations of this style of calculation.
Points $\ell$ are chosen with a simple distribution $\rho_\ell(\ell)$. In the calculation of the numerator, we average between the numerator calculated with $\ell$ and the numerator calculated with $-\ell$.
Having outlined how $\cal M$ is calculated by Monte Carlo integration, we pause to suggest how the calculation of a cross section (for, say, Higgs production) would work. There one would have one-loop amplitudes $\cal M$ expressed as integrals and one would need to multiply $\cal M$ by a function $h(P)$ of the external momenta that represents a tree amplitude and a definition of the observable to be measured. One would need the integral of this over the external momenta $P$. One would perform all of the integrations together. That is, one would choose points $\{P,\ell,\xi\}$ and calculate the contributions from the virtual graphs times tree graphs to the desired cross section according to $$%
I = \lim_{N_{\rm pts} \to \infty} \frac{1}{N_{\rm pts}}
\sum_{j=1}^{N_{\rm pts}}
\frac{f(\ell_j,\xi_j;P_j)}{\rho(\ell_j,\xi_j,P_j)}\ h(P_j)
\;\;.
%$$ Here $f$ is the integrand of $\cal M$ as above. The function $\rho$ is the net density of points in $\ell$, $\xi$, and $P$. Thus what one would use is not $\cal M$ itself but rather the integrand for $\cal M$.
Checks on the calculation {#sec:checks}
=========================
As discussed in the previous section, we have implemented the integration in Eq. (\[eq:lxspacedeformed\]) as computer code [@whereiscode]. With this code, there are a number of internal checks that can be performed on the computation. First, we can replace the real integrand by a function that has soft or collinear singularities but is simple enough to integrate analytically. Then we can compare the numerical result to the analytical result. This checks that the functions $\rho_J(\xi)$ and $\rho_\ell(\ell)$ correspond to the true probabilities with which points $\xi$ and $\ell$ are chosen. Then we can vary the amount of deformation (both for the real integral and for the test integrals). When we integrate over a different contour, the integrand is quite different. Nevertheless, the $(N+4)$-dimensional Cauchy theorem guarantees that the integral should be unchanged, provided that the integration is being performed correctly. Thus invariance under change of contour is a powerful check. Another check is to change the value of the parameter $m_0$. At the start, $\cal M$ is proportional to $m_0^2/m_0^2$ and is trivially independent of $m_0$. However, in the integral as performed, Eq. (\[eq:lxspacedeformed\]), $m_0$ is deeply embedded in the structure of the integrand, so that it is a non-trivial check on the integration that the result does not change when we change $m_0$. Next, we can replace one of the photon polarization vectors $\epsilon_n(P_n)$ by $P_n$. This gives a non-zero result for each Feynman graph, but should give zero for the complete amplitude summed over graphs. Additionally, we can change the definition of the polarization vectors $\epsilon_n(P_n)$. For reasons of good numerical convergence, we normally use polarization vectors appropriate for Coulomb gauge, but we can switch to a null-plane gauge. The two amplitudes should differ by a phase, so that $|{\cal M}|$ is unchanged.
Another check is obtained by replacing the vector current by a left-handed current or a right-handed current. For even $N$, the left-handed and right-handed results should be the same, while for odd $N$ they should be opposite. For another check, we can reformulate the integral so that we do not define $\ell$ with a scale $\Lambda(x)^{-1}$. In this formulation, the denominator is $$%
[{\cal S}(x)
+ \mi x^0(1-x^0) m_0^2 (1 + \ell^{\mu} P_{\mu\nu}\ell^{\nu})]^{N+1}
\;\;.
%$$ The structure of the integral is quite different, but the result should be the same. For four photons, there is one additional test: the result should be independent of the renormalization parameter $\mu$.
We have subjected the code [@whereiscode] to these checks. The numerical precision of the result is often not high and we have not used every check for every choice of $N$ and external momenta and polarizations. Nevertheless, we have found that where we have tried them, the various checks are always passed. We note that a better check would be to obtain the same results with completely independently written code. We have not done that.
Results {#sec:results}
=======
In this section, we use this code to test how well the method described works. The result for a given choice of helicities of the photons has a phase that depends on the precise definition of the photon polarization vectors $\epsilon_i$. However, the absolute value of the scattering amplitude ${\cal M}$ is independent of the conventions used to define the $\epsilon_i$, so we concentrate on $|{\cal M}|$. Our convention for defining ${\cal M}$ is specified in Eq. (\[eq:lspace0\]). Since $|{\cal M}|$ is proportional to $\alpha^{N/2}$ and has mass dimension $4-N$, we exhibit $|{\cal M}|\times (\sqrt{s})^{N-4}/\alpha^{N/2}$ in our plots. We specify helicities in the form $h_1,h_2,h_3,\dots,h_N$, where 1 and 2 are the incoming particles and, following convention, $h_1$ and $h_2$ are actually the negative of the physical helicities of the incoming photons.
$N = 4$ {#sec:results4}
-------
We begin with $N = 4$, light-by-light scattering. Here we use the subtraction for the ultraviolet divergence in each graph as described in Sec. \[sec:UV\]. For the $N=4$ case the result is known and has been presented in a convenient form in Ref. [@lightbylight]. For the two helicity choices $+$$+$$+$$+$ and $+$$+$$+$$-$, $|{\cal M}|/\alpha^2 = 8$. Our numerical results agree with this. For the choice $+$$+$$-$$-$, the result depends on the value of the scattering angle $\theta$. In Fig. \[fig:Fourphotons\] we exhibit the prediction of Ref. [@lightbylight] versus $\theta - \pi/2$ as a curve and a selection of points obtained by numerical integration as points with error bars. The error bars represent the statistical uncertainty in the Monte Carlo integration. It is not easy to see the error bars in the figure. The fractional errors range from 0.0022 to 0.0034. The points were generated using $10^6$ Monte Carlo points for each of six graphs.
$N = 5$ {#sec:results5}
-------
We turn next to $N=5$. Since the five photon matrix element vanishes after summing over graphs, we use a massless vector boson that couples to the electron with the left-handed part of the photon coupling. The final state phase space has four dimensions, which does not lend itself to a simple plot. Accordingly we have chosen an arbitrary point for the final state momenta $\{\vec p_3, \vec p_4, \vec p_5\}$: $$\begin{split}
%
\vec p_3 ={}& (33.5,15.9,25.0)
\;\;,
\\
\vec p_4 ={}& (-12.5,15.3,0.3)
\;\;,
\\
\vec p_5 ={}& (-21.0,-31.2,-25.3)
\;\;.
%
\end{split}$$ We take photon 1 to have momentum $\vec p_1$ along the $-z$-axis (so the physical incoming momentum is along the $+z$-axis), and we take $\vec p_2$ along the $+z$-axis. Then we create new momentum configurations by rotating the final state through angle $\theta$ about the $y$-axis. In Fig. \[fig:Fivephotons\], we plot computed values of $\sqrt s\,|{\cal M}|/\alpha^{5/2}$ versus $\theta$. The points were generated using $10^6$ Monte Carlo points for each of 24 graphs.
$N = 6$ {#sec:results6}
-------
Finally, we compute the six photon amplitude. Here analytic results are known for the helicity choices $+$$+$$+$$+$$+$$+$ and $+$$+$$+$$+$$+$$-$: for these helicity choices, the amplitude should vanish [@Mahlon]. There is also a non-zero analytical result for the choice $+$$+$$-$$-$$-$$-$ [@MahlonTahoe]. We compute $s\,|{\cal M}|/\alpha^{3}$ for these helicity choices and also for $+$$-$$-$$+$$+$$-$, for which we know of no analytic result. Following what we did for $N=5$, we choose an arbitrary point for the final state momenta $\{\vec p_3, \vec p_4, \vec p_5, \vec p_6\}$: $$\begin{split}
%
\vec p_3 ={}& (33.5,15.9,25.0)
\;\;,
\\
\vec p_4 ={}& (-12.5,15.3,0.3)
\;\;,
\\
\vec p_5 ={}& (-10.0,-18.0,-3.3)
\;\;,
\\
\vec p_6 ={}& (-11.0,-13.2,-22.0)
\;\;.
%
\end{split}$$ We choose $\vec p_1$ and $\vec p_2$ as we did for $N=5$. Then we create new momentum configurations by rotating the final state through angle $\theta$ about the $y$-axis. In Fig. \[fig:Sixphotons\], we plot computed values of $s\,|{\cal M}|/\alpha^{3}$ versus $\theta$. For $+$$+$$+$$+$$+$$+$ helicities, we compute the amplitude at $\theta = 0, 0.2, 0.4, \dots$. The results are consistent with the known result of zero. For $+$$+$$+$$+$$+$$-$ helicities, we compute the amplitude at $\theta = 0.1, 0.3, 0.5, \dots$. The results are again consistent with zero. For $+$$+$$-$$-$$-$$-$ we compare the numerical results to the analytical results of Ref. [@MahlonTahoe] (top curve) and find good agreement. For the helicity choice $+$$-$$-$$+$$+$$-$, the results lie in the range from 2000 to 8000 and exhibit some variation as the final state momenta are varied. We do not have an analytical curve with which to compare. The points were generated using $10^6$ Monte Carlo points for each of 120 graphs.[^5]
Conclusions {#sec:conclusions}
===========
In the calculation of cross sections for infrared-safe observables in high energy collisions at next-to-leading order, one must treat the real emission of one parton beyond the Born level and one must include virtual loop corrections to the Born graphs. Most calculations follow the method of Ref. [@ERT], in which the integration over real emission momenta is performed numerically while the integration over virtual loop momenta is performed analytically. One can, however, perform all of the integrations numerically.[^6]
In one approach to the calculation of loop diagrams by numerical integration, one would use a subtraction scheme [@NSsubtractions] that removes infrared and collinear divergences from the integrand in a style similar to that used for real emission graphs. Then one would perform the loop integration by Monte Carlo integration along with the integrations over final state momenta. In this paper, we have explored how one would perform the numerical integration. We have studied the $N$-photon scattering amplitude with a massless electron loop in order to have a case with a singular integrand that is not, however, so singular as to require the subtractions of Ref. [@NSsubtractions].
One could perform the integration either directly as an integral $\int d^4 l$ or with the help of a different representation of the integral. We have chosen to explore the use of the Feynman parameter representation because it makes the denominator simple. We have found that this method works for the cases of 4, 5, or 6 external legs. There is, in principle, no limitation to the number of external legs. However, for more external legs, the integrand becomes more singular because the denominator is raised to a high power, $N-1$. This is evident in the growth of the integration error as $N$ increases.
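As a toy numerical cross-check of the Feynman parameter trick itself (this example is ours and uses only the textbook two-denominator identity, not the paper's $N$-photon integrand), one can verify $1/(AB) = \int_0^1 dx\,[xA + (1-x)B]^{-2}$ by Monte Carlo:

```python
import random

def feynman_two_denominators(A, B, n_points=200000, seed=1):
    """Monte Carlo estimate of int_0^1 dx [x*A + (1-x)*B]^(-2),
    which equals 1/(A*B) for A, B > 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_points):
        x = rng.random()
        total += 1.0 / (x * A + (1.0 - x) * B) ** 2
    return total / n_points

A, B = 2.0, 5.0
estimate = feynman_two_denominators(A, B)
exact = 1.0 / (A * B)
```

With more denominators the combined denominator appears at a correspondingly higher power, which is the growth in singularity referred to above.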
In many practical calculations, the partons in the loop can have non-zero masses and the partons entering the loop can be off-shell. These possibilities make the analytical results more complicated, but we expect that they make the numerical result more stable by softening the singularities.[^7] However, we leave exploration of this issue for later work.
It is remarkable that the method presented here works for quite a large number of external legs. However, we expect that the method can be improved. One approach lies in making a sequence of small improvements that together amount to a big improvement. Along these lines, one can work on the algorithm for deforming the integration contour and on the sampling methods used for choosing integration points (which methods we have not discussed here). Alternatively, one can look for a different representation of $\cal M$ as an integral. One could use an integral transformation other than that provided by the Feynman parameter representation or one could use a more direct representation of the integral. In particular, the representation of Refs. [@beowulfPRL; @beowulfPRD] recommends itself. Here we turn the integral $\int d^4 l$ into a three dimensional integral $\int d^4 l\ \delta_+(l^2)$ that is rather similar to what one has for real parton emissions. In contrast to Refs. [@beowulfPRL; @beowulfPRD], however, one would use explicit subtractions.[^8] We expect that this method or something similar might be superior to the Feynman parameter method used in this paper because for large $N$ one does not raise a denominator to a high power.
The challenge would be to find a representation that is simple and for which the integrand is well enough behaved that one can get numerical results for, say, $N=12$. We hope that others might accept this challenge.
This work was supported in part by the United States Department of Energy, by the Swiss National Science Foundation (SNF) through grant no. 200020-109162, and by the Hungarian Scientific Research Fund grant OTKA K-60432. We thank T. Binoth for encouraging us to try the Feynman parameter representation. We thank G. Heinrich for pointing us toward Ref. [@lightbylight] and L. Dixon for pointing us toward Ref. [@MahlonTahoe].
Wick rotation {#app:wick}
=============
In this appendix, we explain the contour deformation necessary to obtain the scaled and rotated momenta of Eq. (\[eq:elldef\]). The simplest procedure is to start by rotating the space-parts of the vector $\tilde l$, $$%
\tilde l^0 = k^0\;\;,
\hskip 1 cm
\tilde l^j = e^{-\mi \theta} k^j
\hskip 0.5 cm j = 1,2,3
\;\;.
%$$ Here the components of $k$ are real. We start with $\theta = 0$ and increase $\theta$ until $\theta = \pi/2$. Thus we rotate the $\tilde l$ contour. Throughout the rotation, $\tilde l^2$ has a positive imaginary part. At the end, $\tilde l^2 + \Lambda^2(x)$ becomes $k^\mu P_{\mu\nu} k^\nu + \Lambda^2(x)$, where $k^\mu P_{\mu\nu} k^\nu$ is the Euclidean square of $k$.
The next step is to rotate all of the components of $k$ by half of the phase of $\Lambda^2(x)$, so that after the rotation, $k^\mu P_{\mu\nu} k^\nu$ has the same phase as $\Lambda^2(x)$ (which itself has a positive imaginary part).
Finally we rescale the components of $k$ by the absolute value of $\Lambda(x)$. Thus $$%
k^\mu = \Lambda(x) \ell^\mu\;\;.
%$$ The net transformation is that of Eq. (\[eq:elldef\]). At all stages, the imaginary part of the denominator is positive.
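The claim that the imaginary part stays positive throughout the rotation is easy to check numerically. The following sketch (ours; $k$ is an arbitrary real vector) evaluates $\tilde l^2$ under the parameterization $\tilde l^0 = k^0$, $\tilde l^j = e^{-\mi\theta} k^j$:

```python
import cmath
import math

def minkowski_square_rotated(k, theta):
    """Minkowski square (metric +,-,-,-) of l with l^0 = k^0 and
    l^j = exp(-i*theta) * k^j for real k = (k0, k1, k2, k3)."""
    k0, k1, k2, k3 = k
    phase = cmath.exp(-1j * theta)
    spatial = (phase * k1) ** 2 + (phase * k2) ** 2 + (phase * k3) ** 2
    return k0 ** 2 - spatial

k = (1.3, 0.7, -0.4, 2.1)
# theta in the open interval (0, pi/2): the imaginary part should be positive
values = [minkowski_square_rotated(k, 0.1 * n) for n in range(1, 16)]
```

At $\theta = \pi/2$ the result is the real Euclidean square $(k^0)^2 + \vec k^2$, in accord with the text.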
Contour deformation {#app:DeformCheck}
===================
In this appendix, we exhibit some details of the argument that the integration over Feynman parameters is left invariant by the contour deformation. We start with $$%
I = \int\! d\xi \det A\ F(z(\xi,\lambda))
\;\;.
%$$ Here $z(\xi,\lambda)$ specifies the deformed contour, $$%
\int\! d\xi = \int_0^1 d\xi^1 \cdots \int_0^1 d\xi^N\
\theta(\sum_{j=1}^N \xi^j < 1)
\;\;,
%$$ the matrix $A$ is $$%
A^i_j = \frac{\partial z^i}{\partial \xi^j}
\;\;,
%$$ and $F(z)$ is the integrand, $$%
F(z) = \left(\sum_{i=1}^N z^i\right)^{\!\!N-3}\
\frac{N(l(z,\ell))}
{\big[\Lambda^{2}(z)\big]^{N-1}}
\;\;.
%$$ In order to take care with what happens at the integration boundary, we define a function $R(\xi)$ that measures how far the point $\xi$ is from the boundary, $$%
R(\xi) = \min\!\left(\xi^1,\dots,\xi^N, 1 - \sum_{j=1}^N \xi^j
\right)
\;\;.
%$$ The boundary is at $R(\xi) = 0$, and in the interior $0 < R(\xi) \le 1/(N+1)$. Then $$%
I = \lim_{r \to 0} I(r)
\;\;,
%$$ where $$%
I(r) = \int\! d\xi\ \theta(R(\xi) > r)\,\det A\ F(z(\xi,\lambda))
\;\;.
%$$ Now let us make a small change $\delta \lambda$ in the deformation parameter. If we can prove that the corresponding change $\delta I$ of the integral vanishes, then the integral is the same for $\lambda = 1$ as it was for an infinitesimal $\lambda$. We calculate $\delta I(r)$ for non-zero $r$. As shown in Ref. [@beowulfPRD], $\delta I(r)$ is the integral of a total derivative. Thus we get an integral over the boundary of the contour, $$%
\delta I(r) = \int\!d\xi\ \det A\ \delta(R(\xi) - r) \
\delta R(\xi)\ F(z(\xi))
\;\;.
%$$ Here $$%
\delta R(\xi) \equiv
\sum_{i,j}
\frac{\partial R(\xi)}{\partial \xi^i}\
B^i_j(\xi) \ \frac{\partial z^j(\xi)}{\partial \lambda}\
\delta\lambda
\;\;,
%$$ where $B$ is the inverse matrix to $A$, $$%
\sum_j
A^i_j B^j_k = \delta^i_k
\;\;.
%$$ We think of $A$ as producing a vector $\delta z$ from a vector $\delta \xi$, $\delta z^i = \sum_j A^i_j\, \delta \xi^j$. Then we can think of $B$ as producing a vector $\delta \xi$ from the vector $\delta z$ given by the change of $z$ under the change of deformation, $$%
\delta \xi^i = \sum_{j}B^i_j(\xi) \
\frac{\partial z^j(\xi)}{\partial \lambda}\,\delta\lambda
\;\;.
%$$ This justifies the name $\delta R$ for the combination $$%
\delta R = \sum_i \frac{\partial R}{\partial \xi^i}\,
\delta \xi^i
\;\;.
%$$ Given the ansatz (\[eq:contour\]) for $z(\xi)$, the variation $\delta z$ takes the form $$%
\delta z^k = \frac{\mi\delta\lambda}
{[1 + \mi \lambda\sum_{j=1}^N \eta^j(\xi)]^2}\
\left[
\eta^k(\xi) - \xi^k \sum_{j=1}^N \eta^j(\xi)
\right]
\;\;.
\label{eq:zvariation}
%$$ We build into the definition of the contour deformation the requirement that as any $\xi^k$ vanishes, the corresponding $\eta^k$ also vanishes, with $\eta^k \propto \xi^k$. Then $\delta z^k \propto \xi^k$ in this limit. Also, when $1 - \sum\xi^k \to 0$, it follows from Eq. (\[eq:zvariation\]) that $\sum\delta z^k \propto 1 - \sum\xi^k$. The result is that as we approach a boundary of the integration region, $R(\xi) \to 0$, the function $\delta R(\xi)$ vanishes, with $$%
\delta R(\xi) = R(\xi)\times h(\xi)
\;\;,
%$$ where $h(\xi)$ is non-singular. Thus $$%
\delta I(r) = r \times \int\!d\xi\ \det A\ \delta(R(\xi) - r) \
h(\xi)\ F(z(\xi))
\;\;.
%$$ The factor $r$ would seem to imply that $\delta I(r) \to 0$ as $r \to 0$. However, we should be careful because $F(z(\xi))$ is singular near the boundary of the integration region. To examine this issue, we note that $$%
I = \int_0^{1/(N+1)}\!dr\
\tilde I(r)
\;\;,
%$$ where $$%
\tilde I(r) =
\int\!d\xi\ \det A\ \delta(R(\xi) - r) \
F(z(\xi))
\;\;.
%$$ Were it not for the numerator function (and the UV subtraction in the case $N=4$), the integral for $I$ would be logarithmically divergent. Generically, a one loop integral could produce two logs, so that $\tilde I(r)$ would have a singularity $\log(r)/r$ for $r \to 0$. However, the numerator factor (and UV subtraction if needed) produces an extra factor of $r$. Thus $\tilde I(r) \propto r^0 \log^K(r)$ for $r \to 0$ for some $K$. The power counting for $\delta I(r)/r$, with its extra non-singular factor $h(\xi)$, is the same. Thus $\delta I(r)$ is proportional to $r$ times possible logarithms of $r$ as $r \to 0$.
We conclude that when we make an infinitesimal change of contour with the properties specified in this paper, the variation of the integral vanishes, $$%
\delta I =
\lim_{r \to 0} \delta I (r) = 0
\;\;.
%$$ Thus the integral on the deformed contour is the same as on the original infinitesimally deformed contour.
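In code, the boundary-distance function $R(\xi)$ used in this appendix is one line; a sketch in our own notation:

```python
def R(xi):
    """Distance of a Feynman-parameter point xi = (xi^1, ..., xi^N) from
    the boundary of the integration region: the smallest of the xi^i and
    of 1 - sum(xi)."""
    return min(list(xi) + [1.0 - sum(xi)])
```

The maximum value $1/(N+1)$ is attained at the symmetric point $\xi^i = 1/(N+1)$; on the boundary $R$ vanishes.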
Extra deformation {#app:extradeform}
=================
Here we return to the question of contour deformation. We study a problem that can occur with the standard deformation. We start by stating the problem rather abstractly. Let ${\cal L}$ be a subset of $\{1,2,\dots,N\}$ and let ${\cal B}$ be its complement. Suppose that $$%
S_{ij} = 0 \hskip 1 cm {\rm for}\ i,j \in {\cal B}
\;\;.
%$$ We consider the following limit. Define $$\begin{split}
%
\bar\xi_{\cal L} ={}& \sum_{j \in {\cal L}} \xi^j
%
\end{split}$$ and let $$\begin{split}
%
\xi^j ={}& \bar\xi_{\cal L}\,\hat\xi^j_{\cal L}
\hskip 2.9 cm {\rm for}\ j \in {\cal L}
\;\;,
\\
\xi^j ={}& (1 - \xi^0 - \bar\xi_{\cal L})\,
\hat\xi^j_{\cal B} \hskip 1.0 cm {\rm for}\ j \in {\cal B}
\;\;,
\label{eq:limitdef}
%
\end{split}$$ so that $$%
\sum_{j \in {\cal L}} \hat\xi^j
=\sum_{j \in {\cal B}} \hat\xi^j
=1
\;\;.
%$$ Then we consider the limit $\bar\xi_{\cal L} \to 0$ with $\xi^0 = 0$. Thus the $\xi^i$ for $i \in {\cal B}$ are big and the $\xi^i$ for $i \in {\cal L}$ are little.
In the limit $\bar\xi_{\cal L} \to 0$, ${\cal S}(\xi)$ becomes $$%
{\cal S}(\xi) =
\sum_{i \in {\cal L}} \xi^i \sum_{j \in {\cal B}} S_{ij} \xi^j
+ \frac{1}{2} \sum_{i,j \in {\cal L}} \xi^i \xi^j S_{ij}
=
\bar\xi_{\cal L}
\sum_{i \in {\cal L}} \hat\xi^i_{\cal L}
\tilde w_i(\hat\xi_{\cal B})
+{\cal O}(\bar\xi_{\cal L}^2)
\;\;,
%$$ where $$%
\tilde w_i(\hat\xi_{\cal B}) = \sum_{j \in {\cal B}} S_{ij} \hat\xi^j_{\cal B}
\;\;.
\label{eq:tildewdef}
%$$ If we adopt the standard contour definition from Eq. (\[eq:etadef\]), we have $$%
{\cal S}(\xi + \mi \eta) =
\bar\xi_{\cal L}
\sum_{i \in {\cal L}} \hat\xi^i_{\cal L}
\tilde w_i(\hat\xi_{\cal B})
+\mi \,\frac{\lambda}{m^2}\, \bar\xi_{\cal L}
\sum_{i \in {\cal L}} \hat\xi^i_{\cal L}
[\tilde w_i(\hat\xi_{\cal B})]^2
+ {\cal O}(\bar\xi_{\cal L}^2)
\;\;.
%$$ We see that the surface $\bar \xi_{\cal L} = 0$ with $\xi^0 = 0$ is a singular surface of the integrand. In fact, it is a pinch singular surface. For a generic point $\hat \xi$, ${\cal S}(\xi + \mi \eta)$ vanishes linearly with $\bar \xi_{\cal L}$ as $\bar \xi_{\cal L} \to 0$.
This generic behavior is fine from a numerical point of view. However, we would like to avoid having ${\cal S}(\xi + \mi \eta)$ vanish faster than linearly as $\bar \xi_{\cal L} \to 0$. The real part of ${\cal S}(\xi + \mi \eta)$ can easily vanish quadratically as $\bar \xi_{\cal L} \to 0$. The components $S_{ij}$ can have either sign, so that for some points $\hat \xi$ the particular linear combination $\sum_{i \in {\cal L}} \hat\xi^i_{\cal L} \tilde w_i(\hat\xi_{\cal B})$ can vanish. It is harder for the linear contribution to the imaginary part of ${\cal S}(\xi + \mi \eta)$ to vanish. However, if the set ${\cal B}$ has more than one element, it is possible for $\tilde w_i(\hat\xi_{\cal B})$ to vanish for some particular index $i = I$ at some particular value of $\hat\xi_{\cal B}$. Then if all of the $\hat\xi^i_{\cal L}$ vanish except for $i = I$, we will have $$%
\sum_{i \in {\cal L}} \hat\xi^i_{\cal L}
[\tilde w_i(\hat\xi_{\cal B})]^2
= \hat\xi^{I}_{\cal L}
[\tilde w_{I}(\hat\xi_{\cal B})]^2
= 0
\;\;.
%$$ For this choice of the $\hat \xi^i$, we will have $\Lambda^2(\xi + \mi \eta) = {\cal O}(\bar\xi_{\cal L}^2)$ if we take the standard deformation. One might think that having an esoteric integration region in which the integrand is extra singular is not a problem. However, in a numerical integration it is a problem. One possibility is to put extra integration points in the region of extra singularity, but a more attractive possibility is to fix the contour deformation so as to better keep the integration contour away from the singularity. At the same time, we need to avoid letting the jacobian $\det\!\left({dz}/{d\xi}\right)$ in Eq. (\[eq:lxspacedeformed\]) become singular. This is the strategy we will pursue.
Until now, we have followed a rather abstract formulation of the problem for the reason that the same abstract problem occurs in several ways in the subtraction terms defined in Ref. [@NSsubtractions] to take care of infrared divergent graphs. In this paper, however, we are concerned with infrared finite graphs representing photon scattering with a massless electron loop. The problem is associated with the region in the original loop integral in which lines $n$ and $n+1$ are nearly collinear. With massless kinematics, $S_{nn} = S_{n+1,n+1} = S_{n,n+1} = 0$. Thus the matrix $S_{ij}$ has the special form with ${\cal B} = \{n,n+1\}$ and ${\cal L}$ consisting of all index values $i \in \{1,\dots,N\}$ other than $n$ and $n+1$.
We will seek a supplementary deformation $\tilde\eta^n(\xi)$ and $\tilde\eta^{n+1}(\xi)$ that we can add to the standard deformation. In the following, we consider $n$ to be any fixed index value in the range $1 \le n \le N$. There will be an analogous deformation for any $n$. For reasons that will become apparent, we want to have just one of these added deformations $\tilde \eta(\xi)$ for any value of $\xi$. For this reason, we will arrange that $\tilde\eta^n(\xi)$ and $\tilde\eta^{n+1}(\xi)$ are nonzero only in the region $$%
\label{eq:Rndef}
{\cal R}_n:\quad
\xi^n + \xi^{n+1} > {\textstyle\frac{1}{2}},\
\xi^n > \xi^{n+2},\
\xi^{n+1} > \xi^{n-1}
\;\;.
%$$ It is easy to verify that the various regions ${\cal R}_n$ are non-overlapping.
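The non-overlap of the regions ${\cal R}_n$ can also be checked by random sampling. In this sketch (ours) we take the indices in the defining inequalities to wrap around cyclically, which is our reading of the index arithmetic:

```python
import random

def in_region(xi, n):
    """Test whether xi lies in region R_n of Eq. (eq:Rndef).
    Indices are 0-based and taken cyclically (our convention)."""
    N = len(xi)
    return (xi[n] + xi[(n + 1) % N] > 0.5
            and xi[n] > xi[(n + 2) % N]
            and xi[(n + 1) % N] > xi[(n - 1) % N])

def sample_simplex(N, rng):
    """Uniform random point in the open simplex xi^i > 0, sum(xi) < 1
    (spacings of N sorted uniforms)."""
    cuts = sorted(rng.random() for _ in range(N))
    return [cuts[0]] + [cuts[i] - cuts[i - 1] for i in range(1, N)]

rng = random.Random(7)
max_overlap = 0
for _ in range(20000):
    xi = sample_simplex(6, rng)
    max_overlap = max(max_overlap, sum(in_region(xi, n) for n in range(6)))
```

No sampled point should lie in more than one region.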
Given that there is an added deformation $\tilde\eta^n(\xi)$ and $\tilde\eta^{n+1}(\xi)$, there is an added contribution to ${\rm Im}\,{\cal S}$ that has the form $$\begin{split}
%
\Delta\,{\rm Im}\,{\cal S}(\xi + \mi \eta) ={}&
\sum_{i \in {\cal L}}\xi^i
\left\{
\tilde\eta^{n} S_{n,i}
+\tilde\eta^{n+1} S_{n+1,i}
\right\}
\;\;.
\label{eq:DeltaLambda}
%
\end{split}$$ When we include the standard contour definition from Eq. (\[eq:etadef\]), the total ${\rm Im}\,{\cal S}$ is $$\begin{split}
%
{\rm Im}\,{\cal S}(\xi + \mi \eta) ={}&
\sum_{i \in {\cal L}} \xi^i
\left\{
\tilde\eta^{n} S_{n,i}
+\tilde\eta^{n+1} S_{n+1,i}
+(\lambda/m^2)[w_i(\xi)]^2
\right\}
\\ &
+\sum_{i\in {\cal B}}
(\lambda/m^2) \xi^i [w_i(\xi)]^2
\;\;.
\label{eq:ImLambdamod1}
%
\end{split}$$ We take the extra deformations to be of the form $\pm (\lambda/m^2)\, g(\xi)$: $$\begin{split}
%
\label{eq:tildeetadef}
\tilde\eta^n ={}&
(\lambda/m^2)\, g(\xi) \;,
\\
\tilde\eta^{n+1} ={}&
- (\lambda/m^2)\, g(\xi) \;.
%
\end{split}$$ Here $g(\xi)$ is a function to be defined.
With the definition (\[eq:tildeetadef\]) for the added deformation together with the standard deformation, we have $$\begin{split}
%
{\rm Im}\,{\cal S}(\xi + \mi \eta) ={}&
(\lambda/m^2)
\sum_{i \in {\cal L}} \xi^i
\left\{
(S_{n,i} - S_{n+1,i})g(\xi)
+[w_i(\xi)]^2
\right\}
\\ &+
\sum_{i\in {\cal B}}
(\lambda/m^2) \xi^i [w_i(\xi)]^2
\;\;.
\label{eq:ImLambdamod2}
%
\end{split}$$ The second term here is always positive but vanishes quadratically with $\bar\xi_{\cal L}$ in the limit $\bar\xi_{\cal L} \to 0$, so it is too small to help us in this limit. In the first term, the parts proportional to $[w_i(\xi)]^2$ are always positive and for typical values of $\xi^n$ and $\xi^{n+1}$ vanish linearly with $\bar\xi_{\cal L}$. However, $w_i(\xi)$ for some value of $i$ can vanish in this limit for particular values of $\xi^n$ and $\xi^{n+1}$. This is the reason for adding the new deformation specified by $g(\xi)$.
We need to ensure that $$%
\label{eq:goal}
(S_{n,i} - S_{n+1,i})g(\xi)
+[w_i(\xi)]^2
> 0
%$$ for all $i \in {\cal L}$ and for all $\xi$. Here we really want a “$>$” relation and not just a “$\ge$” relation, which could be satisfied with $g(\xi) = 0$.
This goal can be accomplished by a straightforward construction. First, we need some notation indicating certain sets of indices. Let ${\cal L}_{+}$ be the set of indices in ${\cal L}$ such that $S_{n+1,i} > S_{n,i}$ and let ${\cal L}_{-}$ be the set of indices in ${\cal L}$ such that $S_{n,i} > S_{n+1,i}$. The union of ${\cal L}_{+}$ and ${\cal L}_{-}$ is all of ${\cal L}$. (Here we assume that the external momenta do not lie on the surface $S_{n+1,i} = S_{n,i}$ for any $i$ in $\cal L$.)
Define $$\begin{split}
%
g_+(\xi) ={}& \min_{i \in {\cal L}_+} \frac{[w_i(\xi)]^2}{S_{n+1,i} - S_{n,i}}
\;\;,
\\
g_-(\xi) ={}& \min_{i \in {\cal L}_-} \frac{[w_i(\xi)]^2}{S_{n,i} - S_{n+1,i}}
\;\;.
%
\end{split}$$ Then Eq. (\[eq:goal\]) requires that $$%
\label{eq:goal2}
- g_-(\xi) < g(\xi) < g_+(\xi)
\;\;.
%$$ Of particular interest is the requirement for a special point for which $w_i(\xi) = 0$. If $i \in {\cal L}_-$, then $g_-(\xi) = 0$ at this point and the requirement is that $g(\xi)$ be positive (but not too positive). If $i \in {\cal L}_+$, then $g_+(\xi) = 0$ at this point and the requirement is that $g(\xi)$ be negative (but not too negative).
These restrictions are mild in the case that either $g_+(\xi)$ or $g_-(\xi)$ is large or in the case that one of the sets ${\cal L}_\pm$ is empty (in which case we interpret the corresponding $g_\pm$ to be infinite). In order to ensure that the deformation not be too large, we can also impose $$%
\label{eq:goal3}
- \lambda_g < (\lambda/m^2)\,g(\xi) < \lambda_g
\;\;,
%$$ where $\lambda_g$ is a parameter that could be chosen to be $\lambda$. Defining $$%
\tilde g_\pm(\xi) = \min[g_\pm(\xi),m^2 \lambda_g/\lambda]
\;\;,
%$$ our requirement is $$%
\label{eq:goal4}
- \tilde g_-(\xi) < g(\xi) < \tilde g_+(\xi)
\;\;.
%$$ It is easy to satisfy Eq. (\[eq:goal4\]). We set $$%
\label{eq:gdef}
g(\xi) = H(\xi)\, C_g
\big[ \tilde g_+(\xi) - \tilde g_-(\xi) \big]
\;\;,
%$$ where $C_g$ is a parameter in the range $0 < C_g < 1$ (possibly 1/2) and $$\begin{split}
%
\label{eq:Hdef}
H(\xi) ={}&
(2\xi^n + 2\xi^{n+1} - 1)\
4(\xi^n - \xi^{n+2})(\xi^{n+1} - \xi^{n-1})
\\& \times
\theta(\xi^n + \xi^{n+1} > {\textstyle\frac{1}{2}})\
\theta(\xi^n > \xi^{n+2})\
\theta(\xi^{n+1} > \xi^{n-1})
\;\;.
%
\end{split}$$ The purpose of $H(\xi)$ is to restrict the range of $\xi$ for which $g(\xi) \ne 0$ to the desired region ${\cal R}_n$, Eq. (\[eq:Rndef\]). There is also a factor that becomes $4 \xi^n \xi^{n+1}$ in the limit $\bar \xi_{\cal L} \to 0$. This factor turns off the deformation as $\xi^n \to 0$ or $\xi^{n+1} \to 0$. Notice that $$%
0 \le H(\xi) \le 1
\;\;.
%$$ With the use of this property, it is evident that the definition (\[eq:gdef\]) satisfies Eq. (\[eq:goal4\]).
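A quick numerical check of the bound $0 \le H(\xi) \le 1$ (our own sketch, again with cyclic 0-based indexing):

```python
import random

def H(xi, n):
    """Window function H(xi) of Eq. (eq:Hdef), nonzero only inside
    region R_n; indices 0-based and cyclic (our convention)."""
    N = len(xi)
    a, b = xi[n], xi[(n + 1) % N]
    c, d = xi[(n + 2) % N], xi[(n - 1) % N]
    if not (a + b > 0.5 and a > c and b > d):
        return 0.0
    return (2 * a + 2 * b - 1) * 4 * (a - c) * (b - d)

def sample_simplex(N, rng):
    """Uniform random point in the open simplex xi^i > 0, sum(xi) < 1."""
    cuts = sorted(rng.random() for _ in range(N))
    return [cuts[0]] + [cuts[i] - cuts[i - 1] for i in range(1, N)]

rng = random.Random(3)
vals = [H(sample_simplex(6, rng), 0) for _ in range(20000)]
```

Inside ${\cal R}_n$ all three factors are positive, and $2\xi^n + 2\xi^{n+1} - 1 \le 1$ together with $4(\xi^n - \xi^{n+2})(\xi^{n+1} - \xi^{n-1}) \le (\xi^n + \xi^{n+1})^2 \le 1$ gives the upper bound.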
Suppose that there is an index $i \in {\cal L}_+$ and an index $j \in {\cal L}_-$ such that $$\begin{split}
%
S_{i,n+1} >{}& 0 \hskip 1 cm S_{i,n} < 0
\;\;,
\\
S_{j,n} >{}& 0 \hskip 1 cm S_{j,n+1} < 0
\;\;,
%
\end{split}$$ and that $$%
S_{i,n}S_{j,n+1} - S_{j,n}S_{i,n+1} \ll s^2
\;\;.
%$$ Then there is approximately an effective contour pinch, in the sense that both $g^+(\xi)$ and $g^-(\xi)$ are close to vanishing when all of the $\xi^k$ for $k \in {\cal L}$ are very small and $$\begin{split}
%
\xi^n ={}& \frac{S_{i,n+1} - S_{j,n+1}}
{S_{i,n+1} - S_{i,n} + S_{j,n} - S_{j,n+1}}
\;\;,
\\
\xi^{n+1} ={}& \frac{ S_{j,n} - S_{i,n}}
{S_{i,n+1} - S_{i,n} + S_{j,n} - S_{j,n+1}}
\;\;.
%
\end{split}$$ The contour is pinched already along the whole collinear singularity line $\xi^k = 0$ for $k \in {\cal L}$, but this is an extra pinch that prevents the deformation of this section from being effective. For this reason, one should put extra integration points in the region near this point.
It is instructive to examine the functions $w_i(\xi)$ for $i \in {\cal L}$ in the limit that $\xi^0$ and all of the $\xi^j$ for $j \in {\cal L}$ vanish (and assuming massless kinematics). Then the only two $\xi^i$ that are non-zero are $\xi^n$ and $\xi^{n+1} = 1 - \xi^n$. Following the notation of Eq. (\[eq:tildewdef\]), we can call the limiting function $\tilde w_i(\xi^n)$. We would like to know for what value of $\xi^n$ (if any) this function vanishes. We have $$%
\tilde w_i(\xi^n) = S_{in}\xi^n + S_{i,n+1}(1 - \xi^n)
\;\;.
%$$ Evidently $\tilde w_i(\xi^n)$ will vanish for some $\xi^n$ in the range $0 < \xi^n < 1$ if and only if $S_{in}$ and $S_{i,n+1}$ are non-zero and have opposite signs.
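The statement about the zero of $\tilde w_i$ is easy to verify in code (a sketch, with illustrative values of $S_{in}$ and $S_{i,n+1}$ chosen by us):

```python
def tilde_w(S_in, S_in1, xi_n):
    """tilde w_i as a function of xi^n: S_{in} xi^n + S_{i,n+1} (1 - xi^n)."""
    return S_in * xi_n + S_in1 * (1.0 - xi_n)

def tilde_w_root(S_in, S_in1):
    """Zero of tilde_w: xi^n = S_{i,n+1} / (S_{i,n+1} - S_{in});
    assumes S_{in} != S_{i,n+1}."""
    return S_in1 / (S_in1 - S_in)
```

With opposite signs the root lands inside $(0,1)$; with equal signs it does not.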
Thus we need to know something about the signs of $S_{in}$ and $S_{i,n+1}$. First, we note that if $i = n-1$ then $S_{in} = 0$. Furthermore, if all of the particles $i,i+1,\dots,n-1$ are final state particles, then $$%
S_{in} = \left(
\sum_{j = i}^{n-1} P_j
\right)^2 > 0
\;\;.
%$$ If two of the particles $i,i+1,\dots,n-1$ are the two initial state particles, then $$%
S_{in} = \left(
\sum_{j = n}^{i-1} P_j
\right)^2 > 0
\;\;.
%$$ If exactly one of the particles $i,i+1,\dots,n-1$ is an initial state particle, then $S_{in} < 0$. The proof of this amounts to showing that if a massless particle $A$ turns into a massive particle $A'$ by exchanging a momentum $Q$ and a massless particle $B$ turns into a massive particle $B'$ by absorbing the momentum $Q$, then $Q^2 < 0$. We omit the details.
Given these results and the analogous results for $S_{i,n+1}$, we can conclude that $S_{in}$ and $S_{i,n+1}$ are non-zero and have opposite signs when $i$ is neither $n-1$ nor $n+2$ and external photon $n$ is an incoming particle.
Suppose that photon $n$ is an incoming particle. Then the propagator index $i$ is in ${\cal L}_-$, with $S_{in} > 0$ and $S_{i,n+1} < 0$, if all of the external particles $i,i+1,\dots,n-1$ are final state particles. If, on the other hand, one of them is the other initial state particle, then $S_{in} < 0$ and $S_{i,n+1} > 0$ and $i$ is in ${\cal L}_+$.
As we have seen, when photon $n$ is an incoming particle, the zero of $\tilde w_i(\xi^n)$, $$%
\xi^n_{(i)} = \frac{S_{i,n+1}}{S_{i,n+1} - S_{i,n}}
\;\;,
%$$ lies in the integration range, $0 < \xi^n < 1$. In this case, we can say more about the location of this zero. Let the index of the other incoming photon be $n'$. Define $$%
\tau = - 2 P_{n'} \cdot \sum_{j = n+1}^{n'-1} P_j /s
\;\;.
%$$ Then if the index $i$ is in the range $n'+1,\dots,n-1$, we have $S_{i,n} > 0$ and $S_{i,n+1} < 0$ so $i \in {\cal L}_-$. Then one can show that $$%
\xi^n_{(i)} > \tau
\;\;.
\label{tauinequality1}
%$$ On the other hand, if the index $i$ is in the range $n+2,\dots,n'$, we have $S_{i,n} < 0$ and $S_{i,n+1} > 0$ so $i \in {\cal L}_+$. Then one can show that $$%
\xi^n_{(i)} < \tau
\;\;.
\label{tauinequality2}
%$$ To prove Eq. (\[tauinequality2\]), write each outgoing momentum in the form $$%
P_i = -a_i P_n - b_i P_{n'} + P_i^T
%$$ where $P_i^T \cdot P_n = P_i^T \cdot P_{n'} = 0$ and $0 < a_i < 1$ and $0 < b_i < 1$. Then for $i$ in the range $n+2,\dots,n'$ we have $$%
S_{i,n+1} = \left(\sum_{j = n+1}^{i-1} P_j\right)^2
= \left(\sum_{j = n+1}^{i-1} a_j\right)\left(\sum_{j = n+1}^{i-1} b_j\right)s
+ \left(\sum_{j = n+1}^{i-1} P_j^T\right)^2
\;\;.
%$$ For $S_{i,n}$ we add one more particle $n$ with $a_n = -1$, $b_n = 0$, and no $P_n^T$. Thus $$%
S_{i,n} = \left(\sum_{j = n}^{i-1} P_j\right)^2
= -\left(1 - \sum_{j = n+1}^{i-1} a_j\right)
\left(\sum_{j = n+1}^{i-1} b_j\right)s
+ \left(\sum_{j = n+1}^{i-1} P_j^T\right)^2
\;\;.
%$$ Then $$%
\xi^n_{(i)} = \frac{\left(\sum_{j = i}^{n-1} a_j\right)
\left(\sum_{j = i}^{n-1} b_j\right)s
+ \left(\sum_{j = i}^{n-1} P_j^T\right)^2}
{\left(\sum_{j = i}^{n-1} b_j\right)s}
\;\;.
%$$ The $(P^T)^2$ term in the numerator is negative. Thus $$%
\xi^n_{(i)} < \sum_{j = i}^{n-1} a_j
< \sum_{j = n'}^{n-1} a_j
= \tau
\;\;.
%$$ The proof of Eq. (\[tauinequality1\]) is similar.
Thus in the limit that all of the $\xi^j$ for $j \in {\cal L}$ are very small, the qualitative nature of the deformation constructed here is quite simple. We deform $\xi^n$ into the upper half plane for $\xi^n > \tau$ and into the lower half plane for $\xi^n < \tau$.
Double parton scattering singularity {#sec:appendixdps}
====================================
As discussed briefly in Sec. \[sec:dps\], a pinch singular point corresponding to double parton scattering can be present if a special condition holds for the external momenta. This singularity is illustrated in Fig. \[fig:dps\]. Imagine that the incoming parton with index $A$ carries momentum $-P_A$ such that $P_A^2 = 0$ and that parton $A$ splits into two collinear partons with labels $A$ and $A+1$. That is, $-K_A(\xi) = -(1-x_A) P_A$ and $K_{A+1}(\xi) = -x_A P_A$. Imagine also that the incoming parton with index $B$ carries momentum $-P_B$ such that $P_B^2 = 0$ and that parton $B$ splits into two collinear partons with labels $B$ and $B+1$. That is, $-K_B(\xi) = -(1-x_B) P_B$ and $K_{B+1}(\xi) = - x_B P_B$. (Here the $x$’s are momentum fractions, not Feynman parameters.) Partons $A+1$ and $B$ could meet and produce a group of final state partons with labels $i$ in a set ${\cal A} = \{A+1,\dots,B-1\}$. Partons $B+1$ and $A$ could meet and produce a group of final state partons with labels $i$ in a set ${\cal B}= \{B+1,\dots,A-1\}$. Thus $$\begin{split}
%
\sum_{i \in {\cal A}} P_i ={}& - x_A P_A - (1-x_B)P_B
\;\;,
\\
\sum_{i \in {\cal B}} P_i ={}& - (1 - x_A) P_A - x_B P_B
\;\;.
\label{eq:dps1}
%
\end{split}$$ It is convenient to write Eq. (\[eq:dps1\]) in terms of the internal line momenta $Q_i$ using Eq. (\[eq:Qndef\]). We note immediately that $\sum_{i \in {\cal A}} P_i = Q_{B} - Q_{A+1}$ and $\sum_{i \in {\cal B}} P_i = Q_{A} - Q_{B+1}$ are timelike vectors. Thus $$\begin{split}
%
S_{A+1,B} >{}& 0 \;\;,
\\
S_{A,B+1} >{}& 0 \;\;.
%
\end{split}$$ Notice that for this kind of singularity to occur, we need at least two external lines in set ${\cal A}$ and two in set ${\cal B}$, hence at least four outgoing external particles, so that $N \ge 6$.
Given the external momenta, the momentum fractions $x_A$ and $x_B$ are determined. When rewritten in terms of the $Q_i$, the first of Eq. (\[eq:dps1\]) reads $$%
Q_{A+1} - Q_{B+1} = x_A (Q_{A+1} - Q_A) - x_B (Q_{B+1} - Q_B)
\;\;,
\label{eq:dps2}
%$$ while the second equation in (\[eq:dps1\]) is just the negative of this. Take the inner product of this with $(Q_{B+1} - Q_B)$ and use $$%
2(Q_{A+1} - Q_A)\cdot (Q_{B+1} - Q_B) = \overline S
\;\;,
%$$ where $$%
\overline S \equiv S_{A,B+1} + S_{A+1,B} - S_{A,B} - S_{A+1,B+1}
\;\;.
\label{eq:overlineSdef}
%$$ Also note that $(Q_{B+1} - Q_B)^2 = 0$ and $$%
2(Q_{A+1} - Q_{B+1})\cdot (Q_{B+1} - Q_B)
= S_{A+1,B} - S_{A+1,B+1} \;\;.
%$$ These relations give $$%
x_A = \frac{S_{A+1,B} - S_{A+1,B+1}}
{\overline S}
\;\;.
%$$ We similarly derive $$%
x_B = \frac{S_{A,B+1} - S_{A+1,B+1}}
{\overline S}
\;\;.
%$$
The kinematic conditions require that $$%
\label{eq:doublescatteringcondition}
\sum_{i \in {\cal A}} P_i^{\rm T} = 0
\;\;,
%$$ where $P_i^{\rm T}$ is the part of $P_i$ transverse to $P_A$ and $P_B$. (In this frame the sum of the transverse momenta of all of the final state particles vanishes, so the sum of the $P_i^{\rm T}$ for the particles in set ${\cal B}$ also vanishes if the sum for set ${\cal A}$ vanishes.) The condition for this to happen is obtained by squaring both sides of Eq. (\[eq:dps2\]) and inserting the solutions for $x_A$ and $x_B$. This gives $$%
\label{eq:detiszero}
S_{A+1,B}\, S_{A,B+1} - S_{A,B}\, S_{A+1,B+1} = 0
\;\;.
%$$ That is, $$%
\det
\left(
\begin{matrix}
S_{A,B} & S_{A+1,B} \\
S_{A,B+1} & S_{A+1,B+1}
\end{matrix}
\right)
= 0
\;\;.
%$$ Recall that in order to have a double parton scattering singularity, $S_{A+1,B}>0$ and $S_{A,B+1}>0$. The determinant condition then implies that $S_{A,B}$ and $S_{A+1,B+1}$ have the same sign. In fact, this sign must be negative. To see this, one may note that, because of Eq. (\[eq:detiszero\]), two alternative expressions for $x_A$ are also valid: $$%
x_A = \frac{S_{A+1,B}}{S_{A+1,B} - S_{A,B}}
=
\frac{-S_{A+1,B+1}}{S_{A,B+1} - S_{A+1,B+1}}
=
\frac{S_{A+1,B}-S_{A+1,B+1}}
{\overline S}
\;\;.
%$$ Using the first of these, we see that $x_A > 0$ implies that $S_{A+1,B} - S_{A,B} > 0$. But then $x_A < 1$ implies that $S_{A,B} < 0$. We conclude that for a double parton scattering singularity, $S_{A+1,B}>0$ and $S_{A,B+1}>0$, $S_{A,B} < 0$, and $S_{A+1,B+1} < 0$.
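The agreement of the three expressions for $x_A$ under the determinant condition (\[eq:detiszero\]) can be checked with toy values carrying the required signs, $S_{A+1,B} > 0$, $S_{A,B+1} > 0$, $S_{A,B} < 0$ (the numbers below are ours, chosen only for illustration):

```python
def x_A_expressions(S_A1B, S_AB1, S_AB):
    """Evaluate the three expressions for x_A quoted in the text, with
    S_{A+1,B+1} fixed by the determinant condition
    S_{A+1,B} S_{A,B+1} = S_{A,B} S_{A+1,B+1}."""
    S_A1B1 = S_A1B * S_AB1 / S_AB          # determinant condition
    Sbar = S_AB1 + S_A1B - S_AB - S_A1B1   # \overline S of the text
    x1 = S_A1B / (S_A1B - S_AB)
    x2 = -S_A1B1 / (S_AB1 - S_A1B1)
    x3 = (S_A1B - S_A1B1) / Sbar
    return x1, x2, x3

x1, x2, x3 = x_A_expressions(2.0, 3.0, -1.0)
```

All three expressions agree and give a momentum fraction in the physical range $0 < x_A < 1$.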
What does this mean in terms of solving Eq. (\[eq:pinchcondition1\])? We demand that Eq. (\[eq:pinchcondition1\]) hold for nonzero $\xi^A$, $\xi^{A+1}$, $\xi^B$ and $\xi^{B+1}$ with all of the other $\xi^i = 0$. Thus we need $w_B = w_{B+1} = 0$, or $$%
\left(
\begin{matrix}
S_{A,B} & S_{A+1,B} \\
S_{A,B+1} & S_{A+1,B+1}
\end{matrix}
\right)
\left(
\begin{matrix}
\xi^A \\
\xi^{A+1}
\end{matrix}
\right)
= 0
\;\;.
%$$ Similarly we need $w_A = w_{A+1} = 0$, or $$%
\left(
\begin{matrix}
S_{A,B} & S_{A,B+1} \\
S_{A+1,B} & S_{A+1,B+1}
\end{matrix}
\right)
\left(
\begin{matrix}
\xi^B \\
\xi^{B+1}
\end{matrix}
\right)
= 0
\;\;.
%$$ One can solve these if the determinant of the matrix is zero, that is if Eq. (\[eq:detiszero\]) holds. If it does, the solution with $\xi^A + \xi^{A+1} + \xi^B + \xi^{B+1} = 1$ is $$\begin{split}
%
\xi^A ={}& f_A\, \bar x \;\;,
\\
\xi^{A+1} ={}& (1-f_A)\, \bar x \;\;,
\\
\xi^B ={}& f_B\, (1- \bar x) \;\;,
\\
\xi^{B+1} ={}& (1-f_B)\, (1 - \bar x )\;\;,
%
\label{eq:xifordps}
\end{split}$$ where $$\begin{split}
%
f_A ={}& \frac{S_{A+1,B}}{S_{A+1,B} - S_{A,B}}
=
\frac{-S_{A+1,B+1}}{S_{A,B+1} - S_{A+1,B+1}}
=
\frac{S_{A+1,B}-S_{A+1,B+1}}
{\overline S}
\;\;,
\\
f_B ={}& \frac{S_{A,B+1}}{S_{A,B+1} - S_{A,B}}
=
\frac{- S_{A+1,B+1}}{S_{A+1,B} - S_{A+1,B+1}}
=
\frac{S_{A,B+1} - S_{A+1,B+1}}
{\overline S} \;\;.
%
\end{split}$$ That is, $f_A = x_A$ and $f_B = x_B$. In order for the pinch singularity to be inside the integration region, $\xi^A$, $\xi^{A+1}$, $\xi^B$ and $\xi^{B+1}$ need to be positive. Thus we need to choose $\bar x$ in the range $$%
0 < \bar x < 1
\;\;.
%$$ It is of interest to work out the momenta $K_i(\xi)$, Eq. (\[eq:Kndef\]), when the external momenta obey the condition (\[eq:dps2\]) for a double parton scattering singularity and the Feynman parameters $\xi$ are given by Eq. (\[eq:xifordps\]). One finds $$\begin{split}
%
K_A(\xi) ={}& (1-f_A) P_A \;\;,
\\
K_{A+1}(\xi) ={}& -f_A P_A \;\;,
\\
K_B(\xi) ={}& (1-f_B) P_B \;\;,
\\
K_{B+1}(\xi) ={}& -f_B P_B \;\;.
%
\end{split}$$ These are, of course, the relations we started with.
We learn that if the determinant condition (\[eq:detiszero\]) and certain sign conditions hold, $\Lambda^2(\xi)$ has a pinch singularity along a line that runs through the middle of the integration region. Now, the pinch singularity conditions hold only for certain special choices of the external momenta. However, one can easily be close to having a pinch singularity. For this reason, in a numerical program, one should check for each graph whether $|S_{A+1,B}\, S_{A,B+1} - S_{A,B}\, S_{A+1,B+1}| \ll \overline S^2$ holds, with the required sign conditions, for some choice of indices $A$ and $B$. In that event, one should put a high density of integration points near the “almost” singular line.
[99]{}
R. K. Ellis, D. A. Ross and A. E. Terrano, Nucl. Phys. B [**178**]{}, 421 (1981).
J. M. Campbell and R. K. Ellis, Phys. Rev. D [**60**]{} (1999) 113006 \[arXiv:hep-ph/9905386\]; Phys. Rev. D [**62**]{} (2000) 114012 \[arXiv:hep-ph/0006304\]; Phys. Rev. D [**65**]{} (2002) 113007 \[arXiv:hep-ph/0202176\].
Z. Nagy and Z. Trócsányi, Phys. Lett. B [**414**]{}, 187 (1997) \[arXiv:hep-ph/9708342\]; Phys. Rev. D [**59**]{}, 014020 (1999) \[Erratum-ibid. D [**62**]{}, 099902 (2000)\] \[arXiv:hep-ph/9806317\]; Phys. Rev. Lett. [**87**]{}, 082001 (2001) \[arXiv:hep-ph/0104315\]; Z. Nagy, Phys. Rev. Lett. [**88**]{} (2002) 122003 \[arXiv:hep-ph/0110315\]; Phys. Rev. D [**68**]{} (2003) 094002 \[arXiv:hep-ph/0307268\].
For example, T. Binoth, J. P. Guillet, G. Heinrich, E. Pilon and C. Schubert, JHEP [**0510**]{}, 015 (2005) \[arXiv:hep-ph/0504267\]; R. K. Ellis, W. T. Giele and G. Zanderighi, JHEP [**0605**]{}, 027 (2006) \[arXiv:hep-ph/0602185\]; Phys. Rev. D [**72**]{}, 054018 (2005) \[arXiv:hep-ph/0506196\]; C. F. Berger, Z. Bern, L. J. Dixon, D. Forde and D. A. Kosower, Phys. Rev. D [**74**]{}, 036009 (2006) \[arXiv:hep-ph/0604195\]; R. Britto, E. Buchbinder, F. Cachazo and B. Feng, Phys. Rev. D [**72**]{}, 065012 (2005) \[arXiv:hep-ph/0503132\]; R. Britto, B. Feng and P. Mastrolia, Phys. Rev. D [**73**]{}, 105004 (2006) \[arXiv:hep-ph/0602178\]; Z. G. Xiao, G. Yang and C. J. Zhu, arXiv:hep-ph/0607017; A. Brandhuber, S. McNamara, B. J. Spence and G. Travaglini, JHEP [**0510**]{}, 011 (2005) \[arXiv:hep-th/0506068\]; G. Ossola, C. G. Papadopoulos and R. Pittau, arXiv:hep-ph/0609007; A. Denner and S. Dittmaier, Nucl. Phys. B [**734**]{}, 62 (2006) \[arXiv:hep-ph/0509141\].
D. E. Soper, Phys. Rev. Lett. [**81**]{}, 2638 (1998) \[arXiv:hep-ph/9804454\]; D. E. Soper, Phys. Rev. D [**62**]{}, 014009 (2000) \[arXiv:hep-ph/9910292\].
T. Binoth, J. P. Guillet, G. Heinrich, E. Pilon and C. Schubert, JHEP [**0510**]{}, 015 (2005) \[arXiv:hep-ph/0504267\].
Z. Nagy and D. E. Soper, JHEP [**0309**]{}, 055 (2003) \[arXiv:hep-ph/0308127\].
J. D. Bjorken and S. D. Drell, [*Relativistic Quantum Field Theory*]{}, (McGraw-Hill, New York, 1965).
The code is available at [http://physics.uoregon.edu/$\sim$soper/Nphoton/]{}.
T. Binoth, E. W. N. Glover, P. Marquard and J. J. van der Bij, JHEP [**0205**]{}, 060 (2002) \[arXiv:hep-ph/0202266\].
G. Mahlon, Phys. Rev. D [**49**]{}, 2197 (1994) \[arXiv:hep-ph/9311213\].
G. Mahlon, in [*Beyond the Standard Model IV*]{}, proceedings of the Fourth International Conference on Physics Beyond the Standard Model, edited by J.F. Gunion, T. Han, and J. Ohnemus (World Scientific, River Edge NJ, 1995), hep-ph/9412350.
S. Catani, talk at the workshop High Precision for Hard Processes at the LHC, Zurich, September 2006.
[^1]: Here “known” may mean that there exists a computer program to calculate the desired scattering amplitude in terms of known master integrals or other special functions. There are many approaches, some of which are conventionally called “semi-numerical” because parts of the calculation involve a numerical approach.
[^2]: Throughout this paper, we adopt a cyclic notation for indices in the range $\{1,2,\cdots,N\}$. Thus Eq. (\[eq:Qndef\]) for $n = N$ is $P_N = Q_1 - Q_N$.
[^3]: Specifically, $\epsilon_i(P_i)$ for an outgoing photon is $\epsilon^*(P_i,s_i) = \epsilon(P_i,-s_i)$, where $s_i$ is the helicity of the photon. For an incoming photon, we follow the convention of using a helicity label $s_i$ equal to the negative of the physical helicity of the photon. Then $\epsilon_i(P_i)$ is $\epsilon(-P_i,-s_i)$.
[^4]: This is a non-trivial deformation for all $\xi$ such that $\eta(\xi) \ne 0$ with one exception. If all of the $\xi^i$ vanish except for $\xi^n$, in which case $\xi^n = 1$, then $z^i = \xi^i$ for all $i$ even if $\eta^n \ne 0$. This possibility does not cause any problems.
[^5]: This takes a bit under an hour for each point on one chip of our computer, but we note that computer timings are dependent on the computer and the compiler.
[^6]: That it is practical to do so has been demonstrated for the case of three jet production in electron-positron annihilation [@beowulfPRL; @beowulfPRD]. However, the method used there does not extend well to the hadron-hadron case.
[^7]: Having an unstable massive particle as an incoming parton would be an exception, since this can put new kinds of singularities into the integrand.
[^8]: We understand that S. Catani, T. Gleisberg, F. Krauss, G. Rodrigo, and J. Winter are working along these lines [@Catani].
---
abstract: 'We prove that there is an absolute constant $C>0$ so that for every natural $n$ there exists a triangle-free *regular* graph on $n$ vertices with no independent set of size at least $C\sqrt{n\log n}$.'
author:
- 'Noga Alon[^1]'
- 'Sonny Ben-Shimon[^2]'
- 'Michael Krivelevich[^3]'
bibliography:
- 'regramsey.bib'
title: A note on regular Ramsey graphs
---
Introduction
============
A major problem in extremal combinatorics asks to determine the maximal $n$ for which there exists a graph $G$ on $n$ vertices such that $G$ contains no triangles and no independent set of size $t$. This Ramsey-type problem was settled asymptotically by Kim [@Kim95] in 1995, after a long line of research; Kim showed that $n=\Theta(t^2/\log t)$. Recently, Bohman [@Boh2009] gave an alternative proof of Kim’s result by analyzing the so-called triangle-free process, as proposed by Erdős, Suen and Winkler [@ErdSueWin95], which is a natural way of generating a triangle-free graph. Consider now the above problem with the additional constraint that $G$ must be regular. In this short note we show that the same asymptotic results hold up to constant factors. The main ingredient of the proof is a gadget-like construction that transforms a triangle-free graph with no independent set of size $t$, which is not too far from being regular, into a triangle-free *regular* graph with no independent set of size $2t$.
Our main result can be stated as follows.
\[t:main\] There is a positive constant $C$ so that for every natural $n$ there exists a regular triangle-free graph $G$ on $n$ vertices whose independence number satisfies $\alpha(G) \leq C \sqrt{n\log n}$.
Denote by $R(k,\ell)$ the maximal $n$ for which there exists a graph on $n$ vertices which contains neither a complete subgraph on $k$ vertices nor an independent set on $\ell$ vertices. Let $R^{\mathrm{reg}}(k,\ell)$ denote the maximal $n$ for which there exists a *regular* graph on $n$ vertices which contains neither a complete subgraph on $k$ vertices nor an independent set on $\ell$ vertices. Clearly, for every $k$ and $\ell$ one has $R^{\mathrm{reg}}(k,\ell)\leq R(k,\ell)$. Theorem \[t:main\] states that $R^{\mathrm{reg}}(3,t)=\Theta\left(R(3,t)\right)
=\Theta\left(\frac{t^2}{\log
t}\right)$.
Proof of Theorem \[t:main\]
===========================
Note first that the statement of the theorem is trivial for small values of $n$. Indeed, for every $n_0$ one can choose the constant $C$ in the theorem so that for $ n \leq n_0$, $C \sqrt {n \log n} \geq n$, implying that for such values of $n$ a graph with no edges satisfies the assertion of the theorem. We thus may and will assume, whenever this is needed during the proof, that $n$ is sufficiently large.
The following well-known theorem, due to Gale and Ryser, gives a necessary and sufficient condition for two lists of non-negative integers to be the degree sequences of the classes of vertices of a simple bipartite graph. The proof follows easily from the max-flow min-cut condition on the appropriate network flow graph (see e.g. [@West2001 Theorem 4.3.18]).
\[t:GayRys57\] If $\mathbf{d}=(d_1,\ldots,d_m)$ and $\mathbf{d'}=(d'_1,\ldots,d'_n)$ are lists of non-negative integers with $d_1\geq\ldots\geq d_m$, $d'_1\geq\ldots\geq d'_n$ and $\sum
d_i= \sum d'_j$ then there exists a simple bipartite graph with degree sequences $\mathbf{d}$ and $\mathbf{d'}$ on each side respectively iff $\sum_{i=1}^m \min\{d_i,s\}\geq \sum_{j=1}^s d'_j$ for every $1\leq s\leq n$ .
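The inequality in Theorem \[t:GayRys57\] is easy to test numerically. The following is a minimal sketch of our own, purely for illustration:

```python
def gale_ryser_ok(d, dp):
    """Decide whether two lists of non-negative integers with equal
    sums are the degree sequences of the two sides of a simple
    bipartite graph, by checking the Gale-Ryser inequality
    sum_i min(d_i, s) >= d'_1 + ... + d'_s for every s."""
    d = sorted(d, reverse=True)
    dp = sorted(dp, reverse=True)
    if sum(d) != sum(dp):
        return False
    return all(
        sum(min(di, s) for di in d) >= sum(dp[:s])
        for s in range(1, len(dp) + 1)
    )
```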
\[c:noga1\] Let $a \geq 1$ be a real. If $\mathbf{d}=(d_1,\ldots,d_m)$ is a list of non-negative integers with $d_1\geq\ldots\geq d_m$ and $$\label{e21}
d_1\leq\min\left\{ad_m,
\frac{4am}{(a+1)^2}\right\},$$ then there exists a simple bipartite graph with degree sequence $\mathbf{d}$ on each side. In particular, this holds for $d_1 \leq \min \{2d_m, \frac{8m}{9}\}. $
By Theorem \[t:GayRys57\] it suffices to check that for every $s$, $ 1 \leq s \leq m$, $\sum_{i=1}^s d_i \leq
\sum_{i=1}^m \min\{d_i,s\}$. Suppose this is not the case and there is some $s$ as above so that $$\label{e22}
d_1+ d_2 + \ldots +d_s > \sum_{i=1}^m \min\{d_i,s\}.$$ If $d_i<d_1$ for some $i$ satisfying $2 \leq i \leq s$, replace $d_i$ by $d_1$. Observe that by doing so the left hand side of (\[e22\]) increases by $d_1-d_i$, whereas the right hand side increases by at most this quantity, hence (\[e22\]) still holds with this new value of $d_i$. We can thus assume that $d_1=d_2 = \cdots =d_s$. Note that if $d_1 \leq s$, then (\[e22\]) cannot hold, hence $d_1 >s$. If $d_i >d_1/a$ for some $i$ satisfying $s+1 \leq i \leq m$, then reducing it to $d_1/a$ (even if this is not an integer), maintains (\[e22\]), as the left hand side does not change, whereas the right hand side can only decrease. Moreover, the new sequence still satisfies (\[e21\]). Thus we may assume that in (\[e22\]) $d_i =d_1/a$ for all $s+1 \leq i \leq m$. Put $d=d_{s+1}~(= d_{s+2}= \ldots =d_m)$, then (\[e22\]) gives $$d_1 + \ldots + d_s =s\cdot(ad) > \sum_{i=1}^m \min\{d_i,s\}
=s^2+(m-s)d.$$ Therefore $[(a+1)s-m]d >s^2$, implying that $(a+1)s-m>0$, that is, $s > \frac{m}{a+1}$, and $$\label{e23}
d> \frac{s^2}{(a+1)s-m}.$$ The function $g(s) =\frac{s^2}{(a+1)s-m}$ attains its minimum in the range $ \frac{m}{a+1} < s \leq m$ at $s=\frac{2m}{a+1}$ and its value at this point is $\frac{4m}{(a+1)^2}.$ We thus conclude from (\[e23\]) that $d > \frac{4m}{(a+1)^2}$ and hence that $d_1=ad > \frac{4am}{(a+1)^2}$ contradicting the assumption (\[e21\]). This completes the proof.

For the particular case $d_1\leq\min\{2d_m,\frac{8m}{9}\}$ one can also verify the condition of Theorem \[t:GayRys57\] directly, as follows. Fix some $s\in\{1,\ldots,m\}$ and denote by $t$ the maximal index $i$ for which $d_i>s$. The condition given by Theorem \[t:GayRys57\] can be thus stated as $\sum_{i=1}^sd_i\leq ts
+\sum_{i=t+1}^m d_i$. First assume $s\leq t$. If $t\geq m/3$ then $\sum_{i=1}^sd_i\leq s\cdot d_1\leq st$ as needed. If $s\leq t<m/3$ then $\sum_{i=1}^sd_i\leq s\cdot d_1\leq\frac{m-t}{2}\cdot
2d_m\leq\sum_{i=t+1}^md_i$. If $s\geq t+1$, then subtracting $\sum_{i=t+1}^s d_i$ from each side of the condition yields an inequality which holds from the previous case.
Condition (\[e21\]) is tight for all values of $a>1$, in the sense that if $d_1> \frac{4am}{(a+1)^2}$ and $d_1=d_2= \ldots =d_s$ for $s=\frac{2m}{a+1}$ with $d_i=\frac{4m}{(a+1)^2}$ for all $s+1 \leq i \leq m$, then there is no simple bipartite graph whose degree sequence on each side is $(d_1, d_2, \ldots ,d_m)$. This follows from Theorem \[t:GayRys57\].
Let ${\mathcal{R}}(n,3,t)$ denote the set of all triangle-free graphs $G$ on $n$ vertices with $\alpha(G)<t$. As usual, let $\Delta(G)$ and $\delta(G)$ denote the respective maximal and minimal degrees of $G$.
\[p:noga1\] Let $t$ and $d$ be integers. If there exists a graph $G\in{\mathcal{R}}(n,3,t)$ such that $\Delta(G)-\delta(G)\leq d\leq
\frac{4}{9}\cdot\left\lfloor\frac{n}{\Delta(G)+1}\right\rfloor$, then there exists a $(d+\Delta(G))$-*regular* graph $G'\in{\mathcal{R}}(2n,3,2t-1)$.
Construct a new graph $G'$ as follows. Take two copies of $G$, and color each of these copies by the same equitable coloring using $\Delta(G)+1$ colors with all color classes of cardinality either $\left\lfloor n/(\Delta(G)+1)\right\rfloor$ or $\left\lceil
n/(\Delta(G)+1)\right\rceil$ using the Hajnal-Szemerédi Theorem [@HajSze70] (see also a shorter proof due to Kierstead and Kostochka [@KieKos2008]). Let $C$ and $C'$ be the same color class in each of the copies of $G$. Denote the degree sequence of the vertices of $C$ in $G$ by $d'_1\leq\ldots\leq d'_m$, where $m=|C|$, and set $d_i=d+\Delta(G)-d'_i$. According to Corollary \[c:noga1\] there exists a simple bipartite graph with $m$ vertices on each side, where the degree sequence of each side is $d_1\geq\ldots\geq d_m$ as the maximal degree $d_1=d+\Delta(G)-\delta(G)\leq 2d$, the minimal degree $d_m\geq d$, and by our assumption on $G$ we have $d_1\leq \frac{8m}{9}$. We can thus connect the vertices of $C$ and $C'$ using this bipartite graph such that all vertices in $C\cup C'$ have degree $d+\Delta(G)$. By following this method for every color class, we create the graph $G'$ which is $(d+\Delta(G))$-regular, triangle-free and has no independent set of cardinality $2t-1$.
The $H$-free process and Bohman’s result
----------------------------------------
Consider the following randomized greedy algorithm to generate a graph on $n$ labeled vertices with no $H$-subgraph for some fixed graph $H$. Given a set of $n$ vertices, a sequence of graphs $\{G^{(H)}_i\}^t_{i=0}$ on this set of vertices is constructed. Start with $G^{(H)}_0$ as the empty graph, and for each $0<i\leq t$, the graph $G^{(H)}_i$ is defined by $G^{(H)}_{i-1}\cup\{e_i\}$ where $e_i$ is chosen uniformly at random from all unselected pairs of vertices that do not create a copy of $H$ when added to $G^{(H)}_{i-1}$. The process terminates at step $t$, the first time that no potential unselected pair $e_{t+1}$ exists. This algorithm is called the *$H$-free process*.
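The process just described is straightforward to simulate for small $n$. Below is a minimal sketch of our own for $H=K_3$; it exploits the standard observation that drawing a uniformly random remaining pair at each step is equivalent in distribution to processing a uniformly random permutation of all pairs, and that a pair which once closes a triangle is forbidden forever, so rejected pairs can be discarded permanently.

```python
import random
from itertools import combinations

def triangle_free_process(n, seed=0):
    """Run the K_3-free process on n labeled vertices: repeatedly pick
    a uniformly random unselected pair that closes no triangle and add
    it as an edge, stopping when no such pair remains.
    Returns the adjacency sets of the final graph."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    candidates = list(combinations(range(n), 2))
    rng.shuffle(candidates)          # random order = uniform draws without replacement
    while candidates:
        u, v = candidates.pop()
        if adj[u] & adj[v]:          # u and v share a neighbour: forbidden forever
            continue
        adj[u].add(v)
        adj[v].add(u)
    return adj
```

By construction the final graph is triangle-free and maximal, i.e. every non-edge would close a triangle.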
The $K_3$-free process was proposed by Erdős, Suen and Winkler [@ErdSueWin95] and was further analyzed by Spencer [@Spe95]. Recently, Bohman [@Boh2009] extending and improving previous results, was able to analyze the $K_3$-free process and to show that with high probability it passes through an almost regular Ramsey-type graph.
\[t:Boh2009\] With high probability[^4] there exists an integer $1\leq m=m(n)$ such that the following properties hold simultaneously:
1. $G^{(K_3)}_m\in{\mathcal{R}}(n,3,C\sqrt{n\log n})$ for some absolute constant $C>0$;
2. $\Delta(G^{(K_3)}_m)=\Theta(\sqrt{n\log n})$;
3. \[i:degdiff\] $\Delta(G^{(K_3)}_m)-\delta(G^{(K_3)}_m)=o(\sqrt{n/ \log n})$.
Item (\[i:degdiff\]) can be derived implicitly from [@Boh2009], or alternatively, it follows from [@BohKeePre Theorem 1.4], as the degree of every vertex is a *trackable extension variable*.
Note that Proposition \[p:noga1\] in conjunction with Theorem \[t:Boh2009\] completes the proof of Theorem \[t:main\] for every large enough *even* integer $n$. To fully complete the proof, we describe how to deal with the case of $n$ odd. So, let now $n$ be large enough and odd. Our aim is to show the existence of a regular triangle-free graph $G_n$ on $n$ vertices with $\alpha(G_n)=O(\sqrt{n\log n})$. The approach we take to achieve this goal is to construct a “big” graph satisfying our Ramsey conditions on an even number of vertices, and to add to it a “small” graph with an odd number of vertices without affecting the asymptotic results claimed.
For every $k\equiv 0\pmod{5}$, and every even $r\leq 2k/5$, let $H_{k,r}$ denote a graph constructed as follows. Start with a copy of $C_5$ blown up by a factor of $k/5$ and delete from the resulting graph $(2k/5 - r/2)$ disjoint $2$-factors (which exist by Petersen’s Theorem, see e.g. [@West2001 Theorem 3.3.9]). $H_{k,r}$ is hence a triangle-free $r$-regular graph on $k$ vertices.
Denote by $F_n$ an $r$-regular triangle-free graph on $2n$ vertices with $\alpha(F_n)\leq C \sqrt{n\log n}$ for some absolute constant $C$, and furthermore assume $r$ is even (this can be achieved by choosing the appropriate parameter $d$ in Proposition \[p:noga1\], as we have much room to spare with the values we plug in from Theorem \[t:Boh2009\]). Let $n_0=(n-k)/2$, where $k\equiv 5\pmod{10}$, and $k=(1+o(1))\frac{5C}{2}\sqrt{n\log n}$. Clearly, $n_0$ is an integer. The graph $F_{n_0}$ is $r$-regular for some even $r\leq
\alpha(F_{n_0})$, is triangle-free on $2n_0$ vertices, and satisfies $\alpha(F_{n_0})\leq C\sqrt{n_0\log n_0}\leq C\sqrt{n\log n}$. Now, define $G_n$ to be a disjoint union of $F_{n_0}$ and $H_{k,r}$. Clearly, $G_n$ has $2n_0+k=n$ vertices, is $r$-regular, triangle-free and satisfies $\alpha(G_n)=\alpha(F_{n_0})+\alpha(H_{k,r})\le \alpha(F_{n_0})+k
\leq C\sqrt{n\log n} + k =O(\sqrt{n\log n})$.
Discussion
==========
A natural question that extends the above is to try and determine $R^{\mathrm{reg}}(k,\ell)$ for other values of $k$ and $\ell$ (in particular for fixed values of $k>3$ and $\ell$ arbitrary large), and also to try and investigate its relation with $R(k,\ell)$. The following conjecture seems plausible.
\[c31\] For every $k \geq 2$ there is a constant $c_k>0$ so that $R^{\mathrm{reg}}(k,\ell) \geq c_k R(k,\ell)$ for all $\ell \geq 2$.
This is trivial for $k=2$, and by our main result here holds for $k=3$ as well.
Recently, Bohman and Keevash [@BohKeePre] were able to generalize the techniques of [@Boh2009] for the $H$-free process, where $H$ is a strictly 2-balanced graph. This in turn provided new lower bounds for $R(k,\ell)$ (as complete graphs are strictly 2-balanced) where $k$ is fixed and $\ell$ arbitrarily large. It is plausible to think that these results can also be used to construct *regular* Ramsey graphs in a manner similar to that described in this note. Nonetheless, since the asymptotic behavior of $R(k,\ell)$ is not known for $k \geq 4$, a complete proof of Conjecture \[c31\] appears to require some additional ideas, and remains open.
[^1]: School of Computer Science and School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978, Israel. E-mail: nogaa@post.tau.ac.il. Research supported in part by an ERC advanced grant, by the Israel Science Foundation and by a USA-Israel BSF grant.
[^2]: School of Computer Science, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978, Israel. E-mail: sonny@post.tau.ac.il. Research conducted as part of the author’s Ph.D. thesis under the supervision of Prof. Michael Krivelevich.
[^3]: School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978, Israel. E-mail: krivelev@post.tau.ac.il. Research supported in part by USA-Israel BSF Grant 2006322, by grant 1063/08 from the Israel Science Foundation, and by a Pazy memorial award.
[^4]: In this context we mean that the mentioned events hold with probability tending to $1$ as $n$, the number of vertices, goes to infinity.
---
abstract: 'The quantum spin Hall effect shares many similarities (and some important differences) with the quantum Hall effect for the electric charge. As with the quantum (electric charge) Hall effect, there exists a correspondence between bulk and boundary physics that allows one to characterize the quantum spin Hall effect in diverse and complementary ways. In this paper, we derive from the network model that encodes the quantum spin Hall effect, the so-called $\mathbb{Z}^{\ }_{2}$ network model, a Dirac Hamiltonian in two dimensions. In the clean limit of this Dirac Hamiltonian, we show that the bulk Kane-Mele $\mathbb{Z}^{\ }_{2}$ invariant is nothing but the SU(2) Wilson loop constructed from the SU(2) Berry connection of the occupied Dirac-Bloch single-particle states. In the presence of disorder, the non-linear sigma model (NLSM) that is derived from this Dirac Hamiltonian describes a metal-insulator transition in the standard two-dimensional symplectic universality class. In particular, we show that the fermion doubling prevents the presence of a topological term in the NLSM that would change the universality class of the ordinary two-dimensional symplectic metal-insulator transition. This analytical result is fully consistent with our previous numerical studies of the bulk critical exponents at the metal-insulator transition encoded by the $\mathbb{Z}^{\ }_{2}$ network model. Finally, we improve the quality and extend the numerical study of boundary multifractality in the $\mathbb{Z}^{\ }_{2}$ topological insulator. We show that the hypothesis of two-dimensional conformal invariance at the metal-insulator transition is verified within the accuracy of our numerical results.'
address:
- ' $^1$Department of Physics, University of California, Berkeley, CA 94720, USA'
- ' $^2$Condensed Matter Theory Group, Paul Scherrer Institute, CH-5232 Villigen PSI, Switzerland'
- ' $^3$Department of Physics, Kyoto University, Kyoto 606-8502, Japan'
- ' $^4$Condensed Matter Theory Laboratory, RIKEN, Wako, Saitama 351-0198, Japan'
author:
- 'Shinsei Ryu$^1$, Christopher Mudry$^2$, Hideaki Obuse$^3$ and Akira Furusaki$^4$'
title: 'The $\mathbb{Z}^{\ }_{2}$ network model for the quantum spin Hall effect: two-dimensional Dirac fermions, topological quantum numbers, and corner multifractality'
---
Introduction {#sec: intro}
=============
Spin-orbit coupling has long been known to be essential to account for the band structure of semiconductors, say, semiconductors with the zinc-blende crystalline structure. Monographs have been dedicated to reviewing the effects of the spin-orbit coupling on the Bloch bands of conductors and semiconductors [@Winkler03]. Electronic transport properties of metals and semiconductors in which impurities are coupled to the conduction electrons by the spin-orbit coupling, i.e., when the impurities preserve the time-reversal symmetry but break the spin-rotation symmetry, are also well understood since the prediction of weak antilocalization effects [@Hikami80]. Hence, the prediction of the quantum spin Hall effect in two-dimensional semiconductors with time-reversal symmetry but a sufficiently strong breaking of spin-rotation symmetry is rather remarkable in view of the maturity of the field dedicated to the physics of semiconductors [@Kane05a; @Kane05b; @Bernevig06a; @Bernevig06b]. The quantum spin Hall effect was observed in HgTe/(Hg,Cd)Te quantum wells two years later [@Konig07]. Even more remarkably, this rapid progress was followed by the prediction of three-dimensional topological insulators [@Moore07; @Roy; @Fu07] and its experimental confirmation for Bi-based compounds [@Hasan; @Hsieh09; @Xia09; @Hsieh09b; @Chen09].
The quantum spin Hall effect, like its relative, the quantum (electric charge) Hall effect, can be understood either as a property of the two-dimensional bulk or as a property of the one-dimensional boundary. The bulk can be characterized by certain integrals over the Brillouin zone of Berry connections calculated from Bloch eigenstates. These integrals are only allowed to take discrete values and are examples of topological invariants from the mathematical literature. As is well known, the topological number $\nu$ takes integer values for the quantum (electric charge) Hall effect [@Thouless82]. By contrast, it takes only two distinct values ($\nu=0$ or 1) for time-reversal invariant, $\mathbb{Z}^{\ }_2$ topological band insulators [@Kane05b; @Moore07; @Roy; @Fu07; @Fu06]. Because they are quantized, they cannot change under a small continuous deformation of the Hamiltonian, including a perturbation that breaks translation invariance, i.e., disorder.
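For orientation, the two kinds of topological numbers alluded to above can be written explicitly; these are the standard expressions from [@Thouless82] and from the Fu-Kane formulation of the invariant of [@Fu06], reproduced here for the reader's convenience and not yet specialized to the model studied in this paper:

```latex
% TKNN integer of the quantum (electric charge) Hall effect: the Berry
% connection a_j(k) = -i \sum_{n \in occ} <u_{n k}| \partial_{k_j} |u_{n k}>
% of the occupied Bloch states, integrated over the Brillouin zone,
\nu = \frac{1}{2\pi} \int_{\mathrm{BZ}} d^2 k\,
      \left( \partial_{k_x} a_y - \partial_{k_y} a_x \right)
      \in \mathbb{Z} .
% Kane-Mele Z_2 index in the Fu-Kane form: with the sewing matrix
% w_{mn}(k) = <u_m(-k)| T |u_n(k)> of the time-reversal operator T
% evaluated at the four time-reversal invariant momenta K_i,
(-1)^{\nu} = \prod_{i=1}^{4}
             \frac{\mathrm{Pf}\, w(K_i)}{\sqrt{\det w(K_i)}}
             \in \{ +1, -1 \} .
```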
The bulk topological quantum numbers are closely connected with the existence of stable gapless edge states along the boundary of a topological insulator, or more precisely along the interface between two insulators with different topological numbers. The number of gapless edge modes is determined by the difference of the topological numbers. On the edge of a two-dimensional $\mathbb{Z}^{\ }_{2}$ topological band insulator with $\nu=1$, there exist helical edge states, a Kramers’ pair of counter-propagating modes, which interpolates between the bulk valence band and the bulk conduction band. If one changes the Fermi energy from the center of the band gap to lower energies through the valence band, one should observe a transition from a $\mathbb{Z}^{\ }_{2}$ topological insulator to a metal, and then from a metal to a trivial band insulator ($\nu=0$) without helical edge states. Since both helical edge states and a metallic phase are stable against (weak) disorder (due to the quantized topological number and to weak anti-localization, respectively), the same sequence of phases should appear as the Fermi energy is varied even in the presence of disorder, as confirmed recently by numerical simulations [@Onoda; @Obuse07a]. A question one can naturally ask is then whether there is any difference between the critical phenomena at the metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition and those at the metal-to-trivial-insulator transition. This is the question which we revisit in this paper, extending our previous studies [@Obuse07a; @Obuse08]. It will become clear that one needs to distinguish between bulk and boundary properties in the universal critical phenomena.
For the quantum (electric charge) Hall effect, the Chalker-Coddington network model serves as a standard model for studying critical properties at Anderson transition between different quantum Hall states [@Chalker88]. The elementary object in the Chalker-Coddington network model is chiral edge states. These edge states are plane waves propagating along the links of each plaquette which represents a puddle of a quantum Hall droplet formed in the presence of spatially slowly varying potential. They are chiral as they represent the mode propagating along equipotential lines in the direction determined by the external magnetic field. The Chalker-Coddington network model is a unitary scattering matrix that scales in size with the number of links defining the network, and with a deterministic parameter that quantifies the relative probability for an incoming mode to scatter into a link rotated by $+\pi/2$ or $-\pi/2$. By tuning this parameter through the value $1/2$, one can go through a transition from one insulating phase to another insulating phase, with the topological number $\nu$ changed by one. This remains true even when the phase of an edge state along any link is taken to be an independent random number to mimic the effects of static local disorder. The Chalker-Coddington model is a powerful tool to characterize the effects of static disorder on the direct transition between two successive integer quantum Hall states. It has demonstrated that this transition is continuous and several critical exponents at this transition have been measured from the Chalker-Coddington model [@Chalker88; @Kramer05].
The present authors have constructed in [@Obuse07a] a generalization of the Chalker-Coddington model that describes the physics of the two-dimensional quantum spin Hall effect. We shall call this network model the $\mathbb{Z}^{\ }_{2}$ network model, which will be briefly reviewed in section 2. As with the Chalker-Coddington model, edge states propagate along the links of each plaquette of the square lattice. Unlike the Chalker-Coddington model there are two edge states per link that form a single Kramers’ doublet, which corresponds to helical edge states moving along a puddle of a quantum spin Hall droplet. Kramers’ doublets undergo the most general unitary scattering compatible with time-reversal symmetry at the nodes of the square lattice. The $\mathbb{Z}^{\ }_{2}$ network model is thus a unitary scattering matrix that scales in size with the number of links defining the network and that preserves time-reversal symmetry. The $\mathbb{Z}^{\ }_{2}$ network model supports one metallic phase and two insulating phases, as we discussed earlier[^1]. The metallic phase prevents any direct transition between the insulating phases and the continuous phase transition between the metallic and any of the insulating phases belongs to the two-dimensional symplectic universality class of Anderson localization [@Hikami80].
Numerical simulations have shown that bulk properties at the metal-insulator transition in the $\mathbb{Z}^{\ }_{2}$ network model are the same as those at conventional metal-insulator transitions in the two-dimensional symplectic symmetry class [@Obuse07a; @Obuse08]. In fact, one can understand this result from the following general argument based on universality. The non-linear sigma model (NLSM) description is a very powerful, standard theoretical approach to the Anderson metal-insulator transition [@Wegner79]. An NLSM can have a topological term if the homotopy group of the target manifold, which is determined by the symmetry of the system at hand, is nontrivial. Interestingly, in the case of the symplectic symmetry class, the name given to the statistical ensemble of systems (including quantum spin Hall systems) that are invariant under time reversal but not invariant under SU(2) spin rotation, the NLSM admits a $\mathbb{Z}^{\ }_2$ topological term [@Fendley01; @Ryu07; @Ostrovsky07]. Moreover, the NLSM in the symplectic symmetry class with a $\mathbb{Z}^{\ }_{2}$ topological term cannot support an insulating phase. This can be seen from the fact that this NLSM describes surface Dirac fermions of a three-dimensional $\mathbb{Z}^{\ }_{2}$ topological insulator which are topologically protected from Anderson localization [@Bardarson07; @NomuraKoshinoRyu; @Schnyder08]. This in turn implies that any two-dimensional metal-insulator transition in time-reversal-invariant but spin-rotation-noninvariant systems should be in the same and unique universality class that is encoded by the NLSM without a topological term in the (ordinary) symplectic class.
Whereas bulk critical properties at the transition between a metal and a $\mathbb{Z}^{\ }_{2}$ topological insulator do not depend on the topological nature of the insulating phase, there are boundary properties that can distinguish between a topologically trivial and non-trivial insulating phases. Boundary multifractality is a very convenient tool to probe any discrepancy between universal bulk and boundary properties at Anderson transition [@Subramaniam06; @Obuse07b]. To probe this difference, the present authors performed a multifractal analysis of the edge states that propagate from one end to the other in a network model at criticality with open boundary condition in the transverse direction [@Obuse08]. It was found that boundary multifractal exponents are sensitive to the presence or absence of a helical Kramers’ doublet propagating along the boundary.
The goal of this paper is twofold:
1. \[enu 1\] to establish a direct connection between the $\mathbb{Z}^{\ }_{2}$ network model and a Hamiltonian description of the $\mathbb{Z}^{\ }_{2}$ topological insulator perturbed by time-reversal symmetric local static disorder.
2. \[enu 2\] to improve the quality and extend the numerical study of boundary multifractality in the $\mathbb{Z}^{\ }_{2}$ topological insulator.
For item (\[enu 1\]), in section 3, we are going to relate the $\mathbb{Z}^{\ }_{2}$ network model to a problem of Anderson localization in the two-dimensional symplectic universality class that is encoded by a stationary $4\times4$ Dirac Hamiltonian perturbed by static disorder that preserves time-reversal symmetry but breaks spin-rotation symmetry. This result is a natural generalization of the fact that the Chalker-Coddington network model can be related [@Ho96] to a $2\times2$ Dirac Hamiltonian with static disorder [@Ludwig94]. In the clean limit, we shall characterize the $\mathbb{Z}^{\ }_{2}$ insulating phases in the $4\times4$ Dirac Hamiltonian by a $\mathbb{Z}^{\ }_{2}$ topological invariant. In particular, we show that an SU(2) Wilson loop of Berry connection of Bloch wave functions is equivalent to the $\mathbb{Z}^{\ }_{2}$ index introduced by Kane and Mele [@Kane05b]. The $4\times4$ Dirac Hamiltonian will allow us to make contact between the $\mathbb{Z}^{\ }_{2}$ network model and the NLSM description of two-dimensional Anderson localization in the symplectic universality class derived 30 years ago by Hikami et al. in [@Hikami80]. In our opinion, this should remove any lingering doubts that the metal-insulator transition between a two-dimensional metallic state and a two-dimensional $\mathbb{Z}^{\ }_{2}$ insulator that is driven by static disorder is anything but conventional.
For item (\[enu 2\]), besides improving the accuracy of the critical exponents for one-dimensional boundary multifractality in the $\mathbb{Z}^{\ }_{2}$ network model, we compute critical exponents for two zero-dimensional boundaries (corners) in section 4. We shall use these critical exponents to verify the hypothesis that conformal invariance holds at the metal-insulator transition and imposes relations between lower-dimensional boundary critical exponents.
Definition of the $\mathbb{Z}^{\ }_{2}$ network model for the quantum spin Hall effect {#sec: definition}
========================================================================================
![ \[fig: network.eps\] (a) The $\mathbb{Z}^{\ }_{2}$ network model. The solid and dashed lines represent the links for up and down spin electrons, respectively. The electrons are unitarily scattered at the nodes $\mathsf{S}$ and $\mathsf{S}'$. The choice for the scattering basis at the nodes $\mathsf{S}$ and $\mathsf{S}'$ is shown in (b) and (c), respectively. ](figure1.eps){width="15cm"}
The $\mathbb{Z}^{\ }_{2}$ network model is defined as follows. First, one draws a set of corner-sharing square plaquettes on the two-dimensional Cartesian plane. Each edge of a plaquette is assigned two oppositely directed links. This is the network. There are two types $\mathsf{S}$ and $\mathsf{S}'$ of shared corners, which we shall call the nodes of the network. Second, we assign to each directed link an amplitude $\psi$, i.e., a complex number $\psi\in\mathbb{C}$. Any amplitude $\psi$ is either an incoming or an outgoing plane wave that undergoes a unitary scattering process at a node. We also assign a $4\times4$ unitary matrix $S$ to each node of the network. A set of amplitudes on all directed links that is consistent with all the nodal unitary scattering matrices, each amplitude entering as an incoming or an outgoing plane wave, defines a solution to the $\mathbb{Z}^{\ }_{2}$ network model.
To construct an explicit representation of the $\mathbb{Z}^{\ }_{2}$ network model, the center of each plaquette is assigned the coordinate $(x,y)$ with $x$ and $y$ taking integer values, as is done in figure \[fig: network.eps\]. We then label the 8 directed links $\psi^{\ }_{n\sigma}(x,y)$ of any given plaquette by the coordinate $(x,y)$ of the plaquette, the side $n=1,2,3,4$ of the plaquette with the convention shown in figure \[fig: network.eps\], and the spin index $\sigma=\uparrow$ or $\sigma=\downarrow$ if the link is directed counterclockwise or clockwise, respectively, relative to the center of the plaquette. The $4\times4$ unitary $S$-matrix is then given by $$\left(
\begin{array}{c}
\psi^{\ }_{2\uparrow}(x,y) \\
\psi^{\ }_{3\downarrow}(x,y) \\
\psi^{\ }_{4\uparrow}(x+1,y-1) \\
\psi^{\ }_{1\downarrow}(x+1,y-1) \\
\end{array}
\right)=:
S
\left(
\begin{array}{c}
\psi^{\ }_{3\uparrow}(x,y) \\
\psi^{\ }_{2\downarrow}(x,y) \\
\psi^{\ }_{1\uparrow}(x+1,y-1) \\
\psi^{\ }_{4\downarrow}(x+1,y-1) \\
\end{array}
\right)
\label{eq: def S at S node}$$ at any node of type $\mathsf{S}$ or as $$\left(
\begin{array}{c}
\psi^{\ }_{3\uparrow}(x+1,y+1) \\
\psi^{\ }_{4\downarrow}(x+1,y+1) \\
\psi^{\ }_{1\uparrow}(x,y) \\
\psi^{\ }_{2\downarrow}(x,y) \\
\end{array}
\right)=:
S'
\left(
\begin{array}{c}
\psi^{\ }_{4\uparrow}(x+1,y+1) \\
\psi^{\ }_{3\downarrow}(x+1,y+1) \\
\psi^{\ }_{2\uparrow}(x,y) \\
\psi^{\ }_{1\downarrow}(x,y) \\
\end{array}
\right)
\label{eq: def S' at S' node}$$ at any node of type $\mathsf{S}'$, with $$S=U(x,y)S_0V(x,y),
\qquad
S'=U'(x,y)S_0V'(x,y).$$ Here, the $4\times4$ unitary matrix $$S_0:=
\left(
\begin{array}{cc}
r s^{\ }_{0}
&
t Q
\\
-t Q^{\dag}
&
r s^{\ }_{0}
\end{array}
\right)$$ is presented with the help of the unit $2\times2$ matrix $s^{\ }_{0}$ and of the $2\times2$ matrix $$Q:=
s^{\ }_1 \sin\theta
+s^{\ }_3 \cos\theta
=
\left(\begin{array}{cc}
\cos\theta & \sin\theta \\
\sin\theta & -\cos\theta
\end{array}
\right),
\label{eq:node S}$$ ($s^{\ }_{1}$, $s^{\ }_{2}$, and $s^{\ }_{3}$ are the $2\times 2$ Pauli matrices) that are both acting on the spin indices $\sigma=\uparrow,\downarrow$, together with the real-valued parameters $$r:=\tanh X,
\qquad
t:=\frac{1}{\cosh X},$$ with $$\left\{
(X,\theta)\,|\,
0\le X \le \infty,
\quad
0\le \theta \le \pi/2
\right\}\!.
\label{eq: def X theta}$$ For later use, we shall also introduce the real-valued parameter $\beta \in[0,\pi]$ through $$r=\cos\beta,
\qquad
t=\sin\beta.$$ The parameter $\theta$ controls the probability of spin-flip scattering, $\sin^2\theta$. The unitary matrices $U,V,U',V'$ are defined as $$\begin{aligned}
U(x,y)=\mathrm{diag}(
\rme^{\rmi\chi^{\ }_2(x,y)},
\rme^{\rmi\chi^{\ }_3(x,y)},
\rme^{\rmi\chi^{\ }_4(x+1,y-1)},
\rme^{\rmi\chi^{\ }_1(x+1,y-1)}),
\\
V(x,y)=\mathrm{diag}(
\rme^{\rmi\chi^{\ }_3(x,y)},
\rme^{\rmi\chi^{\ }_2(x,y)},
\rme^{\rmi\chi^{\ }_1(x+1,y-1)},
\rme^{\rmi\chi^{\ }_4(x+1,y-1)}
),
\\
U'(x,y)=\mathrm{diag}(
\rme^{\rmi\chi^{\ }_3(x+1,y+1)},
\rme^{\rmi\chi^{\ }_4(x+1,y+1)},
\rme^{\rmi\chi^{\ }_1(x,y)},
\rme^{\rmi\chi^{\ }_2(x,y)}
),
\\
V'(x,y)=\mathrm{diag}(
\rme^{\rmi\chi^{\ }_4(x+1,y+1)},
\rme^{\rmi\chi^{\ }_3(x+1,y+1)},
\rme^{\rmi\chi^{\ }_2(x,y)},
\rme^{\rmi\chi^{\ }_1(x,y)}
),\end{aligned}$$ where $2\chi^{\ }_n(x,y)$ equals a (random) phase that wave functions acquire when propagating along the edge $n$ of the plaquette centered at $(x,y)$.
The $\mathbb{Z}^{\ }_{2}$ network model is uniquely defined by the scattering matrices $S$ and $S'$. By construction, the $S$-matrix is time-reversal symmetric, i.e., $$\left(
\begin{array}{cc}
\mathrm{i}s^{\ }_{2}
&
0
\\
0
&
\mathrm{i}
s^{\ }_{2}
\end{array}
\right)
S^*
\left(
\begin{array}{cc}
-\mathrm{i}
s^{\ }_{2}
&
0
\\
0
&
-\mathrm{i}
s^{\ }_{2}
\end{array}
\right)
=
S^{\dag},$$ and a similar relation holds for $S'$.
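As a quick numerical consistency check, the single-node scattering matrix $S=US^{\ }_{0}V$ can be constructed explicitly and tested against unitarity and the time-reversal constraint above. The following Python sketch is our own illustration (parameter values are arbitrary; the four link phases are drawn independently, with $V$ carrying the same phases as $U$ but with the two entries of each pair interchanged, as in the definitions of $U$ and $V$):

```python
import numpy as np

rng = np.random.default_rng(0)

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0 + 0j, -1.0])

# deterministic node parameters (X, theta)
X, theta = 0.7, 0.3
r, t = np.tanh(X), 1.0 / np.cosh(X)
Q = s1 * np.sin(theta) + s3 * np.cos(theta)   # Q is real and symmetric

S0 = np.block([[r * s0, t * Q], [-t * Q.conj().T, r * s0]])

# four independent link phases; V has the two entries of each pair swapped
chi = rng.uniform(0.0, 2.0 * np.pi, size=4)
U = np.diag(np.exp(1j * chi))
V = np.diag(np.exp(1j * chi[[1, 0, 3, 2]]))
S = U @ S0 @ V

# unitarity of the nodal S-matrix
assert np.allclose(S.conj().T @ S, np.eye(4))

# time-reversal symmetry: Sigma S^* Sigma^{-1} = S^dagger,
# with Sigma = diag(i s2, i s2)
Sigma = np.kron(np.eye(2), 1j * s2)
assert np.allclose(Sigma @ S.conj() @ np.linalg.inv(Sigma), S.conj().T)
```

Both assertions pass for any choice of $(X,\theta)$ and of the link phases, since $Q$ is real and symmetric and the phase pattern of $V$ is tied to that of $U$.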
In [@Obuse07a], we obtained the phase diagram of the $\mathbb{Z}^{\ }_{2}$ network model shown schematically in figure \[fig: predicted phase diagram\](a). In that analysis, $(X,\theta)$ are spatially uniform deterministic parameters that can be changed continuously. On the other hand, the phases $\chi^{\ }_n$ of all link plane waves in the $\mathbb{Z}^{\ }_{2}$ network model are taken to be independently and uniformly distributed random variables over the range $[0,2\pi)$. The line $\theta=0$ is special in that the $\mathbb{Z}^{\ }_{2}$ network model reduces to two decoupled Chalker-Coddington network models [@Obuse07a]. Along the line $\theta=0$, the point $$X^{\ }_{\mathrm{CC}}=\ln(1+\sqrt{2})
\Longleftrightarrow
\beta=\frac{\pi}{4}
\label{eq: CC QCP bis}$$ realizes a quantum critical point that separates two insulating phases differing by one gapless edge state or, equivalently, by one unit in the Hall conductivity, per spin. Alternatively, $\theta$ can also be chosen to be randomly and independently distributed at each node with the probability $\sin(2 \theta)$ over the range $(0,\pi/2)$. This leaves $X$ as the sole deterministic parameter that controls the phase diagram as shown in figure \[fig: predicted phase diagram\](b). When performing numerically a scaling analysis with the size of the $\mathbb{Z}^{\ }_{2}$ network model, one must account for the deviations away from one-parameter scaling induced by irrelevant operators. The $\mathbb{Z}^{\ }_{2}$ network model with a randomly distributed $\theta$ minimizes such finite-size effects (see [@Obuse07a]).
![ (a) Schematic phase diagram from the analysis of the $\mathbb{Z}^{\ }_{2}$ network model with constant $X$ and $\theta$. The metallic phase is surrounded by the two insulating phases with the critical points $X_s$ and $X_l\,(>X_s)$ for $0<\theta<\pi/2$. The fixed point denoted by a filled (green) square along the boundary $\theta=0$ is the unstable quantum critical point located at $X^{\ }_{\mathrm{CC}}=\ln(1+\sqrt{2})$ separating two insulating phases in the Chalker-Coddington model. The fixed point denoted by the filled (blue) rhombus at the upper left corner is the unstable metallic phase. The shape of the metallic phase is controlled by the symmetry crossover between the unitary and symplectic symmetry classes. (b) The phase diagram for the $\mathbb{Z}^{\ }_{2}$ network model with randomly distributed $\theta$ over the range $(0,\pi/2)$. []{data-label="fig: predicted phase diagram"}](figure2.eps){width="12cm"}
Two-dimensional Dirac Hamiltonian from the $\mathbb{Z}^{\ }_{2}$ network model {#sec: 2D dirac}
================================================================================
The Chalker-Coddington model is related to the two-dimensional Dirac Hamiltonian as was shown by Ho and Chalker in [@Ho96]. We are going to establish the counterpart of this connection for the $\mathbb{Z}^{\ }_{2}$ network model. A unitary matrix is the exponential of a Hermitian matrix. Hence, our strategy to construct a Hamiltonian from the $\mathbb{Z}^{\ }_{2}$ network model is going to be to view the unitary scattering matrix of the $\mathbb{Z}^{\ }_{2}$ network model as a unitary time evolution whose infinitesimal generator is the sought Hamiltonian. To this end, we proceed in two steps in order to bring the $\mathbb{Z}^{\ }_{2}$ network model into a form in which it is readily interpreted as a unitary time evolution. First, we change the choice of the basis for the scattering states and select the proper unit of time. We then perform a continuum approximation, by which the $\mathbb{Z}^{\ }_{2}$ network model is linearized, so to speak. This will yield an irreducible 4-dimensional representation of the Dirac Hamiltonian in $(2+1)$-dimensional space and time, a signature of the fermion doubling when deriving a continuum Dirac Hamiltonian from a time-reversal symmetric and local two-dimensional lattice model.
Change of the basis for the scattering states and one-step time evolution
---------------------------------------------------------------------------
Our goal is to reformulate the $\mathbb{Z}^{\ }_{2}$ network model defined in Sec. \[sec: definition\] in such a way that the scattering matrix maps incoming states into outgoing states sharing the *same* internal and space labels but a different “time” label. This involves a change of basis for the scattering states and an “enlargement” of the Hilbert space spanned by the scattering states. The parameter $\theta$ is assumed to be spatially uniform. We focus on the plaquette $(x,y)$ of the network.
At node $\mathsf{S}$ of the plaquette $(x,y)$, we make the basis transformation and write the $S$-matrix (\[eq: def S at S node\]) in the form $$\left(
\begin{array}{c}
\psi^{\ }_{1\downarrow} \\
\psi^{\ }_{3\downarrow} \\
\psi^{\ }_{2\uparrow} \\
\psi^{\ }_{4\uparrow} \\
\end{array}
\right)
=:
\mathcal{M}^{\ }_{\mathsf{S}}
\left(
\begin{array}{c}
\psi^{\ }_{1\uparrow} \\
\psi^{\ }_{3\uparrow} \\
\psi^{\ }_{2\downarrow} \\
\psi^{\ }_{4\downarrow} \\
\end{array}
\right),
\qquad
\mathcal{M}^{\ }_{\mathsf{S}}
=\mathcal{U}\,\mathcal{N}^{\ }_{\mathsf{S}}\,\mathcal{U},
\label{def M_S}$$ where we have defined $$\mathcal{N}^{\ }_{\mathsf{S}}=
\left(
\begin{array}{cccc}
0
&
- t \, t^{x}_{-} t^{y}_{+} \sin\theta
&
t \, t^{x}_{-} t^{y}_{+} \cos\theta
&
r
\\ \!\!
t\, t^{x}_{+} t^{y}_{-} \sin\theta
&
0
&
r
&
- t \, t^{x}_{+}t^{y}_{-} \cos\theta
\!\!
\\
t \, t^{x}_{+} t^{y}_{-} \cos\theta
&
r
&
0
&
t \, t^{x}_{+} t^{y}_{-} \sin\theta
\\
r
&
- t \, t^{x}_{-} t^{y}_{+} \cos\theta
&
- t \, t^{x}_{-} t^{y}_{+} \sin\theta
&
0
\end{array}
\right)$$ and $$\mathcal{U}(x,y)=
\mathrm{diag}(
\rme^{\rmi\chi^{\ }_1(x,y)},
\rme^{\rmi\chi^{\ }_3(x,y)},
\rme^{\rmi\chi^{\ }_2(x,y)},
\rme^{\rmi\chi^{\ }_4(x,y)}
).$$ Here given $n=1,2,3,4$ and $\sigma=\uparrow,\downarrow$, we have introduced the shift operators acting on $\psi^{\ }_{n\sigma}(x,y)$, $$\begin{aligned}
t^{x}_{\pm} \psi^{\ }_{n\sigma}(x,y):=
\psi^{\ }_{n\sigma}(x\pm 1,y),
\qquad
t^{y}_{\pm} \psi^{\ }_{n\sigma}(x,y):=
\psi^{\ }_{n\sigma}(x,y\pm 1),\end{aligned}$$ and similarly on the phases $\chi^{\ }_{n}(x,y)\in[0,2\pi)$. We note that the scattering matrix $\mathcal{N}_\mathsf{S}$ is multiplied by the unitary matrix $\mathcal{U}$ from the left and the right in (\[def M\_S\]), because the Kramers’ doublet acquires exactly the same phase $\chi_n$ when traversing the edge $n$ of the plaquette $(x,y)$ before and after experiencing the scattering $\mathcal{N}_\mathsf{S}$ at the node $\mathsf{S}$.
At node $\mathsf{S}'$ of the plaquette $(x,y)$, we make the basis transformation and rewrite the scattering matrix $S'$ (\[eq: def S’ at S’ node\]) into the form $$\left(
\begin{array}{c}
\psi^{\ }_{1\uparrow} \\
\psi^{\ }_{3\uparrow} \\
\psi^{\ }_{2\downarrow} \\
\psi^{\ }_{4\downarrow} \\
\end{array}
\right)=:
\mathcal{M}^{\ }_{\mathsf{S}'}
\left(
\begin{array}{c}
\psi^{\ }_{1\downarrow} \\
\psi^{\ }_{3\downarrow} \\
\psi^{\ }_{2\uparrow} \\
\psi^{\ }_{4\uparrow} \\
\end{array}
\right),
\qquad
\mathcal{M}^{\ }_{\mathsf{S}'}
=\mathcal{U}\,\mathcal{N}^{\ }_{\mathsf{S}'}\,\mathcal{U},$$ where we have defined $$\mathcal{N}^{\ }_{\mathsf{S}'}=
\!\left(
\begin{array}{cccc}
0
&
-t \, t^{x}_{+} t^{y}_{+} \sin\theta
&
r
&
-t \, t^{x}_{+} t^{y}_{+} \cos\theta \!
\\
t \, t^{x}_{-} t^{y}_{-} \sin\theta
&
0
&
t \, t^{x}_{-} t^{y}_{-} \cos\theta
&
r
\\
r
&
t \, t^{x}_{+} t^{y}_{+} \cos\theta
&
0
&
-t \, t^{x}_{+} t^{y}_{+} \sin\theta
\\ \!\!\!
-t \, t^{x}_{-} t^{y}_{-} \cos\theta
&
r
&
t \, t^{x}_{-} t^{y}_{-} \sin\theta
&
0
\end{array}
\right) .$$
As required by unitarity, $$\begin{aligned}
\mathcal{M}^{\dag}_{\mathsf{S}}
\mathcal{M}^{\ }_{\mathsf{S}}=
\mathcal{M}^{\dag}_{\mathsf{S}'}
\mathcal{M}^{\ }_{\mathsf{S}'}=1.\end{aligned}$$
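The unitarity of $\mathcal{N}^{\ }_{\mathsf{S}}$ can be checked at the level of plane-wave symbols, replacing the shift operators $t^{x}_{\mp}t^{y}_{\pm}$ by the unimodular phases $\rme^{\rmi(\mp k^{\ }_x \pm k^{\ }_y)}$ and setting the link phases to zero. The following Python sketch (our own check, for one arbitrary choice of parameters) verifies this:

```python
import numpy as np

rng = np.random.default_rng(2)

# plane-wave symbols of the shift operators:
# t^x_- t^y_+ -> exp(i(-kx+ky)), t^x_+ t^y_- -> exp(i(kx-ky))
kx, ky = rng.uniform(0.0, 2.0 * np.pi, size=2)
a = np.exp(1j * (-kx + ky))   # symbol of t^x_- t^y_+
b = np.exp(1j * (kx - ky))    # symbol of t^x_+ t^y_-

X, theta = 1.1, 0.4
r, t = np.tanh(X), 1.0 / np.cosh(X)
s, c = np.sin(theta), np.cos(theta)

# the matrix N_S with shift operators replaced by their symbols
N_S = np.array([
    [0,         -t * a * s,  t * a * c,  r        ],
    [t * b * s,  0,          r,         -t * b * c],
    [t * b * c,  r,          0,          t * b * s],
    [r,         -t * a * c, -t * a * s,  0        ],
])

# for real wave numbers, the symbol of N_S is a unitary 4x4 matrix
assert np.allclose(N_S.conj().T @ N_S, np.eye(4))
```

The check relies on $r^{2}+t^{2}=1$ and on the fact that the two symbols are complex conjugates of each other for real $(k^{\ }_x,k^{\ }_y)$.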
Next, we introduce the discrete time variable $l\in\mathbb{Z}$ as follows. We define the elementary discrete unitary time evolution to be $$\begin{aligned}
\left(
\begin{array}{c}
\psi^{\ }_{+\downarrow} \\
\psi^{\ }_{-\uparrow} \\
\psi^{\ }_{+\uparrow} \\
\psi^{\ }_{-\downarrow}
\end{array}
\right)^{\ }_{l+1}:=
\left(
\begin{array}{cc}
0 & \mathcal{M}^{\ }_{\mathsf{S}} \\
\mathcal{M}^{\ }_{\mathsf{S}'} & 0
\end{array}
\right)
\left(
\begin{array}{c}
\psi^{\ }_{+\downarrow} \\
\psi^{\ }_{-\uparrow} \\
\psi^{\ }_{+\uparrow} \\
\psi^{\ }_{-\downarrow}
\end{array}
\right)^{\ }_{l}.\end{aligned}$$ Here, to treat on equal footing the nodes of type $\mathsf{S}$ and $\mathsf{S}'$, we have enlarged the scattering basis with the introduction of the doublets $$\begin{aligned}
\psi^{\ }_{+}&:=&
\left(
\begin{array}{c}
\psi^{\ }_{1} \\
\psi^{\ }_{3}
\end{array}
\right),
\qquad
\psi^{\ }_{-}:=
\left(
\begin{array}{c}
\psi^{\ }_{2} \\
\psi^{\ }_{4}
\end{array}
\right).\end{aligned}$$ Due to the off-diagonal block structure in the elementary time evolution, it is more convenient to consider the “one-step” time evolution operator defined by $$\begin{aligned}
\left(
\begin{array}{c}
\psi^{\ }_{+\downarrow}
\\
\psi^{\ }_{-\uparrow}
\\
\psi^{\ }_{+\uparrow}
\\
\psi^{\ }_{-\downarrow}
\end{array}
\right)^{\ }_{l+2}
&=&
\left(
\begin{array}{cc}
\mathcal{M}^{\ }_{\mathsf{S}}
\mathcal{M}^{\ }_{\mathsf{S}'}
&
0
\\
0
&
\mathcal{M}^{\ }_{\mathsf{S}'}
\mathcal{M}^{\ }_{\mathsf{S}}
\end{array}
\right)
\left(
\begin{array}{c}
\psi^{\ }_{+\downarrow}
\\
\psi^{\ }_{-\uparrow}
\\
\psi^{\ }_{+\uparrow}
\\
\psi^{\ }_{-\downarrow}
\end{array}
\right)^{\ }_{l}
\nonumber\\
&\equiv&
\left(
\begin{array}{cc}
\mathcal{M}^{\ }_{\mathsf{S}\mathsf{S}'}
&
0
\\
0
&
\mathcal{M}^{\ }_{\mathsf{S}'\mathsf{S}}
\end{array}
\right)
\left(
\begin{array}{c}
\psi^{\ }_{+\downarrow}
\\
\psi^{\ }_{-\uparrow}
\\
\psi^{\ }_{+\uparrow}
\\
\psi^{\ }_{-\downarrow}
\end{array}
\right)^{\ }_{l}.
\label{eq: reducibility of 2 steps}\end{aligned}$$ The two Hamiltonians generating this unitary time evolution are then $$\mathcal{H}^{\ }_{\mathsf{S}\mathsf{S}'}:=
+\mathrm{i}\ln \mathcal{M}^{\ }_{\mathsf{S}\mathsf{S}'},
\qquad
\mathcal{H}^{\ }_{\mathsf{S}'\mathsf{S}}:=
+\mathrm{i}\ln \mathcal{M}^{\ }_{\mathsf{S}'\mathsf{S}}.
\label{eq: def H's}$$ Since $\mathcal{M}^{\ }_{\mathsf{S}'\mathsf{S}}=\mathcal{M}^{-1}_{\mathsf{S}}\mathcal{M}^{\ }_{\mathsf{S}\mathsf{S}'}\mathcal{M}^{\ }_{\mathsf{S}}$, the two generators are unitarily equivalent and share the same spectrum; to the leading order of the expansion performed below, $$\mathcal{H}^{\ }_{\mathsf{S}\mathsf{S}'}=
\mathcal{H}^{\ }_{\mathsf{S}'\mathsf{S}}.
\label{eq: doubling}$$ From now on, we will consider $\mathcal{H}^{\ }_{\mathsf{S}\mathsf{S}'}$ exclusively since $\mathcal{M}^{\ }_{\mathsf{S}'\mathsf{S}}=
\exp(-\mathrm{i}\mathcal{H}^{\ }_{\mathsf{S}'\mathsf{S}})$ merely duplicates the information contained in $\mathcal{M}^{\ }_{\mathsf{S}\mathsf{S}'}
=
\exp(-\mathrm{i}\mathcal{H}^{\ }_{\mathsf{S}\mathsf{S}'})
$.
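The recipe $\mathcal{H}=+\rmi\ln\mathcal{M}$ can be illustrated numerically: diagonalize the unitary one-step evolution operator and take the principal branch of the logarithm of its eigenvalues. In the sketch below (our own illustration; a random $4\times4$ unitary stands in for $\mathcal{M}^{\ }_{\mathsf{S}\mathsf{S}'}$) the generator eigenvalues are kept inside $(-\pi,\pi)$, so the principal branch recovers the generator exactly:

```python
import numpy as np

rng = np.random.default_rng(3)

# random Hermitian generator G with eigenvalues inside (-pi, pi),
# so the principal branch of the logarithm recovers it exactly
Vq, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
w = rng.uniform(-3.0, 3.0, size=4)
G = Vq @ np.diag(w) @ Vq.conj().T

# unitary one-step evolution M = exp(-i G)
M = Vq @ np.diag(np.exp(-1j * w)) @ Vq.conj().T

# H := +i ln M, computed from the eigenvalues of M
lam, P = np.linalg.eig(M)
H = P @ np.diag(1j * np.log(lam)) @ np.linalg.inv(P)

assert np.allclose(H, H.conj().T)  # the generator is Hermitian
assert np.allclose(H, G)           # and coincides with G
```

For eigenphases outside $(-\pi,\pi)$, the principal branch returns a generator that differs from $G$ by multiples of $2\pi$ in its spectrum while still exponentiating to the same $\mathcal{M}$.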
Dirac Hamiltonian close to $\theta=0$
---------------------------------------
In this section, we are going to extract from the unitary time-evolution (\[eq: reducibility of 2 steps\])–(\[eq: doubling\]) of the $\mathbb{Z}^{\ }_{2}$ network model a $4\times4$ continuum Dirac Hamiltonian in the close vicinity of the quantum critical point $$(\theta,\beta)^{\ }_{\mathrm{CC}}:=(0,\pi/4).
\label{eq: CC QCP}$$ To this end and following [@Ho96], it is convenient to measure the link phases $\chi^{\ }_{n}$ ($n=1,2,3,4$) relative to their values when they carry a flux of $\pi$ per plaquette. Hence, we redefine $$\chi^{\ }_{4}\to
\chi^{\ }_{4}
+
\frac{\pi}{2}
\label{eq: def pi flux ref}$$ on all plaquettes.
Our strategy consists in performing an expansion of $$\mathcal{H}^{\ }_{\mathsf{S}\mathsf{S}'}=
+\mathrm{i}\ln \mathcal{M}^{\ }_{\mathsf{S}\mathsf{S}'}=
+\mathrm{i}
\left(
\ln \mathcal{M}^{\ }_{\mathsf{S}}
+
\ln \mathcal{M}^{\ }_{\mathsf{S}'}
\right)=
+\mathrm{i}\ln \mathcal{M}^{\ }_{\mathsf{S}'\mathsf{S}}=
\mathcal{H}^{\ }_{\mathsf{S}'\mathsf{S}}$$ defined in (\[eq: def H’s\]) to leading order in powers of $$\theta,
\quad
\frac{m}{2}\equiv\beta-\frac{\pi}{4},
\quad
\partial_{x,y}\equiv\ln t^{x,y}_+,
\quad
\chi^{\ }_{n}$$ with $n=1,2,3,4$ where $\partial_{x,y}$ is the generator of infinitesimal translation on the network (the two-dimensional momentum operator).
When $\theta=0$, the unitary time-evolution operator at the plaquette $(x,y)$ is given by $$\begin{aligned}
&&
\left(
\begin{array}{c}
\psi^{\ }_{+\downarrow}\\
\psi^{\ }_{-\uparrow} \\
\end{array}
\right)^{\ }_{l+2}=
\mathcal{M}^{(0)}_{\mathsf{S}\mathsf{S}'}
\left(
\begin{array}{c}
\psi^{\ }_{+\downarrow}\\
\psi^{\ }_{-\uparrow} \\
\end{array}
\right)^{\ }_{l},
\\
&&
\mathcal{M}^{(0)}_{\mathsf{S}\mathsf{S}'}=
\left(
\begin{array}{cc}
A^{(0)}
D^{(0)}
&
0
\\
0
&
B^{(0)}
C^{(0)}
\end{array}
\right),\end{aligned}$$ whereby $$\begin{aligned}
&&
\mathcal{M}^{(0)}_{\mathsf{S}\mathsf{S}'}=
\mathcal{M}^{(0)}_{\mathsf{S}}
\mathcal{M}^{(0)}_{\mathsf{S}'},
\\
&&
\mathcal{M}^{(0)}_{\mathsf{S}}=
\left(
\begin{array}{cc}
0
&
A^{(0)}
\\
B^{(0)}
&
0
\end{array}
\right),
\qquad
\mathcal{M}^{(0)}_{\mathsf{S}'}=
\left(
\begin{array}{cc}
0
&
C^{(0)}
\\
D^{(0)}
&
0
\end{array}
\right),\end{aligned}$$ with the $2\times2$ operator-valued matrices $$\begin{aligned}
A^{(0)}&:=&
\left(
\begin{array}{cc}
\!
\rme^{\rmi\chi^{\ }_1}
t^{x}_{-} t^{y}_{+}
\rme^{\rmi\chi^{\ }_2}
\sin\beta
&
\rmi\rme^{\rmi(\chi^{\ }_1+\chi^{\ }_4)}
\cos\beta
\\
\rme^{\rmi(\chi^{\ }_3+\chi^{\ }_2)}
\cos\beta
&
-\rmi\rme^{\rmi\chi^{\ }_3}
t^{x}_{+}t^{y}_{-}
\rme^{\rmi\chi^{\ }_4}
\sin \beta \!
\end{array}
\right),
\\
B^{(0)}&:=&
\left(
\begin{array}{cc}
\rme^{\rmi\chi^{\ }_2}
t^{x}_{+} t^{y}_{-}
\rme^{\rmi\chi^{\ }_1}\sin \beta
&
\rme^{\rmi(\chi^{\ }_2+\chi^{\ }_3)}\cos\beta
\\
\rmi\rme^{\rmi(\chi^{\ }_4+\chi^{\ }_1)}\cos \beta
&
-\rmi\rme^{\rmi\chi^{\ }_4}
t^{x}_{-} t^{y}_{+}
\rme^{\rmi\chi^{\ }_3}\sin\beta
\end{array}
\right),
\\
C^{(0)}&:=&
\left(
\begin{array}{cccc}
\rme^{\mathrm{i}(\chi^{\ }_1+\chi^{\ }_2)}\cos\beta
&
-\rmi\rme^{\mathrm{i}\chi^{\ }_1}
t^{x}_{+} t^{y}_{+}
\rme^{\mathrm{i}\chi^{\ }_4}\sin\beta
\\ \!
\rme^{\mathrm{i}\chi^{\ }_3}
t^{x}_{-} t^{y}_{-}
\rme^{\mathrm{i}\chi^{\ }_2}\sin\beta
&
\rmi\rme^{\mathrm{i}(\chi^{\ }_3+\chi^{\ }_4)} \cos\beta \\
\end{array}
\right),
\\
D^{(0)}&:=&
\left(
\begin{array}{cc}
\rme^{\rmi(\chi^{\ }_2+\chi^{\ }_1)}\cos\beta
&
\rme^{\rmi\chi^{\ }_2}
t^{x}_{+} t^{y}_{+}
\rme^{\rmi\chi^{\ }_3}\sin\beta \!
\\ \!
-\rmi\rme^{\rmi\chi^{\ }_4}
t^{x}_{-} t^{y}_{-}
\rme^{\rmi\chi^{\ }_1} \sin\beta
&
\rmi\rme^{\rmi(\chi^{\ }_4+\chi^{\ }_3)}\cos\beta
\end{array}
\right).\end{aligned}$$ Observe that in the limit $\theta=0$, the $\mathbb{Z}^{\ }_{2}$ network model reduces to two decoupled U(1) network models, each with a time evolution essentially the same as the one derived for the U(1) network model in [@Ho96].
In the vicinity of the Chalker-Coddington quantum critical point (\[eq: CC QCP\]), we find the $4\times4$ block diagonal Hamiltonian $$\begin{aligned}
\mathcal{H}^{(0)}_{\mathsf{S}\mathsf{S}'}=
\left(
\begin{array}{cc}
D^{\ }_+
&
0
\\
0
&
D^{\ }_-
\\
\end{array}
\right)
\label{eq: HSS' if theta=0}\end{aligned}$$ where the $2\times2$ block are expressed in terms of linear combinations of the $2\times2$ unit matrix $\sigma^{\ }_{0}$ and of the Pauli matrices $\sigma^{\ }_{x}$, $\sigma^{\ }_{y}$, and $\sigma^{\ }_{z}$ according to $$\begin{aligned}
D^{\ }_+
=
\sigma^{\ }_{z}
\left(
-\rmi\partial^{\ }_{x}
+A^{\ }_{x}
\right)
-
\sigma^{\ }_{x}
\left(
-\rmi\partial^{\ }_{y}
+A^{\ }_{y}
\right)
-
\sigma^{\ }_{y} m
+
\sigma^{\ }_{0}
A^{\ }_0,\end{aligned}$$ and $$\begin{aligned}
D^{\ }_-
=
-\sigma^{\ }_{y}
\left(
-\rmi\partial^{\ }_{x}
-A^{\ }_{x}
\right)
+
\sigma^{\ }_{z}
\left(
-\rmi\partial^{\ }_{y}
-A^{\ }_{y}
\right)
+
\sigma^{\ }_{x} m
+
\sigma^{\ }_{0}
A^{\ }_0.\end{aligned}$$ Thus, each $2\times2$ block Hamiltonian is of the Dirac form whereby the linear combinations $$\begin{aligned}
A^{\ }_0:=
-(
\chi^{\ }_{1}+\chi^{\ }_{2}+\chi^{\ }_{3}+\chi^{\ }_{4}
),
\qquad
(A^{\ }_{x},A^{\ }_{y}):=
(
-\chi^{\ }_{1}+\chi^{\ }_{3},
\chi^{\ }_{2}-\chi^{\ }_{4}
),\end{aligned}$$ enter as a scalar gauge potential and a vector gauge potential, respectively.
Any deviation of $\theta$ from $\theta=0$ lifts the reducibility of (\[eq: HSS’ if theta=0\]). To leading order in $\theta$ and close to the Chalker-Coddington quantum critical point (\[eq: CC QCP\]), $$\begin{aligned}
\mathcal{M}^{\ }_{\mathsf{S}\mathsf{S}'}
&=&
\left(
\mathcal{M}^{(0)}_{\mathsf{S}}
+
\theta
\mathcal{M}^{(1)}_{\mathsf{S}}
+
\cdots
\right)
\left(
\mathcal{M}^{(0)}_{\mathsf{S}'}
+
\theta
\mathcal{M}^{(1)}_{\mathsf{S}'}
+
\cdots
\right)
\nonumber\\
&=&
\mathcal{M}^{(0)}_{\mathsf{S}\mathsf{S}'}
+
\theta
\left(
\mathcal{M}^{(1)}_{\mathsf{S}}
\mathcal{M}^{(0)}_{\mathsf{S}'}
+
\mathcal{M}^{(0)}_{\mathsf{S}}
\mathcal{M}^{(1)}_{\mathsf{S}'}
\right)
+
\cdots\end{aligned}$$ with $$\begin{aligned}
\fl
\mathcal{M}^{(1)}_{\mathsf{S}}=
\left(
\begin{array}{cccc}
A^{(1)}
&
0
\\
0
&
B^{(1)}
\end{array}
\right)\!,
\qquad
A^{(1)}=
\frac{1}{\sqrt{2}}
\left(
\begin{array}{cccc}
0
&
-1
\\
1
&
0
\end{array}
\right)\!,
\qquad
B^{(1)}=
\frac{1}{\sqrt{2}}
\left(
\begin{array}{cccc}
0
&
\rmi
\\
-\rmi
&
0
\end{array}
\right)\!,
\\
\fl
\mathcal{M}^{(1)}_{\mathsf{S}'}=
\left(
\begin{array}{cccc}
C^{(1)}
&
0
\\
0
&
D^{(1)}
\end{array}
\right)\!,
\qquad
C^{(1)}=
\frac{1}{\sqrt{2}}
\left(
\begin{array}{cccc}
0
&
-1
\\
1
&
0
\end{array}
\right)\!,
\qquad
D^{(1)}=
\frac{1}{\sqrt{2}}
\left(
\begin{array}{cccc}
0
&
-\rmi
\\
\rmi
&
0
\end{array}
\right)\!,\end{aligned}$$ where we have set $m=\chi^{\ }_n=0$ and $t^{x,y}_\pm=1$. We obtain $$\begin{aligned}
\mathcal{H}^{\ }_{\mathsf{S}\mathsf{S}'}=
\left(
\begin{array}{cc}
D^{\ }_{+}
&
D^{\ }_{\theta}
\\
D^{\dag}_{\theta}
&
D^{\ }_{-}
\end{array}
\right),
\qquad
D^{\ }_{\theta}:=
\theta
\left(
\begin{array}{cc}
-\rmi & 1 \\
\rmi & 1
\end{array}
\right)
\label{eq: intermediary Dirac}\end{aligned}$$ to this order.
Next, we perform a sequence of unitary transformations generated by $$U=
\left(\begin{array}{cc}
\rme^{\rmi\pi\sigma^{\ }_{y}/4} & 0 \\
0 & \rme^{\rmi\pi\sigma^{\ }_{z}/4} \\
\end{array}
\right)
\left(\begin{array}{cc}
\rme^{-\rmi\pi\sigma^{\ }_{x}/4}
&
0
\\
0
&
\rme^{-\rmi\pi\sigma^{\ }_{x}/4}
\\
\end{array}
\right)
\left(\begin{array}{cc}
\rme^{-\rmi\pi/8}
&
0
\\
0
&
\rme^{\rmi\pi/8}
\\
\end{array}
\right),$$ yielding $$\mathcal{H}:=
U^\dagger\mathcal{H}^{\ }_{\mathsf{S}\mathsf{S}'}U
=\left(\begin{array}{cc}
\mathcal{H}^{\ }_{+}
&
\alpha\sigma^{\ }_{0}
\\
\alpha\sigma^{\ }_{0}
&
\mathcal{H}^{\ }_{-} \\
\end{array}
\right)
\label{eq: 4 by 4 Dirac}$$ with $\alpha=\sqrt2\theta$ and $$\begin{aligned}
\mathcal{H}^{\ }_{\pm}=
\sigma^{\ }_{x}
\left(
-
\rmi\partial^{\ }_{x}
\pm
A^{\ }_{x}
\right)
+
\sigma^{\ }_{y}
\left(
-\rmi\partial^{\ }_{y}
\pm
A^{\ }_{y}
\right)
\pm
\sigma^{\ }_z m
+
\sigma^{\ }_{0}
A^{\ }_0.
\label{eq: H_pm}\end{aligned}$$ The $2\times2$ matrices $\mathcal{H}^{\ }_+$ and $\mathcal{H}^{\ }_-$ describe a Dirac fermion with mass $\pm m$ in the presence of a random vector potential $\pm(A^{\ }_{x},A^{\ }_{y})$ and a random scalar potential $A^{\ }_0$; each block is an effective Hamiltonian for the plateau transition of the integer quantum Hall effect [@Ho96; @Ludwig94]. The $\mathcal{H}^{\ }_{\pm}$ sectors are coupled by the matrix element $\alpha\sigma^{\ }_0$.
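The unitary rotation above can be verified at the level of momentum-space symbols, with the disorder fields set to zero and $-\rmi\partial^{\ }_{x,y}\to k^{\ }_{x,y}$. The Python sketch below (our own check, for arbitrary small parameter values) confirms that $U^{\dag}\mathcal{H}^{\ }_{\mathsf{S}\mathsf{S}'}U$ acquires the block form with off-diagonal elements $\alpha\sigma^{\ }_{0}$, $\alpha=\sqrt{2}\,\theta$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def rot(n, phi):
    # exp(i*phi*n) for a Pauli matrix n (n @ n = identity)
    return np.cos(phi) * s0 + 1j * np.sin(phi) * n

kx, ky, m, theta = 0.31, -0.77, 0.25, 0.13   # arbitrary test values

# momentum-space symbols of the 2x2 blocks, disorder fields set to zero
Dp = sz * kx - sx * ky - sy * m
Dm = -sy * kx + sz * ky + sx * m
Dth = theta * np.array([[-1j, 1], [1j, 1]])
H_SSp = np.block([[Dp, Dth], [Dth.conj().T, Dm]])

zero = np.zeros((2, 2), dtype=complex)
U = (np.block([[rot(sy, np.pi / 4), zero], [zero, rot(sz, np.pi / 4)]])
     @ np.kron(np.eye(2), rot(sx, -np.pi / 4))
     @ np.kron(np.diag([np.exp(-1j * np.pi / 8), np.exp(1j * np.pi / 8)]), s0))

alpha = np.sqrt(2.0) * theta
Hp = sx * kx + sy * ky + sz * m
Hm = sx * kx + sy * ky - sz * m
H_target = np.block([[Hp, alpha * s0], [alpha * s0, Hm]])

assert np.allclose(U.conj().T @ H_SSp @ U, H_target)
```

The equality holds exactly for the symbol matrices, for any $(k^{\ }_x,k^{\ }_y,m,\theta)$.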
The $4\times4$ continuum Dirac Hamiltonian $\mathcal{H}$ can be written in the form $$\begin{aligned}
\mathcal{H}=
&
(
-\rmi\partial^{\ }_{x}\sigma^{\ }_{x}
-\rmi\partial^{\ }_{y}\sigma^{\ }_{y}
)\otimes\tau^{\ }_0
+
(A^{\ }_{x}\sigma^{\ }_{x}+A^{\ }_{y}\sigma^{\ }_{y}
+m\sigma^{\ }_z)
\otimes\tau^{\ }_z
\nonumber\\
&
+A^{\ }_0\sigma^{\ }_0\otimes\tau^{\ }_0
+\alpha\,\sigma^{\ }_0\otimes\tau^{\ }_{x},
\label{H_4}\end{aligned}$$ where $\tau^{\ }_0$ is a unit $2\times2$ matrix and $\tau^{\ }_{x}$, $\tau^{\ }_{y}$, and $\tau^{\ }_{z}$ are three Pauli matrices. The Hamiltonian (\[H\_4\]) is invariant for each realization of disorder under the operation $$\begin{aligned}
T\,\mathcal{H}^{*}\, T^{-1}=
\mathcal{H},
\qquad
T:=
\rmi\sigma^{\ }_{y}
\otimes
\tau^{\ }_{x},
\label{eq: TRS}\end{aligned}$$ that implements time-reversal for a spin-1/2 particle.
The Dirac Hamiltonian (\[H\_4\]) is the main result of this subsection. It is an effective model for the Anderson localization of quantum spin Hall systems, which belongs to the symplectic class in view of the symmetry property (\[eq: TRS\]). The Anderson transition in the Dirac Hamiltonian (\[H\_4\]) should possess the same universal critical properties as those found in our numerical simulations of the $\mathbb{Z}^{\ }_{2}$ network model. In the presence of the “Rashba” coupling $\alpha$, there should appear a metallic phase near $m=0$ which is surrounded by two insulating phases. In the limit $\alpha\to0$, the metallic phase should shrink into a critical point of the integer quantum Hall plateau transition.
The $4\times4$ continuum Dirac Hamiltonian $\mathcal{H}$ should be contrasted with a $2\times2$ Hamiltonian of a Dirac particle in random scalar potential, $$\mathcal{H}^{\ }_2=
-
\rmi\partial^{\ }_{x}\sigma^{\ }_{x}
-
\rmi\partial^{\ }_{y}\sigma^{\ }_{y}
+
V(x,y)\sigma^{\ }_0,
\label{H_2}$$ which has the minimal dimensionality of the Clifford algebra in $(2+1)$-dimensional space time and is invariant under time-reversal operation, $
\sigma^{\ }_{y}\mathcal{H}^*_2 \sigma^{\ }_{y}=\mathcal{H}^{\ }_2
$. The $2\times2$ Dirac Hamiltonian (\[H\_2\]) is an effective Hamiltonian for massless Dirac fermions on the surface of a three-dimensional $\mathbb{Z}^{\ }_{2}$ topological insulator. After averaging over the disorder potential $V$, the problem of Anderson localization of the surface Dirac fermions is reduced to a NLSM with a $\mathbb{Z}^{\ }_{2}$ topological term [@Ryu07; @Ostrovsky07]. Interestingly, this $\mathbb{Z}^{\ }_{2}$ topological term prevents the surface Dirac fermions from localizing [@Bardarson07; @NomuraKoshinoRyu]. It is this absence of two-dimensional localization that defines a three-dimensional $\mathbb{Z}^{\ }_{2}$ topological insulator [@Schnyder08]. In contrast, the doubling of the size of the Hamiltonian (\[H\_4\]) implies that the NLSM describing the Anderson localization in the $4\times4$ Hamiltonian (\[eq: 4 by 4 Dirac\]) does not come with a $\mathbb{Z}^{\ }_{2}$ topological term, because two $\mathbb{Z}^{\ }_{2}$ topological terms cancel each other. We can thus conclude that the critical properties of metal-insulator transitions in the $\mathbb{Z}^{\ }_{2}$ network model are the same as those in the standard symplectic class, in agreement with results of our numerical simulations of the $\mathbb{Z}^{\ }_{2}$ network model [@Obuse07a; @Obuse08].
Before closing this subsection, we briefly discuss the Dirac Hamiltonian (\[H\_4\]) in the clean limit where $A^{\ }_0=A^{\ }_{x}=A^{\ }_{y}=0$. Since the system in the absence of disorder is translationally invariant, momentum is a good quantum number. We thus consider the Hamiltonian in momentum space $$\mathcal{H}(\bi{k})=
\left(\begin{array}{cc}
k^{\ }_{x}\sigma^{\ }_{x}
+
k^{\ }_{y}\sigma^{\ }_{y}
+
m\sigma^{\ }_{z}
&
\alpha\sigma^{\ }_0
\\
\alpha\sigma^{\ }_0
&
k^{\ }_{x}\sigma^{\ }_{x}
+
k^{\ }_{y}\sigma^{\ }_{y}
-
m\sigma^{\ }_{z}
\end{array}
\right),
\label{H(k)}$$ where $\bi{k}=(k^{\ }_{x},k^{\ }_{y})$ is the two-dimensional wave vector. When $\alpha=0$, the Hamiltonian (\[H(k)\]) becomes a direct sum of two $2\times2$ Dirac Hamiltonians with masses of opposite signs. This is essentially the same low-energy Hamiltonian as the one appearing in the quantum spin Hall effect in HgTe/(Hg,Cd)Te quantum wells [@Bernevig06b].
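Squaring the clean Hamiltonian gives the spectrum in closed form: since all terms except the coupling $\alpha\sigma^{\ }_0\otimes\tau^{\ }_{x}$ mutually anticommute, one finds $E=\pm\sqrt{(|\bi{k}|\pm\alpha)^{2}+m^{2}}$ (our own derivation, easily checked numerically). The gap $2|m|$, attained at $|\bi{k}|=\alpha$, closes only at $m=0$, consistent with the metallic phase appearing near $m=0$. A Python sketch that also checks the time-reversal property, with the tensor factors ordered to match the block form of (\[H(k)\]):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def H_k(kx, ky, m, alpha):
    dp = kx * sx + ky * sy + m * sz
    dm = kx * sx + ky * sy - m * sz
    return np.block([[dp, alpha * s0], [alpha * s0, dm]])

m, alpha = 0.4, 0.9
# matrix part of time reversal, with the block (tau) index outermost
T = np.kron(sx, 1j * sy)

for kx, ky in [(0.0, 0.0), (0.3, -1.1), (0.9, 0.5)]:
    k = np.hypot(kx, ky)
    E = np.sort(np.linalg.eigvalsh(H_k(kx, ky, m, alpha)))
    # closed form: E = +/- sqrt((|k| +/- alpha)^2 + m^2)
    Ep = np.sqrt((k + alpha) ** 2 + m ** 2)
    Em = np.sqrt((k - alpha) ** 2 + m ** 2)
    assert np.allclose(E, np.sort([-Ep, -Em, Em, Ep]))
    # time reversal: T H(-k)^* T^{-1} = H(k)
    assert np.allclose(T @ H_k(-kx, -ky, m, alpha).conj() @ np.linalg.inv(T),
                       H_k(kx, ky, m, alpha))
```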
$\mathbb{Z}^{\ }_{2}$ topological number
------------------------------------------
We now discuss the topological property of the time-reversal invariant insulator which is obtained from the effective Hamiltonian (\[H(k)\]) of the $\mathbb{Z}^{\ }_{2}$ network model in the absence of disorder. The topological attribute of the band insulator is intimately tied to the invariance $$\hat{\Theta}^{-1}
\mathcal{H}(-\boldsymbol{k})
\hat{\Theta}=
\mathcal{H}(\boldsymbol{k})$$ under the operation of time-reversal represented by $$\hat{\Theta}:=
(\rmi\sigma^{\ }_{y}\otimes \tau^{\ }_{x} )\mathcal{K}=
-\hat{\Theta}^{-1},
\label{eq: def hat Theta}$$ where $\mathcal{K}$ implements complex conjugation. We are going to show that this topological attribute takes values in $\mathbb{Z}^{\ }_{2}$, i.e., the $\mathbb{Z}^{\ }_2$ index introduced by Kane and Mele [@Kane05b].
We begin with general considerations on a translation-invariant single-particle fermionic Hamiltonian which has single-particle eigenstates labeled by the wave vector $\boldsymbol{k}$ taking values in a compact manifold. This compact manifold can be the first Brillouin zone with the topology of a torus if the Hamiltonian is defined on a lattice and periodic boundary conditions are imposed, or it can be the two-sphere obtained from the momentum plane $\mathbb{R}^{2}$ by stereographic projection if the Hamiltonian is defined in the continuum. We assume that (i) the antiunitary operation $\hat{\Theta}=-\hat{\Theta}^{-1}=-\hat{\Theta}^{\dag}$ that implements time-reversal leaves the Hamiltonian invariant, (ii) there exists a spectral gap at the Fermi energy, and (iii) there are two distinct occupied bands with the single-particle orthonormal eigenstates $|u^{\ }_{\hat{a}}(\boldsymbol{k})\rangle$ and energies $E^{\ }_{\hat{a}}(\boldsymbol{k})$ labeled by the index $\hat{a}=1,2$ below the Fermi energy. All three assumptions are met by the $4\times4$ Dirac Hamiltonian (\[H\_4\]), provided that the mass $m$ is nonvanishing.
Because of assumptions (i) and (ii), the $2\times2$ unitary sewing matrix with the matrix elements $w_{\hat{a}\hat{b}}(\bi{k})$ defined by $$w^{\ }_{\hat{a}\hat{b}}(\boldsymbol{k}):=
\langle u^{\ }_{\hat{a}}(-\boldsymbol{k})|
\bigg(
\hat{\Theta}
|u^{\ }_{\hat{b}}( \boldsymbol{k})\rangle
\bigg)
\equiv
\left\langle u^{\ }_{\hat{a}}(-\boldsymbol{k})\left|
\Theta u^{\ }_{\hat{b}}( \boldsymbol{k})\right.\right\rangle,
\quad
\hat{a},\hat{b}=1,2,
\label{eq: def sewing matrix}$$ i.e., the overlaps between the occupied single-particle energy eigenstates with momentum $-\boldsymbol{k}$ and the time reversed images to the occupied single-particle energy eigenstates with momentum $\boldsymbol{k}$, plays an important role [@Fu06]. The matrix elements (\[eq: def sewing matrix\]) obey $$\begin{aligned}
w^{\ }_{\hat{a}\hat{b}}(\boldsymbol{k})&\equiv
\langle u^{\ }_{\hat{a}}(-\boldsymbol{k})|
\bigg(
\hat{\Theta}
|u^{\ }_{\hat{b}}( \boldsymbol{k})\rangle
\bigg)
\nonumber\\
&=
\langle u^{\ }_{\hat{b}}( \boldsymbol{k})|
\bigg(
\hat{\Theta}^{\dag}
|u^{\ }_{\hat{a}}(-\boldsymbol{k})\rangle
\bigg)
\nonumber\\
&=
-
\langle u^{\ }_{\hat{b}}( \boldsymbol{k})|
\bigg(
\hat{\Theta}
|u^{\ }_{\hat{a}}(-\boldsymbol{k})\rangle
\bigg)
\nonumber\\
&\equiv
-w^{\ }_{\hat{b}\hat{a}}(-\boldsymbol{k}),
\qquad\qquad\qquad\qquad
\hat{a},\hat{b}=1,2.
\label{eq: sewing matrix is as}\end{aligned}$$ We used the fact that $\hat{\Theta}$ is antilinear to reach the second equality and that it is antiunitary with $\hat{\Theta}^{2}=-1$ to reach the third equality. Hence, the $2\times2$ unitary sewing matrix $w(\boldsymbol{k})$ with the matrix elements (\[eq: def sewing matrix\]) can be parametrized as $$w(\boldsymbol{k})=
\left(
\begin{array}{cc}
w^{\ }_{11}(\boldsymbol{k})
&
w^{\ }_{12}(\boldsymbol{k})
\\
-
w^{\ }_{12}(-\boldsymbol{k})
&
w^{\ }_{22}(\boldsymbol{k})
\end{array}
\right)=
-w^{\mathrm{T}}(-\boldsymbol{k})
\label{eq: para sewing matrix}$$ with the three complex-valued functions $$w^{\ }_{11}(\boldsymbol{k})=
-
w^{\ }_{11}(-\boldsymbol{k}),
\qquad
w^{\ }_{22}(\boldsymbol{k})=
-
w^{\ }_{22}(-\boldsymbol{k}),
\qquad
w^{\ }_{12}(\boldsymbol{k}).$$ We observe that $w(\bi{k})$ reduces to $$w(\boldsymbol{k})=
e^{\rmi f(\bi{k})}
\left(
\begin{array}{cc}
0
&
-1
\\
+1
&
0
\end{array}
\right)$$ for some real-valued $f(\boldsymbol{k})$ at any time-reversal invariant wave vector $\boldsymbol{k}\sim-\boldsymbol{k}$ (time-reversal invariant wave vectors are half a reciprocal vector for a lattice model, and 0 or $\infty$ for a model in the continuum).
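These properties of the sewing matrix at a time-reversal invariant wave vector are easy to check numerically. The following sketch is our own illustration, not part of the analysis above: it uses a random $4\times4$ time-reversal invariant Hamiltonian as a stand-in for (\[H\_4\]) at a single time-reversal invariant momentum, with $\hat{\Theta}=\rmi(\sigma^{\ }_{y}\otimes\sigma^{\ }_{0})K$, builds the sewing matrix from the occupied Kramers doublet, and verifies that it is antisymmetric and unitary, so that its Pfaffian is a pure phase.

```python
import numpy as np

rng = np.random.default_rng(1)

# Antiunitary time reversal Theta = T K, with T = i sigma_y (x) sigma_0,
# so that Theta^2 = -1 (K denotes complex conjugation).
sy = np.array([[0, -1j], [1j, 0]])
T = np.kron(1j * sy, np.eye(2))

# Random 4x4 Hermitian Hamiltonian, symmetrized so that
# Theta H Theta^{-1} = H, i.e., T conj(H) T^dag = H.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H0 = (A + A.conj().T) / 2
H = (H0 + T @ H0.conj() @ T.conj().T) / 2

# Occupied bands: the Kramers doublet below the gap. At a time-reversal
# invariant momentum, -k and k coincide, so the same frame enters twice.
_, V = np.linalg.eigh(H)
occ = V[:, :2]

# Sewing matrix w_ab = <u_a|Theta u_b>, with Theta|u> = T conj(u).
w = occ.conj().T @ T @ occ.conj()
```

The diagonal elements vanish identically, the off-diagonal element is a pure phase, and `w[0, 1]` is the Pfaffian that enters the master formula derived below.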
As we shall shortly see, the sewing matrix (\[eq: def sewing matrix\]) imposes constraints on the U(2) Berry connection $$\begin{aligned}
\mathcal{A}^{\ }_{\hat{a}\hat{b}}(\boldsymbol{k}):=
\left\langle u^{\ }_{\hat{a}}(\boldsymbol{k})\left|
\mathrm{d}u^{\ }_{\hat{b}}(\boldsymbol{k})\right.\right\rangle\equiv
\left\langle u^{\ }_{\hat{a}}(\boldsymbol{k})\left|
\frac{\partial}{\partial k^{\ }_{\mu}}
u^{\ }_{\hat{b}}(\boldsymbol{k})\right.\right\rangle
\mathrm{d}k^{\ }_{\mu}\equiv
A^{\mu}_{\hat{a}\hat{b}}(\boldsymbol{k})
\mathrm{d}k^{\ }_{\mu},
\label{eq: def U(2) Berry connection}\end{aligned}$$ where the summation convention over the repeated index $\mu$ is understood (we make no distinction between superscripts and subscripts). Here, at every point $\boldsymbol{k}$ in momentum space, we have introduced the U(2) antihermitian gauge field $A^{\ }_{\mu}(\boldsymbol{k})$ with the space index $\mu=1,2$ and the matrix elements $$A^{\mu}_{\hat{a}\hat{b}}(\boldsymbol{k})=
-
\left(A^{\mu}_{\hat{b}\hat{a}}(\boldsymbol{k})\right)^{*}$$ labeled with the U(2) internal indices $\hat{a},\hat{b}=1,2$, by performing an infinitesimal parametric change in the Hamiltonian. We decompose the U(2) gauge field (\[eq: def U(2) Berry connection\]) into the U(1) and the SU(2) contributions $$A^{\ }_{\mu}(\boldsymbol{k})\equiv
a^{0}_{\mu} (\boldsymbol{k})
\frac{\rho^{\ }_{0}}{2\mathrm{i}}
+
\boldsymbol{a}^{\ }_{\mu}(\boldsymbol{k})
\cdot
\frac{\boldsymbol{\rho} }{2\mathrm{i}},$$ where $\rho^{\ }_{0}$ is a $2\times2$ unit matrix and $\boldsymbol{\rho}$ is a 3 vector made of the Pauli matrices $\rho^{\ }_{x}$, $\rho^{\ }_{y}$, and $\rho^{\ }_{z}$. Accordingly, $$\begin{aligned}
\mathcal{A}^{\mathrm{ U}(2)}(\boldsymbol{k})
&=&
\mathcal{A}^{\mathrm{ U}(1)}(\boldsymbol{k})
+
\mathcal{A}^{\mathrm{SU}(2)}(\boldsymbol{k})
.\end{aligned}$$
Combining the identity $\hat{\Theta}^2=-1$ with the (partial) resolution of the identity $
\sum_{\hat{a}=1,2}|u^{\ }_{\hat{a}}(\bi{k})\rangle
\langle u^{\ }_{\hat{a}}(\bi{k})|
$ for the occupied energy eigenstates with momentum $\bi{k}$ yields $$\sum_{\hat{a}=1,2}
\hat\Theta|u^{\ }_{\hat{a}}(\bi{k})\rangle
\langle u^{\ }_{\hat{a}}(\bi{k})|\hat\Theta
=-1,
\label{resolution of -1}$$ where the proper restriction to the occupied energy eigenstates is understood for the unit operator on the right-hand side. Using this identity, we deduce the gauge transformation $$\begin{aligned}
A^{\ }_{\mu}(-\boldsymbol{k})&=
-
\left(
\left\langle u^{\ }_{\hat{a}}(-\bi{k})
\left|
\frac{\partial}{\partial k^{\ }_{\mu}}
u^{\ }_{\hat{b}}(-\bi{k})
\right.
\right\rangle
\right)^{\ }_{\hat{a},\hat{b}=1,2}
\nonumber\\
&=
-
w(\boldsymbol{k})
A^{*}_{\mu}(\boldsymbol{k})
w^{\dag}(\boldsymbol{k})
-
w(\boldsymbol{k})
\partial^{\ }_{\mu}
w^{\dag}(\boldsymbol{k})
\nonumber\\
&=
+
w(\boldsymbol{k})
A^{\mathrm{T}}_{\mu}(\boldsymbol{k})
w^{\dag}(\boldsymbol{k})
-
w(\boldsymbol{k})
\partial^{\ }_{\mu}
w^{\dag}(\boldsymbol{k})
\label{eq: sewing matrix as gauge trsf}\end{aligned}$$ that relates the U(2) connections at $\pm\boldsymbol{k}$. For the U(1) and SU(2) parts of the connection, $$\begin{aligned}
a^{0}_{\mu}(-\boldsymbol{k})=
a^{0}_{\mu}(\boldsymbol{k})
-
2\partial^{\ }_{\mu}\zeta(\boldsymbol{k}),
\label{eq: sewing condition for the U1 gauge fields}
\\
\boldsymbol{a}^{\ }_{\mu}(-\boldsymbol{k})
\cdot
\boldsymbol{\rho}=
\boldsymbol{a}^{\ }_{\mu}(\boldsymbol{k})
\cdot
\tilde{w}(\boldsymbol{k})
\boldsymbol{\rho}^\mathrm{T}
\tilde{w}^{\dag}(\boldsymbol{k})
-
2\rmi\,
\tilde{w}(\boldsymbol{k})\partial_{\mu}
\tilde{w}^{\dag}(\boldsymbol{k}),
\label{eq: sewing condition for the SU2 gauge fields}\end{aligned}$$ where we have decomposed $w(\boldsymbol{k})$ into the U(1) ($\rme^{\mathrm{i}\zeta}$) and SU(2) ($\tilde{w}$) parts according to $$w(\boldsymbol{k})=
\rme^{\rmi\zeta(\boldsymbol{k})}\tilde{w}(\boldsymbol{k}),$$ (note that this decomposition has a global sign ambiguity, which, however, will not affect the following discussions).
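This U(1)$\times$SU(2) split, together with its sign ambiguity, can be made concrete with a short numerical sketch (the function name is ours): any $2\times2$ unitary matrix is factored by halving the phase of its determinant, and shifting $\zeta\to\zeta+\pi$ while flipping the sign of $\tilde{w}$ gives the other, equally valid, split.

```python
import numpy as np

def u1_su2_split(W):
    """Factor a U(2) matrix as W = exp(i*zeta) * Wt with det(Wt) = 1.

    det(W) = exp(2i*zeta) fixes zeta only modulo pi, which is the
    global sign ambiguity (zeta -> zeta + pi, Wt -> -Wt)."""
    zeta = 0.5 * np.angle(np.linalg.det(W))
    Wt = np.exp(-1j * zeta) * W
    return zeta, Wt

# A random U(2) matrix from the QR decomposition of a complex matrix.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
zeta, Wt = u1_su2_split(W)
```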
Equipped with these gauge fields, we introduce the U(2) Wilson loop $$\begin{aligned}
W^{\ }_{\mathrm{U}(2)}[\mathcal{C}]&:=
\frac{1}{2}
\mathrm{tr}\,
\mathcal{P}
\exp
\left(
\oint\limits_{\mathcal{C}}
\mathcal{A}^{\mathrm{U}(2)}(\boldsymbol{k})
\right)
\nonumber\\
&=
W^{\ }_{\mathrm{U}(1)}[\mathcal{C}]
\times
W^{\ }_{\mathrm{SU}(2)}[\mathcal{C}],
\label{eq: def U(2) Wilson loop}\end{aligned}$$ where the U(1) Wilson loop is given by $$\begin{aligned}
W^{\ }_{\mathrm{U}(1)}[\mathcal{C}]:=
\exp
\left(
\oint\limits_{\mathcal{C}}
\mathcal{A}^{\mathrm{U}(1)}(\boldsymbol{k})
\right),
\label{eq: def U(1) Wilson loop}\end{aligned}$$ while the SU(2) Wilson loop is given by $$\begin{aligned}
W^{\ }_{\mathrm{SU}(2)}[\mathcal{C}]:=
\frac{1}{2}
\mathrm{tr}\,
\mathcal{P}
\exp
\left(
\oint\limits_{\mathcal{C}}
\mathcal{A}^{\mathrm{SU}(2)}(\boldsymbol{k})
\right).
\label{eq: def SU(2) Wilson loop}\end{aligned}$$ The symbol $\mathcal{P}$ in the definition of the U(2) Wilson loop represents path ordering, while $\mathcal{C}$ is any closed loop in the compact momentum space.
By construction, the U(2) Wilson loop (\[eq: def U(2) Wilson loop\]) is invariant under the transformation $$A^{\mu}(\boldsymbol{k})\to
U^{\dag}(\boldsymbol{k})\,
A^{\mu}(\boldsymbol{k})
U(\boldsymbol{k})
+
U^{\dag}(\boldsymbol{k})
\partial^{\mu}
U(\boldsymbol{k})$$ induced by the local (in momentum space) U(2) transformation $$|u^{\ }_{\hat{a}}(\boldsymbol{k})\rangle\to
|u^{\ }_{\hat{b}}(\boldsymbol{k})\rangle
U^{\ }_{\hat{b}\hat{a}}(\boldsymbol{k})$$ on the single-particle energy eigenstates. Similarly, the SU(2) and U(1) Wilson loops are invariant under any local SU(2) and U(1) gauge transformation of the Bloch wave functions, respectively.
When $\mathcal{C}$ is invariant as a set under $$\boldsymbol{k}\to
-\boldsymbol{k},
\label{eq: def inversion}$$ the SU(2) Wilson loop $W^{\ }_{\mathrm{SU}(2)}[\mathcal{C}]$ is quantized to the two values $$W^{\ }_{\mathrm{SU}(2)}[\mathcal{C}]=\pm1$$ because of time-reversal symmetry. Furthermore, the identity $$W^{\ }_{\mathrm{SU}(2)}[\mathcal{C}]=
\prod_{\bi{K}\in\mathcal{C}}^{\bi{K}\sim -\bi{K}}
\mathrm{Pf}\Big(\tilde{w}(\boldsymbol{K})\Big),
\label{eq: master formula for TRS Wilson loop}$$ which we will prove below, follows. Here, the symbol Pf denotes the Pfaffian of an antisymmetric matrix, and only those momenta $\boldsymbol{K}\in\mathcal{C}$ that are unchanged under $\boldsymbol{K}\to-\boldsymbol{K}$ contribute to the SU(2) Wilson loop. According to (\[eq: sewing matrix is as\]), the sewing matrix at a time-reversal symmetric wave vector is an antisymmetric $2\times2$ matrix. Consequently, the SU(2) part of the sewing matrix at a time-reversal symmetric wave vector is a real-valued antisymmetric $2\times2$ matrix (i.e., it is proportional to $\mathrm{i}\rho^{\ }_{y}$ up to a sign). Hence, its Pfaffian is a well-defined and nonvanishing real-valued number.
Before undertaking the proof of (\[eq: master formula for TRS Wilson loop\]), more insight into this identity can be gained if we specialize to the case when the Hamiltonian is invariant under a U(1) subgroup of SU(2) generated by, e.g., the $z$ component of the spin, $\sigma^{\ }_z$. In this case we can choose basis states that diagonalize $\sigma^{\ }_z$: $\sigma^{\ }_z|u^{\ }_1(\bi{k})\rangle=+|u^{\ }_1(\bi{k})\rangle$, $\sigma^{\ }_z|u^{\ }_2(\bi{k})\rangle=-|u^{\ }_2(\bi{k})\rangle$. Since the time-reversal operation changes the sign of $\sigma^{\ }_z$, the sewing matrix takes the form $$w(\bi{k})=
\left(
\begin{array}{cc}
0
&
\e^{-\rmi\chi(\bi{k})}
\\
-
\e^{-\rmi\chi(-\bi{k})}
&
0
\end{array}
\right),$$ which, in combination with (\[eq: sewing condition for the U1 gauge fields\]) and (\[eq: sewing condition for the SU2 gauge fields\]) implies the transformation laws $$\begin{aligned}
a^0_{\mu}(-\bi{k})
&=
+a^0_{\mu}(\bi{k})
+ \partial^{\ }_{\mu}\left[
\chi(\bi{k})+\chi(\bi{-k})
\right],
\\
a^z_{\mu}(-\bi{k})
&=
-a^z_{\mu}(\bi{k})
+ \partial^{\ }_{\mu}\left[
\chi(\bi{k})-\chi(\bi{-k})
\right].\end{aligned}$$ We conclude that when both the $z$ component of the electron spin and the electron number are conserved, we can set $$a^x_{\mu}(\bi{k})=a^y_{\mu}(\bi{k})=0,
\qquad
A^{\mathrm{U}(2)}_{\mu}(\boldsymbol{k})
=
a^{0}_{\mu}(\boldsymbol{k})
\frac{\sigma^{\ }_{0}}{2\mathrm{i}}
+
a^{z}_{\mu}(\boldsymbol{k})
\frac{\sigma^{\ }_{z}}{2\mathrm{i}},$$ and use the transformation law $$A^{\mathrm{U}(2)}_{\nu,11}(-\bi{k})
=
\frac{1}{2\rmi}
\left[
a^0_{\nu}(-\bi{k})
+
a^z_{\nu}(-\bi{k})
\right]
=
A^{\mathrm{U}(2)}_{\nu,22}(\bi{k})
- \rmi \partial^{\ }_{\nu}\chi (\bi{k}).
\label{eq: consequence of Sz conseevation for trsf law}$$
With conservation of the $z$ component of the electron spin in addition to that of the electron charge, the SU(2) Wilson loop becomes $$\begin{aligned}
W^{\ }_{\mathrm{SU}(2)}[\mathcal{C}]
&=
\frac{1}{2}
\mathrm{tr}\,
\mathcal{P}
\exp\!\left(
\oint\limits_{\mathcal{C}}
\mathcal{A}^{\mathrm{SU}(2)}(\boldsymbol{k})
\right)
\\
&=
\frac{1}{2}
\mathrm{tr}\,
\exp\!\left(
\oint\limits_{\mathcal{C}}
a^{z}_{\mu}(\boldsymbol{k})
\frac{\sigma^{\ }_{z}}{2\mathrm{i}} \mathrm{d} k^{\mu}
\right)
\nonumber\\
&=
\cos\!\left(
\frac{1}{2} \oint\limits_{\mathcal{C}}
a^{z}_{\mu}(\boldsymbol{k})
\mathrm{d} k^{\mu}
\right).\end{aligned}$$ We have used the fact that $\sigma^{\ }_{z}$ is traceless to reach the last line. This line integral can be written as the surface integral $$\begin{aligned}
\oint\limits_{\mathcal{C}}
a^{z}_{\mu}(\boldsymbol{k})
\mathrm{d} k^{\mu}
=
\int\limits_{\mathcal{D}}
\mathrm{d}^2 k\,
\varepsilon^{\mu\nu}
\partial_\mu a^{z}_\nu(\bi{k})\end{aligned}$$ by Stokes’ theorem. Here, $\mathcal{D}$ is the region defined by $\partial\mathcal{D} = \mathcal{C}$, and covers half of the Brillouin zone (BZ) because of the condition (\[eq: def inversion\]). In turn, this surface integral is equal to the Chern number for up-spin fermions, $$\begin{aligned}
\mathrm{Ch}_{\uparrow}
&:=
\int_{\mathrm{BZ}}
\frac{\mathrm{d}^2 k}{2\pi \mathrm{i}}
\varepsilon^{\mu\nu} \partial_\mu
A^{\mathrm{U}(2)}_{\nu,11}(\boldsymbol{k})
\\
&\equiv
\int_{\mathrm{BZ}}
\frac{\mathrm{d}^2 k}{2\pi \mathrm{i}}
F^{\mathrm{U}(2)}_{11}(\boldsymbol{k})
\nonumber\\
&=
\int_{\mathcal{D}}
\frac{\mathrm{d}^2 k}{2\pi \mathrm{i}}
\left[
F^{\mathrm{U}(2)}_{11}(\boldsymbol{k})+
F^{\mathrm{U}(2)}_{11}(-\boldsymbol{k})
\right]
\nonumber\\
&=
\int_{\mathcal{D}}
\frac{\mathrm{d}^2 k}{2\pi \mathrm{i}}
\varepsilon^{\mu\nu}\partial_\mu
\left[
A^{\mathrm{U}(2)}_{\nu,11}(\bi{k})-
A^{\mathrm{U}(2)}_{\nu,22}(\bi{k})
\right]
\nonumber\\
&=
-\rmi
\int_{\mathcal{D}}
\frac{\mathrm{d}^2 k}{2\pi \mathrm{i}}
\varepsilon^{\mu\nu}\partial_\mu
a^z_\nu(\bi{k}),\end{aligned}$$ where we have used the transformation law (\[eq: consequence of Sz conseevation for trsf law\]) to deduce that $$\begin{aligned}
F^{\mathrm{U}(2)}_{11}(-\boldsymbol{k})
=
-F^{\mathrm{U}(2)}_{22}(\boldsymbol{k})\end{aligned}$$ to reach the fourth equality.
To summarize, when the $z$ component of the spin is conserved, the quantized SU(2) Wilson loop can then be written as the parity of the spin Chern number (the Chern number for up-spin fermions, which is equal to minus the Chern number for down-spin fermions) [@Kane05a; @Kane05b; @Bernevig06a], $$\begin{aligned}
W^{\ }_{\mathrm{SU}(2)}[\mathcal{C}]
=
(-1)^{\mathrm{Ch}_{\uparrow}}. \end{aligned}$$
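This parity statement can be checked numerically whenever the up-spin block is a two-band Chern insulator. The sketch below is our own illustration, not the network model of this paper: it uses the standard two-band model $h(\boldsymbol{k})=\sin k^{\ }_{x}\,\sigma^{\ }_{x}+\sin k^{\ }_{y}\,\sigma^{\ }_{y}+(u+\cos k^{\ }_{x}+\cos k^{\ }_{y})\,\sigma^{\ }_{z}$ as a stand-in for the up-spin block, evaluates $\mathrm{Ch}^{\ }_{\uparrow}$ with the lattice discretization of the Berry curvature (plaquette products of link overlaps), and forms $(-1)^{\mathrm{Ch}_{\uparrow}}$.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def chern_up(u, N=24):
    """Chern number of the occupied band of the up-spin block, from the
    lattice field strength: the sum of the phases of plaquette products
    of link overlaps is 2*pi times an integer."""
    ks = 2 * np.pi * np.arange(N) / N
    def band(kx, ky):
        h = np.sin(kx) * sx + np.sin(ky) * sy \
            + (u + np.cos(kx) + np.cos(ky)) * sz
        return np.linalg.eigh(h)[1][:, 0]      # lower (occupied) band
    grid = [[band(kx, ky) for ky in ks] for kx in ks]
    F = 0.0
    for i in range(N):
        for j in range(N):
            u1, u2 = grid[i][j], grid[(i + 1) % N][j]
            u3, u4 = grid[(i + 1) % N][(j + 1) % N], grid[i][(j + 1) % N]
            F += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                          * np.vdot(u3, u4) * np.vdot(u4, u1))
    return round(F / (2 * np.pi))

ch = chern_up(-1.0)        # topological regime: |Ch| = 1
w_su2 = (-1) ** ch         # quantized SU(2) Wilson loop
```

In the gapped trivial regime (e.g. $u=-3$) the same routine returns $0$, so the Wilson loop parity is $+1$ there.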
Next, we apply the master formula (\[eq: master formula for TRS Wilson loop\]) to the $4\times4$ Dirac Hamiltonian (\[H(k)\]). To this end, we first replace the mass $m$ by the $k$-dependent mass, $$m^{\ }_{k}=m-C\bi{k}^2,
\qquad
C>0,
\label{m_k}$$ and parametrize the wave number $\bi{k}$ as $$k^{\ }_{x} + \rmi k^{\ }_{y} = k \rme^{\rmi\varphi},
\qquad
-\infty<k<\infty,
\qquad
0\le\varphi<\pi.$$ Without loss of generality, we may assume $\alpha>0$. The mass $m^{\ }_{k}$ is introduced so that the SU(2) part of the sewing matrix is single-valued in the limit $|k|\to\infty$.
We then perform another series of unitary transformations with $$\widetilde{U}
=
\left(\begin{array}{cc}
\sigma^{\ }_{0}
&
0
\\
0
&
\rmi\sigma^{\ }_z
\end{array}\right)
\left(\begin{array}{cc}
\frac{\sigma^{\ }_{0} }{\sqrt2}
&
\frac{\sigma^{\ }_{0} }{\sqrt2}
\\
-\frac{\sigma^{\ }_{0} }{\sqrt2}
&
\frac{\sigma^{\ }_{0} }{\sqrt2}
\end{array}\right)
\left(\begin{array}{cc}
\rme^{\rmi\pi\sigma^{\ }_{z}/4}
&
0
\\
0
&
\rme^{-\rmi\pi\sigma^{\ }_{z}/4}
\end{array}\right),$$ to rewrite the Hamiltonian (\[H(k)\]) in the form $$\begin{aligned}
\widetilde{\mathcal{H}}(\bi{k})&:=
\widetilde{U}^\dag\mathcal{H}(\bi{k})\widetilde{U}
\nonumber\\
&=\left(\begin{array}{cc}
0
&
k^{\ }_{x}\sigma^{\ }_{x}
+
k^{\ }_{y}\sigma^{\ }_{y}
+
\left(
\alpha
-
\rmi m^{\ }_{k}
\right)
\sigma^{\ }_{0}
\\
k^{\ }_{x}\sigma^{\ }_{x}
+
k^{\ }_{y}\sigma^{\ }_{y}
+
\left(
\alpha
+
\rmi m^{\ }_{k}
\right)\sigma^{\ }_{0}
&
0
\end{array}\right).
\nonumber\\
\label{tildeH(k)}\end{aligned}$$ The four eigenvalues of the Hamiltonian (\[tildeH(k)\]) are given by $E(\bi{k})=\pm\lambda^+_k$, $\pm\lambda^-_k$, where $$\lambda^{\pm}_{k}=\sqrt{(k\pm\alpha)^2+m^2_k}.
\label{lambda_pm}$$ The occupied eigenstate with the energy $E_1(\bi{k})=-\lambda^-_k$ reads $$|u_1(\varphi,k)\rangle
=\frac{1}{2\lambda^-_k}
\left(\begin{array}{c}
-\lambda^-_k \\
\lambda^-_k \rme^{-\rmi\varphi} \\
-k+\alpha+\rmi m^{\ }_{k} \\
(k-\alpha -\rmi m^{\ }_{k})\rme^{-\rmi\varphi}
\end{array}\right),
\label{u_1}$$ and the occupied eigenstate with the energy $E_2(\bi{k})=-\lambda^+_k$ is $$|u_2(\varphi,k)\rangle
=\frac{1}{2\lambda^+_k}
\left(\begin{array}{c}
-\lambda^+_k \\
-\lambda^+_k \rme^{-\rmi\varphi} \\
k+\alpha + \rmi m^{\ }_{k} \\
(k+\alpha + \rmi m^{\ }_{k})\rme^{-\rmi\varphi}
\end{array}\right).
\label{u_2}$$ Notice that $|u_2(\varphi,k)\rangle=|u_1(\varphi+\pi,-k)\rangle$.
\(a) $m-C\alpha^2<0$

  $k$                                      $-\infty$   $0$                        $+\infty$
  ---------------------------------------- ----------- -------------------------- -----------
  $\theta^+_k$                             $\pi/2$     $\arctan(-m/\alpha)$       $\pi/2$
  $\theta^-_k$                             $-\pi/2$    $\arctan(-m/\alpha)-\pi$   $-\pi/2$
  $\rme^{\rmi(\theta^+_k-\theta^-_k)/2}$   $\rmi$      $\rmi$                     $\rmi$

\(b) $m-C\alpha^2>0$

  $k$                                      $-\infty$   $0$                       $+\infty$
  ---------------------------------------- ----------- ------------------------- -----------
  $\theta^+_k$                             $-3\pi/2$   $-\arctan(m/\alpha)$      $\pi/2$
  $\theta^-_k$                             $3\pi/2$    $\pi-\arctan(m/\alpha)$   $-\pi/2$
  $\rme^{\rmi(\theta^+_k-\theta^-_k)/2}$   $\rmi$      $-\rmi$                   $\rmi$

  : $\theta^{\pm}_{k}$ at the time-reversal invariant momenta $k=0$ and $k=\pm\infty$, when $m-C\alpha^2<0$ (a) and $m-C\alpha^2>0$ (b). It is assumed that $0\le\arctan(|m|/\alpha)\le\pi/2$. \[table:theta\]
The $2\times2$ sewing matrix $w(\bi{k})$ is obtained from the eigenstates (\[u\_1\])–(\[u\_2\]) as $$\begin{aligned}
w(\varphi,k)&:=
\biggl(
\langle u_{\hat{a}}(\varphi,-k)|
\hat\Theta|u_{\hat{b}}(\varphi,k)\rangle\biggr)^{\ }_{\hat{a},\hat{b}=1,2}
\nonumber\\
&={}
-\rme^{\rmi\varphi}
\left(\begin{array}{cc}
0 & \displaystyle\frac{1}{\lambda^+_k}(k+\alpha-\rmi m^{\ }_{k}) \\
\displaystyle\frac{1}{\lambda^-_k}(k-\alpha+\rmi m^{\ }_{k}) & 0
\end{array}\right),\end{aligned}$$ which is decomposed into the U(1) part, $$\rmi\exp\Big(\rmi\varphi+\rmi(\theta^+_k+\theta^-_k)/2\Big),$$ and the SU(2) part, $$\tilde{w}(k)
=
\left(\begin{array}{cc}
0 & \rmi\rme^{\rmi(\theta^+_k-\theta^-_k)/2} \\
\rmi\rme^{-\rmi(\theta^+_k-\theta^-_k)/2} & 0
\end{array}\right),
\label{eq: def sewing matrix for our Dirac}$$ of the sewing matrix. Here, we have defined $\theta^{\pm}_{k}$ through the relation $$\rme^{\rmi\theta^{\pm}_{k}}
=\frac{1}{\lambda^{\pm}_{k}}[k\pm(\alpha-\rmi m^{\ }_{k})].$$ For the SU(2) sewing matrix (\[eq: def sewing matrix for our Dirac\]), there are two momenta which are invariant under inversion $\boldsymbol{k}\to-\boldsymbol{k}$, namely the south $\boldsymbol{K}=0$ and north $\boldsymbol{K}=\infty$ poles of the stereographic sphere. The values of $\theta^{\pm}_{k}$ at these time-reversal invariant momenta are listed in table \[table:theta\]. The Pfaffians of the sewing matrix at the south and north poles of the stereographic sphere are $$\begin{aligned}
&
\mathrm{Pf}\, \tilde{w}(0)=
-
\mathrm{sgn}(m-C\alpha^2)
\mathrm{Pf}\,(-\mathrm{i}\rho^{\ }_{y}),
\\
&
\mathrm{Pf}\, \tilde{w}(\infty)=
\mathrm{Pf}\,(-\mathrm{i}\rho^{\ }_{y}),\end{aligned}$$ respectively. Hence, $$\begin{aligned}
W^{\ }_{\mathrm{SU}(2)}[\mathcal{C}]=
-\mathrm{sgn}(m)
\label{eq: final SU(2) Wilson loop}\end{aligned}$$ for any time-reversal invariant path $\mathcal{C}$ passing through the south and north poles, where we have suppressed $C\alpha^2$ by taking the limit $C\alpha^2/|m|\to0$.
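This value can also be reproduced by brute force from the occupied eigenstates (\[u\_1\])–(\[u\_2\]): discretize a time-reversal invariant loop through both poles (fixed $\varphi$, $k=\tan t$), build near-identity SU(2) link matrices from the overlaps of neighboring occupied frames, and multiply them around the loop. The sketch below is our own numerical check with illustrative parameter values; for small $C\alpha^2/|m|$ the product converges to the quantized value $-\mathrm{sgn}(m)$.

```python
import numpy as np

def occ_frame(k, phi, m, C, alpha):
    """4x2 matrix whose columns are the occupied eigenstates (u_1), (u_2)."""
    mk = m - C * k**2
    lm = np.sqrt((k - alpha)**2 + mk**2)    # lambda^-_k
    lp = np.sqrt((k + alpha)**2 + mk**2)    # lambda^+_k
    e = np.exp(-1j * phi)
    u1 = np.array([-lm, lm * e, -k + alpha + 1j * mk,
                   (k - alpha - 1j * mk) * e]) / (2 * lm)
    u2 = np.array([-lp, -lp * e, k + alpha + 1j * mk,
                   (k + alpha + 1j * mk) * e]) / (2 * lp)
    return np.column_stack([u1, u2])

def wilson_su2(m, C=0.05, alpha=1.0, phi=0.0, N=4000):
    # k = tan(t) sweeps a loop through the south (k = 0) and north
    # (k = infinity) poles; the regulator m_k closes it smoothly.
    t = np.linspace(-np.pi / 2 + 1e-4, np.pi / 2 - 1e-4, N)
    frames = [occ_frame(np.tan(s), phi, m, C, alpha) for s in t]
    frames.append(frames[0])                # close the loop at k = infinity
    W = np.eye(2, dtype=complex)
    for Fa, Fb in zip(frames[:-1], frames[1:]):
        M = Fa.conj().T @ Fb                # 2x2 link (overlap) matrix
        W = W @ (M / np.sqrt(np.linalg.det(M)))   # strip the U(1) part
    return 0.5 * np.trace(W).real
```

Each link is projected onto SU(2) by dividing out the square root of its determinant, which is unambiguous because the links are close to the identity.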
![ (a) Quantum spin Hall droplet immersed in the reference vacuum \[in real space $(x,y)\in\mathbb{R}^{2}$\]. (b) The $\mathbb{Z}^{\ }_{2}$ network model or its tight-binding equivalent when $x<0$ is separated from the reference vacuum at $x>0$ by the vertical boundary $x=0$ \[in real space $(x,y)\in\mathbb{R}^{2}$\]. []{data-label="fig: QSH droplet"}](figure3.eps){width="12cm"}
The value (\[eq: final SU(2) Wilson loop\]) taken by the SU(2) Wilson loop thus appears to be ambiguous since it depends on the sign of the mass $m$. This ambiguity is a mere reflection of the fact that, as noted in [@Obuse08], the topological nature of the $\mathbb{Z}^{\ }_{2}$ network model is itself defined relative to that of some reference vacuum. Indeed, for any given choice of the parameters $(X,\theta)$ from figure \[fig: predicted phase diagram\] that defines uniquely the bulk properties of the insulating phase in the $\mathbb{Z}^{\ }_2$ network model, the choice of boundary conditions determines if a single helical Kramers’ doublet edge state is or is not present at the boundary of the $\mathbb{Z}^{\ }_2$ network model. In view of this, it is useful to reinterpret the $\mathbb{Z}^{\ }_2$ network model with a boundary as realizing a quantum spin Hall droplet immersed in a reference vacuum as is depicted in figure \[fig: QSH droplet\](a). If so, choosing the boundary condition is equivalent to fixing the topological attribute of the reference vacuum *relative to* that of the $\mathbb{Z}^{\ }_2$ network model, for the reference vacuum in which the quantum spin Hall droplet is immersed also has either a trivial or non-trivial $\mathbb{Z}^{\ }_2$ quantum topology. A single helical Kramers’ doublet propagating unhindered along the boundary between the quantum spin Hall droplet and the reference vacuum appears if and only if the $\mathbb{Z}^{\ }_2$ topological quantum numbers in the droplet and in the reference vacuum differ.
In the low-energy continuum limit (\[H(k)\]), a boundary in real space can be introduced by breaking translation invariance along the vertical line $x=0$ in the real space $(x,y)\in\mathbb{R}^{2}$ through the profile \[see figure \[fig: QSH droplet\](b)\] $$m(x,y)=m(x)=
\left\{
\begin{array}{cc}
-m,
&
\hbox{if $x\to-\infty$,}
\\
+m,
&
\hbox{if $x\to+\infty$,}
\end{array}
\right.$$ for the mass.
![ Momentum space $(k^{\ }_{x},k^{\ }_{y})\in\mathbb{R}^{2}$ is discretized with the help of a rectangular grid on which two paths are depicted. The red path that is restricted to the upper left quadrant is not invariant as a set under the inversion $(k^{\ }_{x},k^{\ }_{y})\to-(k^{\ }_{x},k^{\ }_{y})$. The blue path with its center of mass at the origin is. This path is assembled out of 16 links: $(i^{\ }_{0},i^{\ }_{1})=-(i^{\ }_{15},i^{\ }_{0})$, $(i^{\ }_{1},i^{\ }_{2})=-(i^{\ }_{14},i^{\ }_{15})$, $(i^{\ }_{2},i^{\ }_{3})=-(i^{\ }_{13},i^{\ }_{14})$, $(i^{\ }_{3},i^{\ }_{4})=-(i^{\ }_{12},i^{\ }_{13})$, $(i^{\ }_{4},i^{\ }_{5})=-(i^{\ }_{11},i^{\ }_{12})$, $(i^{\ }_{5},i^{\ }_{6})=-(i^{\ }_{10},i^{\ }_{11})$, $(i^{\ }_{6},i^{\ }_{7})=-(i^{\ }_{9},i^{\ }_{10})$, $(i^{\ }_{7},i^{\ }_{8})=-(i^{\ }_{8},i^{\ }_{9})$. Sites $i^{\ }_{0}=i^{\ }_{8}=i^{\ }_{16}$ along the path are the only ones invariant under $(k^{\ }_{x},k^{\ }_{y})\to-(k^{\ }_{x},k^{\ }_{y})$. []{data-label="fig: grid"}](figure4.eps){width="8cm"}
We close Sec. \[sec: 2D dirac\] with a justification of the master formula (\[eq: master formula for TRS Wilson loop\]). To this end, we regularize the continuum gauge theory by discretizing momentum space (figure \[fig: grid\]). We use the momentum coordinate $i\in\mathbb{Z}^{2}$ on a rectangular grid with the two lattice spacings $\Delta k^{\mu}>0$. To each link from the site $i$ to the nearest-neighbor site $i+\mu$ of the grid, we assign the SU(2) unitary matrix $$U^{\ }_{i,i+\mu}
\equiv
\rme^{
A^{\ }_{i,i+\mu}\Delta k^{\mu}
},$$ which is obtained by discarding the U(1) part of the U(2) Berry connection. Consistency demands that $$U^{\ }_{i+\mu,i}=
U^{\dag}_{i,i+\mu}
\Longleftrightarrow
A^{\ }_{i+\mu,i}=
A^{\dag}_{i,i+\mu}.$$ We define the SU(2) Wilson loop to be $$W^{\ }_{\mathrm{SU}(2)}(i^{\ }_{0},\cdots,i^{\ }_{N-1}):=
\frac{1}{2}
\mathrm{tr}\!
\left(
U^{\ }_{i^{\ }_{0},i^{\ }_{1}}
U^{\ }_{i^{\ }_{1},i^{\ }_{2}}
\cdots
U^{\ }_{i^{\ }_{N-1},i^{\ }_{0}}
\right)$$ where $i^{\ }_{n}$ and $i^{\ }_{n+1}$ are nearest neighbors, i.e., their difference $\eta^{\ }_{n}:=i^{\ }_{n+1}-i^{\ }_{n}$ is a unit vector. The Wilson loop is invariant under any local gauge transformation by which $$U^{\ }_{i,i+\mu}\to
V^{\dag}_{i}
U^{\ }_{i,i+\mu}
V^{\ }_{i+\mu}$$ where the $V^{\vphantom{i}}_{i}$’s are U(2) matrices. Observe that the cyclicity of the trace allows us to write $$\begin{aligned}
\fl
W^{\ }_{\mathrm{SU}(2)}(i^{\ }_{0},\cdots,i^{\ }_{N-1})=
\frac{1}{2}
\mathrm{tr}\!
\left(
U^{\ }_{i^{\ }_{\frac{N}{2}},i^{\ }_{\frac{N}{2}+1}}
\cdots
U^{\ }_{i^{\ }_{N-1},i^{\ }_{0}}
U^{\ }_{i^{\ }_{0},i^{\ }_{1}}
U^{\ }_{i^{\ }_{1},i^{\ }_{2}}
\cdots
U^{\ }_{i^{\ }_{\frac{N}{2}-1},i^{\ }_{\frac{N}{2}}}
\right).\end{aligned}$$ To make contact with the master formula (\[eq: master formula for TRS Wilson loop\]), we assume that the closed path with vertices $i^{\ }_{\ell}$ parametrized by the index $\ell=0,1,\cdots,N-1$ obeys the condition that $$\begin{aligned}
\vdots
\nonumber\\
\mbox{$i^{\ }_{N-n}$ is the wave vector
$-\sum_{m=1}^{n}\eta^{\ }_{m}\Delta k^{\mu^{\ }_{m}}$},
\nonumber\\
\vdots
\nonumber\\
\mbox{$i^{\ }_{N-1}$ is the wave vector
$-\eta^{\ }_{1}\Delta k^{\mu^{\ }_{1}}$},
\nonumber\\
\mbox{$i^{\ }_{0}$ is the wave vector
$0$},
\label{eq: proof wilson step help 1}\\
\mbox{$i^{\ }_{1}$ is the wave vector
$+\eta^{\ }_{1}\Delta k^{\mu^{\ }_{1}}$},
\nonumber\\
\vdots
\nonumber\\
\mbox{$i^{\ }_{n}$ is the wave vector
$+\sum_{m=1}^{n}\eta^{\ }_{m}\Delta k^{\mu^{\ }_{m}}$},
\nonumber\\
\vdots
\nonumber\end{aligned}$$ with $\eta^{\ }_{m}=\pm1$ for $m=1,\cdots, N/2$ in order to mimic after discretization the condition that the closed path entering the Wilson loop is invariant as a set under the inversion (\[eq: def inversion\]).
On the discretized momentum lattice the sewing matrix (\[eq: def sewing matrix\]) is defined by $$\bigl(w^{\ }_i\bigr)_{\hat{a}\hat{b}}:=
\langle u^{\ }_{\hat{a}}(-i)|\hat\Theta|u^{\ }_{\hat{b}}(i)\rangle,$$ which obeys the condition $$w^{\ }_{-i}=-w^{\mathrm{T}}_i,$$ i.e., the counterpart to the relation (\[eq: sewing matrix is as\]). This implies that $w^{\ }_{i^{\ }_{ 0}}$ and $w^{\ }_{i^{\ }_{N/2}}$ are antisymmetric unitary $2\times2$ matrices. Furthermore, the sewing matrix $w^{\ }_{i}$ must also obey the counterpart to (\[eq: sewing matrix as gauge trsf\]), namely $$U^{\ }_{-j,-i}=w^{\ }_jU^{\mathrm{T}}_{i,j}w^{\dag}_i.
\label{eq: proof wilson step help 3}$$
It now follows from (\[eq: proof wilson step help 1\]) and (\[eq: proof wilson step help 3\]) that $$\begin{aligned}
U^{\ }_{i^{\ }_{N-1},i^{\ }_{0}}=
w^{\ }_{i^{\ }_1}
U^{\mathrm{T}}_{i^{\ }_{0},i^{\ }_{1}}
w^{\dag}_{i^{\ }_{0}},
\nonumber\\
\vdots
\nonumber\\
U^{\ }_{i^{\ }_{N-1-n},i^{\ }_{N-n}}=
w^{\ }_{i^{\ }_{n+1}}
U^{\mathrm{T}}_{i^{\ }_{n},i^{\ }_{n+1}}
w^{\dag}_{i^{\ }_n},
\\
\vdots
\nonumber\\
U^{\ }_{i^{\ }_{\frac{N}{2}},i^{\ }_{\frac{N}{2}+1}}=
w^{\ }_{i^{\ }_{\frac{N}{2}}}
U^{\mathrm{T}}_{i^{\ }_{\frac{N}{2}-1},i^{\ }_{\frac{N}{2}}}
w^{\dag}_{i^{\ }_{\frac{N}{2}-1}}.
\nonumber\end{aligned}$$ In particular, we observe that $$\begin{aligned}
U^{\ }_{i^{\ }_{N-1},i^{\ }_{0}}
U^{\ }_{i^{\ }_{0},i^{\ }_{1}}
=
w^{\ }_{i^{\ }_{1}}
U^{\mathrm{T}}_{i^{\ }_{0},i^{\ }_{1}}
w^{\dag}_{i^{\ }_{0}}
U^{\ }_{i^{\ }_{0},i^{\ }_{1}}
=
w^{\ }_{i^{\ }_{1}}
w^{\dag}_{i^{\ }_{0}}
U^{\dag}_{i^{\ }_{0},i^{\ }_{1}}
U^{\ }_{i^{\ }_{0},i^{\ }_{1}}
=
w^{\ }_{i^{\ }_{1}}
w^{\dag}_{i^{\ }_{0}},\end{aligned}$$ since $w^{\ }_{i^{\ }_{0}}$ is a $2\times2$ antisymmetric unitary matrix, i.e., $w^{\ }_{i^{\ }_{0}}$ is the second Pauli matrix up to a phase factor, while $$(\boldsymbol{\rho}\cdot\boldsymbol{n})^{\mathrm{T}}\rho^{\ }_{2}=
-\rho^{\ }_{2}(\boldsymbol{\rho}\cdot\boldsymbol{n})$$ holds for any three-vector $\boldsymbol{n}$ contracted with the three-vector $\boldsymbol{\rho}$ made of the three Pauli matrices. By repeating the same exercise a second time, $$\begin{aligned}
U^{\ }_{i^{\ }_{N-2},i^{\ }_{N-1}}
\left(
U^{\ }_{i^{\ }_{N-1},i^{\ }_{0}}
U^{\ }_{i^{\ }_{0},i^{\ }_{1}}
\right)
U^{\ }_{i^{\ }_{1},i^{\ }_{2}}
&=
w^{\ }_{i^{\ }_{2}}
U^{\mathrm{T}}_{i^{\ }_{1},i^{\ }_{2}}
w^{\dag}_{i^{\ }_{1}}
\left(
w^{\ }_{i^{\ }_{1}}
w^{\dag}_{i^{\ }_{0}}
\right)
U^{\ }_{i^{\ }_{1},i^{\ }_{2}}
\nonumber\\
&=
w^{\ }_{i^{\ }_{2}}
w^{\dag}_{i^{\ }_{0}},\end{aligned}$$ one convinces oneself that the dependences on the gauge fields $A^{\ }_{i^{\ }_{0},i^{\ }_{1}}$ and $A^{\ }_{i^{\ }_{N-1},i^{\ }_{0}}$, $A^{\ }_{i^{\ }_{1},i^{\ }_{2}}$ and $A^{\ }_{i^{\ }_{N-2},i^{\ }_{N-1}}$, and so on until $A^{\ }_{i^{\ }_{n-1},i^{\ }_{n}}$ and $A^{\ }_{i^{\ }_{N-n},i^{\ }_{N-n+1}}$ at the level $n$ of this iteration cancel pairwise due to the conditions (\[eq: proof wilson step help 1\])–(\[eq: proof wilson step help 3\]) implementing time-reversal invariance. This iteration stops when $n=N/2$, in which case the SU(2) Wilson loop is indeed solely controlled by the sewing matrix at the time-reversal invariant momenta corresponding to $\ell=0$ and $\ell=N/2$, $$W^{\ }_{\mathrm{SU}(2)}(i^{\ }_{0},\cdots,i^{\ }_{N-1})=
\frac{1}{2}
\mathrm{tr}\!
\left(
w^{\ }_{i^{\ }_{N/2}}
w^{\dag}_{i^{\ }_{0}}
\right).$$
Since $i^{\ }_{0}$ and $i^{\ }_{N/2}$ are invariant under momentum inversion or, equivalently, time-reversal invariant, $$w^{\ }_{i^{\ }_{N/2}}=
\rme^{\mathrm{i}\alpha^{\ }_{N/2}}\,
\rmi\rho^{\ }_{2},
\qquad
w^{\ }_{i^{\ }_{0}}=
\rme^{\mathrm{i}\alpha^{\ }_{0}}\,
\rmi\rho^{\ }_{2}$$ with $\alpha^{\ }_{N/2},\alpha^{\ }_{0}=0,\pi$. Here, the $\mathbb{Z}^{\ }_{2}$ phases $\rme^{\mathrm{i}\alpha^{\ }_{N/2}}$ and $\rme^{\mathrm{i}\alpha^{\ }_{0}}$ are none other than the Pfaffians $$\rme^{\mathrm{i}\alpha^{\ }_{N/2}}=
\mathrm{Pf}\!
\left(
w^{\ }_{i^{\ }_{N/2}}
\right),
\qquad
\rme^{\mathrm{i}\alpha^{\ }_{0}}=
\mathrm{Pf}\!
\left(
w^{\ }_{i^{\ }_{0}}
\right),$$ respectively. Hence, $$\begin{aligned}
W^{\ }_{\mathrm{SU}(2)}(i^{\ }_{0},\cdots,i^{\ }_{N-1})
&=
\frac{1}{2}
\mathrm{tr}\!
\left[
\mathrm{Pf}\!\left(w^{\ }_{i^{\ }_{N/2}}\right)
\mathrm{i}\rho^{\ }_{2}
\times
\mathrm{Pf}\!\left(w^{\dag}_{i^{\ }_{0}}\right)
(-\rmi\rho^{\ }_{2})
\right]
\nonumber\\
&=
\mathrm{Pf}\! \left(w^{\ }_{i^{\ }_{N/2}}\right)
\mathrm{Pf}\! \left(w^{\ }_{i^{\ }_0}\right)
\label{eq: final step proof wilson}\end{aligned}$$ is a special case of (\[eq: master formula for TRS Wilson loop\]). (Recall that $w^{\ }_{i^{\ }_{0}}$ and $w^{\ }_{i^{\ }_{N/2}}$ are real-valued.)
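The telescoping mechanism of this proof is easy to exercise numerically. In the sketch below (our own construction; names are illustrative), the first half of the loop consists of random SU(2) links, the second half is generated from them through the time-reversal constraint (\[eq: proof wilson step help 3\]), with random SU(2) sewing matrices at the non-invariant sites and $w=\pm\rmi\rho^{\ }_{2}$ at the two invariant ones; the resulting Wilson loop equals the product of the two Pfaffians, as in (\[eq: final step proof wilson\]).

```python
import numpy as np

rng = np.random.default_rng(7)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rand_su2():
    # SU(2) = a0 + i a.sigma with (a0, a) a real unit 4-vector
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * (a[1] * sx + a[2] * sy + a[3] * sz)

N = 8                                    # loop with N sites; TRIMs at 0, N/2
s0, sh = rng.choice([-1, 1], size=2)     # Pf(w_0) = s0, Pf(w_{N/2}) = sh
w = [None] * (N // 2 + 1)
w[0], w[N // 2] = s0 * 1j * sy, sh * 1j * sy   # antisymmetric unitary at TRIMs
for n in range(1, N // 2):
    w[n] = rand_su2()                    # unconstrained away from the TRIMs

first = [rand_su2() for _ in range(N // 2)]    # links i_n -> i_{n+1}, n < N/2
links = list(first)
for q in range(N // 2, N):               # second half fixed by time reversal:
    n = N - 1 - q                        # U_{-j,-i} = w_j U^T_{i,j} w_i^dag
    links.append(w[n + 1] @ first[n].T @ w[n].conj().T)

P = np.eye(2, dtype=complex)
for L in links:
    P = P @ L
W = 0.5 * np.trace(P).real               # equals Pf(w_{N/2}) Pf(w_0)
```

Rerunning with other seeds changes all the random links and interior sewing matrices, but never the value of `W`.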
Numerical study of boundary multifractality in the $\mathbb{Z}^{\ }_{2}$ network model {#sec: numerics}
========================================================================================
In [@Obuse08], we have shown that (i) multifractal scaling holds near the boundary of the $\mathbb{Z}^{\ }_{2}$ network model at the transition between the metallic phase and the $\mathbb{Z}^{\ }_{2}$ topological insulating phase shown in figure \[fig: predicted phase diagram\], (ii) it is different from that in the ordinary symplectic class, while (iii) bulk properties, such as the critical exponents for the divergence of the localization length and multifractal scaling in the bulk, are the same as those in the conventional two-dimensional symplectic universality class of Anderson localization. This implies that the boundary critical properties are affected by the presence of the helical edge states in the topological insulating phase adjacent to the critical point. In this work, we improve the precision for the estimate of the boundary multifractal critical exponents. We also compute numerically additional critical exponents that encode corner (zero-dimensional) multifractality at the metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition. We thereby support the claim that conformal invariance is present at the metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition by verifying that conformal relations between critical exponents at these boundaries hold.
Boundary and corner multifractality
-------------------------------------
To characterize multifractal scaling at the metal-insulator transition in the $\mathbb{Z}^{\ }_{2}$ network model, we start from the time-evolution of the plane waves along the links of the network with the scattering matrices defined in (\[eq: def S at S node\])-(\[eq: def X theta\]) at the nodes $\mathsf{S}$ and $\mathsf{S}'$. To minimize finite size effects, the parameter $\theta$ in (\[eq: def S at S node\])-(\[eq: def X theta\]) is chosen to be a random variable as explained in Sec. \[sec: definition\]. We focus on the metal-insulator transition at $X=X^{\ }_{l}=0.971$ as shown in figure \[fig: predicted phase diagram\](b).
When we impose reflecting boundary conditions, a node on the boundary reduces to the $2\times 2$ unit matrix. When the horizontal reflecting boundaries are located at nodes of type $\mathsf{S}'$, as shown in figure \[fig:geometry\](a), there exists a single helical edge state for $X>X^{\ }_{l}$. The insulating phase $X>X^{\ }_{l}$ is thus topologically nontrivial.
For each realization of the disorder, we numerically diagonalize the one-step time-evolution operator of the $\mathbb{Z}^{\ }_{2}$ network model and retain the normalized wave function $\psi^{\ }_{\sigma}(x,y)$ whose eigenvalue is closest to $1$, after coarse graining over the 4 edges of the plaquette located at $(x,y)$. The wave function at criticality is observed to display a power-law dependence on the linear dimension $L$ of the system, $$\sum_{\sigma=\uparrow,\downarrow}|
\psi^{\ }_{\sigma}(x,y)|^{2q}\propto
L^{-\Delta^{(\zeta,\nu)}_{q}-dq}.
\label{eq:Delta}$$ The anomalous dimension $\Delta^{(\zeta,\nu)}_{q}$, if it displays a nonlinear dependence on $q$, is the signature of multifractal scaling. The index $\zeta$ indicates whether the multifractal scaling applies to the bulk $(\zeta=2)$, the one-dimensional boundary $(\zeta=1)$, or to the zero-dimensional boundary (corner) $(\zeta=0)$, provided the plaquette $(x,y)$ is restricted to the corresponding regions of the $\mathbb{Z}^{\ }_{2}$ network model. For $\zeta=1$ and $0$, the index $\nu$ distinguishes the case $\nu=\mathrm{O}$ when the $\zeta$-dimensional boundary has no edge states in the insulating phase adjacent to the critical point, from the case $\nu=\mathbb{Z}^{\ }_{2}$ when the $\zeta$-dimensional boundary has helical edge states in the adjacent insulating phase. We ignore this distinction for multifractal scaling of the bulk wave functions, $\Delta^{(2,\mathrm{O})}_{q}=
\Delta^{(2,\mathbb{Z}^{\ }_{2})}_{q}=\Delta^{(2)}_{q}$, since bulk properties are insensitive to boundary effects. We will also consider the case of mixed boundary condition for which we reserve the notation $\nu=\mathbb{Z}^{\ }_2|\mathrm{O}$.
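The anomalous dimension in (\[eq:Delta\]) is extracted in practice by fitting the power law on a log-log scale. The following sketch illustrates that fit; the function name and the mock disorder-averaged moments are illustrative assumptions, not data from this work.

```python
import numpy as np

def anomalous_dimension(moments_by_L, q, d=2):
    """Estimate Delta_q from sum_sigma |psi|^{2q} ~ L^(-Delta_q - d*q):
    fit a straight line to log(moment) versus log(L) and read off the slope."""
    Ls = np.array(sorted(moments_by_L))
    log_m = np.log([moments_by_L[L] for L in Ls])
    slope, _ = np.polyfit(np.log(Ls), log_m, 1)
    return -slope - d * q

# Mock moments that follow the power law exactly with Delta_q = 0.5 at q = 2,
# for the system sizes used later in the text (illustrative numbers only):
q, true_delta = 2.0, 0.5
mock = {L: L ** (-true_delta - 2 * q) for L in (50, 80, 120, 150, 180)}
print(anomalous_dimension(mock, q))  # recovers 0.5
```

In a real computation the mock values would be replaced by the disorder average of the coarse-grained wave-function moments at each $L$.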
![ (a) Boundary multifractality is calculated from the wave function amplitudes near a one-dimensional boundary. Periodic (reflecting) boundary conditions are imposed for the horizontal (vertical) boundaries. (b) Corner multifractality is calculated from the wave function amplitudes near a corner with the wedge angle $\vartheta=\pi/2$. Reflecting boundary conditions are imposed along both vertical and horizontal directions. The relationship between the scattering matrix at a node of type $\mathsf{S}'$ and the scattering matrix at a node of type $\mathsf{S}$ implies that it is a vertical boundary located at nodes of type $\mathsf{S}$ that induces a helical edge state when $X>X^{\ }_{l}$. []{data-label="fig:geometry"}](figure5.eps){width="12cm"}
It was shown in [@Obuse07b] that boundary multifractality is related to corner multifractality if it is assumed that conformal invariance holds at the metal-insulator transition in the two-dimensional symplectic universality class. Conversely, the numerical verification of this relationship between boundary and corner multifractality supports the claim that the critical scaling behavior at this metal-insulator transition is conformal. We therefore verify numerically whether the consequence of the conformal map $w=z^{\vartheta/\pi}$, namely $$\Delta^{(0,\nu)}_{q}=
\frac{\pi}{\vartheta}
\Delta^{(1,\nu)}_{q}
\label{eq:Delta_boundary_corner}$$ where $\vartheta$ is the wedge angle at the corner, holds. Equivalently, $f^{(\zeta,\nu)}(\alpha)$, which is defined to be the Legendre transformation of $\Delta^{(\zeta,\nu)}_{q}+dq$, i.e., $$\begin{aligned}
&
\alpha^{(\zeta,\nu)}_{q}=
\frac{d \Delta^{(\zeta,\nu)}_{q}}{d q}+d,
\label{eq:alpha}
\\
&
f^{(\zeta,\nu)}(\alpha^{\ }_{q})=
q\alpha^{(\zeta,\nu)}_{q}
-
\Delta^{(\zeta,\nu)}_{q}
-
dq
+\zeta,
\label{eq:f(alpha)}\end{aligned}$$ must obey $$\begin{aligned}
&
\alpha^{(0,\nu)}_{q}
-
d=
\frac{\pi}{\vartheta}
(\alpha^{(1,\nu)}_{q}-d),
\label{eq:alpha_boundary_corner}\\
&
f^{{(0,\nu)}}(\alpha)=
\frac{\pi}{\vartheta}\!
\left[f^{(1,\nu)}(\alpha)-1\right],
\label{eq:f(alpha)_boundary_corner}\end{aligned}$$ if conformal invariance is a property of the metal-insulator transition.
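Because (\[eq:alpha\]) is linear in $\Delta^{(\zeta,\nu)}_{q}$, the relation (\[eq:alpha\_boundary\_corner\]) follows directly from (\[eq:Delta\_boundary\_corner\]); the short numerical sketch below makes this explicit. The parabolic input $\Delta_q=\gamma\,q(1-q)$ and the value of $\gamma$ are illustrative assumptions used only to generate smooth data, not results of this work.

```python
import numpy as np

d, theta, gamma = 2.0, np.pi / 2, 0.09   # dimension, wedge angle, toy coefficient

q = np.linspace(-1.0, 2.0, 301)
dq = q[1] - q[0]
delta1 = gamma * q * (1 - q)             # toy boundary anomalous dimension
delta0 = (np.pi / theta) * delta1        # corner, via (Delta_boundary_corner)

alpha1 = np.gradient(delta1, dq) + d     # eq. (alpha), boundary
alpha0 = np.gradient(delta0, dq) + d     # eq. (alpha), corner
# (alpha_boundary_corner): alpha0 - d = (pi/theta) * (alpha1 - d)
print(np.max(np.abs((alpha0 - d) - (np.pi / theta) * (alpha1 - d))))  # ~ 0
```

The residual is at machine precision because the numerical derivative is a linear operation, mirroring the analytic argument.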
To verify numerically the formulas (\[eq:Delta\_boundary\_corner\]), (\[eq:alpha\_boundary\_corner\]), and (\[eq:f(alpha)\_boundary\_corner\]), we consider the $\mathbb{Z}^{\ }_{2}$ network model with the geometries shown in figure \[fig:geometry\]. We have calculated wave functions for systems with the linear sizes $L=50,80,120,150,$ and $180$ for the two geometries displayed in figure \[fig:geometry\]. Here, $L$ counts the number of nodes of the same type along a boundary. The number of realizations of the static disorder is $10^5$ for each system size.
![ (a) The boundary (filled circles, red) and corner with $\theta=\pi/2$ (open circles, blue) anomalous dimensions at the metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition. The solid curve is computed from (\[eq:Delta\_boundary\_corner\]) by using the boundary anomalous dimension as an input. The rescaled $\Delta^{\ }_{1-q}$ confirming the reciprocal relation for boundary and corner multifractality are shown by upper (magenta) and lower (green) triangles, respectively. (b) The multifractal spectra for the boundary (filled circles, red) and the corner (open circles, blue). The solid curve is computed from (\[eq:alpha\_boundary\_corner\]) and (\[eq:f(alpha)\_boundary\_corner\]). []{data-label="fig:Delta_boundary_corner"}](figure6.eps){width="15cm"}
Figure \[fig:Delta\_boundary\_corner\](a) shows the boundary anomalous dimensions $\Delta^{(1,\mathbb{Z}^{\ }_{2})}_{q}$ (filled circles) and the corner anomalous dimensions $\Delta^{(0,\mathbb{Z}^{\ }_{2})}_{q}$ (open circles). In addition, the anomalous dimensions $\Delta^{(\zeta,\nu)}_{1-q}$ are shown by upper and lower triangles for boundary and corner anomalous dimensions, respectively. They fulfill the reciprocal relation $$\Delta^{(\zeta,\nu)}_{q}=\Delta^{(\zeta,\nu)}_{1-q}$$ derived analytically in [@Mirlin06]. Since the triangles and circles are consistent within error bars, our numerical results are reliable, especially in the range $0<q<1$. Using the numerical values of $\Delta^{(1,\mathbb{Z}^{\ }_{2})}_{q}$ as inputs in (\[eq:Delta\_boundary\_corner\]) with $\vartheta=\pi/2$ yields the corner multifractal scaling exponents plotted as the solid curve. Since the curve overlaps with the direct numerical computation of $\Delta^{(0,\mathbb{Z}^{\ }_{2})}_{q}$ within the error bars, we conclude that the relation (\[eq:Delta\_boundary\_corner\]) is valid at the metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition.
Figure \[fig:Delta\_boundary\_corner\](b) shows the boundary (filled circles) and corner (open circles) multifractal spectra. These multifractal spectra are calculated by using (\[eq:Delta\]), (\[eq:alpha\]), and (\[eq:f(alpha)\]). The numerical values of $\alpha^{(\zeta,\mathbb{Z}^{\ }_{2})}_{0}$ are $$\begin{aligned}
\alpha^{(1,\mathbb{Z}^{\ }_{2})}_{0} = 2.091\pm0.002, \\
\alpha^{(0,\mathbb{Z}^{\ }_{2})}_{0} = 2.179\pm0.01.
\label{eq:alpha_0_value}\end{aligned}$$ The value of $\alpha^{(1,\mathbb{Z}^{\ }_{2})}_{0}$ is consistent with that reported in [@Obuse08], while its accuracy is improved. The solid curve obtained from the relations (\[eq:alpha\_boundary\_corner\]) and (\[eq:f(alpha)\_boundary\_corner\]) by using $f^{(1,\mathbb{Z}^{\ }_{2})}(\alpha)$ as an input, coincides with $f^{(0,\mathbb{Z}^{\ }_{2})}(\alpha)$. We conclude that the hypothesis of conformal invariance at the quantum critical point of metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition is consistent with our numerical study of multifractal scaling.
At last, we would like to comment on the dependence on $z$ of $$\langle
\ln |\Psi|^2
\rangle^{\ }_{z,L}
\equiv
\frac{1}{2L} \sum_{y=1}^{2L}
\overline{
\ln\left(
\sum_{\sigma=\uparrow,\downarrow} |\psi^{\ }_{\sigma}(x,y)|^2
\right)
}$$ found in [@Obuse08]. Here, $z\equiv (x-1)/2L$, while $x$ and $y$ denote the positions on the network along its axis and along its circumference, respectively (our choice of periodic boundary conditions imposes a cylindrical geometry). The overline denotes averaging over disorder. Figure \[fig:ldos\](a) shows the $z$ dependence of $\langle \ln |\Psi|^2\rangle^{\ }_{z,L}$ for different values of $L$ in this cylindrical geometry at the metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition. We observe that $\langle \ln |\Psi|^2\rangle^{\ }_{z,L}$ becomes a nonmonotonic function of $z$.
![ (a) The $z$ dependence of $\langle\ln|\Psi|^2\rangle^{\ }_{z,L}$ at the metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition in the cylindrical geometry for $L=50,80,120,150,180$ from the top to the bottom. (b) The $z$ dependence of $\tilde{\alpha}^{\ }_{0}(z)$ ($\textcolor{red}{\bullet}$) and $c(z)$ ($\textcolor{blue}{\blacksquare}$) extrapolated from the system size dependence of $\langle\ln|\Psi|^2\rangle^{\ }_{z,L}$ averaged over a small interval of $z$’s. $\tilde{\alpha}_0(z)$ and $c(z)$ at $z=0,1$ without averaging over a small interval of $z$’s are shown by open circles and squares, respectively. The solid line represents the bulk value of $\alpha^{(2)}_{0}=2.173$ computed in [@Obuse07b]. The asymmetry with respect to $z=0.5$ is due to statistical fluctuations. []{data-label="fig:ldos"}](figure7.eps){width="15cm"}
We are going to argue that this nonmonotonic behavior is a finite size effect. We make the scaling ansatz $$\langle
\ln |\Psi|^2
\rangle^{\ }_{z,L}=
-
\tilde{\alpha}^{(\zeta,\mathbb{Z}^{\ }_{2})}_{0}(z)\ln L
+
c(z),
\label{eq:ldos_scaling}$$ where $\zeta=1$ if $z=0,1$ and $\zeta=2$ otherwise, while $c(z)$ depends on $z$ but not on $L$. To check the $L$ dependence of $\langle\ln|\Psi|^{2}\rangle^{\ }_{z,L}$ in figure \[fig:ldos\], we average $\langle\ln|\Psi|^2\rangle^{\ }_{z,L}$ over a narrow interval of $z$’s for each $L$. Figure \[fig:ldos\](b) shows the $z$ dependence of $\tilde{\alpha}^{\ }_{0}(z)$ ($\bullet$) and $c(z)$ ($\blacksquare$) obtained in this way. In addition, $\tilde{\alpha}^{\ }_{0}(z)$ and $c(z)$ calculated for $z=0,1$ without averaging over the narrow interval of $z$’s are shown by open circles and open squares, respectively.
We observe that $\tilde{\alpha}^{\ }_{0}(z)$, if calculated by averaging over a finite range of $z$’s, is almost constant and close to $\alpha^{(2)}_{0}=2.173$. In contrast, $\tilde{\alpha}^{\ }_{0}(z=0,1) \approx 2.09$, if calculated without averaging over a finite range of $z$’s, is close to $\alpha^{(1,\mathbb{Z}^{\ }_{2})}_{0}=2.091$. We also find that $|c(z)|$ increases near the boundaries. We conclude that it is the nonmonotonic dependence of $|c(z)|$ on $z$ that gives rise to the nonmonotonic dependence of $\langle\ln|\Psi|^{2}\rangle^{\ }_{z,L}$ on $z$. This finite-size effect is of order $1/\ln L$ and vanishes in the limit $L\to\infty$.
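Extracting $\tilde{\alpha}^{\ }_{0}(z)$ and $c(z)$ from the scaling ansatz (\[eq:ldos\_scaling\]) amounts to a straight-line fit in $\ln L$. A minimal sketch follows; the mock input numbers are illustrative, not the averaged data behind figure \[fig:ldos\].

```python
import numpy as np

def fit_alpha0_c(ln_psi2_by_L):
    """Least-squares fit of <ln|Psi|^2>_{z,L} = -alpha0_tilde * ln(L) + c
    at fixed z; returns (alpha0_tilde, c)."""
    Ls = np.array(sorted(ln_psi2_by_L))
    y = np.array([ln_psi2_by_L[L] for L in Ls])
    slope, intercept = np.polyfit(np.log(Ls), y, 1)
    return -slope, intercept

# Mock bulk-like input generated with alpha0_tilde = 2.173 and c = -1.0
# (illustrative values only):
mock = {L: -2.173 * np.log(L) - 1.0 for L in (50, 80, 120, 150, 180)}
print(fit_alpha0_c(mock))
```

The $1/\ln L$ finite-size effect mentioned above corresponds to the intercept term $c(z)$ becoming comparable to $\tilde{\alpha}^{\ }_{0}(z)\ln L$ at small $L$.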
Boundary condition changing operator
-------------------------------------
Next, we impose mixed boundary conditions by either (i) coupling the $\mathbb{Z}^{\ }_{2}$ network model to an external reservoir through point contacts or (ii) introducing a long-range lead between two nodes from the $\mathbb{Z}^{\ }_{2}$ network model, as shown in figure \[fig:geometry\_mix\]. In this way, when $X>X^{\ }_{l}$, a single Kramers’ pair of helical edge states indicated by the wavy lines in figure \[fig:geometry\_mix\] is present on segments of the boundary, while the complementary segments of the boundary are devoid of any helical edge state (the straight lines in figure \[fig:geometry\_mix\]). The helical edge states either escape the $\mathbb{Z}^{\ }_{2}$ network model at the nodes at which leads to a reservoir are attached \[the green lines in figure \[fig:geometry\_mix\](a) and figure \[fig:geometry\_mix\](b)\], or shortcut a segment of the boundary through a nonlocal connection between the two nodes located at the corners \[figure \[fig:geometry\_mix\](c)\]. These are the only options that accommodate mixed boundary conditions and are permitted by time-reversal symmetry. As shown by Cardy in [@Cardy1989], mixed boundary conditions are implemented by boundary-condition-changing operators in conformal field theory. Hence, the geometries of figure \[fig:geometry\_mix\] offer yet another avenue to test the hypothesis of two-dimensional conformal invariance at the metal-insulator quantum critical point.
![ (a) The system with two point contacts (green curves) attached (a) at the two interfaces between different types of boundaries and (b) at the reflecting boundary. The periodic boundary conditions are imposed for the horizontal direction. The thick wavy and solid lines on the edges represent two different types of boundaries, with and without a helical edge mode at $X>X_l$, respectively. (c) Closed network with mixed boundaries. Each dashed line or curve represents a Kramers’ doublet. []{data-label="fig:geometry_mix"}](figure8.eps){width="15cm"}
When coupling the $\mathbb{Z}^{\ }_{2}$ network model to an external reservoir, we shall consider two cases shown in figure \[fig:geometry\_mix\](a) and figure \[fig:geometry\_mix\](b), respectively.
First, we consider the case of figure \[fig:geometry\_mix\](a) in which only two nodes from the $\mathbb{Z}^{\ }_{2}$ network model couple to the reservoir. At each of these point contacts, the scattering matrix $S$ that relates incoming to outgoing waves from and to the reservoir is a $2\times2$ matrix which is self-dual under time reversal, $s^{\ }_{2}S^{\mathrm{T}}s^{\ }_{2}=S$, and must therefore be proportional to the unit $2\times2$ matrix up to an overall (random) phase. Hence, the two-point-contact conductance in the geometry of figure \[fig:geometry\_mix\](a) is unity, however far the two point contacts are from each other.
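The statement that a time-reversal-invariant $2\times2$ point-contact scattering matrix is a pure phase times the unit matrix rests on a simple algebraic fact: the self-dual part $(S+s^{\ }_{2}S^{\mathrm{T}}s^{\ }_{2})/2$ of *any* $2\times2$ matrix is proportional to the identity. The sketch below checks this numerically on a random complex matrix (the function name is ours).

```python
import numpy as np

s2 = np.array([[0.0, -1.0j], [1.0j, 0.0]])  # second Pauli matrix

def self_dual_part(S):
    """Symmetrize S under the symplectic dual: (S + s2 S^T s2) / 2."""
    return 0.5 * (S + s2 @ S.T @ s2)

rng = np.random.default_rng(0)
S = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
P = self_dual_part(S)
# The self-dual part is ((a + d)/2) * I for S = [[a, b], [c, d]], so a
# self-dual *unitary* S must be exp(i*phi) times the unit matrix.
print(np.allclose(P, P[0, 0] * np.eye(2)))  # True
```

Unitarity then fixes the remaining freedom to the overall random phase quoted in the text.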
Second, we consider the case of figure \[fig:geometry\_mix\](b) in which there are again two point contacts, however each lead between the $\mathbb{Z}^{\ }_{2}$ network model and the reservoir now supports two instead of one Kramers’ doublets. The two point-contact scattering matrices connecting the $\mathbb{Z}^{\ }_{2}$ network model to the reservoirs are now $4\times4$ matrices, which leads to a non-vanishing probability of backscattering. Hence, this even channel two-point-contact conductance is expected to decay as a function of the separation between the two attachment points of the leads to the network.
To test whether the two-point-contact conductances in figure \[fig:geometry\_mix\](a) and figure \[fig:geometry\_mix\](b) differ as dramatically as anticipated, we have computed numerically the two-point-contact conductance at the quantum critical point $X=X^{\ }_{l}$ for a *single realization* of the static disorder. The two-point-contact conductance is calculated by solving for the stationary solution of the time-evolution operator with input and output leads [@Janssen99]. We choose the cylindrical geometry imposed by periodic boundary conditions along the horizontal directions in figures \[fig:geometry\_mix\](a) and \[fig:geometry\_mix\](b) for a square network with the linear size $L=200$. Figure \[fig:Delta\_corner\_mix\](a) shows with the symbol $\bullet$ the dependence on $r$, the distance between the two contacts in figure \[fig:geometry\_mix\](a), of the dimensionless two-point-contact conductance $g$. It is evidently $r$ independent and unity, as expected. Figure \[fig:Delta\_corner\_mix\](a) also shows with the symbol $\circ$ the dependence on $r$ of the dimensionless two-point conductance $g$ for leads supporting two Kramers’ doublets as depicted in figure \[fig:geometry\_mix\](b). Although it is not possible to establish a monotonic decay of the two-point-contact conductance for a single realization of the static disorder, its strong fluctuations as $r$ is varied are consistent with this claim.
We turn our attention to the closed geometry shown in figure \[fig:geometry\_mix\](c). We recall that it is expected on general grounds that the moments of the two-point conductance in a network model at criticality, when the point contacts are far apart, decay as power laws with scaling exponents proportional to the scaling exponents $\Delta^{(\zeta,\nu)}_{q}$ [@Janssen99; @Klesse01]. Consequently, after tuning the $\mathbb{Z}^{\ }_{2}$ network model to criticality, the anomalous dimensions at the node (corner) where the boundary condition is changed must vanish, $$\Delta^{(0,\mathbb{Z}^{\ }_{2}|\mathrm{O})}_{q}=0,
\label{eq:Delta(0,T|O)=0}$$ since the two-point-contact conductance in figure \[fig:geometry\_mix\](a) is $r$ independent. Equation (\[eq:Delta(0,T|O)=0\]) is another signature of the nontrivial topological nature of the insulating side at the Anderson transition that we want to test numerically. Thus, we consider the geometry of figure \[fig:geometry\_mix\](c) and compute numerically the corner anomalous dimensions. This is done using the amplitudes of the stationary wave function restricted to the links connecting the two corners where the boundary conditions are changed. Figure \[fig:Delta\_corner\_mix\](b) shows the numerical value of the corner anomalous dimension $\Delta^{(0,\mathbb{Z}^{\ }_{2}|\mathrm{O})}_{q}$. The linear sizes of the network are $L=50,80,120,150$, and $180$ and the number of disorder realizations is $10^5$ for each $L$. We observe that $\Delta^{(0,\mathbb{Z}^{\ }_{2}|\mathrm{O})}_{q}$ is zero within the error bars, thereby confirming the validity of the prediction (\[eq:Delta(0,T|O)=0\]) at the metal-to-$\mathbb{Z}^{\ }_{2}$-topological-insulator transition.
![ (a) The distance $r$ dependence of the two-point-contact conductance $g$ for the mixed boundary ($\bullet$) and the reflecting boundary ($\circ$). (b) The zero-dimensional anomalous dimension $\Delta^{(0,\mathbb{Z}^{\ }_{2}|\mathrm{O})}_{q}$ obtained from the wave function amplitudes on the links connecting the boundary-condition-changing points. []{data-label="fig:Delta_corner_mix"}](figure9.eps){width="15cm"}
Conclusions {#sec: conclusions}
===========
In summary, we have mapped the $\mathbb{Z}^{\ }_{2}$ network model to a $4\times4$ Dirac Hamiltonian. In the clean limit of this Dirac Hamiltonian, we expressed the Kane-Mele $\mathbb{Z}^{\ }_{2}$ invariant as an SU(2) Wilson loop and computed it explicitly. In the presence of weak time-reversal symmetric disorder, the NLSM that can be derived out of this Dirac Hamiltonian describes the metal-insulator transition in the $\mathbb{Z}^{\ }_{2}$ network model and yields bulk scaling exponents that belong to the standard two-dimensional symplectic universality class; an expectation confirmed by the numerics in [@Obuse07a] and [@Obuse08]. A sensitivity to the $\mathbb{Z}^{\ }_{2}$ topological nature of the insulating state can only be found by probing the boundaries, which we did numerically by improving the quality of the numerical study of boundary multifractality in the $\mathbb{Z}^{\ }_{2}$ network model.
References {#references .unnumbered}
==========
[99]{}
Winkler R 2003 *Spin-Orbit Coupling Effects in Two-Dimensional Electron and Hole Systems* (Springer-Verlag Berlin Heidelberg)
Hikami S, Larkin A I and Nagaoka Y 1980 *Prog. Theor. Phys.* **63** 707
Kane C L and Mele E J 2005 *Phys. Rev. Lett.* **95** 226801
Kane C L and Mele E J 2005 *Phys. Rev. Lett.* **95** 146802
Bernevig B A and Zhang S C 2006 *Phys. Rev. Lett.* **96** 106802
Bernevig B A, Hughes T L and Zhang S C 2006 *Science* **314** 1757
König M, Wiedmann S, Brüne C, Roth A, Buhmann H, Molenkamp L W, Qi X L and Zhang S C 2007 *Science* **318** 766
Moore J E and Balents L 2007 *Phys. Rev. B* **75** 121306(R)
Roy R 2009 *Phys. Rev. B* **79** 195322
Fu L, Kane C L and Mele E J 2007 *Phys. Rev. Lett.* **98** 106803
Hsieh D, Qian D, Wray L, Xia Y, Hor Y, Cava R and Hasan M Z 2008 *Nature* **452** 970
Hsieh D, Xia Y, Wray L, Qian D, Pal A, Dil J H, Osterwalder J, Meier F, Bihlmayer G, Kane C L, Hor Y, Cava R and Hasan M 2009 *Science* **323** 919
Xia Y, Qian D, Hsieh D, Wray L, Pal A, Lin H, Bansil A, Grauer D, Hor Y S, Cava R J and Hasan M Z 2009 *Nature Phys.* **5** 398
Hsieh D, Xia Y, Qian D, Wray L, Dil J H, Meier F, Osterwalder J, Patthey L, Checkelsky J G, Ong N P, Fedorov A V, Lin H, Bansil A, Grauer D, Hor Y S, Cava R J and Hasan M Z 2009 *Nature* **460** 1101
Chen Y L, Analytis J G, Chu J-H, Liu Z K, Mo S-K, Qi X L, Zhang H J, Lu D H, Dai X, Fang Z, Zhang S C, Fisher I R, Hussain Z and Shen Z-X 2009 *Science* **325** 178
Thouless D J, Kohmoto M, Nightingale M P and den Nijs M 1982 *Phys. Rev. Lett.* **49** 405
Fu L and Kane C L 2006 *Phys. Rev. B* **74** 195312
Onoda M, Avishai Y and Nagaosa N 2007 *Phys. Rev. Lett.* **98** 076802
Obuse H, Furusaki A, Ryu S and Mudry C 2007 *Phys. Rev. B* **76** 075301
Obuse H, Furusaki A, Ryu S and Mudry C 2008 *Phys. Rev. B* **78** 115301
Chalker J T and Coddington P D 1988 *J. Phys. C* **21** 2665
Kramer B, Ohtsuki T and Kettemann S 2005 *Phys. Rep.* **417** 211
Wegner F J 1979 *Z. Phys. B* **35** 207
Fendley P 2001 *Phys. Rev. B* **63** 104429
Ryu S, Mudry C, Obuse H and Furusaki A 2007 *Phys. Rev. Lett.* **99** 116601
Ostrovsky P M, Gornyi I V and Mirlin A D 2007 *Phys. Rev. Lett.* **98** 256801
Bardarson J H, Tworzydło J, Brouwer P W and Beenakker C W J 2007 *Phys. Rev. Lett.* **99** 106801
Nomura K, Koshino M and Ryu S 2007 *Phys. Rev. Lett.* **99** 146806
Schnyder A P, Ryu S, Furusaki A and Ludwig A W W 2008 *Phys. Rev. B* **78** 195125
Subramaniam A R, Gruzberg I A, Ludwig A W W, Evers F, Mildenberger A and Mirlin A D 2006 *Phys. Rev. Lett.* **96** 126802
Obuse H, Subramaniam A R, Furusaki A, Gruzberg I A and Ludwig A W W 2007 *Phys. Rev. Lett.* **98** 156802
Ho C M and Chalker J T 1996 *Phys. Rev. B* **54** 8708
Ludwig A W W, Fisher M P A, Shankar R and Grinstein G 1994 *Phys. Rev. B* **50** 7526
Mirlin A D, Fyodorov Y V, Mildenberger A and Evers F 2006 *Phys. Rev. Lett.* **97** 046803
Cardy J L 1989 *Nucl. Phys. B* **324** 581
Janssen M, Metzler M and Zirnbauer M R 1999 *Phys. Rev. B* **59** 15836
Klesse R and Zirnbauer M R 2001 *Phys. Rev. Lett.* **86** 2094
[^1]: The presence or absence of a single helical edge state in an insulating phase is solely dependent on the boundary conditions which one imposes on the network model.
---
author:
- |
Jakob Gulddahl Rasmussen\
Department of Mathematical Sciences\
Aalborg University\
Denmark\
jgr@math.aau.dk
bibliography:
- 'bibliography.bib'
title: |
Lecture Notes:\
Temporal Point Processes\
and the Conditional Intensity Function
---
Introduction
============
A temporal point pattern is basically a list of times of events. Many real phenomena produce data that can be represented as a temporal point pattern; the left column of Table \[tab.examples\] shows a few examples. Common to these examples is that we do not know how many events will occur, or at what times they will occur. Usually complex mechanisms are behind these seemingly random times, for example earthquakes cause new earthquakes in the form of aftershocks. An essential tool for dealing with these mechanisms, for example in predicting future events, is a stochastic process modelling the point patterns: a [*temporal point process*]{}. The term point is used since we may think of an event as being instant and thus can represent it as a point on the time line. For the same reason the words point and event will be used interchangeably throughout this note.
Events                 Marks
---------------------- ------------------
Earthquakes            Magnitudes
                       Locations
Arrivals at a server   Service time
Accidents              Insurance claims
                       Type of injury

: Examples of events and marks.

\[tab.examples\]
Often there is more information available associated with an event. This information is known as marks. Examples are given in the right column of Table \[tab.examples\]. The marks may be of separate interest or may simply be included to make a more realistic model of the event times. For example, it is of practical relevance to know the position and magnitude of an earthquake, not just its time. At the same time, the magnitude of an earthquake also influences how many aftershocks there will be, so a model not including magnitudes as marks may not be reliable at modelling the event times either.
In this note, familiarity with the Poisson process on the line as well as basic probability theory and statistics is assumed. On the other hand, measure theory is not assumed; for a much more thorough treatment with all the measure theoretical details, see [@daley-vere-jones-03] and [@daley-vere-jones-08].
Evolutionary point processes
============================
There are many ways of treating (marked) temporal point processes. In this note we will explore one approach based on the so-called conditional intensity function. To understand what this is, we first have to understand the concept of evolutionarity.
Evolutionarity
--------------
Usually we think of time as having an [*evolutionary character*]{}: what happens now may depend on what happened in the past, but not on what is going to happen in the future. This order of time is also a natural starting point for defining practically useful temporal point processes. Roughly speaking, we can define a point process by specifying a stochastic model for the time of the next event given we know all the times of previous events. The term [*evolutionary point process*]{} is used for processes defined in this way.
The past in a point process is captured by the concept of the [ *history*]{} of the process. If we consider the time $t$, then the history ${{\cal H}}_{t-}$ is the knowledge of times of all events, say $(\ldots,t_1,t_2,\ldots,t_n)$, up to but not including time $t$; ${{\cal H}}_t$ also includes the information whether there is an event at time $t$. Note that theoretically the point process may extend infinitely far back in time, but it does not have to do this. Note also that we assume that we have a [*simple point process*]{}, i.e. a point process where no points coincide, such that the points can be strictly ordered in time.
Interevent times
----------------
When specifying a temporal point process we can use many different approaches. In this note, we start by specifying the distribution of the time lengths between subsequent events, and then in the next section we reformulate this in terms of conditional intensity functions.
The lengths of the time intervals between subsequent events are known as [*interevent times*]{}. We can define a temporal point process by specifying the distributions of these. Let $f(t_{n+1}|{{\cal H}}_{t_n})$ be the conditional density function of the time of the next event $t_{n+1}$ given the history of previous events $(\ldots,t_{n-1},t_n)$. Note that the density functions $f(t_n|\ldots,t_{n-2},t_{n-1})$ specify the distributions of all interevent times, one by one, starting in the past, and thus the distribution of all events is given by the joint density $$f(\ldots,t_1,t_2,\ldots) = \prod_n f(t_n|\ldots,t_{n-2},t_{n-1}) =
\prod_n f(t_n|{{\cal H}}_{t_{n-1}})$$ in the same manner as the joint density for a bivariate random variable factorises into $p(x,y) = p(x) p(y|x)$. Let us consider a simple example of a point process defined by specifying the density function for interevent times:
\[ex.ren\] The simplest process we can define by specifying the distribution of the interevent times is the renewal process. This process is defined by letting the interevent times be i.i.d. stochastic variables, i.e. $f(t_n|{{\cal H}}_{t_{n-1}})=g(t_n-t_{n-1})$ where $g$ is a density function for a distribution on $(0,\infty)$. An important special case of this is the homogeneous Poisson process with intensity $\lambda$, where $g$ is the density of the exponential distribution with inverse mean $\lambda$. Figure \[fig-renewal-processes\] shows simulations of three different renewal processes: one is the homogeneous Poisson process, one is more [*clustered*]{} than the Poisson process (i.e. the points tend to occur in clusters), and one is more [*regular*]{} than the Poisson process (i.e. the points tend to be more evenly spread out).
![Three simulations of renewal processes with different interevent time distributions: Gamma(0.02,0.2) (upper), Gamma(0.1,1) (middle), Gamma(2,20) (lower). Note how the upper case is clustered and the lower case is regular compared to the middle case (which is a Poisson process). Also note that all the simulations have roughly 100 points for easy comparison (they are very densely packed together for the upper case).[]{data-label="fig-renewal-processes"}](fig-renewal-processes.pdf){height="3cm"}
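The renewal construction of Example \[ex.ren\] translates directly into code: draw i.i.d. interevent times and accumulate them until the observation window is exhausted. In the sketch below, the exponential case is a homogeneous Poisson process, while the gamma shape parameters are illustrative choices of our own (the parameter convention in the figure caption is not spelled out), chosen so that all three processes have the same mean rate.

```python
import numpy as np

def simulate_renewal(draw_interevent, t_max, rng):
    """Renewal process on (0, t_max]: event times are cumulative sums of
    i.i.d. interevent times drawn by draw_interevent."""
    times, t = [], 0.0
    while True:
        t += draw_interevent(rng)
        if t > t_max:
            return np.array(times)
        times.append(t)

rng = np.random.default_rng(1)
lam = 2.0  # common mean rate: mean interevent time 1/lam

pois = simulate_renewal(lambda r: r.exponential(1 / lam), 100.0, rng)
clustered = simulate_renewal(lambda r: r.gamma(0.2, 1 / (lam * 0.2)), 100.0, rng)
regular = simulate_renewal(lambda r: r.gamma(5.0, 1 / (lam * 5.0)), 100.0, rng)
# All three have mean interevent time 0.5, so roughly 200 events each;
# shape < 1 gives clustering, shape > 1 gives regularity.
print(len(pois), len(clustered), len(regular))
```

Plotting the three point patterns reproduces the qualitative behavior of the figure: clustered for small shape, regular for large shape.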
Conditional intensity function {#sec.cif}
------------------------------
Example \[ex.ren\] shows a case where $t_n$ depends only on $t_{n-1}$. However, in general it may depend on the whole history, and it turns out that the density function of the interevent times is not the best way of specifying the general case. Instead the conditional intensity function is a more convenient and intuitive way of specifying how the present depends on the past in an evolutionary point process. Consider the conditional density $f(t|{{\cal H}}_{t_n})$ and its corresponding cumulative distribution function $F(t|{{\cal H}}_{t_n})$ for any $t>t_n$. Then the [*conditional intensity function*]{} (or hazard function) is defined by $$\label{eq.int}
\lambda^*(t) = \frac{f(t|{{\cal H}}_{t_n})}{1-F(t|{{\cal H}}_{t_n})}.$$ The conditional intensity function can be interpreted heuristically in the following way: consider an infinitesimal interval around $t$, say ${\textup{d}}t$, then $$\begin{aligned}
\lambda^*(t){\textup{d}}t
&=& \frac{f(t|{{\cal H}}_{t_n}){\textup{d}}t}{1-F(t|{{\cal H}}_{t_n})}\\
&=& \frac{{\mathbb{P}}(t_{n+1}\in[t,t+{\textup{d}}t]|{{\cal H}}_{t_n})}
{{\mathbb{P}}(t_{n+1}\notin(t_n,t)|{{\cal H}}_{t_n})}\\
&=& \frac{{\mathbb{P}}(t_{n+1}\in[t,t+{\textup{d}}t],t_{n+1}\notin(t_n,t)|{{\cal H}}_{t_n})}
{{\mathbb{P}}(t_{n+1}\notin(t_n,t)|{{\cal H}}_{t_n})}\\
&=& {\mathbb{P}}(t_{n+1}\in[t,t+{\textup{d}}t]|t_{n+1}\notin(t_n,t),{{\cal H}}_{t_n})\\
&=& {\mathbb{P}}(t_{n+1}\in[t,t+{\textup{d}}t]|{{\cal H}}_{t-})\\
&=& {\mathbb{E}}[N([t,t+{\textup{d}}t])|{{\cal H}}_{t-}],\end{aligned}$$ where $N(A)$ denotes the number of points falling in an interval $A$, and the last equality follows from the assumption that no points coincide, so that there is either zero or one point in an infinitesimal interval. In other words, the conditional intensity function specifies the mean number of events in a region conditional on the past. Here we use the notation $*$ from [@daley-vere-jones-03] to remind ourselves that this function is conditional on the past right up to but not including the present, rather than writing explicitly that the function depends on the history.
We consider a few examples of point processes where the conditional intensity has particular functional forms:
The (inhomogeneous) Poisson process is among other things characterised by the number of points in disjoint sets being independent. The conditional intensity function inherets this independence. The Poisson process is quite simply the point process where the conditional intensity function is independent of the past, i.e. the conditional intensity function is equal to the intensity function of the Poisson process, $\lambda^*(t) = \lambda(t)$.
\[ex.haw\] Define a point process by the conditional intensity function $$\label{eq.hawexp}
\lambda^*(t) = \mu + \alpha\sum_{t_i<t}\exp(-(t-t_i)),$$ where $\mu$ and $\alpha$ are positive parameters. Note that each time a new point arrives in this process, the conditional intensity grows by $\alpha$ and then decreases exponentially back towards $\mu$. In other words, a point increases the chance of getting other points immediately after, and thus this is a model for clustered point patterns. A simulation of the process with parameters $(\mu,\alpha)
= (0.5,0.9)$ is shown in Figure \[fig-hawkes-process\] together with its conditional intensity function (in Section \[sec.sim\] we will learn how to make such a simulation). The so-called Hawkes process is a generalization of this process and has the conditional intensity function $$\lambda^*(t) = \mu(t) + \alpha\sum_{t_i<t}\gamma(t-t_i;\beta),$$ where $\mu(t)\geq0$, $\alpha>0$, and $\gamma(t;\beta)$ is a density on $(0,\infty)$ depending on some parameter $\beta$ (which may be a single value or a vector, depending on the choice of distribution). For more on the Hawkes process, see e.g. [@hawkes-71a; @hawkes-71b; @hawkes-72; @hawkes-oakes-74].
![A simulation of the Hawkes process is shown at the bottom of this plot, and the corresponding conditional intensity function is shown in the top. Note that the point pattern is clustered.[]{data-label="fig-hawkes-process"}](fig-hawkes-process.pdf){height="6cm"}
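Evaluating the conditional intensity (\[eq.hawexp\]) for a given history is a one-liner. The following sketch (the function name and the example event times are ours) computes $\lambda^*(t)$ for the parameters used in the figure:

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.5, alpha=0.9):
    """lambda*(t) = mu + alpha * sum_{t_i < t} exp(-(t - t_i)):
    every past event adds an exponentially decaying excitation."""
    past = np.asarray(events)
    past = past[past < t]
    return mu + alpha * np.sum(np.exp(-(t - past)))

events = [1.0, 1.5, 1.7]
print(hawkes_intensity(2.0, events))
# mu + alpha * (e^{-1.0} + e^{-0.5} + e^{-0.3}) ≈ 2.04
```

Note that only events strictly before $t$ enter the sum, matching the convention $t_i<t$ in (\[eq.hawexp\]).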
\[ex.inhib\] What do we do if we want a point process for regular point patterns? Exchanging the plus for a minus in the Hawkes process will not work, since a conditional intensity function has to be non-negative. We can instead use $$\lambda^*(t) = \exp\left(\mu t - \sum_{t_i<t}\alpha\right),$$ where $\mu$ and $\alpha$ are positive parameters. Now the intensity rises as time passes, but each time a new point appears we multiply by a constant $e^{-\alpha}<1$, and thus the chance of new points decreases immediately after a point has appeared; in other words, this is a regular point process. A simulated point pattern and the conditional intensity function are shown in Figure \[fig-selfcorr\]. This process is a special case of the so-called self-correcting process [@isham-westcott-79].
![A simulation of a self-correcting process is shown at the bottom of this plot, and the corresponding conditional intensity function is shown in the top. Note that the point pattern is regular.[]{data-label="fig-selfcorr"}](fig-selfcorr.pdf){height="6cm"}
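The self-correcting intensity can be sketched the same way; note how each past event multiplies the intensity by $e^{-\alpha}$ (illustrative code, names are our own):

```python
import math

def self_correcting_intensity(t, events, mu=1.0, alpha=0.2):
    """lambda*(t) = exp(mu*t - alpha*N(t)), with N(t) the number of events before t."""
    n_before = sum(1 for ti in events if ti < t)
    return math.exp(mu * t - alpha * n_before)

# One extra past event scales the intensity by exp(-alpha):
print(self_correcting_intensity(1.0, [0.5]) / self_correcting_intensity(1.0, []))
```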
Note that the models in examples \[ex.haw\] and \[ex.inhib\] are specified simply by choosing a particular form of the conditional intensity and interpreting this. A little creativity and common sense can be used to define many new models using the conditional intensity function. This, of course, depends on the fact that the conditional intensity function uniquely defines a point process. To prove this we first need to note that the definition of the conditional intensity function can also be reversed such that an expression for the density or cumulative distribution function of the interevent times can be obtained:
\[prop.fstar\] The reverse relation of (\[eq.int\]) is given by $$\label{eq.fstar}
f(t|{{\cal H}}_{t_n})=\lambda^*(t)\exp\left(-\int_{t_{n}}^t\lambda^*(s){\textup{d}}s\right),$$ or $$\label{eq.Fstar}
F(t|{{\cal H}}_{t_n}) = 1-\exp\left(-\int_{t_{n}}^t\lambda^*(s) {\textup{d}}s\right),$$ where $t_n$ is the last point before $t$.
By (\[eq.int\]), we get that $$\label{eq.l1F}
\lambda^*(t) = \frac{f(t|{{\cal H}}_{t_n})}{1-F(t|{{\cal H}}_{t_n})}
= \frac{\frac{{\textup{d}}}{{\textup{d}}t}F(t|{{\cal H}}_{t_n})}{1-F(t|{{\cal H}}_{t_n})}
= -\frac{{\textup{d}}}{{\textup{d}}t}\log(1-F(t|{{\cal H}}_{t_n})).$$ Integrating both sides, we get by the fundamental theorem of calculus that $$\int_{t_n}^t\lambda^*(s) {\textup{d}}s = -(\log(1-F(t|{{\cal H}}_{t_n})) -
\log(1-F(t_n|{{\cal H}}_{t_n}))) = -\log(1-F(t|{{\cal H}}_{t_n})),$$ since $F(t_n|{{\cal H}}_{t_n})=0$ (the next point falls exactly at $t_{n}$ with probability zero, since the point process is simple). Isolating $F(t|{{\cal H}}_{t_n})$ we get (\[eq.Fstar\]), and (\[eq.fstar\]) then follows by differentiating $F(t|{{\cal H}}_{t_n})$ with respect to $t$, again using the fundamental theorem of calculus.
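As a quick sanity check of the proposition, for a homogeneous Poisson process with rate $\lambda$ the formulas give $f(t|{{\cal H}}_{t_n})=\lambda e^{-\lambda(t-t_n)}$ and $F(t|{{\cal H}}_{t_n})=1-e^{-\lambda(t-t_n)}$, and the hazard ratio $f/(1-F)$ recovers the constant intensity. A small numerical illustration (values chosen arbitrarily):

```python
import math

lam, t_n, t = 2.0, 1.0, 1.7
F = 1.0 - math.exp(-lam * (t - t_n))  # eq. (Fstar) with lambda*(s) = lam
f = lam * math.exp(-lam * (t - t_n))  # eq. (fstar)
print(f / (1.0 - F))                  # hazard relation (eq. l1F): prints 2.0 up to rounding
```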
\[prop.defuni\] A conditional intensity function $\lambda^*(t)$ uniquely defines a point process if it satisfies the following conditions for any point pattern $(\ldots,t_1,\ldots,t_n)$ and any $t>t_n$:
1. $\lambda^*(t)$ is non-negative and integrable on any interval starting at $t_n$, and
2. $\int_{t_{n}}^t\lambda^*(s){\textup{d}}s\rightarrow\infty$ for $t\rightarrow\infty$.
The distribution of the point process is well-defined if all interevent times have well-defined densities, i.e. $f(t|{{\cal H}}_{t_n})$ should be a density function on $t\in[t_n,\infty)$, or equivalently $F(t|{{\cal H}}_{t_n})$ should be a cumulative distribution function. From the assumptions and (\[eq.Fstar\]) it follows that
- $0 \leq F(t|{{\cal H}}_{t_n}) \leq 1$,
- $F(t|{{\cal H}}_{t_n})$ is a non-decreasing function of $t$,
- $F(t|{{\cal H}}_{t_n})\rightarrow1$ for $t\rightarrow\infty$,
which means that $F(t|{{\cal H}}_{t_n})$ is a distribution function. Uniqueness follows from Proposition \[prop.fstar\], since $F(t|{{\cal H}}_{t_n})$ is uniquely obtained from $\lambda^*(t)$ using (\[eq.Fstar\]).
Note that item 2. in Proposition \[prop.defuni\] implies that the point process continues forever, a property which is often not desirable for practical use; luckily we can get rid of this assumption. If we remove it, the proof still holds except that the third item in the proof no longer applies. Now $F(t|{{\cal H}}_{t_n})\rightarrow p$ for some probability $p<1$, so we have to understand what it means when the cumulative distribution function for the interevent time does not tend to one when time tends to infinity. Basically this means that there is only probability $p$ of having one (or more) points in the rest of the process, and with probability $1-p$ the process terminates with no more points.
Consider a unit-rate Poisson process on $[0,1]$. This has conditional intensity function $\lambda^*(t)={\bf1}[t\in[0,1]]$. Thus starting at zero (with no points so far), we get that $$F(t|{{\cal H}}_0) = 1 - \exp\left(-\int_0^t{\bf1}[s\in[0,1]]{\textup{d}}s\right)
= 1 - \exp\left(-\min\{t,1\}\right),$$ where ${\bf1}[\cdot]$ denotes the indicator function. For $t>1$, this equals $1-\exp(-1)\approx0.63$, so there is a probability of about $0.37$ of having no points at all. If we do get a point, say $t_1$, there is an even smaller chance of getting another point in the remaining interval $(t_1,1]$. Another terminating unit-rate process could be a process that behaves like a Poisson process but stops after $n$ points. In this case $$F(t|{{\cal H}}_{t_i}) = (1 - \exp(-(t-t_i))) {\bf1}[i<n].$$ Both these examples illustrate that assumption 2. in Proposition \[prop.defuni\] is not necessary to get well-defined point processes.
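The termination probability in the first example can be computed directly from $F(t|{{\cal H}}_0)$ (illustrative code):

```python
import math

def F(t):
    """F(t | H_0) = 1 - exp(-min(t, 1)) for the unit-rate Poisson process on [0, 1]."""
    return 1.0 - math.exp(-min(t, 1.0))

# For t > 1 the distribution function is stuck below one, so the process
# produces no point at all with probability exp(-1):
print(1.0 - F(10.0))  # about 0.368
```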
The marked case
---------------
The conditional intensity function also generalises to the marked case, but before we get that far it is worth reminding ourselves that the mark space $\mathbb{M}$ can be many different types of spaces, often (a subset of) $\mathbb{R}$ or $\mathbb{N}$. We can specify the distribution of the mark $\kappa$ associated with the point $t$ by its conditional density function $f^*(\kappa|t)=f(\kappa|t,{{\cal H}}_{t-})$, i.e. this specifies the distribution of the mark $\kappa$ given $t$ and the history ${{\cal H}}_{t-}$, which now includes information of both times and marks of past events. Here the term density function is used in a broad sense: if the mark is a continuous random variable, this is the usual (conditional) density function, but if it is a discrete random variable, this is its (conditional) probability function. Note also that $f^*(\kappa|t)=f(\kappa|t,{{\cal H}}_{t_n})$ if $t_n$ is the last point before $t$, since the additional condition that the next point is located at $t$ means that the histories ${{\cal H}}_{t-}$ and ${{\cal H}}_{t_n}$ contain the same information.
We can now define the conditional intensity function for the marked case as $$\lambda^*(t,\kappa) = \lambda^*(t) f^*(\kappa|t),$$ where $\lambda^*(t)$ is called the [*ground intensity*]{}, and is defined exactly as the conditional intensity function for the unmarked case, except that it is allowed to depend on the marks of the past events also; note the close resemblance of this formula to $p(x,y)=p(x)p(y|x)$ for the relation between the joint, marginal and conditional distributions for random variables. Thus we can rewrite this expression as $$\lambda^*(t,\kappa) = \lambda^*(t) f^*(\kappa|t) =
\frac{f(t|{{\cal H}}_{t_n})f^*(\kappa|t)}{1-F(t|{{\cal H}}_{t_n})} =
\frac{f(t,\kappa|{{\cal H}}_{t_n})}{1-F(t|{{\cal H}}_{t_n})},$$ where $f(t,\kappa|{{\cal H}}_{t_n})$ is the joint density of the time and the mark (again the word density is used in a broad sense) conditional on past times and marks, and $F(t|{{\cal H}}_{t_n})$ is the conditional cumulative distribution function of $t$ also conditional on the past times and marks. Therefore, following the same arguments as in Section \[sec.cif\], the conditional intensity function $\lambda^*(t,\kappa)$ can now be interpreted for the case of discrete marks by $$\begin{aligned}
\lambda^*(t,\kappa){\textup{d}}t = {\mathbb{E}}[N({\textup{d}}t \times \kappa)|{{\cal H}}_t],\end{aligned}$$ that is, the mean number of points in a small time interval ${\textup{d}}t$ with the mark $\kappa$. Similarly for the continuous case, $$\begin{aligned}
\lambda^*(t,\kappa){\textup{d}}t{\textup{d}}\kappa = {\mathbb{E}}[N({\textup{d}}t \times {\textup{d}}\kappa)|{{\cal H}}_t],\end{aligned}$$ that is, the mean number of points in a small time interval ${\textup{d}}t$ with the mark in a small interval ${\textup{d}}\kappa$.
We revisit the Hawkes process from Example \[ex.haw\], now with marks:
\[ex.etas\] The ETAS (epidemic type aftershock sequence) model is a particular type of marked Hawkes process for modelling earthquake times and magnitudes. Here $\kappa_i\in[0,\infty)$ denotes the magnitude of an earthquake occurring at time $t_i$. In its simplest form the ETAS model can be defined by its ground intensity $$\lambda^*(t) = \mu +
\alpha\sum_{t_i<t}e^{\beta\kappa_i}e^{-\gamma(t-t_i)},$$ where $\alpha,\beta,\gamma>0$ are parameters, and an exponential distribution as its mark density $$f^*(\kappa|t) = \delta e^{-\delta \kappa}.$$ Equivalently we could define it by its conditional intensity function including both marks and times $$\lambda^*(t,\kappa) = \left(\mu +
\alpha\sum_{t_i<t}e^{\beta\kappa_i}e^{-\gamma(t-t_i)}\right)
\delta e^{-\delta \kappa}.$$ The idea behind using this model is that earthquakes cause aftershocks; this is reflected in the fact that every new earthquake increases the intensity by $\alpha
e^{\beta\kappa_i}$. Note that large earthquakes increase the intensity more than small earthquakes. For more on the ETAS model, see e.g. [@ogata-88; @ogata-98].
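A sketch of the ground intensity of this simple ETAS model (illustrative Python; the function name and parameter values are our own):

```python
import math

def etas_ground_intensity(t, history, mu=0.1, alpha=0.05, beta=1.0, gamma=1.0):
    """Ground intensity mu + alpha * sum_i exp(beta*kappa_i) * exp(-gamma*(t - t_i));
    history is a list of (time, magnitude) pairs."""
    return mu + alpha * sum(math.exp(beta * k) * math.exp(-gamma * (t - ti))
                            for ti, k in history if ti < t)

# A magnitude-2 earthquake raises the intensity more than a magnitude-0.5 one:
print(etas_ground_intensity(1.0, [(0.5, 2.0)]) > etas_ground_intensity(1.0, [(0.5, 0.5)]))  # True
```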
We sometimes make simplifying independence assumptions on the marks. An [*unpredictable mark*]{} is a mark that does not depend on the past (and therefore cannot be “predicted” using the information about the past, hence the term “unpredictable”). Example \[ex.etas\] has unpredictable marks, since $f^*(\kappa|t)$ does not depend on the past. An even stronger assumption is that of an [*independent mark*]{}, which means that $\kappa_i$ is independent of everything else except maybe $t_i$. Example \[ex.etas\] does not have independent marks, since the ground intensity depends on the past marks (which is just another way of saying that future events depend on the marks).
Inference
=========
There are many possibilities for estimating the parameters in a process specified by a conditional intensity function. The likelihood function for such a process has a fairly simple expression, which usually means that maximum likelihood inference or Bayesian inference are good choices.
Likelihood function
-------------------
Assume that we have observed a point pattern $(t_1,\ldots,t_n)$ on $[0,T)$ for some given $T>0$, and if we are in the marked case, also its accompanying marks $(\kappa_1,\ldots,\kappa_n)$. Furthermore, let the [*integrated conditional intensity function*]{} (or integrated ground intensity function in the marked case) be given by $$\Lambda^*(t) = \int_0^t \lambda^*(s) {\textup{d}}s.$$ Then the likelihood function is given by the following proposition.
\[prop.lik\] Given an unmarked point pattern $(t_1,\ldots,t_n)$ on an observation interval $[0,T)$, the likelihood function is given by $$L = \left( \prod_{i=1}^n \lambda^*(t_i) \right) \exp
(-\Lambda^*(T)).$$ Given a marked point pattern $((t_1,\kappa_1),\ldots,(t_n,\kappa_n))$ on $[0,T)\times\mathbb{M}$, the likelihood function is given by $$L = \left( \prod_{i=1}^n \lambda^*(t_i,\kappa_i) \right) \exp
(-\Lambda^*(T)).$$
The likelihood function is the joint density function of all the points in the observed point pattern $(t_1,\ldots,t_n)\in [0,T)$, and can therefore be factorised into the conditional densities of each point given all points before it. This yields $$\begin{aligned}
L = f(t_1|{{\cal H}}_0) f(t_2|{{\cal H}}_{t_1}) \cdots f(t_n|{{\cal H}}_{t_{n-1}})
(1-F(T|{{\cal H}}_{t_n})),
\end{aligned}$$ where the last term $(1-F(T|{{\cal H}}_{t_n}))$ appears since the unobserved point $t_{n+1}$ must appear after the end of the observation interval, and the history ${{\cal H}}_0$ contains the information that there are no events before time 0. Using (\[eq.int\]) and (\[eq.fstar\]), we get that $$\begin{aligned}
L &=& \left(\prod_{i=1}^n f(t_i|{{\cal H}}_{t_{i-1}})\right)
\frac{f(T|{{\cal H}}_{t_n})}{\lambda^*(T)}\\
&=& \left(\prod_{i=1}^n \lambda^*(t_i) \exp
\left(-\int_{t_{i-1}}^{t_i} \lambda^*(s) {\textup{d}}s \right)\right)
\exp\left(-\int_{t_n}^T \lambda^*(s) {\textup{d}}s \right)\\
&=& \left(\prod_{i=1}^n\lambda^*(t_i)\right) \exp
\left(-\int_0^T \lambda^*(s) {\textup{d}}s \right),
\end{aligned}$$ where $t_0=0$. This proves the result for the unmarked case. To obtain the result for the marked case, start by the factorisation $$\begin{aligned}
L &=& f(t_1|{{\cal H}}_{t_0})f(\kappa_1|t_1,{{\cal H}}_{t_0}) \cdots
f(t_n|{{\cal H}}_{t_{n-1}})f(\kappa_n|t_n,{{\cal H}}_{t_{n-1}})
(1-F(T|{{\cal H}}_{t_n})).
\end{aligned}$$ All the terms except the conditional mark densities $f(\kappa_i|t_i,{{\cal H}}_{t_{i-1}})=f^*(\kappa_i|t_i)$ are the same as in the unmarked case, so $$\begin{aligned}
L &=& \left(\prod_{i=1}^n f^*(\kappa_i|t_i)\right)
\left(\prod_{i=1}^n\lambda^*(t_i)\right) \exp
\left(-\int_0^T \lambda^*(s) {\textup{d}}s \right) \\
&=& \left(\prod_{i=1}^n\lambda^*(t_i,\kappa_i)\right) \exp
\left(-\int_0^T \lambda^*(s) {\textup{d}}s \right),
\end{aligned}$$ which establishes the result for the marked case.
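As an illustration of the proposition, the log-likelihood of the exponential Hawkes model of Example \[ex.haw\] can be evaluated directly, using the closed form $\Lambda^*(T)=\mu T+\alpha\sum_{t_i<T}(1-e^{-(T-t_i)})$ (illustrative code, not from the text):

```python
import math

def hawkes_loglik(events, T, mu, alpha):
    """log L = sum_i log lambda*(t_i) - Lambda*(T) for the exponential Hawkes model."""
    loglik = 0.0
    for i, t in enumerate(events):
        lam = mu + alpha * sum(math.exp(-(t - s)) for s in events[:i])
        loglik += math.log(lam)
    compensator = mu * T + alpha * sum(1.0 - math.exp(-(T - t)) for t in events)
    return loglik - compensator

# With no events the likelihood is exp(-mu*T), so log L = -mu*T:
print(hawkes_loglik([], 2.0, 0.5, 0.9))  # -1.0
```

With $\alpha=0$ the expression reduces to the homogeneous Poisson log-likelihood $n\log\mu-\mu T$, which is a convenient consistency check.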
Estimation
----------
Although Proposition \[prop.lik\] gives an explicit expression for the likelihood function, it is rarely simple enough that we can find the maximum likelihood estimate (MLE) analytically. One special case where we can find the MLE is the homogeneous Poisson process:
For the homogeneous Poisson process with intensity $\lambda^*(t) =
\lambda$ observed on an interval $[0,T)$ for some $T>0$, the likelihood simplifies to $$L = \lambda^n \exp(-\lambda T).$$ Differentiating with respect to $\lambda$ and equating to zero, we get that the MLE is given by $$\hat\lambda = \frac{n}{T}.$$ Note that this expression does not depend on the times of the points, only the total number of points. However, this is generally not true for other processes.
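The claim that $\hat\lambda=n/T$ maximises the likelihood is easy to check numerically (illustrative sketch):

```python
import math

def poisson_loglik(lam, n, T):
    """log L = n*log(lambda) - lambda*T for a homogeneous Poisson process."""
    return n * math.log(lam) - lam * T

n, T = 7, 3.5
mle = n / T  # 2.0
# The log-likelihood at the MLE beats nearby parameter values:
print(all(poisson_loglik(mle, n, T) > poisson_loglik(lam, n, T)
          for lam in (0.5 * mle, 0.9 * mle, 1.1 * mle, 2.0 * mle)))  # True
```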
For most other point processes we will require numerical methods to obtain estimates, such as Newton-Raphson for maximizing the likelihood, or Markov chain Monte Carlo for approximating the posterior in a Bayesian approach.
Simulation {#sec.sim}
==========
Simulation turns out to be fairly easy when the conditional intensity function is specified. The conditional intensity function leads to two different approaches for simulating a point process: The inverse method and Ogata’s modified thinning algorithm. Both are generalisations of similar methods for simulation of inhomogeneous Poisson processes.
Inverse method {#sec.inv}
--------------
The basic idea in the inverse method is that we simulate a unit-rate Poisson process (this is just a series of independent exponential random variables with mean one) and transform these into the desired point process using the integrated conditional intensity function. The following proposition is the key result behind this method.
\[prop.inverse\] If $(s_i)_{i\in\mathbb{Z}}$ is a unit rate Poisson process on $\mathbb{R}$, and $t_i=\Lambda^{*-1}(s_i)$, then $(t_i)_{i\in\mathbb{Z}}$ is a point process with conditional intensity function $\lambda^*(t)$.
We prove this by induction, so assume that for $i\leq n$, $s_i$ follows a unit rate Poisson process, and $t_i$ follows a point process with intensity $\lambda^*$. Now consider the next point in both processes, say $S_{n+1}$ and $T_{n+1} = \Lambda^{*-1}(S_{n+1})$. Letting $S=S_{n+1}-s_n$ follow a unit rate exponential distribution which is independent of everything else, we need to prove that $T_{n+1}$ follows a point process with intensity $\lambda^*$ or equivalently has the correct distribution function $F(\cdot|{{\cal H}}_{t_n})$. Denoting the distribution function of $T_{n+1}$ by $F_{T_{n+1}}(t|{{\cal H}}_{t_n})$, we get that $$\begin{aligned}
F_{T_{n+1}}(t|{{\cal H}}_{t_n})
&=& {\mathbb{P}}(T_{n+1} \leq t | {{\cal H}}_{t_n})\\
&=& {\mathbb{P}}(\Lambda^{*-1}(S+s_n) \leq t | {{\cal H}}_{t_n})\\
&=& {\mathbb{P}}(S \leq \Lambda^*(t)-s_n | {{\cal H}}_{t_n})\\
&=& 1 - \exp(-(\Lambda^*(t)-s_n))\\
&=& 1 - \exp(-(\Lambda^*(t)-\Lambda^*(t_n)))\\
&=& 1-\exp\left(-\int_{t_{n}}^t\lambda^*(u) {\textup{d}}u\right)\\
&=& F(t|{{\cal H}}_{t_n}),\end{aligned}$$ where we have used that $s_n=\Lambda^*(t_n)$ in the fifth equality, and (\[eq.Fstar\]) in the last one. Thus $T_{n+1}$ follows the correct distribution.
Although the point process is defined on the whole of $\mathbb{R}$ in Proposition \[prop.inverse\], this condition can be relaxed. If we instead use a Poisson process with $s_i\in[0,T]$, then we get a new point process with $t_i\in[0,\Lambda^{*-1}(T)]$, i.e. we also need to transform the final end point. This means we cannot simply simulate a Poisson process on the interval needed, since this interval changes during the transformation, so we need to simulate one exponential variable at a time, and then transform each one to see whether our simulation fills out the whole interval. The following algorithm does this.
\[algo.inv\][**(Simulation by inversion)**]{}
1. Set $t=0$, $s_0=0$, $t_0=0$ and $n=0$ (note that $t_0$ is not an event).
2. Repeat until $t>T$:
1. Generate $u\sim{\textup{Exp}}(1)$ and set $s_{n+1}=s_n+u$.
2. Calculate $t$, where $t = \Lambda^{*-1}(s_{n+1})$.
3. If $t < T$, set $n=n+1$ and $t_n=t$.
3. Output is $\{t_1,\ldots,t_n\}$.
The difficult part of this algorithm is of course calculating $t$ in step 2(b), since this requires finding the inverse of the integrated conditional intensity function. Notice that since $\lambda^*$ is non-negative, $\Lambda^*$ is non-decreasing. Strictly speaking, this means that $\Lambda^*$ may not be invertible, since it can be constant on intervals (corresponding to $\lambda^*$ being zero on these intervals). However, any point $s_i$ from the Poisson process will hit such a constant level with probability zero, so we never need to evaluate $\Lambda^{*-1}$ where it is not well-defined.
We revisit the special case of Hawkes process from Example \[ex.haw\] given by (\[eq.hawexp\]). For this we get the integrated conditional intensity function $$\Lambda^*(t) = \mu t + \alpha \sum_{t_i<t}
\left(1-e^{-(t-t_i)}\right).$$ Looking at the expression, it seems hard to solve this with respect to $t$, so an analytical expression for $\Lambda^{*-1}$ is not available, meaning we will need to approximate it when we use Algorithm \[algo.inv\]. A simple way of doing this is to calculate $\tilde s_i=\Lambda^*(\tilde t_i)$ starting at a very small value of $\tilde t_i$ and then increase $\tilde t_i$ until $s_i\approx\Lambda^*(\tilde t_i)$, and then use $t_i=\tilde t_i$.
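A sketch of the whole procedure in Python (illustrative only; we invert $\Lambda^*$ by bisection rather than the stepping search just described, and the function names and default parameters are our own):

```python
import math
import random

def Lambda(t, events, mu, alpha):
    """Integrated conditional intensity of the exponential Hawkes model."""
    return mu * t + alpha * sum(1.0 - math.exp(-(t - ti)) for ti in events if ti < t)

def simulate_hawkes_by_inversion(T, mu=0.5, alpha=0.9, seed=1):
    rng = random.Random(seed)
    events, s, t = [], 0.0, 0.0
    while True:
        s += rng.expovariate(1.0)   # next point of the unit-rate Poisson process
        lo, hi = t, t + 1.0         # bracket the solution of Lambda(t) = s
        while Lambda(hi, events, mu, alpha) < s:
            hi += 1.0
        for _ in range(60):         # bisection: Lambda is non-decreasing
            mid = 0.5 * (lo + hi)
            if Lambda(mid, events, mu, alpha) < s:
                lo = mid
            else:
                hi = mid
        t = 0.5 * (lo + hi)
        if t > T:
            return events
        events.append(t)
```

With $(\mu,\alpha)=(0.5,0.9)$ the resulting patterns are clustered, as in Figure \[fig-hawkes-process\].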
The easiest way to generalise this to the marked case is to simulate the mark associated with an event $t_i$ just after we have transformed $s_i$ to $t_i$ (notice that we have all the information that this may depend on, since we have already simulated the past events and marks).
Ogata’s modified thinning algorithm
-----------------------------------
Ogata’s modified thinning algorithm [@ogata-81] is a thinning algorithm based on simulating homogeneous Poisson processes with too high intensities and then thinning out surplus points according to the conditional intensity function. Since the conditional intensity function depends on the past, we have to do this starting in the past and following the direction of time.
The basic idea behind the algorithm is that when we are at time $t$ we need to find out where to place the next point $t_i>t$. To do this we simulate a homogeneous Poisson process on some interval $[t,t+l(t)]$ for some chosen function $l(t)$ (this is the maximum distance we may go forward in time from $t$ and it may be infinite). This Poisson process has a chosen constant intensity on $[t,t+l(t)]$, which fulfills $$\label{eq.m}
m(t)\geq\sup_{s\in[t,t+l(t)]}\lambda^*(s).$$ Actually we only need to simulate the first point of this Poisson process, say at $t+s$ where $s\sim{\textup{Exp}}(m(t))$. There are now two possibilities: if $s>l(t)$, then there is no point in $[t,t+l(t)]$, so we start again from $t+l(t)$; but if $s\leq l(t)$, there may be a point at $t+s$. In the latter case we need to figure out whether to keep this point or not. To get the correct intensity, we keep it with probability $\lambda^*(t+s)/m(t)$. Whether or not we keep it, we start all over at $t+s$.
\[algo.ogata\]([*Ogata’s modified thinning algorithm.*]{})
1. Set $t=0$ and $n=0$.
2. Repeat until $t>T$:
1. Compute $m(t)$ and $l(t)$.
2. Generate independent random variables $s \sim {\textup{Exp}}(m(t))$ and $U \sim \text{Unif}([0,1])$.
3. If $s>l(t)$, set $t = t+l(t)$.
4. Else if $t+s>T$ or $U>\lambda^*(t+s)/m(t)$, set $t = t+s$.
5. Otherwise, set $n = n+1$, $t_n = t+s$, $t = t+s$.
3. Output is $\{t_1,\ldots,t_n\}$.
\[prop.ogata\] The output of Algorithm \[algo.ogata\] is a realisation of a point process with conditional intensity function $\lambda^*(t)$.
It follows from independent thinning that this process has the right conditional intensity function (essentially the explanation above the algorithm is the proof).
In order to use the algorithm we need to choose $m(t)$ and $l(t)$, and the only requirement is that the inequality (\[eq.m\]) is fulfilled at any possible step of the algorithm. Since $$\lambda^*(t) = \mu + \alpha\sum_{t_i<t}\exp(-(t-t_i))$$ is non-increasing (except when new points appear), we can choose $m(t)=\lambda^*(s)$ at every starting point $s$ in the algorithm and any $t\geq s$, and $l(t)=\infty$. This choice can be used for any point process where $\lambda^*(t)$ only increases when new points arrive. So the Hawkes process can be simulated either by the inverse method or Ogata’s modified thinning algorithm (but in fact there are simpler methods for simulating the Hawkes process, see e.g. [@moller-rasmussen-05; @moller-rasmussen-06]).
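This choice translates directly into code. The sketch below simulates the exponential Hawkes model of Example \[ex.haw\] by thinning, with $m(t)$ taken as the intensity just after the current time and $l(t)=\infty$ (illustrative Python, names are our own):

```python
import math
import random

def lam_star(t, events, mu, alpha):
    """Conditional intensity of the exponential Hawkes model, eq. (hawexp)."""
    return mu + alpha * sum(math.exp(-(t - ti)) for ti in events if ti < t)

def simulate_hawkes_by_thinning(T, mu=0.5, alpha=0.9, seed=2):
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < T:
        # Between events lambda* is non-increasing, so its value just after t
        # (including the jump from an event at t itself) bounds it on [t, inf):
        m = mu + alpha * sum(math.exp(-(t - ti)) for ti in events)
        s = rng.expovariate(m)   # waiting time of the dominating Poisson process
        t += s
        # Keep the candidate point with probability lambda*(t) / m:
        if t < T and rng.random() <= lam_star(t, events, mu, alpha) / m:
            events.append(t)
    return events
```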
It is easy to generalise the algorithm to the marked case: every time we keep a point $t_i$ in the algorithm, we should simulate its mark from the mark distribution $f^*(\kappa_i|t_i)$ (just as for the inverse method, we have the required knowledge of the past when we need to simulate this).
Why simulate a point process? {#sec.whysim}
-----------------------------
Simulations of point processes are useful for many things:
[*What does a point pattern typically look like?*]{} Simulating a point process a couple of times for a given model and a given set of parameters will provide valuable information on what a typical point pattern looks like. Is it clustered or regular? Is it inhomogeneous or homogeneous? Does it look anything remotely like the data you are going to spend the next week fitting the model to?
[*Prediction:*]{} Given an observed past, what does the future hold? The specification of the conditional intensity function means that it is easy to include the already observed past, and then simulate the future.
[*Model checking:*]{} Prediction can also be used for model checking if we only use the data in the first half of the observation interval to fit a model, and then simulate predictions of the second half to see if this corresponds to the second half of the observed data. Or we can use all of the data, and compare with simulations of the whole dataset.
[*Summary statistics:*]{} Many quantities can be calculated explicitly from the conditional intensity function, such as the probability of getting no events in the next month or the mean time to the next event. However, particularly complicated summary statistics may not be available in closed form, but can instead be approximated by simulation. For example, the mean number of events in a given time interval may not be available in closed form for a complicated model, but we can then approximate it by the average number of points in a number of simulations.
Model checking
==============
In addition to the model checking approaches mentioned in Section \[sec.whysim\], there is a particular kind of model checking associated with the conditional intensity function known as residual analysis.
Residual analysis
-----------------
Residual analysis [@ogata-88] is a type of model checking for point processes specified by a conditional intensity function. It is based on the reverse of the transformation used in Proposition \[prop.inverse\].
\[prop.resid\] If $(t_i)_{i\in\mathbb{Z}}$ is a point process with conditional intensity function $\lambda^*(t)$, and $s_i=\Lambda^{*}(t_i)$, then $(s_i)_{i\in\mathbb{Z}}$ is a unit rate Poisson process.
This is proved in a similar manner as Proposition \[prop.inverse\].
Thus if a point pattern is a realisation of a point process with conditional intensity function $\lambda^*$, then the integrated conditional intensity function will transform the pattern into a realisation of a unit rate Poisson process. In practice this means that if we have modelled an observed point pattern with a point process, and the type of point process is well-chosen, then the transformed pattern should closely resemble a unit-rate Poisson process. In other words, the model checking boils down to checking whether the interevent times are independent exponential variables with mean one.
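For the exponential Hawkes model the transformation in Proposition \[prop.resid\] is explicit, and the residual check amounts to inspecting the transformed gaps (illustrative Python; with $\alpha=0$ the model reduces to a rate-$\mu$ Poisson process, so the residuals are simply $\mu t_i$):

```python
import math

def compensator(t, events, mu, alpha):
    """Lambda*(t) for the exponential Hawkes model."""
    return mu * t + alpha * sum(1.0 - math.exp(-(t - ti)) for ti in events if ti < t)

def residuals(events, mu, alpha):
    """Transformed times s_i = Lambda*(t_i); under a correct model these form
    a unit-rate Poisson process, i.e. the gaps are i.i.d. Exp(1) variables."""
    return [compensator(t, events, mu, alpha) for t in events]

print(residuals([1.0, 2.0, 3.0], mu=1.0, alpha=0.0))  # [1.0, 2.0, 3.0]
```

In a real analysis one would then test the gaps $s_{i+1}-s_i$ against the unit exponential distribution, e.g. by comparing their mean to one or with a quantile plot.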
If the model does not fit, residual analysis may provide important information on how it does not fit. For example, if the data contains a gap between $t_i$ and $t_{i+1}$ that is unrealistically large under the model, then the transformed data will contain a large gap between $s_i$ and $s_{i+1}$, i.e. $s_{i+1}-s_i$ will be too large to realistically come from a unit rate exponential distribution. A bit of creativity in analysing the residuals can give us all kinds of information about the original point pattern.
Concluding remarks
==================
We have now seen that the conditional intensity function is a valuable tool for point process modelling, and can be used at all stages of data analysis:
- Preliminary analysis (simulation of potential models).
- Model specification and interpretation.
- Parameter estimation (maximum likelihood or Bayesian estimation).
- Model checking (residual analysis or simulation based approaches).
- Prediction.
However, we should note that basing parameter estimation and model checking on the same functions of the data is usually considered bad practice. For example, if we fit a model using maximum likelihood estimation, we have essentially fitted the conditional intensity function as well as we can, and it should not come as a surprise if the residuals fit rather well, since they are also based on the conditional intensity function. Here it would be more appropriate to base the model checking on other aspects of the model (such as the summary statistics given for example in [@moller-waagepetersen-04]), which may not be caught so well by the conditional intensity function.
---
abstract: '[We propose a one-loop neutrino mass model with several $SU(2)_L$ multiplet fermions and scalar fields, in which the inert feature of a scalar needed to realize the one-loop neutrino mass is achieved by cancellation among Higgs couplings, thanks to non-trivial terms in the Higgs potential, and we present it in a simple way.]{} Then we discuss our typical cut-off scale by computing the renormalization group equation for the $SU(2)_L$ gauge coupling, lepton flavor violations, the muon anomalous magnetic moment, a possible dark matter candidate, and a neutrino mass matrix satisfying the neutrino oscillation data. Finally, we search for the allowed parameter region satisfying all the constraints, and discuss the possibility of detecting new charged particles at the Large Hadron Collider.'
author:
- Takaaki Nomura
- Hiroshi Okada
title: ' A one-loop neutrino mass model with $SU(2)_L$ multiplet fields '
---
[KIAS-P18115]{}, APCTP Pre2018 - 017
Introduction
=============
Radiatively induced neutrino mass models are one of the promising candidates to realize tiny neutrino masses with natural parameter spaces at the TeV scale and to provide a dark matter (DM) candidate, both of which cannot be explained within the standard model (SM). In order to build such a radiative model, an inert scalar boson plays an important role, and its inert feature can frequently be realized by imposing an additional symmetry such as a $Z_2$ symmetry [@Ma:2006km; @Krauss:2002px; @Aoki:2008av; @Gustafsson:2012vj] and/or a $U(1)$ symmetry [@Okada:2012np; @Kajiyama:2013zla; @Kajiyama:2013rla], which also plays a role in stabilizing the DM. On the other hand, once we introduce large $SU(2)_L$ multiplet fields such as quartet [@Nomura:2018ktz; @Nomura:2018ibs], quintet [@Nomura:2018lsx; @Nomura:2018cle], or septet fields [@Nomura:2018cfu; @Nomura:2017abu; @Nomura:2016jnl], we can sometimes avoid imposing additional symmetries [@Anamiati:2018cuq; @Cirelli:2005uq]. Then, the stability originates from a remnant symmetry after the spontaneous electroweak symmetry breaking due to the largeness of these multiplets. In addition, the cut-off scale of a model is determined by the renormalization group equations (RGEs) of the $SU(2)_L$ gauge coupling, implying that the theory can be within the TeV scale, depending on the number of multiplet fields. Thus good testability could be provided in such a scenario.
[ Then, using large $SU(2)_L$ multiplet fields, we would like to realize one-loop neutrino mass generation by an inert scalar field without imposing an additional symmetry such as $Z_2$. In this case a scalar quintet $H_5$ is the minimal choice for the inert multiplet, since a scalar multiplet smaller than a quintet easily develops a vacuum expectation value (VEV) through a renormalizable interaction with the SM Higgs field $H$, like $H_4 H H H$ for the quadruplet $H_4$. In addition we need a quadruplet fermion $\psi_4$ to couple $H_5$ to the SM lepton doublet, and a septet scalar $H_7$ is also required to obtain a Majorana mass term from $\psi_4$ by its VEV (a Higgs triplet is also possible but it allows the type-II seesaw mechanism [@Magg:1980ut; @Konetschny:1977bn]). We find that the scalar quadruplet $H_4$ is needed to realize a vacuum configuration in which the VEV of $H_5$ is zero; in addition we can avoid a dangerous massless Goldstone boson from the scalar multiplets by non-trivial terms with these multiplets. Although the number of exotic fields is smaller in other one-loop neutrino mass models such as the scotogenic model [@Ma:2006km], they usually require an additional discrete symmetry such as $Z_2$. We show the realization of a one-loop neutrino mass without additional symmetry, which results in the introduction of several exotic multiplets. ]{}
In this letter, we introduce several multiplet fermions and scalar fields under the $SU(2)_L$ gauge symmetry. As a direct consequence of the multiplet fields, our cut-off scale is of the order of 10 PeV, which could be tested by current or future experiments. In our model we do not impose an additional symmetry, and we search for a possible solution to obtain the inert condition for generating the neutrino mass at loop level. The required inert feature can then be realized not via a remnant symmetry but via cancellations among couplings in our scalar potential, thanks to several non-trivial couplings [@Okada:2015bxa]. In such a case the DM could generally decay into SM particles, but we can control some parameters so as to avoid a too short lifetime without requiring too small couplings. Therefore our DM is a long-lived particle, which represents a clear difference from scenarios where the stability of DM is due to an additional or remnant symmetry. We also discuss lepton flavor violations (LFVs) and the anomalous magnetic moment (muon $g-2$), search for the allowed parameter region satisfying all the constraints such as neutrino oscillation data, LFVs, and the DM relic density, and demonstrate the possibility of detecting new charged particles at the Large Hadron Collider (LHC).
This letter is organized as follows. In Sec. II, [we review our model and formulate the Higgs sector and the neutral fermion sector including active neutrinos. Then we discuss the RGE of the $SU(2)_L$ gauge coupling, LFVs, muon $g-2$, and our DM candidate. In Sec. III, we explore the allowed region satisfying all the constraints, and discuss the production of our new fields (especially charged bosons) at the LHC. In Sec. IV, we summarize our results and conclude.]{}
Model setup and Constraints
===========================
$L_L^a$ $e_R^a$ $\psi^a$ $H_2$ $H_4$ $H_5$ $H_7$
----------- ------------ ----------- ---------------- ----------- ----------- ---------- ----------
$SU(2)_L$ $\bm{2}$ $\bm{1}$ $\bm{4}$ $\bm{2}$ $\bm{4}$ $\bm{5}$ $\bm{7}$
$U(1)_Y$ -$\frac12$ -$1$ [-$\frac12$]{} $\frac12$ $\frac12$ [$0$]{} [$1$]{}
: Charge assignments of our lepton and scalar fields under $SU(2)_L\times U(1)_Y$, where the upper index $a$ is the family index running over $1$-$3$; all fields are singlets under $SU(3)_C$. []{data-label="tab:1"}
In this section we formulate our model. In the fermion sector, we introduce three families of vector-like fermions $\psi$ with $(\bm{4},{-1/2})$ charge under the $SU(2)_L\times U(1)_Y$ gauge symmetry. In the scalar sector, we add an $SU(2)_L$ quadruplet ($H_4$), quintet ($H_5$), and septet ($H_7$) of complex scalar fields with [$(1/2,0,1)$]{} charge, respectively, under the $U(1)_Y$ gauge symmetry, in addition to the SM-like Higgs denoted by $H_2$; the quintet $H_5$ is expected to be an inert scalar field. We write the nonzero vacuum expectation values [(VEVs)]{} of $H_2$, $H_4$, and $H_7$ as $\langle H_2\rangle\equiv v_2/\sqrt2$, $\langle H_4\rangle\equiv v_4/\sqrt2$, and $\langle H_7\rangle\equiv v_7/\sqrt2$, respectively, which induce the spontaneous electroweak symmetry breaking. All the field contents and their assignments are summarized in Table \[tab:1\]; the quark sector is exactly the same as in the SM. The renormalizable Yukawa Lagrangian under these symmetries is given by $$\begin{aligned}
-{\cal L_\ell}
& = y_{\ell_{aa}} \bar L^a_L H_2 e^a_R + f_{ab} [ \bar L^a_L H_5 (\psi_R)^b ]
+ g_{L_{aa}} [(\bar\psi^c_L)^a H_7 \psi^a_L]
+ g_{R_{aa}} [(\bar\psi^c_R)^a H_7\psi^a_R] {\nonumber}\\
&+M_{D_{aa}} \bar \psi^a_R \psi_L^a + {\rm h.c.}, \label{Eq:yuk}
\end{aligned}$$ where the $SU(2)_L$ indices are omitted, assuming they are contracted into a gauge-invariant combination inside the brackets \[$\cdots$\], and the upper indices $(a,b)=1$-$3$ label the families. The matrix $y_\ell$ and either of $g_{L/R}$ or $M_D$ can be taken to be diagonal with real parameters without loss of generality; here we assume both $g_{L/R}$ and $M_D$ to be diagonal for simplicity. The charged-lepton mass matrix is then $m_\ell=y_\ell v_2/\sqrt2$. We assign lepton number $1$ to $\psi$, so that, requiring lepton number conservation at high scales, the only source of lepton number violation is the terms with the couplings $g_{L}$ and $g_{R}$.
Scalar sector
-------------
: The scalar potential in our model is given by [$$\begin{aligned}
{\cal V} = & - M_2^2 H_2^\dagger H_2 + M_4^2 H_4^\dagger H_4 + M_7^2 H_7^\dagger H_7 + \lambda_{H} (H_2^\dagger H_2)^2 {\nonumber}\\
&+ \mu_H^2 [H_5^2] + \mu_1 [H_2 \tilde H_4 H_5] + \mu_2 [H_4^T \tilde H_7 H_4]+ \lambda_0 [H_2^T H_2 H_5 H_7^*] {\nonumber}\\
& + \lambda_1 [H_2 H_4 H_5 \tilde H_7] + \lambda_2 [H_2^\dag H_2 H_4^\dag H_2] +{\rm h.c.} + V_{tri},
\label{Eq:potential}\end{aligned}$$ where $V_{tri}$ denotes the trivial quartic terms containing $H_{4,5,7}$. ]{} From the conditions $\partial {\cal V}/\partial v_5 = 0$ and $\langle H_5\rangle=0$, we find the following relation: $$\begin{aligned}
v_4 =\frac{3\sqrt{10} v_7 v_2 \lambda_0}{\sqrt{30}v_7\lambda_1+15 \mu_1} \label{eq:cond1}.\end{aligned}$$ [Then, the stationary conditions for $H_2$, $H_4$, and $H_7$ lead to the following equations: $$\begin{aligned}
v_2 = \frac{3}{8} \left( \frac{\lambda_2}{\lambda_H} v_4 + \sqrt{\frac{\lambda_2^2}{\lambda_H^2}v_4^2 + \frac{64 M_2^2}{9 \lambda_H} } \right), \quad
v_4 =\frac{5v_2^3 \lambda_2 }{2\sqrt{3}(10 M^2_4 +\sqrt{30} \mu_2)}, \quad
v_7 = -\sqrt{\frac{3}{10}}\frac{v_4^2 \mu_2 }{2 M^2_7}, \label{eq:cond2}\end{aligned}$$ where we have ignored contributions from terms in $V_{tri}$, assuming the corresponding couplings are negligibly small; we can always find a solution satisfying the inert condition even when such terms are included.]{} Solving Eqs.(\[eq:cond1\]) and (\[eq:cond2\]), one can rewrite the VEVs and one parameter in terms of the other parameters. In addition to the above conditions, we also need to consider the constraint from the $\rho$ parameter, which is given by the following relation at tree level: $$\begin{aligned}
\rho\approx \frac{v_2^2+\frac{11}2 v_4^2+22v_7^2}{v_2^2 + v_4^2 + 4v_7^2},\end{aligned}$$ where the experimental value is $\rho=1.0004^{+0.0003}_{-0.0004}$ at the 2$\sigma$ confidence level [@pdg]. Then we have, e.g., the solution $(v_2,v_4,v_7)\approx(246, 2.18, 1.03)$ GeV, where $\sqrt{v_2^2 + v_4^2 + 4v_7^2}\approx 246$ GeV.
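As a quick cross-check of the quoted solution, the tree-level formula above can be evaluated directly; this is only a numerical sketch of the equations in the text (the 2$\sigma$ band is the PDG value cited above):

```python
import math

def rho(v2, v4, v7):
    """Tree-level rho parameter for the doublet + quadruplet + septet VEVs (GeV)."""
    return (v2**2 + 5.5 * v4**2 + 22.0 * v7**2) / (v2**2 + v4**2 + 4.0 * v7**2)

v2, v4, v7 = 246.0, 2.18, 1.03      # the reference solution quoted in the text
r = rho(v2, v4, v7)
v_ew = math.sqrt(v2**2 + v4**2 + 4.0 * v7**2)   # electroweak VEV combination
print(r, v_ew)
```

The quadruplet and septet VEVs of a few GeV keep $\rho$ just inside the upper edge of the 2$\sigma$ band while the VEV combination stays at 246 GeV.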
: The scalars and fermions in large $SU(2)_L$ multiplets provide exotic charged particles. Here we write the components of the multiplets as $$\begin{aligned}
& H_5 = (\phi_5^{++}, \phi_5^{+}, \phi_5^{0}, \phi'^{-}_5, \phi'^{--}_5)^T, \label{eq:H5} \\
& \psi_{L(R)} = (\psi^{+}, \psi^{0}, \psi'^{-}, \psi^{--})^T_{L(R)}, \label{eq:psiLR} \\
& \Sigma_R = (\Sigma^{++}, \Sigma^{+}, \Sigma^{0}, \Sigma'^{-}, \Sigma'^{--})_R^T. \label{eq:sigmaR} \end{aligned}$$ The masses of the components in $H_5$ are given by $\sim M_5$, where the charged particles in the same multiplet have degenerate masses at tree level that are shifted at loop level [@Cirelli:2005uq]. For the charged fermions, the components from $\psi_{L(R)}$ and $\Sigma_R$ can mix after electroweak symmetry breaking via the Yukawa couplings. If the Yukawa couplings are negligibly small, the charged components in $\psi_{L(R)}$ have Dirac mass $M_D$ while the charged components in $\Sigma_R$ have Dirac mass $M_\Sigma$, where the mass terms are constructed from pairs of positively and negatively charged components in the multiplet. Note that the mass terms of the neutral components are discussed with the neutrino sector below.
![Feynman diagram to generate the masses of $\mu_{L/R}$.[]{data-label="fig:mu_mass"}](mu_mass.eps){width="5.0cm"}
Neutral fermion masses
----------------------
: After the spontaneous electroweak symmetry breaking, the extra neutral fermion mass matrix in the basis $\Psi^0_R\equiv (\psi^0_R,\psi_L^{0c})^T$ is given by $$\begin{aligned}
M_N
&=
\left[\begin{array}{cc}
\mu_R & M_D^T \\
M_D & \mu_L \\
\end{array}\right],\end{aligned}$$ where $\mu_{R}\equiv \sqrt{\frac{3}{10}}g_{R} v_7$ and $\mu_{L}\equiv \sqrt{\frac{3}{10}}g^*_{L} v_7$. Since we suppose the hierarchy of mass parameters $\mu_{L/R}\ll M_D$, the mixing is expected to be maximal. Thus, we write the mass eigenstates in terms of the flavor eigenstates as follows: $$\begin{aligned}
\psi^0_R=\frac{i}{\sqrt2} \psi_{1_R} - \frac{i}{\sqrt2} \psi_{2_L}^{c},\quad
\psi^{0c}_L=\frac{1}{\sqrt2} \psi_{1_R} + \frac{1}{\sqrt2} \psi_{2_L}^{c},\end{aligned}$$ where $\psi_{1_R}$ and $\psi^c_{2_L}$ represent the mass eigenstates, and their masses are respectively given by $M_a\equiv M_D- (\mu_R+\mu_L)/2$ ($a=1$-$3$) and $M_b\equiv M_D + (\mu_R+\mu_L)/2$ ($b=4$-$6$).
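The quoted eigenvalues and the maximal mixing can be verified numerically for a single family; the values of $\mu_{L,R}$ below are illustrative, chosen only to respect the hierarchy $\mu_{L/R}\ll M_D$:

```python
import numpy as np

MD, muL, muR = 2400.0, 1.0, 1.5      # GeV; illustrative, with mu_{L/R} << M_D

MN = np.array([[muR, MD],
               [MD, muL]])           # basis (psi0_R, psi0_L^c)

# Majorana masses are the absolute eigenvalues of the symmetric matrix;
# maximal mixing means every eigenvector entry has magnitude ~1/sqrt(2).
eigval, vecs = np.linalg.eigh(MN)
masses = np.sort(np.abs(eigval))
expected = [MD - (muR + muL) / 2, MD + (muR + muL) / 2]
print(masses, expected, np.abs(vecs))
```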
: In our scenario, the active neutrino mass is induced at one-loop level, where $\psi_{1,2}$ and $H_5$ propagate inside the loop diagram shown in Fig. \[fig:diagram\], and the masses of the real and imaginary parts of the electrically neutral component of $H_5$ are denoted by $m_R$ and $m_I$, respectively. As a result, the active neutrino mass matrix is obtained as $$\begin{aligned}
m_\nu = \sum_{\alpha=1}^6 \frac{f_{i\alpha} M_\alpha f^T_{\alpha j} }{8(4\pi)^2}
\left[
\frac{r_R^\alpha \ln r^\alpha_R}{1-r^\alpha_R}
-
\frac{r_I^\alpha \ln r^\alpha_I}{1-r^\alpha_I}
\right],\end{aligned}$$ where $r^{\alpha}_{R/I} \equiv\frac{m^2_{R/I}}{M_\alpha^2}$. Neutrino mass eigenvalues ($D_\nu$) are given by $D_\nu=U_{\rm MNS} m_\nu U^T_{\rm MNS}$, where $U_{\rm MNS}$ is the MNS matrix. Once we define $m_{\nu} \equiv f {\cal M} f^T$, one can rewrite $f$ in terms of the other parameters [@Casas:2001sr; @Chiang:2017tai] as follows: $$\begin{aligned}
f_{ik}=\sum_{\alpha=1}^6 U^\dag_{ij} \sqrt{D_{\nu_{jj}}} O_{j\alpha} \sqrt{{\cal M}_{\alpha\alpha}} V^*_{\alpha k},\end{aligned}$$ where $O$ is an arbitrary three-by-six matrix satisfying $OO^T=1$, and $|f|\lesssim \sqrt{4\pi}$ is imposed so as not to exceed the perturbative limit.
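To see the size of the Yukawa couplings this formula implies, one can invert the one-loop mass formula for a single flavor; the masses below are the reference values used later in the text (fermion mass 2.4 TeV, $m_I = 1.1\,m_R$), and the resulting $f$ is only an order-of-magnitude estimate that ignores the flavor structure and any enhancement from large imaginary parts of the $O$ matrix:

```python
import math

def F(r):
    """Loop function r ln(r) / (1 - r) appearing in the neutrino mass formula."""
    return r * math.log(r) / (1.0 - r)

M, mR = 2400.0, 5000.0           # GeV: fermion mass (DM-motivated) and m_R
mI = 1.1 * mR                    # m_I = 1.1 m_R, as in the numerical section
loop = (F(mR**2 / M**2) - F(mI**2 / M**2)) / (8.0 * (4.0 * math.pi)**2)

m_nu = 0.05e-9                   # GeV, atmospheric mass scale ~0.05 eV
f = math.sqrt(m_nu / (M * loop)) # single-flavor estimate of the Yukawa size
print(loop, f)
```

The small splitting between $m_R$ and $m_I$ suppresses the loop factor, so an atmospheric-scale mass is reproduced with $f$ of order $10^{-5}$ for these inputs.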
![The diagram inducing active neutrino mass.[]{data-label="fig:diagram"}](diagram.eps){width="10cm"}
Charged fermion masses
----------------------
: The singly-charged fermion mass matrix, in the basis $\Psi^-_R \equiv (\psi^{-}_R (\equiv (\psi_L^{+})^c ),\psi'^-_R,\Sigma'^-_R)^T$ and $\Psi^-_L \equiv (\psi^{-}_L ,\psi'^-_L,\Sigma^-_L (\equiv (\Sigma_R^+)^c ))^T$, is given by $$\begin{aligned}
L_{M_\pm} = \bar \Psi^-_L M_\pm \Psi^-_R, \quad
M_\pm
&=
\left[\begin{array}{ccc}
M_D^T & 0 & -\frac12 m' \\
0 & M_D & \frac{\sqrt3}{2}m \\
-\frac12 m'^T & \frac{\sqrt3}2 m^T & \frac12(M_\Sigma+ M^T_{\Sigma})\\
\end{array}\right].\end{aligned}$$ When $M_\pm$ is symmetric, $M_\pm$ and $\Psi^\pm_{L(R)}$ are respectively rotated by the unitary matrix as $$\begin{aligned}
\Psi^\pm_{L(R)}=V^T_C \psi^\pm_{{L(R)}_{1-9}},\quad D_\pm\equiv{\rm diag}(M_{C_1},...,M_{C_{9}})=V_C M_\pm V_C^T,\end{aligned}$$ where $\psi^\pm_{R_{1-9}}$ and $D_\pm$ are respectively mass eigenvectors and mass eigenvalues of Dirac type.
: The doubly-charged fermion mass matrix, in the basis $\Psi^{--}_R \equiv ( \psi_R^{--},\Sigma'^{--}_R)^T$ and $\Psi^{--}_L \equiv ( \psi_L^{--} ,\Sigma^{--}_L (\equiv (\Sigma_R^{++})^c ))^T$, is given by $$\begin{aligned}
L_{M_{\pm \pm}} = \bar \Psi_L^{--} M_{\pm \pm} \Psi^{--}_R, \quad
M_{\pm\pm} =
\left[\begin{array}{cc}
M_D & m \\
m'^T & \frac12 (M_\Sigma + M^T_\Sigma) \\
\end{array}\right].\end{aligned}$$ When $M_{\pm\pm}$ is symmetric, $M_{\pm\pm}$ and $\Psi^{\pm\pm}_{L(R)}$ are respectively rotated by the unitary matrix as $$\begin{aligned}
\Psi^{\pm\pm}_{L(R)}=V^T_{CC} \psi_{{L(R)}_{1-6}}^{\pm\pm},\quad D_{\pm \pm} \equiv{\rm diag}(M_{CC_1},...,M_{CC_{6}})=V_{CC} M_{\pm\pm} V_{CC}^T,\end{aligned}$$ where $ \psi_{{L(R)}_{1-6}}^{\pm\pm}$ and $D_{\pm\pm}$ are respectively mass eigenvectors and mass eigenvalues of Dirac type.
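For real parameters, the rotation $V_{CC}$ is simply the orthogonal diagonalization of a symmetric matrix; a toy two-by-two sketch with illustrative mass entries (the general complex case would require a Takagi decomposition instead):

```python
import numpy as np

# Toy real symmetric doubly charged mass matrix, one family per block.
# The entries are illustrative, not fitted values.
MD, MSigma, m = 2400.0, 3000.0, 50.0
Mpp = np.array([[MD, m],
                [m, MSigma]])

eigval, Q = np.linalg.eigh(Mpp)   # Mpp = Q @ diag(eigval) @ Q.T
VCC = Q.T                          # rotation such that D = VCC Mpp VCC^T
D = VCC @ Mpp @ VCC.T              # diagonal up to numerical precision
print(np.diag(D))
```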
Analysis of other phenomenological formulas
-------------------------------------------
\[beta-func\] Here we estimate the running of the gauge coupling $g_2$ in the presence of the several new $SU(2)_L$ multiplet fields. The new contributions to the beta function of $g_2$ from the fermions (with three families) and from the bosons are respectively given by [@Nomura:2017abu; @Kanemura:2015bli] $$\begin{aligned}
\Delta b^{f}_{g_2}=\frac{10}{3}, \ \Delta b^{b}_{g_2}=\frac{43}{3} .\end{aligned}$$ The resulting flow of ${g_2}(\mu)$ is shown in Fig. \[fig:rge\]. The figure shows that the theory is valid up to the mass scale $\mu={\cal O}(1)$ PeV in the case of $m_{th}=$0.5 TeV (red line), while it is valid up to $\mu={\cal O}(10)$ PeV in the case of $m_{th}=$5 TeV (blue line).
![The running of $g_2$ in terms of a reference energy of $\mu$, where the red line corresponds to $m_{th}=$0.5 TeV, while the blue one does $m_{th}=$5 TeV. []{data-label="fig:rge"}](rge.eps){width="10.0cm"}
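The one-loop trajectory behind Fig. \[fig:rge\] can be sketched analytically: $1/g_2^2$ runs linearly in $\ln\mu$, with the SM coefficient $b=-19/6$ replaced by $b_{\rm SM}+\Delta b^f+\Delta b^b$ above a single common threshold $m_{th}$. The starting value $g_2(m_{th})\approx 0.65$ and the common threshold are simplifying assumptions of this sketch:

```python
import math

b_SM = -19.0 / 6.0                 # SM one-loop SU(2)_L coefficient
db = 10.0 / 3.0 + 43.0 / 3.0       # new fermion + boson contributions
b = b_SM + db                      # total coefficient above the threshold

def g2(mu, m_th=5000.0, g0=0.65):
    """One-loop g2(mu) above a common threshold m_th (GeV), with g2(m_th) = g0."""
    inv = 1.0 / g0**2 - b / (8.0 * math.pi**2) * math.log(mu / m_th)
    return 1.0 / math.sqrt(inv)

# Around 10 PeV (1e7 GeV) the coupling has grown to O(1), signalling the
# approach to the non-perturbative regime that sets the cut-off scale.
print(g2(1.0e7))
```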
LFV decays $\ell_i \to \ell_j \gamma$ arise from the terms associated with the coupling $f$ at one-loop level, and the branching ratio is given by [@Lindner:2016bgg; @Baek:2016kud] $$\begin{aligned}
{\rm BR}(\ell_i\to\ell_j\gamma)= \frac{48\pi^3\alpha_{\rm em} C_{ij} }{{\rm G_F^2} m_{\ell_i}^2}\left(|a_{R_{ij}}|^2+|a_{L_{ij}}|^2\right),
\end{aligned}$$ where $$\begin{aligned}
a_{R_{ij}} &=\sum_{\alpha=1}^3
\frac{f_{j\alpha} m_{\ell_i} f^\dag_{\alpha i}} {(4\pi)^2}
\left[-\frac1{12} G(m_{a},M_{\pm_\alpha}) +G(M_\alpha,m_\pm)+G(M_{3+\alpha},m_\pm)
\right.{\nonumber}\\
&+ \left. \frac14\left[ 2 G(M_{\pm_\alpha}, m_{\pm\pm}) + G(m_{\pm\pm}, M_{\pm_\alpha}) \right]
- G(M_{\pm\pm_\alpha}, m_{\pm}) - 2 G(m_{\pm}, M_{\pm\pm_\alpha}) \right],\label{eq:amu}\end{aligned}$$ and $$\begin{aligned}
&G(m_a,m_b)\equiv \int_0^1dx\int_0^{1-x}dy\frac{xy}{(x^2-x)m^2_{\ell_i} +x m_a^2+(1-x) m^2_b},\end{aligned}$$ where $a_L=a_R(m_{\ell_i}\to m_{\ell_j})$.
(muon $g-2$: $\Delta a_\mu$) : We obtain $\Delta a_\mu$ from the same diagrams as for the LFVs, and it is given by the following expression: $$\begin{aligned}
&\Delta a_\mu \approx -m_\mu [{a_{L_{\mu\mu}}+a_{R_{\mu\mu}}}]
= -2m_\mu{a_{L_{\mu\mu}}}, \label{eq:G2-ZP}\end{aligned}$$ [where $a_{L_{\mu \mu}} = a_{R_{\mu \mu}}$ has been applied. In Eq. (\[eq:amu\]), one finds that the first term and the last two terms provide positive contributions, while the other terms give negative contributions. When the mediator masses take a common value for all the modes, $(m\equiv)\ m_a=m_\pm=m_{\pm\pm}=M_{\pm}=M_{\pm\pm}$, the formula for $a_R$ simplifies to $$\begin{aligned}
a_{R_{ij}} &\approx -\frac13\sum_{\alpha=1}^3
\frac{f_{j\alpha} m_{\ell_i} f^\dag_{\alpha i}} {(4\pi)^2} G(m,m).
\end{aligned}$$ Thus one would have a positive contribution to the muon $g-2$, and we use the allowed range $\Delta a_\mu= (26.1\pm8.0)\times 10^{-10}$ in our numerical analysis below. ]{}
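With the common-mass simplification, the size of $\Delta a_\mu$ found in the numerical section can be reproduced directly; the loop integral $G$ is evaluated by a crude midpoint Riemann sum, the mediator mass of 4 TeV is illustrative, and $f=\sqrt{4\pi}$ is the extreme perturbative value:

```python
import math

m_mu = 0.10566               # GeV, muon mass

def G(ma, mb, ml=m_mu, N=400):
    """Midpoint Riemann-sum evaluation of the Feynman-parameter integral G(ma, mb)."""
    total, h = 0.0, 1.0 / N
    for i in range(N):
        x = (i + 0.5) * h
        ny = int((1.0 - x) / h)          # number of y midpoints in 0 < y < 1 - x
        for j in range(ny):
            y = (j + 0.5) * h
            total += x * y / ((x * x - x) * ml**2 + x * ma**2 + (1 - x) * mb**2)
    return total * h * h

m = 4000.0                   # GeV, common mediator mass (illustrative)
f = math.sqrt(4 * math.pi)   # extreme perturbative value of the Yukawa coupling
a_R = -f**2 * m_mu * G(m, m) / (3 * (4 * math.pi)**2)
delta_a_mu = -2 * m_mu * a_R
print(delta_a_mu)
```

For heavy mediators $G(m,m)$ approaches $1/(24\,m^2)$, and even the extreme coupling gives $\Delta a_\mu$ of order $10^{-12}$, three orders of magnitude below the measured anomaly, as found in the numerical analysis.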
: Interactions among the SM Higgs field and the large multiplet scalars affect the branching ratio of the $h \to \gamma \gamma$ process via charged scalar loops. Here we write the relevant interactions as $$\mathcal{V} \supset \sum_{\Phi = H_4, H_5, H_7} \lambda_{H \Phi} (H_2^\dagger H_2)(\Phi^\dagger \Phi) \supset \sum_{\Phi = H_4, H_5, H_7} \lambda_{H \Phi} v_2 h (\Phi^\dagger \Phi),$$ where $\Phi^\dagger \Phi$ provides the sum of the charged scalar bilinear terms. Then we obtain the decay width of $h \to \gamma \gamma$ at one-loop level as [@Gunion:1989we] $$\Gamma_{h \to \gamma \gamma} \simeq \frac{\alpha_{em}^2 m_h^3}{256 \pi^3} \left| \frac{4}{3 v_2} A_{1/2}(\tau_t) + \frac{1}{v_2} A_1(\tau_W) + \sum_\Phi \sum_{\Phi_i} Q_{\Phi_i}^2 \frac{\lambda_{H \Phi}}{2 m_\Phi^2} A_0(\tau_\Phi) \right|^2,$$ where $\Phi_i$ denotes the components of the multiplet $\Phi$, $Q_{\Phi_i}$ is its electric charge, and $\tau_f = 4 m_f^2/m_h^2$. The loop functions are given by $$\begin{aligned}
& A_0 (x) = -x^2[x^{-1} - [\sin^{-1} (1/\sqrt{x})]^2], \\
& A_{1/2} (x) = 2 x^2[x^{-1} + (x^{-1} -1 )[\sin^{-1} (1/\sqrt{x})]^2], \\
& A_1 (x) = -x^2[2 x^{-2} + 3 x^{-1} + 3(2 x^{-1}-1) [\sin^{-1} (1/\sqrt{x})]^2],\end{aligned}$$ where $x \geq 1$ is assumed and the subscripts of $A_{0,1/2,1}(x)$ correspond to the spin of the particle in the loop diagram. We then estimate $\mu_{\gamma \gamma} \equiv BR(h \to \gamma \gamma)_{\rm SM+ exotic}/BR(h \to \gamma \gamma)_{\rm SM}$, assuming that the Higgs production cross section is the same as in the SM. In Fig. \[fig:diphoton\], we show $\mu_{\gamma \gamma}$ as a function of $\lambda_{H \Phi}$, assuming a common value for $\Phi = (H_4, H_5, H_7)$ with the corresponding multiplet masses $(1, 5, 1)$ TeV. The value of $\mu_{\gamma \gamma}$ is constrained by the current LHC data [@ATLAS:2018doi; @Sirunyan:2018koj], and we indicate the $1 \sigma$ region in the plot. We thus find that $|\lambda_{H \Phi}|$ is required to be less than around $1$ for TeV-scale scalar masses.
![$\mu_{\gamma \gamma} \equiv BR(h \to \gamma \gamma)_{\rm SM+ exotic}/BR(h \to \gamma \gamma)_{\rm SM}$ as a function of $\lambda_{H \Phi}$ assuming they are same value for $\Phi = H_4, H_5, H_7$ and masses of corresponding multiplets are $(1, 5, 1)$ TeV. The shaded region is $1 \sigma$ region from the LHC data [@ATLAS:2018doi].[]{data-label="fig:diphoton"}](diphoton.eps){width="10cm"}
: In our case, the lightest neutral fermion among $\psi_{1,2}$ can be a DM candidate, which comes from the $SU(2)_L$ quadruplet field with $-1/2$ charge under $U(1)_Y$. [Here we first require that higher-dimensional operators inducing the decay of the DM are not generated by the physics above the cut-off scale, so that the decay of DM can only be induced via the renormalizable Lagrangian of the model. Assuming that the dominant contribution to the relic density originates from the gauge interactions in the kinetic terms, the typical mass range is $M_{DM} \gtrsim 2.4$ TeV, where $M_{DM} = 2.4 \pm 0.06$ TeV is estimated by a perturbative calculation [@Cirelli:2005uq], and a heavier mass is required once the non-perturbative Sommerfeld enhancement effect is included [@Cirelli:2007xd]. The typical spin-independent cross section for DM-nucleon scattering via the $Z$ portal is then around $1.6\times10^{-45}$ cm$^2$ [@Cirelli:2005uq] for $M_{DM} \simeq 2.4$ TeV, which marginally satisfies the current data of direct detection searches such as LUX [@Akerib:2016vxi], XENON1T [@Aprile:2017iyp], and PandaX-II [@Cui:2017nnn]; the direct detection constraint is weaker for heavier DM masses. In the numerical analysis below, we fix the DM mass to 2.4 TeV as a reference value for simplicity. ]{} One feature of our model is the possible instability of DM, since we do not impose any additional symmetry at the TeV scale. We thus have to estimate the decay of DM so that its lifetime $\tau_{DM}=\Gamma^{-1}_{DM}$ does not fall below the age of the universe, which is around $4.35\times 10^{17}$ seconds. The main decay channel arises from the interactions associated with the couplings $f$ and $\lambda_0$ when we neglect the mixing among the neutral bosons. Then the three-body decay rate $\Gamma(DM\to \nu_i h_{} h_{})$ via the neutral component of $H_5$ is given by $$\begin{aligned}
\Gamma(DM\to\nu_i h_{}h_{})\approx \frac{\lambda_0^2 |f_{1i}|^2 M_{DM}^3 v_7^2 }{7680 m_R^4\pi^3}\lesssim
\frac{\lambda_0^2 |{\rm Max}[f_{1i}]|^2 M_{DM}^3 v_7^2 }{7680 m_R^4\pi^3},\end{aligned}$$ where we assume the final states to be massless and $m_R\approx m_I$; $M_{DM}$ is the DM mass and $h$ is the SM Higgs boson. In the numerical analysis, we will estimate the lifetime and show the allowed region, taking the maximum value of $|f_{1a}|$. [^1]
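The width can be converted to a lifetime in seconds via $\hbar$; the Yukawa value $f=10^{-5}$ below is purely illustrative (the actual analysis uses the maximum $|f_{1a}|$ from the neutrino fit), while the other inputs are the reference values of the text:

```python
import math

hbar = 6.582e-25             # GeV * s
age_universe = 4.35e17       # s

def gamma_dm(lam0, f, M_dm, v7, mR):
    """Three-body width DM -> nu h h in GeV, with massless final states."""
    return lam0**2 * f**2 * M_dm**3 * v7**2 / (7680.0 * math.pi**3 * mR**4)

width = gamma_dm(lam0=1e-9, f=1e-5, M_dm=2400.0, v7=1.03, mR=1e5)
tau = hbar / width           # lifetime in seconds
print(tau, tau > age_universe)
```

The strong $m_R^{-4}$ suppression is what allows a cosmologically long-lived DM for small $\lambda_0$ and heavy $m_R$.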
Numerical analysis and phenomenology
====================================
Here we carry out a numerical analysis to check the consistency of our model with the constraints discussed in the previous section. We then discuss collider physics, focusing on the charged scalar bosons in the model.
: In our numerical analysis, we assume all the masses of $\psi_{1,2}$ to be equal to the DM mass, 2.4 TeV, and all the components of $H_5$ except $m_I$ to be degenerate, with $m_{I}=1.1\, m_R$. These assumptions are reasonable from the viewpoint of the oblique parameters for the multiplet fields [@pdg]. We also fix the following values so as to maximize the muon $g-2$: $$\begin{aligned}
& O _{12}=0.895+12.3i ,\quad O _{23}=1.88+0.52i ,\quad O _{13}=0.4+0.6i,\end{aligned}$$ where $O_{12,23,13}$ are the arbitrary complex mixing parameters introduced in the neutrino sector [@Nomura:2018lsx; @Chiang:2017tai]. Note that we also impose $|f|\lesssim \sqrt{4\pi}$ so as not to exceed the perturbative limit.
Fig. \[fig:f\] shows various LFV processes and $\Delta a_\mu$ as functions of $m_R$, where $BR(\mu\to e\gamma)$, $BR(\tau\to e\gamma)$, $BR(\tau\to \mu\gamma)$, and $\Delta a_\mu$ are colored red, magenta, blue, and black, respectively. The black horizontal line shows the current experimental upper limit [@Cai:2017jrq; @TheMEG:2016wtm], while the green one shows the future experimental sensitivity [@Cai:2017jrq; @Baldini:2013ke]. Considering these bounds on $\mu\to e\gamma$, one finds that the currently allowed mass range of $m_R \sim$ 4-20 TeV can be tested in the near future. The upper bounds on $BR(\tau\to e\gamma)$ and $BR(\tau\to \mu\gamma)$ are of the order of $10^{-8}$, which is safe over the whole range. The maximum value of $\Delta a_\mu$ is about $10^{-12}$, which is smaller than the experimental value by three orders of magnitude.
![Various LFV processes and $\Delta a_\mu$ as functions of $m_R$, where $BR(\mu\to e\gamma)$, $BR(\tau\to e\gamma)$, $BR(\tau\to \mu\gamma)$, and $\Delta a_\mu$ are colored red, magenta, blue, and black, respectively. The black horizontal line shows the current experimental upper limit [@Cai:2017jrq; @TheMEG:2016wtm], while the green one shows the future experimental sensitivity [@Cai:2017jrq; @Baldini:2013ke]. []{data-label="fig:f"}](mr-lfvs.eps){width="10cm"}
Fig. \[fig:tau\] shows the lifetime of the DM as a function of $m_R$, where we fix $v_7\approx1.03$ GeV and $\lambda_0=(10^{-7}, 10^{-9},10^{-11})$ (red, green, blue). The black horizontal line shows the current age of the universe. The figure yields the following lower bounds on $m_R$: $$\begin{aligned}
\lambda_0=10^{-7}:\ 1000\ {\rm TeV}\lesssim m_R,\quad
\lambda_0=10^{-9}:\ 100\ {\rm TeV}\lesssim m_R,\quad
\lambda_0=10^{-11}:\ 10\ {\rm TeV}\lesssim m_R.\end{aligned}$$
![The lifetime of the DM as a function of $m_R$, where we fix $v_7\approx1.03$ GeV and $\lambda_0=(10^{-7}, 10^{-9},10^{-11})$ (red, green, blue). The black horizontal line shows the current age of the universe $\tau_0$. []{data-label="fig:tau"}](mr-tau.eps){width="10cm"}
: Here let us briefly comment on the possible collider physics of our model. We have many new charged particles from the $SU(2)_L$ multiplet scalars and fermions. A clear signal could be obtained from the charged scalar bosons in $H_7$ and $H_4$, since they can decay into final states containing only SM particles; the components of these multiplets are given by $$\begin{aligned}
& H_7 = (\phi_7^{++++}, \phi_7^{+++}, \phi_7^{++}, \phi^{+}_7, \phi^{0}_7, \phi'^{-}_7, \phi'^{--}_7)^T, \label{eq:H7} \\
& H_4 = (\phi_4^{++}, \phi_4^{+}, \phi_4^{0}, \phi'^{-}_4)^T. \label{eq:H4} \end{aligned}$$ The quadruply charged scalar is particularly interesting, since it is specific to our model and would provide a sizable production cross section. We thus focus on the $\phi_7^{\pm \pm \pm \pm}$ signal in our model [^2]. The quadruply charged scalar can be pair produced by the Drell-Yan (DY) process, $q \bar q \to Z/\gamma \to \phi^{++++}_7 \phi^{----}_7$, and by the photon fusion (PF) process $\gamma \gamma \to \phi^{++++}_7 \phi^{----}_7$ [@Babu:2016rcr; @Ghosh:2017jbw; @Ghosh:2018drw]. We estimate the cross section using [MADGRAPH/MADEVENT5]{} [@Alwall:2014hca], where the necessary Feynman rules and relevant parameters of the model are implemented by use of FeynRules 2.0 [@Alloul:2013bka], and the [NNPDF23LO1]{} PDF [@Deans:2013mha] is adopted. In Fig. \[fig:LHC\] we show the cross section for the quadruply charged scalar production process $pp \to \phi^{++++}_7 \phi^{----}_7$ at the LHC 14 TeV, where the dashed line indicates the cross section from the Drell-Yan process only and the solid line corresponds to the cross section including both the Drell-Yan and photon fusion processes. We find that the cross section is highly enhanced by including the PF process, due to the large electric charge of the scalar boson. Thus a sizable number of $\phi_7^{\pm \pm \pm \pm}$ pairs can be produced at the LHC 14 TeV if its mass is $\mathcal{O}(1)$ TeV, given a sufficiently large integrated luminosity. The produced $\phi^{\pm \pm \pm \pm}_7$ mainly decays into $\phi_4^{\pm \pm} \phi^{\pm \pm}_4$ via the $H_4^T \tilde H_7 H_4$ interaction in the scalar potential, since the components of $H_7$ have degenerate masses. Then $\phi_4^{\pm \pm}$ decays into $W^\pm W^\pm$ via the $(D_\mu H_4)^\dagger (D^\mu H_4)$ term. We thus obtain a multi-$W$-boson signal from quadruply charged scalar boson production. Mass reconstruction from a multi-$W$-boson final state is not trivial, and a detailed analysis is beyond the scope of this paper.
![Cross section for $pp \to \phi^{++++}_7 \phi^{----}_7$ at the LHC 14 TeV, where the dashed line indicates the cross section from the Drell-Yan process only and the solid line corresponds to the cross section including both the Drell-Yan and photon fusion processes.[]{data-label="fig:LHC"}](LHC.eps){width="10cm"}
[ In addition to the charged scalar bosons, we consider the production of exotic charged fermions at the LHC. The quadruplet fermion $\psi^a$ is written as $$\psi^a = (\psi^0, \psi^-, \psi^{--}, \psi^{---})^a,$$ where the superscripts indicate the electric charges of the components. As in the scalar sector, we focus on the component with the highest electric charge in the multiplet, $\psi^{\pm \pm \pm}$. The pair production of $\psi^{\pm \pm \pm}$ is estimated with [MADGRAPH/MADEVENT5]{} as in the charged scalar case, where we consider both the DY and PF processes. The production cross section is shown in Fig. \[fig:LHC2\], where the dashed and solid lines correspond to the values from the DY process only and from the sum of both processes, as in the scalar case. We obtain a cross section $\sigma \sim 0.03$ fb for $M_\psi \sim 2.4$ TeV, which is motivated by the DM relic density. In that case we can obtain $\sim 10\ (100)$ events for an integrated luminosity of $300\ (3000)$ fb$^{-1}$. The charged fermions in $\psi^a$ decay as $\psi^{n} \to \psi^{n\pm1} W^{\mp*}$, where $n$ indicates the electric charge and the $W$ boson is off-shell, since the mass differences between the components are radiatively induced and are around 350 MeV [@Cirelli:2005uq]; the exotic fermions cannot decay via the $\bar L H_5 \psi$ coupling since $H_5$ is heavier than $\psi$. Thus $\psi^{\pm \pm \pm}$ production gives a signature of light mesons with missing transverse momentum through the decay chain $\psi^{\pm \pm \pm} \to W^{\pm*} \psi^{\pm \pm} (\to W^{\pm*} \psi^\pm (\to W^{\pm*} \psi^0))$, where $\psi^0$ is the DM. Furthermore, we would have displaced vertex signatures, since the decay length of the charged fermions is as long as $\mathcal{O}(1)$ cm [@Cirelli:2005uq] for the quadruplet fermion. Therefore the analysis of displaced vertices will be important to test our scenario. ]{}
![Cross section for $pp \to \psi^{+++} \psi^{---}$ at the LHC 14 TeV, where the dashed line indicates the cross section from the Drell-Yan process only and the solid line corresponds to the cross section including both the Drell-Yan and photon fusion processes.[]{data-label="fig:LHC2"}](LHC2.eps){width="10cm"}
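The quoted event yields follow from $N = \sigma \cdot L$ with the stated cross section; a trivial numerical check:

```python
# sigma ~ 0.03 fb for M_psi ~ 2.4 TeV, the value quoted in the text
sigma_fb = 0.03

for lumi in (300.0, 3000.0):          # integrated luminosity in fb^-1
    print(lumi, sigma_fb * lumi)      # expected number of produced pairs
```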
Summary and discussions
=======================
We have proposed a one-loop neutrino mass model, introducing large multiplet fields under $SU(2)_L$. The inert boson is achieved by nontrivial cancellations among the quadratic terms. We have also considered the RGE for $g_2$, the LFVs, the muon $g-2$, and a fermionic DM candidate, and shown the region allowed by all the constraints discussed above. The RGE of $g_2$ determines our cut-off energy, which keeps the theory within the order-$10$ PeV scale; therefore our model could be fully tested by current or near-future experiments. Due to the multiplet fields, we obtain a positive contribution to the muon $g-2$, but we find its maximum value to be of the order of $10^{-12}$, which is smaller than the measured anomaly by three orders of magnitude. For the LFVs, the most promising mode to be tested in current and future experiments is $\mu\to e\gamma$, in the range 3.2 TeV $\lesssim m_R\lesssim$ 11 TeV. [We have also discussed the possible decay modes of our DM candidate; some parameters are constrained by requiring the DM to be stable on cosmological time scales. Note that the decay of the DM is one feature of our model, and one could discriminate our model from models with absolutely stable DM by searching for signals of the DM decay.]{} Finally, we have analyzed the collider physics, focusing on the multi-charged scalar bosons in $H_4$ and $H_7$ [and the triply charged fermion $\psi^{\pm \pm \pm}$ in the exotic fermion sector]{}. For the scalar sector, we find that a sizable production cross section for quadruply charged scalar pairs can be obtained by adding the photon fusion process, which is enhanced by the large electric charge of $\phi^{\pm\pm\pm\pm}_7$. The possible signal of $\phi^{\pm\pm\pm\pm}_7$ then comes from the decay chain $\phi^{\pm\pm\pm\pm}_7 \to \phi^{\pm\pm}_4 \phi^{\pm\pm}_4 \to 4 W^\pm$, which would provide multi-lepton plus jets at the detector.
We expect a sizable number of events, with sufficiently large integrated luminosity, to detect them at the LHC 14 TeV; the detailed analysis of the signal and background is left for future work. [For the exotic fermion sector, we have also found a sizable production cross section for triply charged fermion pairs. The triply charged fermion decay gives a signature of light mesons with missing transverse momentum through the decay chain $\psi^{\pm \pm \pm} \to W^{\pm*} \psi^{\pm \pm} (\to W^{\pm*} \psi^\pm (\to W^{\pm*} \psi^0))$, where $\psi^0$ is the DM. In addition, we would have displaced vertex signatures, since the decay length of the charged fermions is as long as $\mathcal{O}(1)$ cm for the components of the quadruplet fermion; thus the analysis of displaced vertices will be important to test our scenario. ]{}
Acknowledgments {#acknowledgments .unnumbered}
===============
This research is supported by the Ministry of Science, ICT and Future Planning, Gyeongsangbuk-do and Pohang City (H.O.). H. O. is sincerely grateful to KIAS and all its members.
[99]{}
E. Ma, Phys. Rev. D [**73**]{}, 077301 (2006) \[hep-ph/0601225\].
L. M. Krauss, S. Nasri and M. Trodden, Phys. Rev. D [**67**]{}, 085002 (2003) \[arXiv:hep-ph/0210389\].
M. Aoki, S. Kanemura and O. Seto, Phys. Rev. Lett. [**102**]{}, 051805 (2009) \[arXiv:0807.0361\]. M. Gustafsson, J. M. No and M. A. Rivera, Phys. Rev. Lett. [**110**]{}, no. 21, 211802 (2013) Erratum: \[Phys. Rev. Lett. [**112**]{}, no. 25, 259902 (2014)\] \[arXiv:1212.4806 \[hep-ph\]\].
H. Okada and T. Toma, Phys. Rev. D [**86**]{}, 033011 (2012) \[arXiv:1207.0864 \[hep-ph\]\].
Y. Kajiyama, H. Okada and K. Yagyu, Nucl. Phys. B [**874**]{}, 198 (2013) \[arXiv:1303.3463 \[hep-ph\]\]. Y. Kajiyama, H. Okada and T. Toma, Phys. Rev. D [**88**]{}, no. 1, 015029 (2013) \[arXiv:1303.7356 \[hep-ph\]\].
T. Nomura and H. Okada, arXiv:1809.06039 \[hep-ph\]. T. Nomura and H. Okada, arXiv:1806.07182 \[hep-ph\]. T. Nomura and H. Okada, arXiv:1808.05476 \[hep-ph\]. T. Nomura and H. Okada, Phys. Lett. B [**783**]{}, 381 (2018) \[arXiv:1805.03942 \[hep-ph\]\]. T. Nomura and H. Okada, arXiv:1807.04555 \[hep-ph\]. T. Nomura and H. Okada, Phys. Rev. D [**96**]{}, no. 9, 095017 (2017) \[arXiv:1708.03204 \[hep-ph\]\]. T. Nomura, H. Okada and Y. Orikasa, Phys. Rev. D [**94**]{}, no. 5, 055012 (2016) \[arXiv:1605.02601 \[hep-ph\]\].
G. Anamiati, O. Castillo-Felisola, R. M. Fonseca, J. C. Helo and M. Hirsch, arXiv:1806.07264 \[hep-ph\].
M. Cirelli, N. Fornengo and A. Strumia, Nucl. Phys. B [**753**]{}, 178 (2006) \[hep-ph/0512090\]. M. Magg and C. Wetterich, Phys. Lett. B [**94**]{}, 61 (1980); G. Lazarides, Q. Shafi and C. Wetterich, Nucl. Phys. B [**181**]{}, 287 (1981); R. N. Mohapatra and G. Senjanovic, Phys. Rev. D [**23**]{}, 165 (1981); E. Ma and U. Sarkar, Phys. Rev. Lett. [**80**]{}, 5716 (1998) \[hep-ph/9802445\]. W. Konetschny and W. Kummer, Phys. Lett. B [**70**]{}, 433 (1977); J. Schechter and J. W. F. Valle, Phys. Rev. D [**22**]{}, 2227 (1980); T. P. Cheng and L. -F. Li, Phys. Rev. D [**22**]{}, 2860 (1980); S. M. Bilenky, J. Hosek and S. T. Petcov, Phys. Lett. B [**94**]{}, 495 (1980).
H. Okada, N. Okada and Y. Orikasa, Phys. Rev. D [**93**]{}, no. 7, 073006 (2016) \[arXiv:1504.01204 \[hep-ph\]\].
C. Patrignani [*et al.*]{} \[Particle Data Group\], Chin. Phys. C [**40**]{}, no. 10, 100001 (2016).
J. A. Casas and A. Ibarra, Nucl. Phys. B [**618**]{}, 171 (2001) \[hep-ph/0103065\]. C. W. Chiang, H. Okada and E. Senaha, Phys. Rev. D [**96**]{}, no. 1, 015002 (2017) \[arXiv:1703.09153 \[hep-ph\]\]. S. Kanemura, K. Nishiwaki, H. Okada, Y. Orikasa, S. C. Park and R. Watanabe, PTEP [**2016**]{}, no. 12, 123B04 (2016) \[arXiv:1512.09048 \[hep-ph\]\].
M. Lindner, M. Platscher and F. S. Queiroz, Phys. Rept. [**731**]{}, 1 (2018) \[arXiv:1610.06587 \[hep-ph\]\]. S. Baek, T. Nomura and H. Okada, Phys. Lett. B [**759**]{}, 91 (2016) \[arXiv:1604.03738 \[hep-ph\]\].
J. F. Gunion, H. E. Haber, G. L. Kane and S. Dawson, Front. Phys. [**80**]{}, 1 (2000).
The ATLAS collaboration \[ATLAS Collaboration\], ATLAS-CONF-2018-031. A. M. Sirunyan [*et al.*]{} \[CMS Collaboration\], \[arXiv:1809.10733 \[hep-ex\]\].
M. Cirelli, A. Strumia and M. Tamburini, Nucl. Phys. B [**787**]{}, 152 (2007) \[arXiv:0706.4071 \[hep-ph\]\].
D. S. Akerib [*et al.*]{} \[LUX Collaboration\], Phys. Rev. Lett. [**118**]{}, no. 2, 021303 (2017) \[arXiv:1608.07648 \[astro-ph.CO\]\]. E. Aprile [*et al.*]{} \[XENON Collaboration\], Phys. Rev. Lett. [**119**]{}, no. 18, 181301 (2017) \[arXiv:1705.06655 \[astro-ph.CO\]\]. X. Cui [*et al.*]{} \[PandaX-II Collaboration\], Phys. Rev. Lett. [**119**]{}, no. 18, 181302 (2017) \[arXiv:1708.06917 \[astro-ph.CO\]\]. Y. Cai, J. Herrero-Garcia, M. A. Schmidt, A. Vicente and R. R. Volkas, Front. in Phys. [**5**]{}, 63 (2017) \[arXiv:1706.08524 \[hep-ph\]\]. A. M. Baldini [*et al.*]{} \[MEG Collaboration\], Eur. Phys. J. C [**76**]{}, no. 8, 434 (2016) \[arXiv:1605.05081 \[hep-ex\]\]. A. M. Baldini [*et al.*]{}, arXiv:1301.7225 \[physics.ins-det\].
F. del Aguila, M. Chala, A. Santamaria and J. Wudka, Phys. Lett. B [**725**]{}, 310 (2013) \[arXiv:1305.3904 \[hep-ph\]\]. F. del Águila and M. Chala, JHEP [**1403**]{}, 027 (2014) \[arXiv:1311.1510 \[hep-ph\]\]. M. Chala, C. Krause and G. Nardini, arXiv:1802.02168 \[hep-ph\].
K. S. Babu and S. Jana, Phys. Rev. D [**95**]{}, no. 5, 055020 (2017) \[arXiv:1612.09224 \[hep-ph\]\]. K. Ghosh, S. Jana and S. Nandi, JHEP [**1803**]{}, 180 (2018) \[arXiv:1705.01121 \[hep-ph\]\]. T. Ghosh, S. Jana and S. Nandi, Phys. Rev. D [**97**]{}, no. 11, 115037 (2018) \[arXiv:1802.09251 \[hep-ph\]\]. J. Alwall [*et al.*]{}, JHEP [**1407**]{}, 079 (2014) \[arXiv:1405.0301 \[hep-ph\]\].
A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, Comput. Phys. Commun. [**185**]{}, 2250 (2014) \[arXiv:1310.1921 \[hep-ph\]\]. C. S. Deans \[NNPDF Collaboration\], arXiv:1304.2781 \[hep-ph\].
K. Cheung, T. Nomura and H. Okada, Phys. Rev. D [**94**]{}, no. 11, 115024 (2016) \[arXiv:1610.02322 \[hep-ph\]\]. T. Nomura and H. Okada, Phys. Rev. D [**94**]{}, 075021 (2016) \[arXiv:1607.04952 \[hep-ph\]\]. K. Cheung and H. Okada, Phys. Rev. D [**97**]{}, no. 7, 075027 (2018) \[arXiv:1801.00585 \[hep-ph\]\]. K. Cheung and H. Okada, Phys. Lett. B [**774**]{}, 446 (2017) \[arXiv:1708.06111 \[hep-ph\]\].
M. E. Peskin and T. Takeuchi, Phys. Rev. Lett. [**65**]{}, 964 (1990). M. E. Peskin and T. Takeuchi, Phys. Rev. D [**46**]{}, 381 (1992). T. Nomura and H. Okada, Phys. Rev. D [**96**]{}, no. 1, 015016 (2017) \[arXiv:1704.03382 \[hep-ph\]\].
[^1]: In the case where the neutral component of $H_5$ is the DM candidate, $H_5$ decays into SM-like Higgs pairs via $\lambda_0$, and its decay rate is given by $\frac{\lambda_0^2v_7^2}{800\pi M_X}$. The required bound on $\lambda_0$ is then of the order $10^{-19}$ so that its lifetime is longer than the age of the Universe, where the DM mass is estimated as 5 TeV [@Cirelli:2005uq].
[^2]: Collider phenomenology of the charged scalars from the quartet is discussed in refs. [@delAguila:2013yaa; @delAguila:2013mia; @Nomura:2017abu; @Chala:2018ari].
---
abstract: 'We observe a strong peak in the capacitive photocurrent of a MDMO-PPV / PCBM bulk heterojunction solar cell for excitation below the absorbance threshold energy. Illumination at the peak energy blocks charge capture at other wavelengths, and causes the photovoltage to drop dramatically. These results suggest that the new peak is due to a charge transfer state, which provides a pathway for charge separation and photocurrent generation in the solar cell.'
author:
- 'H.M. Shah'
- 'A.D. Mohite'
- 'T. Bansal'
- 'B.W. Alphenaar'
bibliography:
- 'OPVtext2.bib'
title: |
Photovoltage Bleaching in Bulk Heterojunction Solar Cells through\
Occupation of the Charge Transfer State
---
Organic photovoltaic (OPV) cells are cheaper and more easily fabricated than silicon solar cells and other inorganic photovoltaics. However, they also suffer from relatively low efficiency and time-dependent output degradation. Solving these problems requires an improved understanding of the unique charge generation mechanism in OPVs. The highest OPV efficiencies have been observed in bulk heterojunction (BHJ) solar cells. BHJs consist of a mixture of two different organic materials: a $\pi$-conjugated polymer (which acts as an electron donor) and a fullerene-based material (which acts as an electron acceptor)[@yu1; @yu2; @shah]. Under illumination, excitons (or bound electron-hole pairs) are formed[@nunzi]. For photocurrent to be produced, the excitons must dissociate across the donor/acceptor interface and the resulting free carriers must diffuse to the contacts. Recent experimental[@goris; @jjbs; @hall; @trivngs] and theoretical[@aryan] work suggests that the excitons dissociate via an intermediate charge transfer state (or exciplex[@morteani]) consisting of an electron-hole pair bound across the donor/acceptor interface. There is still considerable debate, however, as to the nature of this state and its role in the charge dissociation process.
Here, we describe the characterization of BHJ solar cells using capacitive photocurrent spectroscopy (CPS), a novel spectroscopy technique developed in our laboratory[@mohite1] that is particularly sensitive to the exciton dissociation process. CPS has been successfully used to distinguish between excitonic and free carrier states in individual carbon nanotubes[@mohite2; @mohite3] and is able to detect exciton dissociation for carriers not captured by the electrical contacts. Using CPS we are able to identify a photo-absorption state lying at an energy below the main exciton peak in an MDMO-PPV / PCBM solar cell. This peak has low absorbance but very high dissociation efficiency, and its energy correlates well with previous observations of the charge transfer state[@hall]. Illumination at the peak energy results in a decrease in the photovoltage signal by more than 70%, while no decrease is observed under lower or higher energy illumination. This strongly suggests that this state has a significant role in the charge dissociation process.
![(a) Experimental set-up for the photovoltage measurement. The active region consists of a mixture of an electron donor (light regions) and an electron acceptor (dark regions). Electron-hole pairs excited by the incident light dissociate, and diffuse to the contacts where they are detected as an in-phase voltage. (b) Experimental set-up for the capacitive photocurrent measurement. The quartz dielectric blocks charge capture by the contacts. Dissociation of electron-hole pairs in the polymer results in an out-of phase voltage which is detected by a current amplifier which forms a virtual ground at the Al contact. ](Fig1.eps)
A comparison of standard photovoltage and capacitive photocurrent measurement techniques is shown in Figs. 1(a) and (b), respectively. In both cases, the active region consists of a 1:4 mixture of MDMO-PPV:PCBM. In the standard photovoltage measurement, electrical contacts are made to the top and bottom of the sample, using Al/LiF and ITO/PEDOT:PSS, respectively. Light incident on the polymer (through the transparent ITO contact) excites electron-hole pairs into excitonic states. Some fraction of the excitons dissociate into free carriers, and some fraction of these then diffuse to the contacts to be detected as a photovoltage.
![Comparison of the (a) absorbance, (b) standard photovoltage and (c) capacitive photocurrent for a MDMO-PPV:PCBM bulk heterojunction solar cell. The inset in (c) shows the excitation power dependence of the low energy feature in the capacitive photocurrent, measured for a second device (whose spectrum is shown in Fig. 4(d)) at an excitation of 688 nm.](Fig2.eps)
In the capacitive photocurrent measurement, the ITO contact is separated from the polymer by an insulating quartz layer. This blocks the flow of dc current to the contact; however, the probe remains sensitive to charge that dissociates and separates (due to the built-in potential) to form a net dipole moment in the polymer layer. For a modulated light source this produces an out-of-phase ac voltage which can be measured with respect to the isolated ITO contact. An advantage of this technique is that it is sensitive to charge that dissociates, but is unable to diffuse to the contacts. By contrast, in the standard photovoltage measurement the out-of-phase signal is dominated by the in-phase (or dc) signal due to the diffusion of carriers to the contacts. Carriers which dissociate, but do not make it to the contact remain undetected.
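The phase separation that this measurement relies on can be illustrated with a short sketch (the numbers are synthetic and the function name is ours; this is not the acquisition code actually used). A lock-in amplifier multiplies the chopped signal by in-phase and quadrature references at the chopping frequency and averages:

```python
import numpy as np

def lockin_components(signal, t, f_ref):
    """Demodulate a signal chopped at f_ref into its in-phase (X) and
    out-of-phase / quadrature (Y) components, as a lock-in amplifier does."""
    ref_x = np.cos(2 * np.pi * f_ref * t)
    ref_y = np.sin(2 * np.pi * f_ref * t)
    # over an integer number of periods, <signal * reference> = component / 2
    X = 2.0 * np.mean(signal * ref_x)
    Y = 2.0 * np.mean(signal * ref_y)
    return X, Y

# synthetic 13 Hz response with a small 90-degree (capacitive-like) part
t = np.linspace(0.0, 1.0, 10000, endpoint=False)
sig = 3.0 * np.cos(2 * np.pi * 13 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)
X, Y = lockin_components(sig, t, 13.0)   # X ~ 3.0 (in-phase), Y ~ 0.5 (out-of-phase)
```

In the standard photovoltage measurement the X channel dominates, while in the capacitive configuration only the Y channel carries the dipole signal.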
Figure 2(a) shows the absorbance spectrum of a 100 nm thick layer of 1:4 MDMO-PPV:PCBM, spin coated onto a glass slide. The measurements were performed using a Perkin Elmer Lambda 950 UV-VIS spectrometer under atmospheric conditions. A peak in the absorbance is observed at 2.6 eV (477 nm), and it drops off at lower energy. This peak has been attributed to the ground excitonic state of the MDMO-PPV. A second smaller peak associated with the ground state exciton of the PCBM lies at 3.28 eV (378 nm)[@cook]. Fig. 2(b) shows the open circuit photovoltage of the BHJ measured using the two-contact set-up shown in Fig. 1(a) with the sample in vacuum. The sample is illuminated by a tungsten halogen white light source (Newport Q-T-Halogen, 1 kW) resolved by a monochromator (Acton Research, SpectraPro 500i) over the wavelength range of 380 - 800 nm. The incident light is chopped at low frequency (13 Hz) and the photovoltage is detected with a lock-in amplifier. The photovoltage roughly correlates with the absorbance (showing a peak at 2.6 eV), and is similar to photocurrent and photovoltage measurements of the MDMO-PPV:PCBM system described in the literature[@hoppe; @shah].
Figure 2(c) shows the results of the capacitive photocurrent measurement of the BHJ, using the set-up shown in Fig. 1(b). The capacitor structure is made by spin coating a 100 nm film of 1:4 MDMO-PPV:PCBM onto a quartz slide with an ITO contact on the opposite side. An aluminum contact is then deposited onto the polymer film. The polymer and ITO function as the positive and negative electrodes of the capacitor, with the quartz functioning as the capacitor dielectric. The sample is anchored to a copper block within an optical access flow cryostat and kept in vacuum. Illumination is provided by an optical parametric amplifier (OPA) pumped by a pulsed Ti-Sapphire regenerative amplifier. This produces tunable excitation between 0.4 and 2.4 eV. The pulse width is 120 fs with a repetition rate of 1 kHz, and the output power is kept constant at 14 mW. The capacitive photocurrent is detected by passing the output into a current amplifier and then measuring the out-of-phase signal using a lock-in amplifier.
The capacitive photocurrent spectrum shows a peak at 2.6 eV, similar to that observed in the absorbance and standard photovoltage. However, a second peak is also observed at 1.77 eV (699 nm), whose magnitude is even larger than that of the ground state exciton peak. Comparison to Figs. 2(a) and 2(b) shows that evidence for the low energy feature can also be observed in the absorbance and standard photovoltage. In both cases, a small deflection is observed near 1.77 eV. However, the feature is much stronger in the capacitive photocurrent spectrum. Since evidence for the new feature is observed in all three measurements, its appearance is clearly not dependent on the use of a femtosecond pulsed laser (such as a multiple photon transition) or on an anomaly of the capacitive photocurrent technique. To further confirm that the low energy CPS peak is not due to two-photon absorption, the magnitude of the peak was measured as a function of laser power. As shown in the inset to Fig. 2(c), the magnitude of the peak increases approximately linearly with increasing power, indicating that it is due to a single photon absorption process.
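The power-law test can be sketched as follows (the power and signal values here are illustrative, not the measured data): in log-log space the fitted slope is $\sim$1 for a single-photon process and $\sim$2 for a two-photon process.

```python
import numpy as np

# Illustrative (power, signal) pairs; the measured values are not reproduced
# here. A one-photon process gives slope ~ 1 in log-log space, a two-photon
# process slope ~ 2.
power = np.array([2.0, 4.0, 6.0, 8.0, 12.0, 14.0])   # mW
signal = 0.7 * power                                  # synthetic linear response

slope, intercept = np.polyfit(np.log(power), np.log(signal), 1)
# a slope near 1 is the single-photon signature
```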
The observation of a sub-absorption threshold feature in the CPS signal follows reports from a number of groups who have observed sub-gap features in the photoexcitation spectrum of BHJ solar cells using a variety of techniques (including absorption[@goris], photothermal deflection[@jjbs], photoluminescence[@hall], electroluminescence[@trivngs] and electroabsorption[@holt]). Photoluminescence measurements of MDMO:PPV-PCBM show a broad sub-gap peak centered at 1.65 eV[@hall], close in energy to the CPS peak that we observe. A question that remains is to what extent the observed sub-gap states contribute to the charge dissociation process. From Fig. 2(a), it is clear that the absorbance is relatively weak at the low energy CPS peak. The large magnitude of the CPS signal must mean that charge carriers photoexcited at this energy have a very high dissociation efficiency. However, most of the dissociated carriers do not diffuse far enough to be captured by the contacts (as indicated by the relatively weak signal in the standard photovoltage measurement). It is still possible, though, that the low energy state provides a pathway through which higher energy excitons can dissociate into free carriers. If so, it is expected that the occupation of the low energy state would reduce its availability for the dissociation of higher energy excitons, and that this would then lead to a reduction in the photovoltage.
To test this hypothesis, we performed a series of standard photovoltage measurements while exposing the sample to fixed wavelength light. The photovoltage is measured as a function of wavelength using light from the monochromator (probe beam). For each measurement, the sample is also exposed to fixed wavelength light from the OPA laser (pump beam). The results are plotted in Fig. 3 for four different pump beam wavelengths (solid lines) and for the pump beam blocked (dashed line). The magnitude of the standard photovoltage decreases under exposure to the OPA light, by an amount that is strongly dependent on the wavelength of the pump beam. The maximum change is observed for a pump wavelength of 699 nm (1.77 eV), while almost no change is observed for a pump wavelength of 474 nm (2.6 eV).
Figure 4(a) plots the percent change in the photovoltage signal (integrated over the 380-800 nm probe beam wavelength range) as a function of the pump beam wavelength. As stated above, the greatest change in the photovoltage is centered at an excitation energy of 1.77 eV (699 nm), where a reduction of 71% is observed. For pump beam energies on either side of this value, the bleaching is reduced, forming a pronounced minimum in the photovoltage signal. A comparison to the capacitive photocurrent spectrum (reproduced from Fig 2(c) in Fig. 4(b)) shows that the wavelength dependence of the bleaching directly correlates with the longer wavelength feature in the CPS. To prove that this result is not specific to a particular device, Figs. 4(c) and 4(d) show results of the same measurement performed on a second device structure. Although there is some variation in the detailed spectrum, the main result is reproduced: a large sub-absorbance-threshold peak is observed in the CPS spectrum, and a decrease in the photovoltage is observed when the sample is exposed to light at the peak energy. It is interesting to note that in the second device a pair of sub-threshold peaks is observed; however, only one is reproduced in the bleaching measurements. This suggests that the lowest energy peak is not involved in the charge dissociation process, and is perhaps due to a sample-specific defect state.
![Standard photovoltage spectra, measured as a function of probe beam wavelength while under illumination from a second pump beam. Results are shown for four different pump beam wavelengths. Also included are results for the pump beam blocked. (dashed line).](Fig3.eps)
It is striking that such large photovoltage bleaching occurs under exposure to long wavelength light even though the absorbance in this regime is relatively weak (see Fig. 2(a)). In contrast, little or no bleaching is observed under exposure to short wavelength light corresponding to the main absorbance peak. This demonstrates that the effect is not simply due to heating of the sample through absorption of the pump beam energy. Instead, the bleaching must somehow be caused by the occupation of the 1.77 eV state. One possibility is that filling of the 1.77 eV state and the subsequent charge dissociation creates an electric field which blocks the flow of additional charge to the contacts. To test this, we measured the voltage directly by replacing the current amplifier used in the capacitive photocurrent measurement with a voltage amplifier. We observe that the photocurrent peak corresponds to a voltage difference of only 30 $\mu$V, which is far less than the change of 10 mV observed in the standard photovoltage, making this explanation unlikely. In addition, no bleaching is observed under exposure to high energy light even though a similar or larger blocking voltage is created.
These results can be understood in terms of exciton dissociation through an interfacial charge transfer state, as has been described in the literature[@goris; @jjbs; @hall; @trivngs; @aryan; @morteani]. Here, the long wavelength peak corresponds to an intermediate state needed for the efficient dissociation of photogenerated excitons, while the main absorbance peak (at 2.6 eV) corresponds to the ground exciton state of the MDMO-PPV. The majority of absorbance occurs into the MDMO-PPV and PCBM exciton states. These have long diffusion lengths, but also long dissociation times[@muller]. At the interface between the MDMO-PPV and PCBM, charge transfer excitons are formed with fast dissociation times[@sariciftci]. However, these states also have short diffusion lengths and a small absorption cross section. Charge pairs in the bulk exciton states diffuse to the interface, where dissociation occurs. If the interface is sufficiently close to the contact, a photovoltage is produced. In our bleaching experiment, we populate the interfacial states, blocking the dissociation of the main excitonic states. This reduces the photovoltage by removing the pathway for efficient charge dissociation.
![(a) Percent change in the integrated photovoltage signal as a function of pump beam wavelength. Comparison to the capacitive photocurrent signal (reproduced in (b)) shows that the dip in the photovoltage occurs for the pump beam wavelength at which the sub-threshold peak is observed. (c) and (d) show similar results for a second MDMO-PPV:PCBM solar cell. ](Fig4.eps)
In conclusion, using the capacitive photocurrent technique we are able to resolve a low energy feature in the photo-excitation spectrum of BHJ solar cells that is much more weakly observed in standard absorbance and photovoltage spectroscopy. The state has a large dissociation rate and a low absorbance cross-section compared to states at higher energy. Bleaching of the photovoltage signal is observed for illumination at the low energy feature suggesting that filling this state impedes the charge dissociation process. The experimental results counterintuitively demonstrate that increasing the amount of light on a BHJ solar cell can actually cause the output to go down. This implies that the solar cell efficiency could be improved by filtering out the light over a narrow band of wavelengths corresponding to the interfacial state energies.
---
abstract: 'The purpose of this study is the identification of young ($1 < age < 100$ Myr), nearby ($d \leqslant 100$ pc) moving groups (YNMGs) through their kinematic signature. YNMGs could be the result of the recent dispersal of young embedded clusters, such that they still represent kinematically cold groups, carrying the residual motion of their parental cloud. Using the fact that a large number ($\sim$ 14000) of the RAVE sources with evidence of chromospheric activity also present signatures of stellar youth, we selected a sample of solar type sources with the highest probability of chromospheric activity to look for common kinematics. We made use of radial velocity information from RAVE and astrometric parameters from GAIA DR2 to construct a 6-dimensional position-velocity vector catalog for our full sample. We developed a method based on the grouping of stars with similar orientation of their velocity vectors, which we call the Cone Method Sampling. Using this method, we detected 646 sources with high significance in the velocity space, with respect to the average orientation of artificial distributions made from a purely Gaussian velocity ellipsoid with null vertex deviation. We compared this sample of highly significant sources with a catalog of YNMGs reported in previous studies, which yielded 75 confirmed members. Of the remaining sample, about 50% of the sources have ages younger than 100 Myr, which indicates that they are highly probable candidates to be new members of identified, or even other, YNMGs in the solar neighborhood.'
author:
- 'Valeria G. Ramírez-Preciado'
- 'Carlos G. Román-Zúñiga'
- Luis Aguilar
- Genaro Suárez
- Juan José Downes
title: Kinematic Identification of Young Nearby Moving Groups from a sample of Chromospherically Active Stars in the RAVE catalog
---
Introduction
============
Young star clusters are typically found in star forming regions within Giant Molecular Clouds (GMC) while still embedded in their parental gas [e.g. @Lada2003; @Allen2007; @Pelu2012]. Young stellar aggregations are usually bright at infrared wavelengths because many of their members display dusty circumstellar disks and the regions are normally associated with bright nebulosity features, all clear indicators of youth. Moreover, the dusty parental gas acts as a screen against the background population, making clusters easier to spot on images. This is valid for relatively nearby associations where the foreground population is minimal and contamination can be easily removed. But once a young cluster is no longer embedded, its detection becomes complicated. Typically, after 10 Myr most of the parental gas is evacuated, along with the dispersal of most circumstellar disks and the dynamical relaxation of the system, which diminishes its density [@Lada2003]. Then, the components of most clusters mix with the Galactic disk population, which challenges the identification of the cluster members. At that evolutionary stage, emission line youth indicators such as the Li 6707 ${\mbox{\normalfont\AA}}$ line [e.g. @Sicilia2005] or X-ray emission [e.g. @Stelzel] are invoked for membership confirmation, but these diagnostics are difficult to implement in large samples. Moreover, if the cluster is close enough (10-$10^2$ pc), it cannot be distinguished as an overdensity in the sky because it covers very large areas.
A feature that can help to identify a dispersed cluster is the common kinematics still imprinted in their members; at ages of one to a few tens of Myr, young groups still stand out from the local velocity ellipsoid and have small velocity dispersions [@Antoja2012]. Known at this point as Young Nearby Moving Groups [YNMG; for a detailed review on the topic see @Torres2008], they can still be identified as they keep moving together away from their parental cloud before entering the Galactic Disk highway.\
The motivation of this work is to present a new method and procedure to identify emerging groups of young stars in the disk of the Galaxy through their kinematic signature. Finding such groups can be useful to understand the early evolution of unbound stellar groups, particularly how they disperse and integrate to the population of the galactic disk. Also, we look for a suitable technique for the identification of YNMG members, taking advantage of the increasing availability of data to provide the necessary parameters to construct full position-velocity vectors. Moreover, studying these kind of stars in the Solar Neighborhood can contribute to the understanding of star formation on the disk, as well as to its kinematic evolution.
Current identification methods are largely based on proper motions [e.g. @Aguilar1999; @Gagne2018] and positions [e.g. @kop2018], but this limited kinematical information makes group identification inconclusive or unreliable. [@Riedel2017] developed a statistical method for the identification of YNMGs through their position and partial kinematical information, but, as we mentioned before, position is not a reliable parameter for the identification of such dispersed groups of stars, which is why other methods are needed for their identification.
In this paper we present a simple and effective method to identify candidate members of YNMGs when their full 6D position-velocity vectors are known. The method can be applied to any sample that combines reliable proper motions, radial velocities and distances. Our test dataset is the RAdial Velocity Experiment (RAVE) catalog [@kunder2017], known to contain solar-type young star candidates.
This paper is organized as follows: we describe our sample selection in section 2, followed by a description of our methodology in section 3. The results of applying our method to the selected dataset are presented in section 4. Finally, section 5 contains a discussion and summary of this work.
Sample \[s:sample\]
===================
The RAVE catalog contains stars with F, G and K spectral types, distributed in brightness between 8 and 12 magnitudes in the near-infrared $I$ band across the Southern hemisphere sky. The current (Data Release 5) RAVE catalog contains parameters for 520,630 individual stellar spectra, which are classified by means of a method called Local Linear Embedding [LLE, @Matijevic2012], which is based on a dimensionality reduction algorithm [@LLE]. The classification consists of applying the method directly to the stellar spectra, reducing the number of dimensions (defined from a set of spectral features across previously defined spectral bins) that are needed to classify a certain type of star. These dimensions are defined as spectral features common to a specific type of source. Each observed spectrum is compared against a grid of synthetic spectra in order to define a comparison sub-sample (5000 stars) from the previous data release. If the observed spectrum presents, with a certain level of confidence, features similar to those of other previously classified spectra in one or more dimensions, it is assigned a “flag” value [@Matijevic2010] in the corresponding dimension, indicating the nearest classification type. For each source, twenty flags representing the spectral feature dimensions are listed, and each flag contains a letter representing the closest among eleven different stellar spectral classes [@Matijevic2012]. The first three flags are those that have the highest weight in the classification. If these first three flags coincide, then the star has a high probability of belonging to that class.
For our study, we were interested in chromospherically active stars (CAS) in the RAVE catalog because it is estimated that about 40% of the CAS in RAVE coincide with strong H$\alpha$ emitters from the ESO-GAIA catalog [@Zerjal2013; @zwitter2015]. The remaining $\sim$60% could be young stars without strong emission in that line, or could be contaminants such as giants (see Section \[s:diagrams-HR\]). For the selection of our sample we considered stars for which the first three flags coincided with the type *“chromospherically active”*. With this criterion we obtained a sample of 3128 stars with a high level of confidence of being CAS, many of which are expected to be young stars.
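The flag-agreement selection can be sketched as follows (the column names and the letter code for the chromospherically active class are hypothetical; the real catalog columns differ):

```python
import pandas as pd

# Toy table with the three highest-weight LLE flags per source; the letter
# code "a" for the chromospherically active class is an assumption here.
rave = pd.DataFrame({
    "id":    ["s1", "s2", "s3", "s4"],
    "flag1": ["a", "a", "n", "a"],
    "flag2": ["a", "n", "a", "a"],
    "flag3": ["a", "a", "a", "a"],
})

# keep only sources whose first three flags all agree on the active class
cas = rave[(rave.flag1 == "a") & (rave.flag2 == "a") & (rave.flag3 == "a")]
```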
GAIA DR2 kinematic parameters
-----------------------------
The RAVE catalog provides radial velocities for all sources in our sample, with uncertainties of 5 km/s or less. In order to obtain the kinematic parameters for our main analysis, we complemented the RAVE CAS sample with astrometric parameters obtained from the GAIA Collaboration Data Release 2 [hereafter GAIA DR2 @GAIADR2]. From GAIA DR2 we obtained RA and DEC positions, parallaxes and proper motions in RA and DEC for all the sources in the sample. Using the `TOPCAT` tool version 4.6-1 [@topcat], we applied the method of @bj2018 to convert parallaxes into distance estimates, using an exponentially decreasing space density prior with a scale parameter $h=500$ pc. With this information we were able to provide 6-dimensional position-velocity vectors for all the sources in the sample.
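The distance estimate can be sketched numerically as follows. This is a brute-force stand-in for the @bj2018 method (which locates the posterior mode analytically), using the same exponentially decreasing space density prior with $h = 500$ pc; the parallax values shown are illustrative:

```python
import numpy as np

def distance_mode(parallax_mas, sigma_mas, h_pc=500.0):
    """Mode of a Bailer-Jones-style distance posterior with an exponentially
    decreasing space density prior of scale length h (pc):
        P(r) ~ r^2 exp(-r/h) exp(-(parallax - 1000/r)^2 / (2 sigma^2)).
    A brute-force grid search stands in for the analytic mode-finding."""
    r = np.linspace(1.0, 10.0 * h_pc, 500_000)            # distance grid (pc)
    log_post = (2.0 * np.log(r) - r / h_pc
                - (parallax_mas - 1000.0 / r)**2 / (2.0 * sigma_mas**2))
    return r[np.argmax(log_post)]

# a 20 +/- 0.5 mas parallax yields a mode just beyond 1000/20 = 50 pc,
# nudged outward by the r^2 volume factor in the prior
d = distance_mode(20.0, 0.5)
```

For small fractional parallax errors, as here, the mode is close to the naive inverse-parallax distance; the prior matters most for noisy or negative parallaxes.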
Data Analysis\[s:analysis\]
===========================
Velocity ellipsoids projections \[s:analysis:ss:kinematics\]
------------------------------------------------------------
The velocity vector of a star is fully determined by its radial velocity and the two components of its tangential velocity on the celestial sphere, together with a parallax or distance to the source. In the 6-dimensional position-velocity space (hereafter PV6), each star is defined by six observational parameters: two coordinates (e.g. $l,b$ in the Galactic coordinate system), the corresponding two proper motion components $\mu_{l}$, $\mu_{b}$, a parallax $\varpi$ from which a distance can be estimated, and the radial velocity, usually calculated from Doppler shift measurements on a spectrum. From this, a PV6 vector ($X$, $Y$, $Z$, $U$, $V$, $W$) can be determined.
We use a heliocentric frame and, following the canonical scheme [e.g. @schonrich2012], $X$ and $U$ are positive toward the galactic center, $Y$ and $V$ are positive along the direction of the galactic rotation, while $Z$ and $W$ are positive toward the galactic north pole. $U$, $V$ and $W$ are defined by equations \[u1\], \[u2\] and \[u3\] below (see Figure \[schon12\], based on Figure 1 of @schonrich2012):
$$\begin{aligned}
\label{u1}
U = v_r (\cos{l} \cos{b}) - v_l (\sin{l}) - v_b (\sin{b} \cos{l})\\
\label{u2}
V = v_r (\sin{l} \cos{b}) + v_l (\cos{l}) - v_b (\sin{b} \sin{l}) \\
\label{u3}
W = v_r (\sin{b}) + v_b (\cos{b})\end{aligned}$$
where $v_r$ is the radial velocity, and $v_l$ and $v_b$ are the tangential components of the velocity.\
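As a direct transcription of equations \[u1\]–\[u3\] into code (a sketch; it assumes proper motions in mas/yr with $\mu_l^* = \mu_l\cos b$ already applied, distances in pc, and velocities in km/s):

```python
import numpy as np

K = 4.74047  # km/s per (mas/yr * kpc): proper motion times distance -> velocity

def uvw(l_deg, b_deg, dist_pc, pm_l_masyr, pm_b_masyr, v_r_kms):
    """Heliocentric (U, V, W) from galactic coordinates, distance, proper
    motions and radial velocity, transcribing the three equations above."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    v_l = K * pm_l_masyr * dist_pc / 1000.0   # tangential velocity along l
    v_b = K * pm_b_masyr * dist_pc / 1000.0   # tangential velocity along b
    U = v_r_kms*np.cos(l)*np.cos(b) - v_l*np.sin(l) - v_b*np.sin(b)*np.cos(l)
    V = v_r_kms*np.sin(l)*np.cos(b) + v_l*np.cos(l) - v_b*np.sin(b)*np.sin(l)
    W = v_r_kms*np.sin(b) + v_b*np.cos(b)
    return U, V, W

# a star at l = b = 0 with purely radial motion maps to (U, V, W) = (v_r, 0, 0)
U, V, W = uvw(0.0, 0.0, 100.0, 0.0, 0.0, 10.0)
```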
The U, V, W velocity components give us the velocity distribution for a given population, usually described to first approximation as a *velocity ellipsoid*, and scatter plots of any two such components give us projections of this ellipsoid. These projections can be used to look for overdensities that may represent YNMG of stars [e.g. @Antoja2009; @Antoja2012].
Lack of structure in the velocity ellipsoid projections
-------------------------------------------------------
We constructed the velocity ellipsoid projections (see Figure \[UV-maps\]) for our sample of RAVE CAS using 2-dimensional histograms, and looked for substructure on top of the smooth ellipsoidal distribution as a signal of moving groups. Our criteria for these plots were *a)* a resolution of 3 km/s per bin, and *b)* to consider only those stars with velocity moduli smaller than 600 $\mathrm{km/s}$. The latter was chosen from a histogram of the $(U,V,W)$ velocity moduli, which indicates that more than 80% of our original sample has moduli smaller than 200 $\mathrm{km/s}$.
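The construction of one such projection can be sketched with `numpy` (the velocity arrays below are synthetic stand-ins for the RAVE CAS values, and the modulus cut is applied in 2D for brevity):

```python
import numpy as np

# Synthetic U, V components (km/s); the dispersions are illustrative only.
rng = np.random.default_rng(0)
U = rng.normal(-10.0, 35.0, 3000)
V = rng.normal(-20.0, 25.0, 3000)

vmax, bin_kms = 600.0, 3.0                     # criteria (b) and (a)
keep = np.hypot(U, V) < vmax                   # velocity modulus cut
edges = np.arange(-vmax, vmax + bin_kms, bin_kms)
H, xedges, yedges = np.histogram2d(U[keep], V[keep], bins=[edges, edges])
```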
![Graphical description of the $(U,V,W)$ coordinates frame according to the galactic coordinates as expressed in equations \[u1\], \[u2\] y \[u3\].[]{data-label="schon12"}](f1.pdf){width="50.00000%"}
All three projections show an ellipsoidal-like distribution for our sample. The left panel of Figure \[UV-maps\] shows a clear vertex deviation with an angle of approximately $15^\circ$, and an indication of an overdensity near $(U,V)=(-15,-20)$ plus a few small lumps near the center. The $VW$ and $UW$ projections are mostly featureless. However, we conclude that the signal to noise of these histogram images may not be high enough to resolve substructure clearly with our relatively small sample.\
Increasing the resolution of these 2D histograms did not improve our results either, because the noise level also increased, making it actually more difficult to distinguish any overdensity features.
![image](f2a.pdf){width="0.46\linewidth"} ![image](f2b.pdf){width="0.46\linewidth"} ![image](f2c.pdf){width="0.46\linewidth"} ![image](f2d.pdf){width="0.072\linewidth"}
Figure \[wave\] shows a further effort to highlight and identify structures within the $UV$ projection. We maintained the resolution at a projection bin of 3 km/s and applied a wavelet filtering method to the resultant image in order to highlight overdensity regions while removing extended low density structure. For this purpose we used an algorithm (B. Vandame, personal communication) based on the Multi-scale Vision Model (MVM) by @Rue1997. This filtering process allowed us to highlight some possibly significant over-densities in the UV projection, as shown in Figure \[wave\]. Our wavelet image shows four main lumps with sizes of about 10 km/s. The sizes and the separations between groups in our wavelet filtered map are consistent with the velocity ellipsoid projections for moving groups in the solar vicinity by @Antoja2012. The largest lump near (U,V)=(-10,-20) is actually close to the Hyades, and the central lump is close to Coma Berenices, as reported in that work, but the coincidences are not exact, and the other two small lumps in our diagram are not directly related to any of their groups. On one hand, the differences could be explained by our use of GAIA DR2 parameters, which may refine some values and highlight distinct features. On the other hand, our CAS sample is very different from theirs and we cannot claim a real coincidence. Moreover, that work also used a higher resolution in their maps, which we cannot reach given the size of our sample; this complicates the use of the velocity projections to identify additional structure. As we expect more than four YNMGs in our sample, we implemented an additional method to try to identify them.
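The MVM implementation we used is not publicly available; as a generic stand-in (not the actual algorithm), the following sketch shows the isotropic à trous (starlet) transform and per-scale thresholding on which MVM-style filtering is built:

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet(image, n_scales=4):
    """Isotropic undecimated ('a trous') wavelet transform with the B3-spline
    kernel, the building block of MVM-style multiscale detection."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = image.astype(float)
    planes = []
    for j in range(n_scales):
        hj = np.zeros(4 * 2**j + 1)
        hj[::2**j] = h                          # dilate the kernel by 2^j
        smooth = convolve1d(convolve1d(c, hj, axis=0, mode="reflect"),
                            hj, axis=1, mode="reflect")
        planes.append(c - smooth)               # wavelet (detail) plane at scale j
        c = smooth
    return planes, c                            # detail planes + final smooth plane

def mvm_like_filter(image, n_scales=4, k=3.0):
    """Keep only wavelet coefficients above k sigma at each scale."""
    planes, smooth = starlet(image, n_scales)
    out = smooth.copy()
    for w in planes:
        sigma = 1.4826 * np.median(np.abs(w))   # robust per-scale noise estimate
        out += np.where(np.abs(w) > k * sigma, w, 0.0)
    return out

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))                 # stand-in for the UV histogram
planes, smooth = starlet(img)
clean = mvm_like_filter(img)
```

By construction the detail planes plus the final smooth plane reconstruct the input exactly, so thresholding discards only coefficients consistent with noise.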
![Density map of the UV projection of the velocity ellipsoid for RAVE CAS after applying the wavelet filtering process based on MVM. Although the substructure of the left panel in Figure 2 is more evident here, the resolution of this image (3 km/s) is still not sufficient to carry out a satisfactory separation of individual moving groups. The colorbar indicates source density.[]{data-label="wave"}](f3.pdf){width="54.00000%"}
The Cone Method \[s:analysis:ss:cone\]
--------------------------------------
As shown in the velocity ellipsoid projections (Figures \[UV-maps\] and \[wave\]), some substructure becomes apparent when contrast-enhancing techniques, such as the wavelet filter, are applied. The basic problem is that we are dealing with projections, which diminish the contrast of 3D substructure. Since we are searching for stars that belong to a YNMG, they should constitute a kinematically “cold” group. This means that their velocity vectors, seen from a reference frame away from their own barycenter, should all point roughly in the same direction. This is the basis of the method we introduce here. While it is still based on a projection, it is entirely different in its construction.
![The panel shows the configuration of the cone for each star, illustrating how the direction of the velocity vector falls within the cone. All stars sharing the same $(\theta, \phi)$ configuration are grouped together.[]{data-label="cone-method"}](f4.pdf){width="0.9\linewidth"}
First, a cone with vertex at the velocity frame origin is defined (see Figure \[cone-method\]). The angles $\theta$ and $\phi$ define its orientation in this space ($0 < \theta \leqslant 360^\circ$ in the $U$-$V$ plane, measured from the first toward the second axis; $-90^\circ \leqslant \phi \leqslant 90^\circ$ measured from the $U$-$V$ plane, null for $W = 0$) and the angle $\alpha$ denotes its aperture. The unit vector $\hat{L}$ indicates the cone symmetry axis:
$$\label{cono2}
\hat{L}=[\cos{(\theta)} \cos{(\phi)}, \sin{(\theta)} \cos{(\phi)},\sin{(\phi)}].$$
Then, we identify all stars in the sample whose velocity vector $\vec{v}$ falls within this cone, i.e. the following condition is satisfied: $$\label{constriction}
\frac{\hat{L} \cdot \vec{v}}{|\vec{v}|} \geqslant \cos{(\alpha)}.$$
Notice that this condition is set in velocity space, not in configuration space, which means that the star position in configuration space is irrelevant.\
All stars that fall within the cone are assigned to that particular combination of $(\theta, \phi)$ values. We then establish a $(\theta, \phi)$ grid on the unit sphere and compute the number of stars assigned to each gridpoint; we call this procedure the Cone Method Sampling (CMS). The grid is designed so that each cell subtends an equal solid angle. We can then generate star count maps on the unit sphere, where local peaks indicate groupings of stars that share the same velocity vector orientation (within an angle $\alpha$). Reducing the cone aperture makes the criterion more stringent, allowing the identification of colder clumps, but reduces the number of stars within a group. It is also clear that $\alpha$ should not be reduced below the uncertainty in the velocity orientations resulting from the errors in the observables.\
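The counting step above can be sketched as follows. This is a minimal numpy-based illustration, not the authors' implementation; the function names (`cone_axis`, `in_cone`, `cms_counts`) are ours, and the grids are assumed to be supplied in radians.

```python
import numpy as np

def cone_axis(theta, phi):
    """Unit vector L-hat for azimuth theta and latitude phi (radians)."""
    return np.array([np.cos(theta) * np.cos(phi),
                     np.sin(theta) * np.cos(phi),
                     np.sin(phi)])

def in_cone(v, theta, phi, alpha):
    """Boolean mask: which velocity vectors v (shape (N, 3)) point
    within an aperture alpha (radians) of the cone axis."""
    L = cone_axis(theta, phi)
    cos_angle = (v @ L) / np.linalg.norm(v, axis=1)
    return cos_angle >= np.cos(alpha)

def cms_counts(v, theta_grid, phi_grid, alpha):
    """Star counts at each (theta, phi) gridpoint: the CMS map."""
    counts = np.zeros((len(theta_grid), len(phi_grid)), dtype=int)
    for i, th in enumerate(theta_grid):
        for j, ph in enumerate(phi_grid):
            counts[i, j] = np.count_nonzero(in_cone(v, th, ph, alpha))
    return counts
```

Local maxima of the resulting `counts` map correspond to groups of stars sharing a common velocity orientation.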
After identifying substructure with this procedure, it is necessary to assign each peak a statistical significance, i.e. a measure of the probability that it is not a mere chance fluctuation. For this we need a null hypothesis (NH): that all substructure is due to fluctuations from counting discrete events in a mesh. In our case, the NH is a trivariate Gaussian distribution whose centroid and extent are given by the means and standard deviations of the individual $U$, $V$ and $W$ distributions of the sample. No correlation between the components is assumed, so no vertex deviation is considered. We then performed Monte Carlo sampling of the NH to construct 5,000 synthetic samples of the same size as the real one. Finally, we built $(\theta, \phi)$ maps of the expected median density of sources at each gridpoint under the hypothesis that no substructure exists. We used these values, gridpoint by gridpoint, to establish the statistical significance of the peaks in the real sample, as described below.
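The Monte Carlo construction of the NH samples could be sketched as below; the function name and the seeding mechanism are our own choices, not from the paper.

```python
import numpy as np

def synthetic_nh_samples(U, V, W, n_mc=5000, seed=None):
    """Draw n_mc synthetic samples, each the same size as the data,
    from an uncorrelated trivariate Gaussian whose means and standard
    deviations match the observed U, V, W distributions (the NH)."""
    rng = np.random.default_rng(seed)
    mu = np.array([np.mean(U), np.mean(V), np.mean(W)])
    sigma = np.array([np.std(U), np.std(V), np.std(W)])
    # shape (n_mc, n_stars, 3); no UV correlation, i.e. no vertex deviation
    return rng.normal(mu, sigma, size=(n_mc, len(U), 3))
```

Each synthetic sample is then passed through the same CMS counting as the real data to build the per-gridpoint NH count distributions.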
![image](f5.pdf){width="65.00000%"}
Structure separation
--------------------
We applied the CMS, as described in Section \[s:analysis:ss:cone\], to our sample of RAVE CAS in order to obtain the distribution of stars satisfying the dot product condition in inequality \[constriction\]. For our purposes we chose the value $\alpha=3^\circ$, and constructed the grid as follows: first, we applied a $\cos(\phi)$ correction in latitude to ensure uniform coverage; then we used Nyquist sampling, reducing the step to half the grid resolution, so that no stars are left out of the counting. A map of the distribution of our CAS sample in the $(\theta, \phi)$ grid is shown in Figure \[ncount\].
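One way to read this grid construction (our interpretation, with an illustrative function name) is that the azimuthal step is widened by the inverse cosine of the latitude so cells keep roughly equal solid angle, with the latitude step set to half the aperture:

```python
import numpy as np

def equal_area_grid(alpha_deg=3.0):
    """(theta, phi) gridpoints in degrees. The latitude step is half the
    aperture (Nyquist); the theta step is scaled by 1/cos(phi) so each
    cell subtends approximately the same solid angle."""
    step = alpha_deg / 2.0
    points = []
    for phi in np.arange(-90.0 + step, 90.0, step):
        dtheta = step / np.cos(np.radians(phi))
        for theta in np.arange(0.0, 360.0, dtheta):
            points.append((theta, phi))
    return np.array(points)
```

As expected, the grid places many fewer azimuthal points near the poles than at the equator.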
![image](f6a.pdf){width="0.45\linewidth"} ![image](f6b.pdf){width="0.5\linewidth"} ![image](f6c.pdf){width="0.5\linewidth"}
In order to obtain a more reliable identification of possible YNMGs from the CMS, we developed and implemented a method to identify structures within the map described above, based on the statistical significance of the counts at each position.\
Using the Monte Carlo realizations, we took the median of the count density at each mesh position under the NH as a central estimator. We used the median because it is a more robust indicator for the distributions at each mesh point, which are in most cases positively skewed. We then obtained a deviation by calculating the absolute difference between the median and the value corresponding to the 90th percentile, also at each position of the mesh. The corresponding median and deviation maps are shown in the top-right and top-left panels of Figure \[med-dev\].
We define a **reliability** estimator at each position in the mesh as: $$S = \frac{\mathrm{observed\ data} - \mathrm{median\,(NH)}}{\mathrm{deviation\,(NH)}}
\label{signif}$$
In this way, we define high significance as $S > 1.5$ and low significance as $1 < S < 1.5$.\
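A minimal sketch of this estimator, assuming the NH realizations have already been turned into count maps (the function names are ours):

```python
import numpy as np

def significance(observed, nh_counts):
    """S of Eq. (signif): observed counts minus the NH median, in units
    of |q90(NH) - median(NH)|. nh_counts has shape (n_mc, ...) matching
    the observed map."""
    med = np.median(nh_counts, axis=0)
    dev = np.abs(np.percentile(nh_counts, 90, axis=0) - med)
    return (observed - med) / dev

def classify(S):
    """Label gridpoints using the thresholds adopted in the text."""
    return np.where(S > 1.5, "high", np.where(S > 1.0, "low", "none"))
```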
A map showing the final distribution of CMS counts using the significance estimator is shown in the bottom panel of Figure \[med-dev\]. We found a total of 646 stars located at mesh positions with high significance. These sources represent a new sample of solar-type CAS candidates for membership in recently disaggregated young star clusters.
Uncertainty cone
----------------
It was necessary to corroborate that our choice of the cone aperture, $\alpha$, was adequate for the detection of groups with similar velocity vector orientations in the RAVE CAS sample. That is, we need to make sure that $\alpha$ is consistently wide compared to the uncertainty in the angle between $\hat{L}$ and $\vec{v}$. For this purpose, we constructed an error cone from the uncertainties in the $U$, $V$ and $W$ components. Using the following transformation expressions from velocity space to cone space:
$$\label{conoerror11}
\theta = \arctan{\left(\frac{V}{U}\right)},$$
$$\label{conoerror21}
\phi = \arctan{\left(\frac{W}{\sqrt{U^2 + V^2}}\right)},$$
we determined the error propagation to first order, which provides $\delta \theta$ and $\delta \phi$. In this way we can determine $\delta \alpha$ as the equivalent radius of the area $\delta \theta \times \delta \phi$, which corresponds to the aperture of the cone formed by the uncertainties. A detailed derivation of $\delta \alpha$ is included in Appendix \[App1\]. If $\delta\alpha$ is consistently smaller than the characteristic aperture $\alpha$ of the map (in our case $3^\circ$), we can trust that the aperture used is adequate for finding common kinematics between stars in our sample. We confirmed that more than 90% of the stars in our sample have an uncertainty cone aperture smaller than $\alpha$, indicating that the $\alpha$ value we use is reliable.
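The uncertainty cone check can be sketched as follows. We assume the standard first-order partial derivatives of $\theta = \arctan(V/U)$ and $\phi = \arctan(W/\sqrt{U^2+V^2})$, and the function name is our own:

```python
import numpy as np

def delta_alpha(U, V, W, dU, dV, dW):
    """First-order propagation of velocity uncertainties into the
    equivalent aperture of the uncertainty cone (radians).
    Assumes sqrt(U^2 + V^2) > 0."""
    r2 = U**2 + V**2
    r = np.sqrt(r2)
    s2 = r2 + W**2
    dtheta = np.abs(dU) * np.abs(V) / r2 + np.abs(dV) * np.abs(U) / r2
    dphi = (np.abs(dU) * np.abs(U * W) / (r * s2)
            + np.abs(dV) * np.abs(V * W) / (r * s2)
            + np.abs(dW) * r / s2)
    # equivalent radius of the error patch dtheta x dphi
    return np.sqrt(dtheta * dphi / np.pi)
```

Stars with `delta_alpha` below the adopted aperture (here $3^\circ$) can be trusted in the CMS counting.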
Results \[s:results\]
=====================
Moving Groups \[s:mg\]
----------------------
After analyzing the significant regions with the method described above, we need to know whether our selected stars have already been identified as members of YNMGs. For this purpose, we cross-matched our list with a compilation of known members of YNMGs including results from: @Torres2006 [@Torres2008] ($\beta$ Pictoris, Columba, Tucana-Horologium), @Dawson2012 [@Dawson2013] ($\eta$ Chamaeleontis), @Ducourant2014 (TW Hydrae), @Elliott2014 (AB Doradus, Argus, $\beta$ Pictoris, Carina, Columba, $\eta$ Chamaeleontis, Octans, TW Hydrae), @Galvez2010 [@Galvez2014] (Castor), @Malo2013 [@Malo2014] ($\beta$ Pictoris, TW Hydrae, Tucana-Horologium, Columba, Carina, Argus, AB Doradus), @Riedel2014 ($\eta$ Chamaeleontis, TW Hydrae, $\beta$ Pictoris, Octans, Tucana-Horologium, Columba, Carina, Argus, AB Doradus), @Gagne2015 [@Gagne2018] (Argus, Columba, $\beta$ Pictoris, AB Doradus, Carina, TW Hydrae, Tucana-Horologium), @Desilva2009 [@Desilva2013] ($\eta$ Chamaeleontis, TW Hydrae, $\beta$ Pictoris, Octans, Tucana-Horologium, Columba, Carina, Argus, AB Doradus), @Cruz2009 (AB Doradus, $\beta$ Pictoris, TW Hydrae), @Kraus2014 (Tucana-Horologium), @Makarov2000 (Carina), @Moor2013 (Columba, Carina, Argus, AB Doradus, $\beta$ Pictoris), @Murphy2015 (Octans), @Shkolnik2012 (AB Doradus, $\beta$ Pictoris, Carina, Castor, $\eta$ Chamaeleontis, Columba, TW Hydrae, Tucana-Horologium) and @zuckerman2004 (AB Doradus, $\eta$ Chamaeleontis). The catalog contains information for over 2300 members of associations at distances smaller than $\sim200$ $\mathrm{pc}$, with revised positions and the YNMG to which each belongs. We found a total of 75 matches with 10 known YNMGs, which are listed in Table 1 of Appendix \[App3\]. In Figure \[moving-groups\] we show a map, in Galactic coordinates, of these stars. The remaining 571 sources have no matches in the recent literature on YNMGs.
To corroborate the reliability of our technique, we applied the same analysis to a sample of 275 stars from BANYAN IV [@Malo2014]. With our method we recovered 173 stars of the initial sample with $S>1.5$ and 270 sources with $S>1$. Determination of membership for each candidate using typical methods [e.g. spectroscopy; see for instance @Binks2015] is beyond the scope of this study. Ideally, we would produce a table with individual membership probabilities based on known properties of YNMGs, such as average radial velocity and average distance. However, this kind of information is difficult to collect in a consistent way for most YNMGs. We were only able to make an acceptable comparison with the $\beta$ Pictoris group [distance = 18-40 pc, $\mathrm{<v_r>}=60$ km/s, extension = 40 pc; @Malo2013; @Moor2013], where 80 stars in our remaining candidate sample coincide, within the uncertainties, with the characteristics of this group. For this reason, we chose instead to analyze the location of our CMS-significant CAS in the HR diagram, focusing on the ages of the candidates, in order to highlight possibly young sources.\
![image](f7a.pdf){width="0.8\linewidth"} ![image](f7b.pdf){width="0.12\linewidth"}
![image](f8.pdf){width="0.7\linewidth"}
HR diagrams \[s:diagrams-HR\]
-----------------------------
The remaining 571 stars identified with our method may still have kinematics in common even though they are not associated with any known YNMG. For instance, a number of these stars could be unidentified members of known or new groups. However, the list may also contain a fraction of contaminant sources. This mainly comes from the fact that chromospheric activity is not exclusive to young stars: some types of evolved sources in the sub-giant and giant branches [@Oz2018], as well as spectroscopic binaries [@Fekel2002], may present chromospheric emission and be selected as RAVE CAS, because chromospheric activity can be present both before and after the main sequence phase [@Frasca2015]. As mentioned in Section 1, our main goal is to select members of recently dispersed young clusters, so we need to know which sources are consistent with young ages (10-10$^2$ Myr).
The original goal of this work is the identification of YNMG member candidates through their kinematic signature, but estimating the individual ages of our candidates to isolate probably young stars helps us to clean our sample and reinforce the results.
Combining the $T_{eff}$ and $A_{v}$ from the RAVE DR5 catalog with the distances from [@Luri2018] and the optical $V$ magnitudes from APASS DR9 [@Henden2016], we constructed the HR diagram for the CMS candidate sample. This diagram allowed us to estimate the ages and masses of the candidates with a method similar to that of [@Suarez2017], which interpolates $\mathrm{T_{eff}}$ and $\mathrm{L_{bol}}$ into the stellar models of [@Bressan2012] and [@Marigo2017] to estimate masses and ages, as well as their uncertainties. In order to remove most of the contamination by evolved sources, we limited our candidate sample to stars with $\mathrm{M_V}\geqslant 4.5$ mag. After this cutoff, we selected the stars younger than 100 Myr. The resulting clean sample contains 290 candidates, i.e. $\sim$50% of the sample of YNMG member candidates. Of the 280 removed stars, 70 are $H\alpha$ emitters. This percentage is consistent with the fraction of CAS in RAVE with high H$\alpha$ emission (see Section \[s:sample\]). This shows that the use of HR diagrams helps to clean the YNMG member candidate selection.
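A crude stand-in for this age/mass assignment could look like the sketch below. It is not the interpolation scheme of the cited works: it simply assigns each star the age and mass of the nearest model point in the (log $T_{eff}$, log $L_{bol}$) plane, and the model grid arrays are hypothetical inputs.

```python
import numpy as np

def assign_age_mass(teff, lbol, grid_teff, grid_lbol, grid_age, grid_mass):
    """Nearest-neighbour stand-in for isochrone interpolation: return
    the age and mass of the closest model point in log-log space."""
    d2 = ((np.log10(grid_teff) - np.log10(teff))**2
          + (np.log10(grid_lbol) - np.log10(lbol))**2)
    k = int(np.argmin(d2))
    return grid_age[k], grid_mass[k]

def select_young(ages_myr, mv, age_max_myr=100.0, mv_min=4.5):
    """Boolean mask reproducing the cuts in the text:
    M_V >= 4.5 mag and age < 100 Myr."""
    return (np.asarray(mv) >= mv_min) & (np.asarray(ages_myr) < age_max_myr)
```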
In Figure \[HR\] we show the HR diagram for the members and candidates of YNMGs. All members and candidates lie between the 1 and 100 Myr isochrones of [@Marigo2017] and between the 0.5 and 1.6 $M_{\odot}$ evolutionary tracks of [@Bressan2012].
Discussion and Summary \[s:discussion\]
=======================================
From our analysis, we found that the kinematic identification of YNMGs in the RAVE CAS sample directly from the velocity ellipsoid projections was not satisfactory. This is likely because the sample is not large enough, or perhaps because the number of CAS sources that we can identify as known members of individual YNMGs is small. For a larger sample of stars, including sources located in a larger volume, such a method might be implemented successfully and its velocity ellipsoid could distinguish between groups with different kinematic signatures.
Precisely this problem motivated the alternative method, the CMS, that we present in this paper. Our method directly uses the orientation of the velocity vectors in velocity space to identify groups of stars with a common kinematic signature. This makes our method distinct from those typically used.
While the idea behind the CMS is relatively simple, we showed that it can be a reliable tool for the identification of kinematically cold groups. Moreover, the CMS appears to work well with small groups in relatively small samples like the RAVE CAS. In this sense, our method is not exclusive to groups of young stars: in principle, the CMS can be applied to any catalog of sources with observational parameters reliable enough to construct PV6 vectors. This makes it well suited for other applications. We applied our method to samples of known groups and detected the corresponding overdensities in the $(\theta, \phi)$ map.
With our method, we successfully identified 75 members of YNMGs previously reported in the literature (see Appendix \[App3\]). For the remaining sources in the sample, our analysis of the HR diagram indicates that a significant number of sources are consistent with ages between 1 and 100 Myr, suggesting that the RAVE CAS sample possibly contains dozens of young stars that belong to known, or even new, YNMGs in the solar neighborhood. Follow-up work should focus on the determination of membership for the stars in Table \[tab:results\], based on youth signatures from spectroscopy.
ACKNOWLEDGMENTS.
VRP and GS acknowledge support from a graduate studies fellowship from CONACYT/UNAM Mexico. VRP, CRZ and GS acknowledge support from program UNAM-DGAPA-PAPIIT IN108117, Mexico. LA acknowledges support from the UNAM-DGAPA-PAPIIT IG100125 project.
This paper makes use of RAVE data. Funding for RAVE has been provided by: the Australian Astronomical Observatory; the Leibniz-Institut fuer Astrophysik Potsdam (AIP); the Australian National University; the Australian Research Council; the French National Research Agency; the German Research Foundation (SPP 1177 and SFB 881); the European Research Council (ERC-StG 240271 Galactica); the Istituto Nazionale di Astrofisica at Padova; The Johns Hopkins University; the National Science Foundation of the USA (AST-0908326); the W. M. Keck foundation; the Macquarie University; the Netherlands Research School for Astronomy; the Natural Sciences and Engineering Research Council of Canada; the Slovenian Research Agency; the Swiss National Science Foundation; the Science & Technology Facilities Council of the UK; Opticon; Strasbourg Observatory; and the Universities of Groningen, Heidelberg and Sydney. The RAVE web site is at https://www.rave-survey.org.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This work has made use of data from the European Space Agency (ESA) mission [*Gaia*]{} (<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement.
Some figures in this paper, as well as distance estimates were obtained with the fabulous TOPCAT software, which is described in [@topcat].
Allen, L., Megeath, S. T., Gutermuth, R., et al. 2007, Protostars and Planets V, 361
Antoja, T., [Valenzuela]{}, O., [Pichardo]{}, B., [et al.]{} 2009, , 700, L78
Antoja, T., [Helmi]{}, A., [Figueras]{}, F., & [Romero-G[ó]{}mez]{}, M. 2012, in European Physical Journal Web of Conferences, Vol. 19, European Physical Journal Web of Conferences, 05002
Astraatmadja, T. L., & [Bailer-Jones]{}, C. A. L. 2016, , 832, 137
Bailer-Jones, C. A. L. 2015, , 127, 994
Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Mantelet, G., & Andrae, R. 2018, arXiv:1804.10121
Binks, A. S., Jeffries, R. D., & Maxted, P. F. L. 2015, , 452, 173
Bressan, A., Marigo, P., Girardi, L., et al. 2012, , 427, 127
Cruz, K., Looper, D., Prato, L., & Kirkpatrick, J. D. 2009, NOAO Proposal,
Dawson, P., Scholz, A., Ray, T. P., et al. 2013, , 429, 903
da Silva, L., Torres, C. A. O., de La Reza, R., et al. 2009, , 508, 833
De Silva, G. M., D’Orazi, V., Melo, C., et al. 2013, , 431, 1005
Dawson, P., Scholz, A., Ray, T. P., et al. 2012, arXiv:1211.4484
Ducourant, C., Teixeira, R., Galli, P. A. B., et al. 2014, , 563, A121
Elliott, P., Bayo, A., Melo, C. H. F., et al. 2014, VizieR Online Data Catalog, 356,
Fekel, F. C., Henry, G. W., Eaton, J. A., Sperauskas, J., & Hall, D. S. 2002, , 124, 1064
Frasca, A., Biazzo, K., Lanzafame, A. C., et al. 2015, , 575, A4
Gagn[é]{}, J., Faherty, J. K., Cruz, K. L., et al. 2015, , 219, 33
Gagn[é]{}, J., Mamajek, E. E., Malo, L., et al. 2018, , 856, 23
Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, , 595, A2
Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, arXiv:1804.09365
G[á]{}lvez-Ortiz, M. C., Clarke, J. R. A., Pinfield, D. J., et al. 2010, , 409, 552
G[á]{}lvez-Ortiz, M. C., Kuznetsov, M., Clarke, J. R. A., et al. 2014, , 439, 3890
Henden, A. A., Templeton, M., Terrell, D., et al. 2016, VizieR Online Data Catalog, 2336,
Hoogerwerf, R., & Aguilar, L. A. 1999, , 306, 394
Kraus, A. L., Shkolnik, E. L., Allers, K. N., & Liu, M. C. 2014, , 147, 146
Koppelman, H. H., Virginiflosia, T., Posti, L., Veljanoski, J., & Helmi, A. 2018, arXiv:1804.07530
Kunder, A., [Kordopatis]{}, G., [Steinmetz]{}, M., [et al.]{} 2017, , 153, 75
Lada, C. J., & [Lada]{}, E. A. 2003, , 41, 57
Luri, X., Brown, A. G. A., Sarro, L. M., et al. 2018, arXiv:1804.09376
Malo, L., Doyon, R., Lafreni[è]{}re, D., et al. 2013, , 762, 88
Malo, L., Doyon, R., Feiden, G. A., et al. 2014, , 792, 37
Makarov, V. V., & Urban, S. 2000, , 317, 289
Marigo, P., Girardi, L., Bressan, A., et al. 2017, , 835, 77
Matijevi[č]{}, G., [Zwitter]{}, T., [Munari]{}, U., [et al.]{} 2010, , 140, 184
Matijevi[č]{}, G., [Zwitter]{}, T., [Bienaym[é]{}]{}, O., [et al.]{} 2012, , 200, 14
Mo[ó]{}r, A., Szab[ó]{}, G. M., Kiss, L. L., et al. 2013, , 435, 1376
Murphy, S. J., & Lawson, W. A. 2015, VizieR Online Data Catalog, 744,
[Ö]{}zdarcan, O., & Dal, H. A. 2018, arXiv:1801.06087
Pelupessy, F. I., & Portegies Zwart, S. 2012, , 420, 1503
Riedel, A. R., Finch, C. T., Henry, T. J., et al. 2014, , 147, 85
Riedel, A. R., Blunt, S. C., Lambrides, E. L., et al. 2017, , 153, 95
Roweis, S. T., & Saul, L. K. 2000, science, 290, 2323
Ru[é]{}, F., & [Bijaoui]{}, A. 1997, Experimental Astronomy, 7, 129
Sch[ö]{}nrich, R. 2012, , 427, 274
Sicilia-Aguilar, A., Hartmann, L. W., Hern[á]{}ndez, J., Brice[ñ]{}o, C., & Calvet, N. 2005, , 130, 188
Shkolnik, E. L., Anglada-Escud[é]{}, G., Liu, M. C., et al. 2012, , 758, 56
Stelzer, B., & Neuh[ä]{}user, R. 2000, , 361, 581
Su[á]{}rez, G., Downes, J. J., Rom[á]{}n-Z[ú]{}[ñ]{}iga, C., et al. 2017, , 154, 14
Taylor, M. B. 2005, Astronomical Data Analysis Software and Systems XIV, 347, 29
Traven, G., Zwitter, T., van Eck, S., et al. 2015, VizieR Online Data Catalog, 358,
Torres, C. A. O., Quast, G. R., da Silva, L., et al. 2006, , 460, 695
Torres, C. A. O., Quast, G. R., Melo, C. H. F., & Sterzik, M. F. 2008, Handbook of Star Forming Regions, Volume II, 5, 757
[Ž]{}erjal, M., [Zwitter]{}, T., [Matijevi[č]{}]{}, G., [et al.]{} 2013, , 776, 127
Zuckerman, B., & Song, I. 2004, , 42, 685
Zwitter, T., Kos, J., [Ž]{}erjal, M., & Traven, G. 2016, ASPC, 507, 201
Uncertainty cone {#App1}
================
The following error propagation is a linear approximation used to obtain a suitable aperture $\alpha$ for the CMS. Starting from the definitions of $\theta$ and $\phi$ in the transformation from velocity space $$\theta = \arctan{\left(\frac{V}{U}\right)},$$
$$\phi = \arctan{\left(\frac{W}{\sqrt{U^2 + V^2}}\right)},$$
we calculated the error propagation to first order: $$\delta \theta = \delta U \left(\frac{V}{U^2 + V^2}\right) + \delta V \left(\frac{U}{U^2 + V^2}\right)$$ and $$\delta \phi = \delta U \left(\frac{UW}{(\sqrt{U^2 + V^2})\,(U^2 + V^2 + W^2)}\right) + \delta V \left(\frac{VW}{(\sqrt{U^2 + V^2})\,(U^2 + V^2 + W^2)}\right) + \delta W \left(\frac{\sqrt{U^2 + V^2}}{U^2 + V^2 + W^2}\right)$$ Considering that $\delta \alpha$ is the equivalent radius of the area formed by $\delta \theta \times \delta \phi$, $\delta \alpha$ is calculated as: $$\delta \alpha = \sqrt{\frac{|\delta \theta| \times |\delta \phi|}{\pi}}$$\
YNMG detected members in the CAS RAVE Sample {#App3}
============================================
[lccccccccccccc]{} 23093711-0225551 & 23:09:37.10 & -02:25:55.0 & 10.933 & 8.567 & 60.842 & -45.963 & -11.21 & 4000.0 & 0.215 & 0.734 & 12.27 & CAR\
23215251-6942118 & 23:21:52.50 & -69:42:12.0 & 10.008 & 8.657 & 41.368 & -32.023 & 5.6 & 5248.0 & 0.816 & 1.045 & 19.08 & Unknown\
05023042-3959129 & 05:02:30.40 & -39:59:13.0 & 10.654 & 8.727 & 35.104 & -23.752 & 26.329 & 4561.0 & 0.218 & 0.752 & 39.81 & ABDMG\
22463348-3928451 & 22:46:33.50 & -39:28:45.0 & 9.528 & 8.21 & 75.105 & -3.24 & -1.64 & 5175.0 & 0.839 & 1.08 & 16.49 & Unknown\
05332558-5117131 & 05:33:25.60 & -51:17:13.0 & 11.805 & 8.986 & 42.755 & 26.101 & 18.825 & 4000.0 & 0.153 & 0.706 & 35.37 & THA\
03241504-5901125 & 03:24:15.00 & -59:01:13.0 & 12.108 & 9.547 & 43.729 & 7.913 & 19.376 & 4000.0 & 0.239 & 0.734 & 10.04 & COL\
05451623-3836491 & 05:45:16.30 & -38:36:49.0 & 11.016 & 9.561 & 10.622 & 7.65 & 30.688 & 5230.0 & 1.256 & 1.245 & 10.85 & COL\
08430040-5354076 & 08:43:00.40 & -53:54:08.0 & 11.099 & 9.755 & -23.293 & 22.849 & 12.148 & 5249.0 & 0.91 & 1.092 & 16.57 & ARG\
13544209-4820578 & 13:54:42.10 & -48:20:58.0 & 11.024 & 9.285 & -31.868 & -23.39 & 0.338 & 5000.0 & 0.683 & 1.047 & 16.27 & Unknown\
11594226-7601260 & 11:59:42.30 & -76:01:26.0 & 11.139 & 9.14 & -41.025 & -6.19 & 14.435 & 3996.0 & 0.465 & 0.735 & 3.491 & ECh\
\[tab:members\]
YNMG Candidates in the CAS RAVE Sample {#App2}
======================================
[lcccccccccccc]{} 23123243-0240516 & 23:12:32.46 & -02:40:51.9 & 12.915 & 10.8 & 17.44 & -4.568 & 11.995 & 4000.0 & 0.21 & 0.732 & 13.05\
21323568-5558015 & 21:32:35.70 & -55:58:01.6 & 12.473 & 10.987 & -18.23 & -12.874 & 69.72 & 5750.0 & 1.924 & 1.238 & 14.73\
23581157-3850073 & 23:58:11.58 & -38:50:07.5 & 11.689 & 10.104 & -28.779 & -30.763 & 4.097 & 4964.0 & 0.432 & 0.879 & 29.25\
00220533-4050257 & 00:22:05.34 & -40:50:25.7 & 13.19 & 11.444 & -5.68 & -19.514 & 4.565 & 5000.0 & 0.727 & 1.072 & 15.12\
02523097-5447531 & 02:52:30.99 & -54:47:53.3 & 13.295 & 11.19 & 0.693 & -30.811 & -0.92 & 4500.0 & 0.191 & 0.717 & 52.92\
00263489-6545359 & 00:26:34.90 & -65:45:36.0 & 11.289 & 9.611 & 27.489 & 4.21 & -14.657 & 4755.0 & 0.321 & 0.829 & 32.38\
05213171-3641084 & 05:21:31.73 & -36:41:08.5 & 12.919 & 10.749 & -1.924 & 17.613 & -11.656 & 4000.0 & 0.341 & 0.731 & 5.331\
10573417+0048243 & 10:57:34.16 & +00:48:24.2 & 13.533 & 11.348 & -40.667 & 16.654 & -28.338 & 4465.0 & 0.178 & 0.706 & 60.11\
12530218-1549546 & 12:53:02.15 & -15:49:54.6 & 14.267 & 11.37 & -3.745 & 17.96 & -5.038 & 3814.0 & 0.128 & 0.704 & 29.81\
21480570-0127397 & 21:48:05.71 & -01:27:39.9 & 11.642 & 10.017 & 14.8 & 12.746 & -1.185 & 5003.0 & 0.336 & 0.8 & 43.78\
\[tab:results\]
---
abstract: 'In models with large extra dimensions all gauge singlet fields can in principle propagate in the extra dimensional space. We have investigated possible constraints on majoron models of neutrino masses in which the majorons propagate in extra dimensions. It is found that astrophysical constraints from supernovae are many orders of magnitude stronger than previous accelerator bounds. Our findings suggest that unnatural types of the “see-saw" mechanism for neutrino masses are unlikely to occur in nature, even in the presence of extra dimensions.'
author:
- 'Steen Hannestad , Petteri Ker[ä]{}nen , Francesco Sannino '
title: A supernova constraint on bulk majorons
---
Introduction
============
Over the past few years there has been an enormous interest in models with extra dimensions beyond the 3+1 dimensions of the standard model. Such extra dimensions can be large if the gauge fields of the standard model are constrained to a 3+1 dimensional brane in the higher dimensional bulk space. This idea is particularly interesting because it may be connected with a low quantum gravity scale, perhaps only slightly above the scale of electroweak symmetry breaking. In Refs. [@add98; @Antoniadis; @add99] such a model was proposed in which the standard model brane is embedded in a bulk space of dimension $\delta$ which is toroidally compactified. Because gravity is allowed to propagate in the full bulk space it will look weaker to an observer confined to the brane. By Gauss’ law the effective four-dimensional Planck scale, $M_P$, can be related to the true 4+$\delta$ dimensional energy scale of gravity, $M$, by $$M_P^2 = (2 \pi R)^\delta M^{\delta+2},$$ where $R$ is the common radius of all the extra dimensions. If $R$ is sufficiently large $M$ can be as small as the electroweak scale. Effectively gravity is weak because the field lines leak into the extra dimensions.
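As a rough numerical check of this relation (a back-of-the-envelope estimate of ours, not taken from the text), one can solve for $R$ given $M_P \simeq 1.22 \times 10^{19}$ GeV and convert to centimeters with $\hbar c \simeq 1.97 \times 10^{-14}$ GeV cm:

```python
import math

def radius_cm(delta, M_GeV, M_P_GeV=1.22e19, hbarc_GeV_cm=1.97e-14):
    """Compactification radius from M_P^2 = (2 pi R)^delta M^(delta+2),
    solved for R in natural units and converted to cm."""
    R_natural = (M_P_GeV**2 / M_GeV**(delta + 2))**(1.0 / delta) / (2.0 * math.pi)
    return R_natural * hbarc_GeV_cm
```

For $\delta = 2$ and $M = 1$ TeV this gives $R$ of order a fraction of a millimeter, the well-known benchmark of the ADD scenario; larger $\delta$ yields much smaller radii.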
One of the most interesting features of these models is the presence of a tower of new modes for particles propagating in the bulk. The momentum of particles propagating in extra dimensions is discretised because the extra dimensions are compact. To an observer on the brane this effectively looks like a tower of new states (Kaluza-Klein) for each bulk particle, with a 4D mass related to the extra dimensional momentum. The energy spacing between these states is then in general of order $R^{-1}$ which can be very small.
For the graviton this property has been used to put tight constraints on the possible radii of the extra dimensions. However, any particle which is a singlet in the standard model could in principle propagate in the bulk. Examples of this are sterile neutrinos and axions.
Some authors, for example, have investigated the possibility that right handed neutrinos might propagate in the extra dimensions and that their resultant suppressed couplings to the usual left handed neutrinos could account for the low scale of neutrino masses [@DDG; @ADDM; @Mohapatra:1999zd; @Mohapatra:1999af; @Dvali:1999cn; @Barbieri:2000mg; @Mohapatra:2000wn; @Ioannisian:mu]. Other authors [@FP; @MN; @DK; @C; @BDN; @Abazajian:2000hw; @Ioannisian:1999cw] have studied the constraints on such models due to experiment and observation. An alternate approach [@MPP], corresponding to the conventional see-saw mechanism, has also been investigated by some authors. In this approach a Higgs singlet, which carries no standard model gauge quantum numbers, is allowed to propagate in the extra dimensions and give mass to right handed neutrinos which, for simplicity, are assumed to live on the brane.
Recently in Ref. [@MNSS] a model was studied in which the lepton number is spontaneously broken and the associated Goldstone boson (the “Majoron") is present (see also [@MRS]). This model was used to compute the decay rate of the intermediate vector boson $Z$ to two neutrinos and the Majoron (denoted by $J$) or one of its “Kaluza-Klein" excitations [@MNSS]. Using the accurately known value of the $Z$ width [@rpp], bounds on the dimensionless coupling of the two neutrinos to the Majoron were obtained, allowing for any number of extra dimensions and any intrinsic mass scale (see also [@Carone:2002ss]). The related neutrinoless double beta decay process $n+n\rightarrow p+p+e^{-}+e^{-}+(J)$ has already been treated in a model of the present type [@MPP]. A detailed discussion of supernova constraints in the $3+1$ dimensional theory has very recently been given in [@TPV].
In this paper we use this model [@MNSS] to compute processes of astrophysical interest. We first show that when compact extra dimensions are present, new processes become relevant and can heavily affect supernova dynamics. We then show that supernova constraints on the dimensionless couplings are many orders of magnitude more stringent than the accelerator bounds. Our findings seem to discourage unnatural see-saw models of neutrino masses that are still allowed by accelerator bounds.
We summarize in Section II the Majoron model extended to the extra-dimensional brane world. In Section III we briefly review the accelerator constraints. The supernova constraints are presented in Section IV. In the concluding Section V we also discuss the possibility that extra-dimensional Majorons are a source of dark matter.
A Majoron Model in Extra Dimensions
===================================
In the original Majoron model of Chikashige, Mohapatra and Peccei [@CMP] the lepton number associated with massive Majorana neutrinos is spontaneously broken. Here, the notations of Ref. [@SV] and Ref. [@MNSS] for the $3+1$ and the extra dimensional case will be followed, respectively. In addition to the usual Higgs doublet $$\phi = \left(
\begin{array}{c}
\phi ^{+} \\ \phi ^{0}
\end{array}
\right) \qquad l=0 \label{usualhiggs}$$ which has lepton number $l$ equal to zero, the model contains an electrically neutral complex singlet field $$\Phi \qquad l=-2 \ . \label{singlethiggs}$$ It is required that the Higgs potential constructed from $\Phi$ and $\phi$ conserves lepton number. The vacuum expectation values are: $$\begin{aligned}
\langle\Phi \rangle &=&\langle \Phi^{\ast} \rangle \ , \nonumber \\
\langle\phi ^{0}\rangle &=&\langle{\phi^{0}}^{\dagger}
\rangle=\lambda \approx 2^{-\frac{1}{4}}G_{F}^{-\frac{1}{2}}\ ,
\label{vacvalues}\end{aligned}$$ where $G_F$ is the Fermi constant and $\langle\Phi\rangle$ (whose non-zero value violates lepton number) sets a new scale in the theory.
The three light physical two component neutrinos $\nu_{1},\nu_2,\nu_3$ acquire Majorana masses $m_1,m_2,m_3$ which are of order $\epsilon^2 {\cal M}_{H}$ with $$\epsilon = {\cal O} \left( \frac{\cal D}{{\cal M}_H}\right) \ .
\label{eps}$$ According to a standard “see-saw mechanism" [@ss] $\displaystyle{{\cal D}/{\lambda}}$ represents the “Dirac-type" coupling constants for the bare light neutrinos while the $3\times
3$ matrix $\displaystyle{{{\cal M}_H}/{\langle\Phi\rangle}}$ represents the Majorana type coupling constants for the bare heavy (or “right handed") neutrinos. Assuming the heavy scale ${\cal
M}_H$ to be substantially larger than the energies in play, in the following we focus our attention only on the light neutrinos. The coupling of the Majoron $J$, identified as $J={\rm Im}\,\Phi$, to the physical neutrino fields $\nu_1,\nu_2,\nu_3$ in $3+1$ dimensions is: $${\cal L}_{J}=i\frac{J}{2}\sum_{a,b=1}^3 \nu^T_{a}\sigma_2 g_{ab}
\nu_b + {\rm h.c.} \ . \label{lj1}$$ It turns out [@SV] that the coupling constants have the expansion: $$g_{ab}=-\frac{1}{\langle\Phi \rangle}m_a \delta_{ab} + {\cal
O}\left(\epsilon^4 {\cal M}_{H} \right) \ , \label{gexp}$$ where the leading term is diagonal in generation space. It is convenient to express this leading term using four component ordinary Dirac spinors $$\psi_{a} =\left(
\begin{array}{c}
\nu _{a } \\ 0\end{array} \right) \ , \label{4comp}$$ in a $\gamma_5$ diagonal representation of the Dirac matrices the relevant Lagrangian term reads: $${\cal L}_{J}=i\frac{J}{2\langle\Phi \rangle}\sum_{a=1}^3 m_a
\left(\psi^T_a C^{-1}\frac{1+\gamma_5}{2}\psi_a +
\bar{\psi}_a\frac{1-\gamma_5}{2}C \bar{\psi}^T_a\right) \ .
\label{lj2}$$ Here $C$ is the charge conjugation matrix of the Dirac theory.
The generalization of the present model to the case where the field $\Phi$ propagates in $\delta$ extra dimensions has been carried out in detail in Ref. [@MNSS]. These extra dimensions, denoted as $y_{i}$ with $i=1,\ldots,\delta$, are assumed to be toroidally compactified with radii $R_i$. For simplicity we assume that all the radii $R_i$ are equal to the same value $R$. $\Phi\left(x,y\right)$ continues to carry “engineering dimension" one, as it would in $3+1$ dimensional space-time, and via a Fourier expansion with respect to the compactified coordinates we have: $$\begin{aligned}
\Phi\left(x,y\right)={\rm Norm}\sum_{n_1,\ldots,n_{\delta}}
\Phi_{n_1,\ldots, n_{\delta}}\left(x\right)e^{\frac{i}{R}\left(n_1
y_1+ \cdots \right)} \ , \label{normalization}\end{aligned}$$ where $\displaystyle{{\rm Norm}=\left[2\pi M
R\right]^{-\frac{\delta}{2}}}$ and $M$ represents the intrinsic scale of the new theory.
Each general Kaluza-Klein field receives a mass squared increment $$\Delta m^2_{n_1,n_2,\ldots}=\frac{1}{R^2}\left(n_1^2+n_2^2+\cdots
n_{\delta}^2 \right) \ . \label{increment}$$ The zero-mass Majoron $J_{0,0,\ldots,0}(x)$ corresponds to the previously studied $3+1$ massless Majoron. The fields ${\rm Re}\, \Phi_{n_1,n_2,\ldots}(x)$ are expected to receive a substantial increment from the pure Higgs sector of the theory [@MNSS] and will be neglected in the following.
The intrinsic scale $M$ and the compactification radius $R$ can be related to each other by assuming, in the “brane" model, that the graviton propagates in the full $(3+\delta)+1$ dimensional space-time. Then the ordinary form of Newton's law of gravitation is only an approximation, valid at distances much greater than $R$. The Newtonian gravitational constant (inverse square of the Planck mass $M_P$) is obtained as a phenomenological parameter from $$\left(\frac{M_P}{M}\right)^2=\left(2\pi M
R\right)^{\delta}=\frac{1}{\left(\rm Norm\right)^2} \ .
\label{Planck}$$ Considering $M_P$ as an experimental input (and approximating $R_1=R_2=\cdots =R_{\delta}$), Eq. (\[Planck\]) shows that $M$ is the only free parameter introduced to describe the extra-dimensional aspect of the present simple theory once $\delta$ is fixed.
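A quick numerical illustration of this relation (ours, not part of the original analysis; the values $M_P \simeq 1.2\times 10^{19}$ GeV and $1~{\rm GeV}^{-1} \simeq 1.97\times 10^{-14}$ cm are assumed conversions): solving Eq. (\[Planck\]) for the compactification radius gives the familiar sub-millimetre radius for $M = 1$ TeV and $\delta = 2$.

```python
import math

M_P = 1.2e19           # Planck mass in GeV (assumed value)
GEV_INV_CM = 1.97e-14  # 1 GeV^-1 in cm (hbar*c conversion)

def radius_cm(M_GeV, delta):
    """Compactification radius R from (M_P/M)^2 = (2*pi*M*R)^delta."""
    R_GeV_inv = (M_P / M_GeV) ** (2.0 / delta) / (2.0 * math.pi * M_GeV)
    return R_GeV_inv * GEV_INV_CM

for delta in (2, 4, 6):
    # delta=2 gives roughly 4e-2 cm, i.e. sub-millimetre; larger delta
    # shrinks R dramatically
    print(delta, radius_cm(1000.0, delta))
```

The steep decrease of $R$ with $\delta$ is why only the lowest numbers of extra dimensions are probed by short-distance gravity experiments.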
We expect ${\cal M}_H/\langle\Phi \rangle$ to be roughly of order unity and $\langle\Phi \rangle$ of the order of $M$. Finally, the Yukawa interactions of the Majoron and its Kaluza-Klein excitations with the light neutrinos are described by (cf. (\[lj2\])): $$\begin{aligned}
{\cal L}_{J}&=&\sum_{a=1}^3 \sum_{n_1,\ldots,n_{\delta}} i
g_{aa;n_1,\ldots,n_{\delta}} J_{n_1,\ldots,n_{\delta}} \times
\nonumber
\\&&\times\left(\psi^T_a C^{-1}\frac{1+\gamma_5}{2}\psi_a +
\bar{\psi}_a\frac{1-\gamma_5}{2}C \bar{\psi}^T_a\right) \ ,
\label{lj3}\end{aligned}$$ to leading order in the neutrino masses, $m_a$ with $$g_{aa;n_1,\ldots,n_{\delta}}\equiv g_{aa}=\frac{m_a}{2\langle\Phi
\rangle} \frac{M}{M_P} \ .$$ The vacuum expectation value $\langle \Phi \rangle$ (barring unnatural fine tuning) is very roughly of the order of $M$, so that $g_{aa}$ is naturally of the order of $\displaystyle{{m_a}/{M_P}}\approx 10^{-28}$ regardless of the number of extra dimensions [@MNSS]. Clearly the exact value of this universal coupling crucially depends on the unknown dynamics behind the spontaneous breaking of the lepton number [@MNSS].
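The quoted order of magnitude is easy to reproduce; in the sketch below the neutrino mass $m_a \sim 1$ eV and $\langle\Phi\rangle \sim M$ are illustrative assumptions of ours, not values fixed by the model.

```python
m_a = 1.0e-9   # illustrative neutrino mass: 1 eV expressed in GeV (assumption)
M_P = 1.2e19   # Planck mass in GeV

# With <Phi> ~ M the factor M/(2<Phi>) is O(1), so g_aa ~ m_a / M_P
g_aa = m_a / (2.0 * M_P)
print(g_aa)    # a few times 1e-29, i.e. of order 1e-28 as quoted
```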
Review of Constraints from Accelerators
=======================================
In [@MNSS] it has been shown that the accurately known value of the Z width can provide information about the coupling of two neutrinos to the Majoron. Both the $3+1$ dimensional case and the case in which one adopts a “brane" world picture with the Majoron free to experience the extra dimensions have been studied. Bounds on the dimensionless coupling constants were obtained, allowing for any number of extra dimensions and any intrinsic mass scale. If the uncertainty of the $Z$’s invisible width $1.7 \times
10^{-3}$ GeV is roughly taken as an indication of the maximum allowed value for the total width into a Majoron and two neutrinos the following bounds on $\mid g_{aa} \mid$ were obtained in [@MNSS] for $M_S = 10^4~GeV$: $\mid g_{aa} \mid <3.4 \times 10^{-12}$, $\mid g_{aa} \mid < 2.3
\times10^{-10}$, $\mid g_{aa} \mid < 1.5\times10^{-8}$ for $\delta=2,3,4$, respectively. These are much stronger bounds than the one obtained for the model in $3+1$ space-time dimensions which is $\mid g_{aa} \mid < 0.11$.
If a technically natural see-saw model is adopted, the predicted coupling constants are far below these upper bounds. In addition, for this natural model, the effect of extra dimensions is to decrease the predicted partial Z width, the increase due to many Kaluza-Klein excitations being compensated by the decrease of their common coupling constant.
We shall see in the following that constraints from supernovae are much more stringent than the ones above.
Constraints from supernovae
===========================
[*Supernova cooling —*]{} The proto-neutron stars created by core-collapse supernovae are born with core temperatures of 30-50 MeV. The main cooling mechanism for these stars is thermal surface emission of neutrinos on a timescale of a few seconds. The total energy emitted is of the order of a few $10^{53}$ erg, with roughly equal amounts in all flavours of neutrinos and antineutrinos. This emission has been observed from SN1987A by Kamiokande [@Hirata:ad], IMB [@Bratton:ww] and Baksan [@baksan], all observing $\bar\nu_e$ events. The results from SN1987A are compatible with theoretical supernova models and therefore constrain any non-standard cooling mechanism that would carry away too much energy. Since majorons from the KK-tower will be produced in the supernova and carry away energy, the SN1987A data can be used to constrain $g_{aa}$. This has been done several times in the literature for the 3+1 dimensional majoron models [@Aharonov:ee; @Aharonov:ik; @Choi:1987sd; @Grifols:1988fg; @Berezhiani:gf; @Berezhiani:za; @Choi:1989hi; @Chang:1993yp; @Pilaftsis:1993af; @Berezhiani:1994jc; @Kachelriess:2000qc; @Tomas:2001dh]. For these models, only a relatively small band in parameter space is excluded, for the following reason: for large $g_{aa}$ the majorons are tightly coupled inside the neutron star, and only escape via surface emission, like neutrinos. Therefore they cannot carry away most of the energy and supernovae do not yield significant constraints. On the other hand, for small $g_{aa}$ the majorons do escape freely, but they are not produced in significant numbers. The end result is that a band of roughly $3 \times 10^{-7}
< g_{aa} < 2 \times 10^{-5}$ is excluded [@Kachelriess:2000qc].
For the $3+1+\delta$ models the situation is different because each KK-mode is very weakly coupled. Therefore we are always in the limit where majorons escape freely once they are produced, and do not need to worry about possible surface emission.
A fairly robust constraint on such “bulk emission” is the one proposed by Raffelt [@Raffelt:wa], that the emissivity of the neutron star medium should be $$\epsilon \lesssim 10^{19} \,\, {\rm erg} \, {\rm g}^{-1} \, {\rm s}^{-1}.$$
In the 3+1 dimensional case where majorons are strictly massless the neutrino pair annihilation $\nu \bar\nu \to J$ is not allowed kinematically and the most important processes are $\nu \bar\nu
\to JJ$ and $\nu \to \bar\nu J$. However, in the present case, most of the emitted majorons have mass of order $T$, and the simple pair-annihilation is by far the most important process. The squared and spin-averaged matrix element for this process (per single Kaluza-Klein excitation) is $$|A|^2 = \frac{1}{2} g_{aa}^2 p_1 \cdot p_2.$$ The emissivity per volume of the medium is then $$\begin{aligned}
Q(m_J) & = & \int {d\tilde{p}_1}{d\tilde{p}_2}{d\tilde{p}_J}(2\pi)^4
f_\nu(p_1)f_\nu(p_2)
\nonumber \\
&& \,\, \times |A|^2 \delta^4(p_1+p_2-p_J) (E_1+E_2),\end{aligned}$$ where $d\tilde p = d^3 {\bf p}/(2\pi)^3 2E$ and $f_\nu$ are the thermal distributions of the incoming neutrinos, $f_\nu \equiv e^{-p_\nu/T}$. Doing the integral gives $$Q(m_J) = \frac{1}{128 \pi^3} g_{aa}^2 T^5 x^4 K_2(x) \,\, , \,\,\,
x \equiv m_J/T \ ,$$ where $K_2(x)$ is a Bessel function. However, we also need to sum over the KK-tower in order to obtain the total emissivity $$\begin{aligned}
Q & = &\frac{2\pi^{\delta/2}}{\Gamma(\delta/2) (2\pi)^{\delta}}
\frac{M_{P}^2 T^{5+\delta}}{128\pi^3 M^{2+\delta}}g_{aa}^2 \int_0^\infty dx\, x^{3+\delta} K_2(x) \\
& = & \delta \pi^{(\delta-6)/2} 2^{\delta-5}
\frac{M_{P}^2 T^{5+\delta}}{(2\pi)^\delta M^{2+\delta}}
g_{aa}^2 \Gamma(3+\delta/2)\end{aligned}$$ Using the rough equality $Q = \epsilon \langle \rho \rangle$, $\langle \rho \rangle \simeq 4 \times 10^{14} \,\, {\rm g} \, {\rm cm}^{-3}$, this can be translated into a bound on $g_{aa}$ and $M$ which is $$g_{aa} \lesssim X M_{\rm TeV}^{1+\delta/2} T_{30}^{-(5+\delta)/2},
\label{Eq:cooling}$$ with the following values for $X$ $$X = \cases{1.0 \times 10^{-21} & for $\delta=2$ \cr 2.1 \times
10^{-17} & for $\delta=4$ \cr 4.6 \times 10^{-13} & for
$\delta=6$}$$ In the above, $T_{30} = T/(30 \,\, {\rm MeV})$. In all figures we use $T_{30} = 1$. The bounds are shown in Fig. 1 as a function of $M$. However, from considerations of graviton emission, strong bounds on $M$ already exist [@Cullen:1999hc; @Barger:1999jf; @hanhart; @hanhart2; @Hannestad:2001jv; @Hannestad:2001xi]. The most recent, and most restrictive, limits are those from Ref. [@Hannestad:2001xi]. These are shown as the thicker lines in the left part of the figure. Values of $M$ in this range are already excluded from graviton emission arguments [@Hannestad:2001xi].
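The quoted values of $X$ can be cross-checked by equating the summed emissivity $Q$ to $\epsilon\langle\rho\rangle$ and solving for $g_{aa}$. The sketch below is our own numerical check, not part of the original analysis; it assumes $M_P = 1.2\times10^{19}$ GeV and standard natural-unit conversions.

```python
import math

# Inputs quoted in the text (assumed conventions)
M_P = 1.2e19   # Planck mass, GeV
M   = 1.0e3    # fundamental scale, GeV (M_TeV = 1)
T   = 0.030    # core temperature, GeV (T_30 = 1)
eps = 1.0e19   # Raffelt emissivity limit, erg/g/s
rho = 4.0e14   # mean core density, g/cm^3

# Unit conversions (hbar = c = 1)
ERG_TO_GEV    = 624.15
CM_TO_GEV_INV = 5.068e13
S_TO_GEV_INV  = 1.519e24

# Maximum allowed emissivity per volume, converted to GeV^5
Q_max = eps * rho * ERG_TO_GEV / CM_TO_GEV_INV**3 / S_TO_GEV_INV

def g_bound(delta):
    """Solve Q = eps*<rho> for g_aa using the closed-form KK-summed Q."""
    coeff = (delta * math.pi**((delta - 6) / 2) * 2**(delta - 5)
             * math.gamma(3 + delta / 2) / (2 * math.pi)**delta)
    Q_per_g2 = coeff * M_P**2 * T**(5 + delta) / M**(2 + delta)
    return math.sqrt(Q_max / Q_per_g2)

for d in (2, 4, 6):
    # reproduces approximately 1e-21, 2e-17 and 5e-13 respectively
    print(d, g_bound(d))
```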
[*Old neutron star excess heating —*]{} For graviton emission, a much stronger bound on $M$ can be obtained by considering the decays of KK-gravitons produced in supernovae. Gravitons have a significant branching ratio into photons and the decays therefore produce gamma rays. All cosmological supernovae have therefore built up, by the present epoch, a diffuse cosmic gamma-ray background. Comparing this with observations of the diffuse gamma background by the EGRET instrument [@egret] yields a bound on $M$ significantly stronger than the one found from the supernova cooling argument [@Hannestad:2001jv]. More interestingly, the tightest constraint comes from observations of old, isolated neutron stars.
Most of the KK-modes emitted from the proto-neutron star have masses of order $3T$ and therefore also relatively low velocities. This again means that a large fraction (roughly one half) of the KK-modes have velocities lower than the escape velocity, and that neutron stars retain a halo of KK-modes with a typical radius of 2-3 $R_{NS}$. When these gravitons decay they heat the neutron star and can lead to excess surface emission from old neutron stars. This argument applies equally to KK-gravitons and majorons. For gravitons it was used in Ref. [@Hannestad:2001xi] to put an extremely strong limit on $M$, $M > 1600$ TeV $(\delta=2)$, 70 TeV $(\delta=3)$.
The KK-majorons decay only into neutrinos, not photons. However, the typical energy of the emitted neutrinos is 50-100 MeV, and the neutron star is not transparent to neutrinos of such high energy. Therefore the neutrinos hitting the neutron star surface will heat it just like photons do. The surface luminosity of the neutron star at late times should therefore reach a constant level corresponding to the energy deposited by decay neutrinos. This luminosity is given by $$L_{NS} = f_{J} F_{J} E_{TOT} \langle \Gamma_J \rangle \frac{R_{NS}^2}{R_{halo}^2},$$ where $f_{J}$ is the fraction of the total supernova energy emitted in majorons, $F_{J}$ is the fraction of the emitted majorons remaining bound to the neutron star, taken to be $1/2$, $E_{TOT}$ is the total SN energy, taken to be $3 \times 10^{53}$ erg, $\langle \Gamma_J \rangle$ is the average majoron decay rate, and $R_{halo}$ is the typical radius of the majoron halo, taken to be 2 $R_{NS}$. The decay rate for non-relativistic majorons is given by $$\Gamma_J = \frac{1}{64 \pi} g_{aa}^2 m_J.
\label{Eq:decay}$$ $f_{J}$ is a function of $M$ and $g_{aa}$, and can be found from the above cooling bound. The cooling bound, to a good approximation, corresponds to half the total energy being emitted in majorons ($f_{J} \simeq 1/2$). For gravitons the strongest bound comes from the neutron star PSR J0953+0755 [@0953; @psc96; @ll99], which is the oldest neutron star for which the thermal surface temperature has been measured. Its total surface luminosity is estimated to be $L \sim 10^{-5} L_\odot$ [@ll99]. For majorons this neutron star also yields a strong constraint on $g_{aa}$ which is roughly $$g_{aa} \lesssim Y M_{\rm TeV}^{(2+\delta)/4} T_{30}^{-(6+\delta)/4},
\label{Eq:heating}$$ with the following values for $Y$ $$Y = \cases{3.3 \times 10^{-22} & for $\delta=2$ \cr 4.8 \times
10^{-20} & for $\delta=4$ \cr 7.0 \times 10^{-18} & for
$\delta=6$}$$ In all cases this bound is significantly stronger than the cooling bound, just as it is for gravitons. The bounds are summarized in Fig. \[fig2\]. However, there is a limit to the applicability of the bound. The age of the neutron star PSR J0953+0755 is estimated to be $\tau = 1.7 \times 10^7$ yr [@ll99]. If the majorons decay faster than this, they will have vanished already and cannot act as a heating source. From Eq. (\[Eq:decay\]) one finds a lifetime of $$\tau = 1.3 \times 10^{-21} \, g_{aa}^{-2} \, \left\langle
\frac{100\, {\rm MeV}}{m_J} \right\rangle \,\, {\rm s},$$ giving a rough upper limit on $g_{aa}$ of $1.5 \times 10^{-18}$, above which the limit of Eq. (\[Eq:heating\]) does not apply.
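Both numbers in this paragraph follow directly from Eq. (\[Eq:decay\]); a short check of ours, using $\hbar = 6.58\times10^{-22}$ MeV s and $1~{\rm yr} \simeq 3.16\times10^{7}$ s:

```python
import math

HBAR = 6.582e-22   # MeV * s

def tau_seconds(g_aa, m_J_MeV=100.0):
    """Majoron lifetime from Gamma_J = g^2 m_J / (64 pi)."""
    Gamma = g_aa**2 * m_J_MeV / (64.0 * math.pi)   # width in MeV
    return HBAR / Gamma

# Coefficient of g^-2 for m_J = 100 MeV: ~1.3e-21 s, as quoted
print(tau_seconds(1.0))

# Demanding tau > age of PSR J0953+0755 (1.7e7 yr) gives the quoted
# upper limit of applicability, g_aa ~ 1.5e-18
tau_NS = 1.7e7 * 3.156e7   # seconds
g_max = math.sqrt(tau_seconds(1.0) / tau_NS)
print(g_max)
```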
![Upper bounds on $g_{aa}$ from the supernova cooling bound, Eq. (\[Eq:cooling\]), for various $\delta$ and $M$. The bottom curve is for $\delta=2$, the middle for $\delta=4$, and the top for $\delta=6$. The thick lines at low $M$ correspond to the excluded region of $M$ from graviton effects.[]{data-label="fig1"}](fig1.ps){width="7truecm"}
![Upper bounds on $g_{aa}$ from the neutron star heating bound, Eq. (\[Eq:heating\]), for various $\delta$ and $M$. The bottom curve is for $\delta=2$, the middle for $\delta=4$, and the top for $\delta=6$. The thick lines at low $M$ correspond to the excluded region of $M$ from graviton effects. The horizontal line corresponds to the upper limit of applicability of the neutron star heating limit.[]{data-label="fig2"}](fig2.ps){width="7truecm"}
Another possible problem is that majorons could be reabsorbed when they pass through the neutron star on a timescale much shorter than the age of the neutron star [@hr02]. The most relevant process for reabsorption is inverse bremsstrahlung, $J NN \to NN$, which is induced via the electroweak interactions [@CMP]. Since the majoron is a pseudo-scalar, one can estimate the rate for this process in the same way as for axions [@Raffelt:wa]. The result is that reabsorption only happens on a timescale much longer than the age of the neutron star PSR J0953+0755.
Discussion
==========
Majorons as dark matter?
------------------------
By the same neutrino pair annihilation process as in supernovae, majorons are also produced in the early universe. This means that in principle one might obtain non-trivial bounds on $g_{aa}$ from considering cosmological production of majorons. For gravitons such considerations lead to extremely stringent bounds on $M$ and the maximum temperature, $T_{RH}$, of the radiation dominated epoch after inflation [@hs99; @Hannestad:2001nq; @Fairbairn:2001ct; @Fairbairn:2001ab]. If the fundamental scale $M$ is close to 1 TeV then the reheating temperature needs to be extremely low, typically in the MeV regime. However, there is a lower limit to how low $T_{RH}$ can be without disturbing big bang nucleosynthesis. Detailed calculations have shown that this limit is roughly $T_{RH} \gtrsim 0.7$ MeV [@Kawasaki:1999na; @Kawasaki:2000en]. The number density of majorons in the universe can be found by solving the integrated Boltzmann equation $$\dot{n}_J = - 3 H n_J + \frac{g_{aa}^2}{128 \pi^3} m_J^3 T K_1(m_J/T),$$ which applies to a single majoron mode with mass $m_J$. $n_J$ is the number density of the single majoron mode $J$ and $H$ is the Hubble parameter. By summing over all KK-modes of the majoron in the same way as it was done for gravitons in Refs. [@hs99; @Hannestad:2001nq], one finds a present day density of $$\begin{aligned}
\rho_J & = & 8.3 \times 10^{-24} \frac{\pi^{\delta/2}}{\Gamma(\delta/2)}
\left(\frac{M_P}{T_{RH}}\right)^2
\left(\frac{T_{RH}}{M}\right)^{\delta+2} \,\, {\rm GeV}^4 \nonumber
\vspace*{0.5cm} \\
&& \,\, \times \, g_{aa}^2 \,
\int_0^\infty dz z^{\delta-1} \int_z^\infty dq q^3 K_1(q)\end{aligned}$$ Requiring that this density is smaller than the critical density, $\rho_c = 8.1 \times 10^{-47} h^2 \,\, {\rm GeV}^4$, then yields an upper bound on $g_{aa}$ as a function of $M$ and $T_{RH}$.
However, there is again a limit to the applicability of the bound because for large $g_{aa}$ the majorons will have decayed by the present. For the decay rate given in Eq. (\[Eq:decay\]) and the requirement that $\tau \gtrsim 10^{10} \,\, {\rm yr}$ one finds $$g_{aa} \lesssim 3 \times 10^{-19} T_{RH,{\rm MeV}}^{-1},$$ assuming that the typical majoron mass is $\sim 3 T_{RH}$, a fairly good approximation. In Fig. 3 we show the contours of $g_{aa}$ corresponding to critical density. Also shown is the lower bound on $M$ as a function of $T_{RH}$ from considering the decay of gravitons produced in the early universe. From this argument anything to the left of the thick grey lines in the figure is excluded. The limits used are the ones from Ref. [@Hannestad:2001nq], which are the most restrictive cosmological limits, combined with the limits from Ref. [@Hannestad:2001xi], which for low $T_{RH}$ can be more restrictive. Finally, we also show the upper bound on $g_{aa}$ from the above equation. In the case where the line from the graviton bound is to the right of the decay lifetime bound, majorons cannot possibly contribute to critical density. This is seen to be the case for both $\delta=2$ and $\delta=6$, and indeed for all values of $\delta$ from 2 to 6. Therefore the conclusion is that no non-trivial bound on $g_{aa}$ comes from cosmology, and further that majorons cannot constitute the dark matter of the universe [@valle]. Gravitons are already excluded as a dark matter candidate because, if they were to contribute critical density, the photons produced by their decay would have been clearly visible.
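The double momentum integral in the relic-density expression is finite for every $\delta$; swapping the order of integration gives the closed form $\frac{1}{\delta}\int_0^\infty dq\, q^{3+\delta}K_1(q) = \frac{2^{\delta+2}}{\delta}\,\Gamma\!\left(\frac{\delta+3}{2}\right)\Gamma\!\left(\frac{\delta+5}{2}\right)$. This evaluation is our own, not quoted in the text; a direct numerical integration confirms it.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

def inner(z):
    # inner integral: int_z^inf dq q^3 K_1(q)
    val, _ = quad(lambda q: q**3 * kv(1, q), z, np.inf)
    return val

def double_integral(delta):
    # int_0^inf dz z^(delta-1) * inner(z), done numerically
    val, _ = quad(lambda z: z**(delta - 1) * inner(z), 0, np.inf)
    return val

def closed_form(delta):
    # obtained by swapping the order of integration (our evaluation)
    return 2**(delta + 2) * gamma((delta + 3) / 2) * gamma((delta + 5) / 2) / delta

for d in (2, 4, 6):
    print(d, double_integral(d), closed_form(d))
```

For $\delta=2$ both give $45\pi/4 \approx 35.3$, confirming that the relic density is dominated by KK-modes with masses of a few times $T_{RH}$.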
![Contours of $\log(g_{aa})$ which correspond to $\rho_J = \rho_c$. The upper panel is for $\delta=2$ and the lower for $\delta=6$. The thick line to the right is the lower bound on $M$ coming from considering graviton emission. The thick vertical line to the left corresponds to the maximum $g_{aa}$ for which majorons live longer than the age of the universe.[]{data-label="fig4"}](fig3.ps "fig:"){width="7truecm"} ![Contours of $\log(g_{aa})$ which correspond to $\rho_J = \rho_c$. The upper panel is for $\delta=2$ and the lower for $\delta=6$. The thick line to the right is the lower bound on $M$ coming from considering graviton emission. The thick vertical line to the left corresponds to the maximum $g_{aa}$ for which majorons live longer than the age of the universe.[]{data-label="fig4"}](fig4.ps "fig:"){width="7truecm"}
Conclusions
-----------
In the present paper we have shown that supernova constraints on brane-world scenarios for neutrino masses are many orders of magnitude stronger than the accelerator bound [@MNSS]. Even so, the constraints found are somewhat weaker than what is naturally expected in see-saw models of neutrino mass.
We have also shown that bulk majorons cannot act as the dark matter in the universe, at least not within the present scenario with equal radii of all the extra dimensions.
Finally, our findings suggest that unnatural types of the “see-saw" mechanism for neutrino masses are unlikely to occur in nature, even in the presence of extra dimensions.
We wish to thank Jukka Maalampi, Georg Raffelt, Joseph Schechter, and Jose Valle for valuable comments. The work of F.S. is supported by the Marie–Curie fellowship under contract MCFI-2001-00181.
[99]{} N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. B [**429**]{}, 263 (1998). I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. B [**436**]{}, 257 (1998). N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Rev. D [**59**]{}, 086004 (1999).
K. R. Dienes, E. Dudas and T. Gherghetta, Nucl. Phys. [**B557**]{}, 25 (1999).
N. Arkani-Hamed, S. Dimopoulos, G. Dvali and J. March-Russell, hep-ph/9811448.
R. N. Mohapatra, S. Nandi and A. Perez-Lorenzana, Phys. Lett. B [**466**]{}, 115 (1999) \[arXiv:hep-ph/9907520\]. R. N. Mohapatra and A. Perez-Lorenzana, Nucl. Phys. B [**576**]{}, 466 (2000) \[arXiv:hep-ph/9910474\]. G. R. Dvali and A. Y. Smirnov, Nucl. Phys. B [**563**]{}, 63 (1999) \[arXiv:hep-ph/9904211\]. R. Barbieri, P. Creminelli and A. Strumia, Nucl. Phys. B [**585**]{}, 28 (2000) \[arXiv:hep-ph/0002199\]. R. N. Mohapatra and A. Perez-Lorenzana, Nucl. Phys. B [**593**]{}, 451 (2001) \[arXiv:hep-ph/0006278\]. A. Ioannisian and J. W. Valle, Phys. Rev. D [**63**]{}, 073002 (2001). A. E. Faraggi and M. Pospelov, Phys. Lett. [**B458**]{},237(1999).
G. C. McLaughlin and J. N. Ng, Phys. Lett. [**B470**]{}, 157 (1999).

A. Das and O. Kong, Phys. Lett. [**B470**]{}, 149 (1999).

C. D. Carone, Phys. Rev. [**D61**]{}, 015008 (2000).

T. Banks, M. Dine and A. E. Nelson, JHEP 9906, 014 (1999).
K. Abazajian, G. M. Fuller and M. Patel, arXiv:hep-ph/0011048. A. Ioannisian and A. Pilaftsis, Phys. Rev. D [**62**]{}, 066001 (2000) \[arXiv:hep-ph/9907522\]. R. N. Mohapatra, A. Perez-Lorenzana and C. A. de S. Pires, Phys. Lett. [**B491**]{}, 143 (2000).
S. Moussa, S. Nasri, F. Sannino and J. Schechter, hep-ph/0108128. To appear in Phys. Rev. D.
E. Ma, M. Raidal and U. Sarkar, Phys. Rev. Lett. [**85**]{}, 3769 (2000).

Review of Particle Physics, D. E. Groom et al., Eur. Phys. J. [**C15**]{}, 1 (2000).
C. D. Carone, J. M. Conroy and H. J. Kwee, Phys. Lett. B [**538**]{}, 115 (2002) \[arXiv:hep-ph/0204045\]. R. Tomas, H. P[ä]{}s and J. W. F. Valle, hep-ph/0103017.
Y. Chikashige, R. N. Mohapatra and R. D. Peccei, Phys. Lett. [**B98**]{}, 265 (1981).

J. Schechter and J. W. F. Valle, Phys. Rev. [**D25**]{}, 774 (1982).
T. Yanagida, Proc. of the Workshop on Unified Theory and Baryon Number in the Universe, ed. by O. Sawada and A. Sugamato (KEK Report 79-18,1979), p 95; M. Gell-Mann, P. Ramond and R. Slansky in Supergravity, eds P. van Niewenhuizen and D. Z. Freedman (North Holland, 1979); R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. **44**, 912 (1980).
K. S. Hirata [*et al.*]{}, Phys. Rev. D [**38**]{} (1988) 448. C. B. Bratton [*et al.*]{} \[IMB Collaboration\], Phys. Rev. D [**37**]{} (1988) 3361. E. N. Alexeyev [*et al.*]{}, Pis’ma Zh. Eksp. Teor. Fiz. [**45**]{} (1987) 461 \[JETP Lett. [**45**]{} (1987) 589\].
Y. Aharonov, F. T. Avignone and S. Nussinov, Phys. Rev. D [**37**]{}, 1360 (1988). Y. Aharonov, F. T. Avignone and S. Nussinov, Phys. Rev. D [**39**]{}, 985 (1989). K. Choi, C. W. Kim, J. Kim and W. P. Lam, Phys. Rev. D [**37**]{}, 3225 (1988). J. A. Grifols, E. Masso and S. Peris, Phys. Lett. B [**215**]{}, 593 (1988). Z. G. Berezhiani and M. I. Vysotsky, Phys. Lett. B [**199**]{}, 281 (1987). Z. G. Berezhiani and A. Y. Smirnov, Phys. Lett. B [**220**]{}, 279 (1989). K. Choi and A. Santamaria, Phys. Rev. D [**42**]{}, 293 (1990). S. Chang and K. Choi, Phys. Rev. D [**49**]{}, 12 (1994) \[arXiv:hep-ph/9303243\]. A. Pilaftsis, Phys. Rev. D [**49**]{}, 2398 (1994) \[arXiv:hep-ph/9308258\].
Z. G. Berezhiani and A. Rossi, Phys. Lett. B [**336**]{}, 439 (1994) \[arXiv:hep-ph/9407265\]. M. Kachelriess, R. Tomas and J. W. Valle, Phys. Rev. D [**62**]{}, 023004 (2000) \[arXiv:hep-ph/0001039\]. R. Tomas, H. P[ä]{}s and J. W. Valle, Phys. Rev. D [**64**]{}, 095005 (2001) \[arXiv:hep-ph/0103017\].
G. G. Raffelt, “Stars As Laboratories For Fundamental Physics: The Astrophysics Of Neutrinos, Axions, And Other Weakly Interacting Particles,” [*Chicago, USA: Univ. Pr. (1996) 664 p*]{}.
S. Cullen and M. Perelstein, Phys. Rev. Lett. [**83**]{}, 268 (1999) \[arXiv:hep-ph/9903422\]. V. D. Barger, T. Han, C. Kao and R. J. Zhang, Phys. Lett. B [**461**]{}, 34 (1999) \[arXiv:hep-ph/9905474\]. C. Hanhart, D. R. Phillips, S. Reddy and M. J. Savage, Nucl. Phys. B [**595**]{}, 335 (2001) \[arXiv:nucl-th/0007016\]. C. Hanhart, J. A. Pons, D. R. Phillips and S. Reddy, Phys. Lett. B [**509**]{}, 1 (2001) \[arXiv:astro-ph/0102063\].
S. Hannestad and G. Raffelt, Phys. Rev. Lett. [**87**]{}, 051301 (2001) \[arXiv:hep-ph/0103201\]. S. Hannestad and G. G. Raffelt, Phys. Rev. Lett. [**88**]{}, 071301 (2002) \[arXiv:hep-ph/0110067\]. D. A. Kniffen [*et al.*]{}, Astron. Astrophys. Suppl. [**120**]{}, 615 (1996).
J. D. Pilkington [*et al.*]{}, Nature [**218**]{}, 126 (1968).
G. G. Pavlov, G. S. Stringfellow and F. A. Cordova, Astrophys. J. [**467**]{}, 370 (1996).
M. B. Larson and B. Link, Astrophys. J. [**521**]{}, 271 (1999).
The same problem exists for gravitons. However, an actual calculation for gravitons again turns out to give a reabsorption timescale much longer than the age of PSR J0953+0755 (S. Hannestad and G. Raffelt, in preparation).
L. J. Hall and D. R. Smith, Phys. Rev. D [**60**]{}, 085008 (1999) \[arXiv:hep-ph/9904267\]. S. Hannestad, Phys. Rev. D [**64**]{}, 023515 (2001) \[arXiv:hep-ph/0102290\]. M. Fairbairn, Phys. Lett. B [**508**]{}, 335 (2001) \[arXiv:hep-ph/0101131\]. M. Fairbairn and L. M. Griffiths, JHEP [**0202**]{}, 024 (2002) \[arXiv:hep-ph/0111435\]. M. Kawasaki, K. Kohri and N. Sugiyama, Phys. Rev. Lett. [**82**]{}, 4168 (1999) \[arXiv:astro-ph/9811437\].
M. Kawasaki, K. Kohri and N. Sugiyama, Phys. Rev. D [**62**]{}, 023506 (2000) \[arXiv:astro-ph/0002127\]. It should be noted that 3+1 dimensional majoron models might still be of relevance for the dark matter problem. V. Berezinsky and J. W. F. Valle, Phys. Lett. [**B318**]{}, 360 (1993).
---
abstract: 'The notion of geometrical duality is discussed in the context of both Brans-Dicke theory and general relativity. It is shown that, in some particular solutions, the spacetime singularities that arise in usual Riemannian general relativity may be avoided in its dual representation (Weyl-type general relativity). This dual representation provides a singularity-free picture of the World that is physically equivalent to the canonical general relativistic one.'
address: ' Departamento de Fisica. Universidad Central de Las Villas. Santa Clara. CP: 54830 Villa Clara. Cuba '
author:
- 'Israel Quiros[^1]'
title: Dual geometries and spacetime singularities
---
Introduction
============
To our knowledge, Dicke was the first to raise questions about the physical significance of Riemannian geometry in relativity, due to the arbitrariness in the metric tensor resulting from the indefiniteness in the choice of units of measure[@bdk; @dk]. Actually, Brans-Dicke (BD) theory with a changing dimensionless gravitational coupling constant, $Gm^2\sim\phi^{-1}$ ($m$ is the inertial mass of some elementary particle and $\phi$ is the scalar BD field, $\hbar=c=1$), can be formulated in two equivalent ways, since either $m$ or $G$ could vary with position in spacetime. The choice $G\sim\phi^{-1}$, $m=const.$, leads to the Jordan frame (JF) BD formalism, which is based on the Lagrangian[@bdk]:
$$L^{BD}[g,\phi]=\frac{\sqrt{-g}}{16\pi}(\phi R - \frac{\omega}{\phi} {g^{nm}}\nabla_n \phi \nabla_m \phi) + L_{matter}[g],$$
where $R$ is the curvature scalar, $\omega$ is the BD coupling constant, and $L_{matter}[g]$ is the Lagrangian density for ordinary matter minimally coupled to the scalar field.
On the other hand, the choice $m\sim\phi^{-\frac{1}{2}}$, $G=const.$, leads to the Einstein frame (EF) BD theory based on the Lagrangian[@dk]:
$$L^{BD}[\hat g,\hat \phi]=\frac{\sqrt{-\hat g}}{16\pi}(\hat R - (\omega + \frac{3}{2}) \hat {g^{nm}}{\hat \nabla}_n \hat \phi {\hat \nabla}_m \hat \phi) + \hat L_{matter}[\hat g,\hat \phi],$$
where now, in the EF metric $\bf{\hat g}$, the ordinary matter is nonminimally coupled to the scalar field $\hat \phi \equiv \ln\phi$ through the Lagrangian density $\hat L_{matter}[\hat g,\hat \phi]$.
Both JF and EF formulations of BD gravity are equivalent representations of the same physical situation[@bdk] since they belong to the same conformal class. The EF Lagrangian (1.2) is equivalent to the JF Lagrangian (1.1) in respect to the conformal rescaling of the spacetime metric $\bf g\rightarrow\bf{\hat g}=\phi\bf g$. In the coordinate basis this transformation can be written as:
$$\hat {g_{ab}}= \phi {g_{ab}},$$
where $\phi$ is a nonvanishing smooth function.
The conformal rescaling (1.3) can be interpreted geometrically as a particular transformation of the physical units (a scalar factor applied to the units of time, length and reciprocal mass)[@dk]. Any dimensionless number (for example $Gm^2$) is invariant under (1.3). Experimental observations are unchanged too under these transformations, since spacetime coincidences are not affected by them, i.e. spacetime measurements are not sensitive to conformal rescalings of the metric. This means that, concerning experimental observations, both formulations, based on varying $G$ (JFBD) and varying $m$ (EFBD) respectively, are indistinguishable. These are physically equivalent representations of the same physical situation.
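The geometric content of the rescaling (1.3) is easily made explicit: in four dimensions $\hat g_{ab}=\phi g_{ab}$ implies $\sqrt{-\hat g}=\phi^2\sqrt{-g}$ and $\hat g^{ab}=\phi^{-1}g^{ab}$, which is precisely what produces the relative $\phi$ factors between the Lagrangians (1.1) and (1.2). A minimal symbolic check (ours, using a generic diagonal metric as a stand-in):

```python
import sympy as sp

phi, a, b, c, d = sp.symbols('phi a b c d', positive=True)

g = sp.diag(-a, b, c, d)   # a generic diagonal Lorentzian metric
g_hat = phi * g            # conformal rescaling, Eq. (1.3)

# det(g_hat) = phi^4 det(g), hence sqrt(-g_hat) = phi^2 sqrt(-g)
print(sp.simplify(g_hat.det() / g.det()))        # phi**4

# inverse metric scales oppositely: g_hat^{ab} = phi^{-1} g^{ab}
print(sp.simplify(g_hat.inv() - g.inv() / phi))  # zero matrix
```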
The same line of reasoning can be applied to the case suggested by Magnano and Sokolowski, involving the conformally related Lagrangians[@mag]:
$$L^{GR}[g,\phi]=\frac{\sqrt{-g}}{16\pi}(\phi R - \frac{\omega}{\phi} {g^{nm}}\nabla_n \phi \nabla_m \phi) + L_{matter}[g,\phi],$$
and
$$L^{GR}[\hat g,\hat \phi]=\frac{\sqrt{-\hat g}}{16\pi}(\hat R - (\omega + \frac{3}{2}) \hat {g^{nm}}{\hat \nabla}_n \hat \phi {\hat \nabla}_m \hat \phi) + \hat L_{matter}[\hat g],$$
where now, unlike the situation we encountered in usual BD gravity, ordinary matter is minimally coupled in the EF (magnitudes with a hat), while it is nonminimally coupled in the JF. Both Lagrangians (1.4) and (1.5) represent equivalent pictures of the same theory: general relativity (GR). Actually, it can be seen that the theory linked with the Lagrangian (1.5) is just GR with a scalar field as an additional source of gravity. In particular, it can be verified that both the weak equivalence principle (WEP) and the strong equivalence principle (SEP) hold in this case[@iq]. We shall call the theory derivable from (1.5) Einstein frame general relativity (EFGR), while its conformally equivalent representation based on the Lagrangian (1.4) we call Jordan frame general relativity (JFGR).
The field equations derivable from the Lagrangian (1.5) are:
$${\hat G_{ab}}=8\pi{\hat T_{ab}}+(\omega+\frac{3}{2})({\hat \nabla}_a \hat \phi {\hat \nabla}_b \hat \phi- \frac{1}{2} \hat g_{ab} \hat g^{nm} {\hat \nabla}_n \hat \phi {\hat \nabla}_m \hat \phi),$$
$${\hat {\Box}} \hat \phi=0,$$
and the conservation equation:
$$\hat \nabla_n \hat T^{na}=0,$$
where $\hat G_{ab} \equiv \hat R_{ab}-\frac{1}{2} \hat g_{ab} \hat R$, ${\hat {\Box}}\equiv \hat {g^{nm}}{\hat \nabla}_n {\hat \nabla}_m$, and $\hat T_{ab}=\frac{2}{\sqrt{-\hat g}}\frac{\partial}{\partial \hat g^{ab}}(\sqrt{-\hat g}\hat L_{matter})$ are the components of the stress-energy tensor for ordinary matter.
Now we shall list some features of the JFGR theory that constitute its main disadvantages. The BD scalar field is nonminimally coupled both to the scalar curvature and to ordinary matter, so the gravitational constant $G$ varies from point to point ($G\sim \phi^{-1}$). At the same time, material test particles do not follow the geodesics of the geometry, since they are acted on by both the metric field and the scalar field. As a result, the inertial masses of test particles vary from point to point in spacetime in such a way as to preserve the constant character of the dimensionless gravitational coupling constant $Gm^2$, i.e. $m \sim \phi^\frac{1}{2}$. The most serious objection to the Jordan frame formulation, however, is that the kinetic energy of the scalar field is not positive definite in this frame. This is usually linked with the formulation of the theory in unphysical variables[@fgn]. In section III we shall show that the indefiniteness in the sign of the energy density in the Jordan frame is only apparent: once the scalar field energy density is positive definite in the Einstein frame, it is positive definite in the Jordan frame as well.
In the present paper we shall focus on those aspects of the Jordan frame formulation of general relativity with an extra scalar field that represent an advantage of this formulation over its conformal EF formulation, namely the transformation properties of the Lagrangian (1.4) under particular transformations of units, and the issue of spacetime singularities. In this frame (JF) $R_{mn}k^n k^m$ is negative definite for any non-spacelike vector $\bf k$. This means that the relevant singularity theorems may not hold. This contrasts with the Einstein frame formulation of GR, where $\hat R_{mn}k^n k^m$ is non-negative and the occurrence of spacetime singularities is therefore inevitable. Thus the singularities that are present in EFGR may be smoothed out and, in some cases, avoided in the Jordan frame[@fgn].
To the best of our knowledge, only the Einstein frame formulation of general relativity (canonical GR and, consequently, the singular Riemannian manifolds it leads to) has received attention in the literature. This historical omission is the main motivation for the present work.
The paper has been organized as follows. In Sec. II we present the notion of geometrical duality in BD gravity and GR theory. In Sec. III the JF formulation of general relativity is presented in detail. Secs. IV and V are aimed at the study of particular solutions to GR theory that serve as illustrations of the notion of geometrical duality. For simplicity we shall focus mainly on the value $\omega=-\frac{3}{2}$ for the BD coupling constant. In this case EFGR reduces to canonical Einstein's theory. In particular, the Schwarzschild solution is studied in Sec. IV, while flat Friedman-Robertson-Walker (FRW) cosmology for perfect fluid ordinary matter with a barotropic equation of state is studied in Sec. V. Finally, a physical discussion of the meaning of geometrical duality is given in section VI.
Geometrical duality
===================
Usually the JF formulation of BD gravity is linked with Riemann geometry[@bdk]. This is directly related to the fact that, in the JFBD formalism, ordinary matter is minimally coupled to the scalar BD field through $L_{matter}[g]$ in (1.1). As a consequence, point particles follow the geodesics of the Riemann geometry. This geometry is based upon the parallel transport law $d\xi^a=-\gamma^a_{mn}\xi^m dx^n$, and the length preservation requirement $dg(\xi,\xi)=0$ where, in the coordinate basis, $g(\xi,\xi)=g_{nm}\xi^n\xi^m$, $\gamma^a_{bc}$ are the affine connections of the manifold, and $\xi^a$ are the components of an arbitrary vector $\bf {\xi}$.
The above postulates of parallel transport and length preservation in Riemann geometry imply that the affine connections of the manifold coincide with the Christoffel symbols of the metric $\bf g$: $\gamma^a_{bc}=\Gamma^a_{bc}=\frac{1}{2}g^{an}(g_{nb,c}+g_{nc,b}-g_{bc,n})$. Under the rescaling (1.3) the above parallel transport law is mapped into:
$$d\xi^a=-\hat \gamma^a_{mn}\xi^m dx^n,$$
where $\hat \gamma^a_{bc}=\hat \Gamma^a_{bc}-\frac{1}{2}({\hat \nabla}_b \hat \phi\delta^a_c+{\hat \nabla}_c \hat \phi\delta^a_b-\hat \nabla^a \hat \phi \hat g_{bc})$ are the affine connections of a Weyl-type manifold given by the length transport law:
$$d \hat g(\xi,\xi)=dx^n \hat \nabla_n \hat \phi \hat g(\xi,\xi).$$
In this case the affine connections of the manifold do not coincide with the Christoffel symbols of the metric and, consequently, one can define both metric and affine magnitudes and operators on the Weyl-type manifold.
Summing up: under the rescaling (1.3), Riemann geometry with the normal behaviour of the units of measure is mapped into a more general Weyl-type geometry in which the units of measure vary in length over spacetime according to (2.2). At the same time, as shown in section I, the JF and EF Lagrangians (of both BD and GR theories) are connected by the same conformal rescaling of the metric (1.3) (together with the scalar field redefinition $\phi \rightarrow \hat \phi= \ln\phi$). This means that, with respect to the conformal transformation (1.3), JF and EF formulations of the theory on the one hand, and Riemann and Weyl-type geometries on the other, form classes of conformal equivalence. These classes of conformal gravity theories on the one hand, and conformal geometries on the other, can be uniquely linked only after the coupling of the matter fields to the metric has been specified.
In BD theory, for example, matter minimally couples in the JF so the test particles follow the geodesics of the Riemann geometry in this frame, i.e. JFBD theory is naturally linked with Riemann geometry. This means that EFBD theory (conformal to JF one) should be linked with the geometry that is conformal to the Riemann one (the Weyl-type geometry). For general relativity with an extra scalar field just the contrary is true. In this case matter minimally couples in the Einstein frame and then test particles follow the geodesics of the Riemann geometry precisely in this frame, i.e. EFGR is naturally linked with Riemann geometry and, consequently Jordan frame GR (conformal to EFGR) is linked with Weyl-type geometry.
The choice of the unit of length of the geometry is not an experimental issue (for a classical discussion of this subject we refer the reader to [@edd]). Moreover, the choice of the spacetime geometry itself is not an experimental issue. We can explain this fact with a simple argument. Experimental measurements (which always deal with dimensionless numbers) are invariant under the rescaling (1.3), which can be interpreted as a particular units transformation[@bdk; @dk]. Hence physical experiment is insensitive to the rescaling (1.3). The fact that both Riemann and Weyl-type geometries belong to the same equivalence class with respect to the transformation (1.3) completes the explanation. Indeed, this line of reasoning implies that the members of one conformal class are experimentally indistinguishable.
The same is true for the Jordan frame and Einstein frame formulations of a given classical theory of gravity. The choice of one or another representation for the description of a given physical situation is not an experimental issue. Hence a statement such as ’the JF formulation (or any other formulation) of the given theory (BD or GR theory) is the physical one’ is devoid of any physical, i.e. experimentally testable, meaning. Such a statement can be taken only as an independent postulate of the theory. This means that the discussion about which conformal frame is the physical one[@mag; @fgn; @yg] is devoid of interest: it is not a well-posed question.
An alternative approach can be based on the following postulate: conformal representations of a given classical theory of gravity are physically equivalent. This postulate implies that the geometrical representation of a given physical situation through general relativity (or BD and scalar-tensor (ST) theories in general) produces not just one unique picture of the physical situation but a whole equivalence class of all conformally related pictures. We call this fact ’geometrical duality’. In this sense Riemann and Weyl-type geometries, for instance, are dual to each other. They provide different geometrical pictures originating from the same physical situation. These different geometrical representations are equally consistent with the observational evidence since they are experimentally indistinguishable. The choice of one or the other picture for the interpretation of a given physical effect is a matter of philosophical prejudice or, perhaps, mathematical convenience. The word duality is used here in the same sense as in [@am], i.e. it has only a semantic meaning and has nothing to do with the notion of duality in string theory.
The rest of this paper is based, precisely, upon the validity of the postulate on the physical equivalence of conformal representations of a given classical theory of gravity. In what follows we shall illustrate the notion of geometrical duality in the context of general relativity with an extra scalar field.
Jordan frame general relativity
===============================
The formulation of general relativity to be developed in the present section is not a complete geometrical theory. Gravitational effects are described here by a scalar field in a Weyl-type manifold, i.e. the gravitational field shows both tensor (spin-2) and scalar (spin-0) modes. In this representation of the theory the redshift effect, for instance, should be interpreted as due in part to a change of the gravitational potential (the metric coefficients) from point to point in spacetime and, in part, as due to a real change in the energy levels of an atom over the manifold.
The field equations of the Jordan frame GR theory can be derived either directly from the Lagrangian (1.4), by taking its variational derivatives with respect to the dynamical variables, or by conformally mapping eqs.(1.6-1.8) back to the JF metric according to (1.3), to obtain:
$$G_{ab}=\frac{8\pi}{\phi} T_{ab}+\frac{\omega}{\phi^2}(\nabla_a \phi \nabla_b \phi- \frac{1}{2} g_{ab} g^{nm} \nabla_n \phi \nabla_m \phi)+\frac{1}{\phi}(\nabla_a \nabla_b \phi-g_{ab} \Box \phi),$$
and
$$\Box \phi=0,$$
where $T_{ab}=\frac{2}{\sqrt{-g}}\frac{\partial}{\partial g^{ab}}(\sqrt{-g} L_{matter})$ is the stress-energy tensor for ordinary matter in the Jordan frame. The energy is not conserved because the scalar field $\phi$ exchanges energy with the metric and with the matter fields. The corresponding dynamic equation is:
$$\nabla_n T^{na}=\frac{1}{2} \phi^{-1} \nabla^a \phi T.$$
The equation of motion of an uncharged, spinless mass point that is acted on by both the JF metric field $\bf g$ and the scalar field $\phi$,
$$\frac{d^2x^a}{ds^2}=-\Gamma^a_{mn} \frac{dx^m}{ds} \frac{dx^n}{ds}-\frac{1}{2} \phi^{-1} \nabla_n \phi(\frac{dx^n}{ds}\frac{dx^a}{ds}-g^{an}),$$
does not coincide with the geodesic equation of the JF metric. This (together with the more complex structure of the equation (3.1) for the metric field in respect to the corresponding equation (1.6)) introduces additional complications in the dynamics of the matter fields.
We shall point out that the different solutions to the wave equation (3.2) generate different Weyl-type geometrical pictures that are dual to the Einstein frame one.
One of the most salient features of the Jordan frame GR theory is that, in general, the energy conditions do not hold, due on the one hand to the term with the second covariant derivative of the scalar field on the right-hand side (r.h.s.) of eq.(3.1) and, on the other, to the constant factor in the second term, which can take negative values. Thus the r.h.s. of eq.(3.1) may be negative definite, so that some singularity theorems may not hold and, as a consequence, spacetime singularities that are present in canonical Riemannian GR (given by eqs.(1.5-1.8)) may become avoidable in Weyl-type GR (JFGR) spacetimes.
In the following sections we shall illustrate this feature of GR theory in some typical situations where the BD coupling constant is taken to be $\omega=-\frac{3}{2}$. In this case, in the EF, the scalar field stress-energy tensor ($\frac{\phi}{8\pi}$ times the second term on the right-hand side of eq.(1.6)) vanishes, so we recover the canonical Einstein GR theory with ordinary matter as the only source of gravity. The EF scalar field $\hat \phi$ (fulfilling the field equation (1.7)) is then a non-interacting (neither with matter nor with curvature), massless, uncharged, and spinless ’ghost’ field (it is an unphysical field). Nevertheless it influences the physics in the JF, so its functional form in the EF must be taken into account. For $\omega > -\frac{3}{2}$, $\hat \phi$ is a physical field in the EF.
The fact that the Jordan frame formulation does not lead to a well defined energy-momentum tensor for the scalar field is the most serious objection to this representation of the theory[@fgn]. For this reason we shall discuss it briefly. The kinetic energy of the JF scalar field is negative definite or indefinite, unlike in the Einstein frame where, for $\omega > -\frac{3}{2}$, it is positive definite. This implies that the theory does not have a stable ground state (which is necessary for a viable theory of classical gravity), suggesting that it is formulated in unphysical variables[@fgn].
We shall point out that, although in this frame the r.h.s. of eq.(3.1) does not have a definite sign (implying that some singularity theorems may not hold), the scalar field stress-energy tensor can be given the canonical form. In fact, as pointed out in reference [@ss], the terms with the second covariant derivatives of the scalar field contain the connection, and hence a part of the dynamical description of gravity. For instance, a new connection was presented in [@ss] that leads to a canonical form of the scalar field stress-energy tensor in the JF.
We can obtain the same result as in ref.[@ss] if we rewrite equation (3.1) in terms of affine magnitudes on the Weyl-type manifold (see section II). In this case the affine connections of the JF (Weyl-type) manifold $\gamma^a_{bc}$ are related to the Christoffel symbols of the JF metric through: $\gamma^a_{bc}=\Gamma^a_{bc}+\frac{1}{2} \phi^{-1}(\nabla_b \phi\delta^a_c+\nabla_c \phi\delta^a_b-\nabla^a \phi g_{bc})$. We can define the ’affine’ Einstein tensor $^\gamma G_{ab}$ given in terms of the affine connections of the manifold $\gamma^a_{bc}$ instead of the Christoffel symbols of the Jordan frame metric $\Gamma^a_{bc}$. Equation (3.1) can then be rewritten as:
$$^\gamma G_{ab}=\frac{8\pi}{\phi} T_{ab}+\frac{(\omega+\frac{3}{2})}{\phi^2}(\nabla_a \phi \nabla_b \phi- \frac{1}{2} g_{ab} g^{nm} \nabla_n \phi \nabla_m \phi),$$
where now $\frac{\phi}{8\pi}$ times the second term on the r.h.s. of this equation has the canonical form for the scalar field stress-energy tensor. We shall call this the ’true’ stress-energy tensor for $\phi$, while $\frac{\phi}{8\pi}$ times the sum of the 2nd and 3rd terms on the r.h.s. of eq.(3.1) we call the ’effective’ stress-energy tensor for the BD scalar field $\phi$. Hence, once the scalar field energy density is positive definite in the Einstein frame it is positive definite in the Jordan frame as well. This way the main physical objection against this formulation of general relativity is removed.
Another remarkable feature of the Jordan frame GR theory is that it is invariant in form under the following conformal transformations (as pointed out in refs.[@bdk; @dk], these can be interpreted as transformations of physical units):
$$\tilde g_{ab}=\phi^2 g_{ab},$$
$$\tilde \phi=\phi^{-1},$$
and
$${\tilde g_{ab}}=f g_{ab},$$
$$\tilde \phi=f^{-1}\phi,$$
where $f$ is some smooth function given on the manifold. In both cases the invariance in form of the equations (3.1-3.4) can be verified by direct substitution of (3.6) and (3.7), or (3.8) and (3.9), into these equations. Jordan frame GR based on the Lagrangian (1.4) is also invariant with respect to the more general rescaling[@iq] (first presented in [@far]):
$$\tilde {g_{ab}}=\phi^{2\alpha}{g_{ab}},$$
and the scalar field redefinition:
$$\tilde \phi=\phi^{1-2\alpha}.$$
This transformation is accompanied by a redefinition of the BD coupling constant:
$$\tilde \omega=\frac{\omega-6\alpha(\alpha-1)}{(1-2\alpha)^2},$$
with $\alpha \neq \frac{1}{2}$. The case $\alpha = \frac{1}{2}$ constitutes a singularity of the transformations (3.10-3.12).
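A short symbolic check of the redefinition (3.12) (a sympy sketch) shows that the combination $\omega+\frac{3}{2}$ is simply rescaled by $(1-2\alpha)^{-2}$, so that the value $\omega=-\frac{3}{2}$ used throughout this paper is a fixed point of the transformation:

```python
import sympy as sp

w, alpha = sp.symbols('omega alpha')
w_tilde = (w - 6*alpha*(alpha - 1))/(1 - 2*alpha)**2      # eq. (3.12)
# (omega + 3/2) is rescaled by (1 - 2 alpha)^(-2), hence omega = -3/2
# is mapped onto itself for every admissible alpha
rescaled = sp.simplify(w_tilde + sp.Rational(3, 2)
                       - (w + sp.Rational(3, 2))/(1 - 2*alpha)**2)
fixed_point = sp.simplify(w_tilde.subs(w, -sp.Rational(3, 2))
                          + sp.Rational(3, 2))
ok = (rescaled == 0) and (fixed_point == 0)
```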
The conformal invariance of a given theory of gravitation (i.e. its invariance under a particular transformation of physical units) is a very desirable feature of any classical gravity theory that would correctly describe our real world on large scales. As pointed out by Dicke[@dk], the particular values of the units of mass, length and time employed are obviously arbitrary, so the physical laws must be invariant under such transformations. This simple argument suggests that the Jordan frame formulation of general relativity with an extra scalar field, based on the Lagrangian (1.4), is a better candidate for such an ultimate classical theory of gravitation than the other classical theories given by the Lagrangians (1.1), (1.2) and (1.5) respectively. In fact, the Lagrangian (1.4) is invariant with respect to the particular transformations of physical units studied here ((3.6,3.7), (3.8,3.9) and (3.10-3.12)) while the Lagrangians (1.1), (1.2) and (1.5) are not.
In the following section we shall discuss geometrical duality between the singular Schwarzschild (EF) vacuum solution and the corresponding nonsingular JF solution and, in section V, we shall illustrate this kind of duality for flat, perfect fluid Friedman-Robertson-Walker (FRW) cosmologies. A similar discussion of conformal transformations between singular and nonsingular spacetimes in the low-energy limit of string theory can be found in [@stw] for axion-dilaton black hole solutions in $D=4$ and in [@cew] for classical FRW axion-dilaton cosmologies. For spurious black holes in the classical approximation see [@fik].
Geometrical duality and Schwarzschild black hole
================================================
In this section, for simplicity, we shall be interested in the static, spherically symmetric solution to Riemannian general relativity (Einstein frame GR) with $\omega=-\frac{3}{2}$ for material vacuum, and in its dual Weyl-type picture (Jordan frame GR). In the EF the field equations (1.6-1.8) can be written, in this case, as:
$$\begin{aligned}
\hat R_{ab}=0, \nonumber\\
{\hat {\Box}} \hat \phi=0.\end{aligned}$$
The corresponding solution, in Schwarzschild coordinates, reads ($d\Omega^2=d\theta^2+\sin^2 \theta d\varphi^2$):
$$d\hat s^2=-(1-\frac{2m}{r}) dt^2+ (1-\frac{2m}{r})^{-1} dr^2+ r^2 d\Omega^2,$$
and
$$\hat \phi=q \ln (1-\frac{2m}{r}),$$
where $m$ is the mass of the point particle, located at the origin of coordinates, that generates the gravitational field, and $q$ is an arbitrary real parameter. As seen from eq.(4.2), the static, spherically symmetric solution to eq.(4.1) is just the typical Schwarzschild black hole solution for vacuum. The corresponding solution for JFGR can be found with the help of the conformal rescaling of the metric (1.3) and the scalar field redefinition $\phi=e^{\hat \phi}=(1-\frac{2m}{r})^q$:
$$ds^2=-(1-\frac{2m}{r})^{1-q} dt^2+(1-\frac{2m}{r})^{-1-q} dr^2+\rho^2d\Omega^2,$$
where we have defined the proper radial coordinate $\rho=r(1-\frac{2m}{r})^{-\frac{q}{2}}$. In this case the curvature scalar is given by:
$$R=-\frac{3}{2} \phi^{-2} g^{nm} \nabla_n \phi \nabla_m \phi=-\frac{6 m^2 q^2}{r^4}(1-\frac{2m}{r})^{q-1}.$$
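Eq.(4.5) can be verified directly from the metric (4.4) and the scalar field $\phi=(1-\frac{2m}{r})^q$; the sympy sketch below checks the purely algebraic step (the contraction of the scalar field gradients with the inverse $rr$-component of (4.4)):

```python
import sympy as sp

r, m, q = sp.symbols('r m q', positive=True)
w = 1 - 2*m/r                  # Schwarzschild factor
phi = w**q                     # JF scalar field below eq. (4.3)
g_rr_inv = w**(1 + q)          # inverse rr-component of the JF metric (4.4)
# r.h.s. of eq. (4.5): only the radial gradient of phi survives
rhs = -sp.Rational(3, 2)*phi**(-2)*g_rr_inv*sp.diff(phi, r)**2
ok = sp.simplify(rhs + 6*m**2*q**2/r**4*w**(q - 1)) == 0
```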
The real parameter $q$ labels different spacetimes ($M,g^{(q)}_{ab},\phi^{(q)}$), so we obtain a class of spacetimes {$(M,g^{(q)}_{ab},\phi^{(q)})/q\in \Re$} that belongs to a bigger class of known solutions[@alac]. These known solutions are given, however, for an arbitrary value of the coupling constant $\omega$.
We shall outline the most relevant features of the solution given by (4.4). For the range $-\infty<q<1$ the Ricci curvature scalar (4.5) shows a curvature singularity at $r=2m$. For $-\infty<q<0$ this represents a timelike, naked singularity at the origin of the proper radial coordinate $\rho=0$. We shall drop these spacetimes, for they are not compatible with the cosmic censorship conjecture[@rp]. The situation with $q=0$ is trivial: in this case the conformal transformation (1.3) coincides with the identity transformation, which leaves the theory in the same frame. For $q>0$, the limiting surface $r=2m$ has the topology of a spatial infinity so, in this case, we obtain a class of spacetimes with two asymptotic spatial infinities, one at $r=\infty$ and the other at $r=2m$, joined by a wormhole with a throat at $r= (2+q)m$, i.e. the invariant surface determined by $\rho_{min} =q(1+\frac{2}{q})^{1+\frac{q}{2}} m$. The wormhole is asymmetric under the interchange of the two asymptotic regions ($r=\infty$ and $r=2m$)[@vh].
In this way, the Weyl-type spacetimes dual to the Riemannian Schwarzschild black hole spacetime (line element (4.2)) are given by the class {$(M,g^{(q)}_{ab},\phi^{(q)})/q>0$} of wormhole (singularity-free) spacetimes.
Although in the present paper we are interested in the particular value $\omega=-\frac{3}{2}$ of the BD coupling constant, it will be interesting, however, to discuss briefly what happens for $\omega>-\frac{3}{2}$. In this case there is a physical scalar field in the Einstein frame (see eq.(1.6)). The corresponding EF solution to eqs.(1.6) and (1.7) is given by[@alac]:
$$d\hat s^2=-(1-\frac{2m}{pr})^p dt^2+ (1-\frac{2m}{pr})^{-p} dr^2+\hat \rho^2 d\Omega^2,$$
and
$$\hat \phi=q \ln (1-\frac{2m}{pr}),$$
where $p^2+(2\omega+3)q^2=1$, $p>0$. For non-exotic scalar matter ($\omega \geq -\frac{3}{2}$), $0<p\leq 1$. In eq.(4.6) we have used the definition $\hat \rho=(1-\frac{2m}{pr})^\frac{1-p}{2}r$ for the EF proper radial coordinate. There is a time-like curvature singularity at $r=\frac{2m}{p}$, so the horizon is shrunk to a point. Hence in the EF the validity of the cosmic censorship hypothesis and, correspondingly, the occurrence of a black hole are uncertain[@alac].
The JF solution conformally equivalent to (4.6) is given by:
$$ds^2=-(1-\frac{2m}{pr})^{p-q} dt^2+ (1-\frac{2m}{pr})^{-p-q} dr^2+ \rho^2 d\Omega^2,$$
where the JF proper radial coordinate $\rho=r(1-\frac{2m}{pr})^{\frac{1-p-q}{2}}$ was used. In this case, when $\omega$ is in the range $0 < \omega+3 < \frac{1+p}{2(1-p)}$, the Weyl-type JF geometry shows again two asymptotic spatial infinities joined by a wormhole. The particular value $p=1$ corresponds to the case of interest $\omega=-\frac{3}{2}$.
The singularity-free character of the Weyl-type geometry should be tested with the help of a test particle that is acted on by the JF metric in eq.(4.8) and by the scalar field $\phi=(1-\frac{2m}{pr})^q$. Consider the radial motion of a time-like test particle ($d\Omega^2=0$). In this case the time component of the equation of motion (3.4) can be integrated to give:
$$\dot t^2=-C_1^2(1-\frac{2m}{pr})^{q-2p},$$
where $C_1^2$ is an integration constant and the overdot means derivative with respect to the JF proper time $\tau$ ($d\tau^2=-ds^2$). The integration constant can be obtained with the help of the following initial conditions: $r(0)=r_0$, $\dot r(0)=0$, meaning that the test particle starts from rest at $r=r_0$. We obtain $C_1^2=-(1-\frac{2m}{pr_0})^p$. Then the proper time taken for the particle to go from $r=r_0$ to the point with Schwarzschild radial coordinate $r$, $\frac{2m}{p} < r \leq r_0$, is given by:
$$\tau=\int_r^{r_0}\frac{r^{\frac{q}{2}} dr}{\sqrt{(1-\frac{2m}{pr_0})^p-(1-\frac{2m}{pr})^p}(r-\frac{2m}{p})}.$$
In deriving this equation we have used eq.(4.8) written as $-1=-(1-\frac{2m}{pr})^{p-q} \dot t^2+ (1-\frac{2m}{pr})^{-p-q} \dot r^2$. The integral on the r.h.s. of eq.(4.10) can be bounded from below to obtain ($q\neq 2$):
$$\tau>\frac{(\frac{2m}{p})^{\frac{q}{2}}(1-\frac{2m}{pr_0})^{\frac{p}{2}}}{1-\frac{q}{2}}[(r_0-\frac{2m}{p})^{1-\frac{q}{2}}-(r-\frac{2m}{p})^{1-\frac{q}{2}}],$$
and
$$\tau>\frac{2m}{p} \ln(\frac{r_0-\frac{2m}{p}}{r-\frac{2m}{p}}),$$
for $q=2$. For $q\geq 2$ the proper time taken by the test particle to go from $r=r_0$ to $r=\frac{2m}{p}$ is infinite, showing that the particle can never reach this surface (the second spatial infinity of the wormhole). Hence the time-like test particle does not see any singularity.
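The logarithmic divergence of the proper time can also be seen numerically. The mpmath sketch below (the parameter values $p=1$, i.e. $\omega=-\frac{3}{2}$, together with $q=2$, $m=1$ and $r_0=4$, are illustrative assumptions) evaluates the integral (4.10) with lower limits approaching $r=\frac{2m}{p}$ and checks that each decade of approach adds a roughly constant amount of proper time:

```python
from mpmath import mp, mpf, sqrt, quad
mp.dps = 30

# sample parameters (assumptions for illustration): p = 1 (omega = -3/2),
# q = 2, m = 1, particle released from rest at r0 = 4
m, p, q, r0 = mpf(1), mpf(1), mpf(2), mpf(4)
rs = 2*m/p                                   # the surface r = 2m/p

def tau(r_low):
    # integrand of eq. (4.10); both endpoints carry integrable singularities
    f = lambda r: r**(q/2)/(sqrt((1 - 2*m/(p*r0))**p
                  - (1 - 2*m/(p*r))**p)*(r - rs))
    return quad(f, [r_low, (r_low + r0)/2, r0])

taus = [tau(rs + mpf(10)**(-k)) for k in range(1, 5)]
gaps = [taus[i + 1] - taus[i] for i in range(3)]
# logarithmic divergence: each decade closer to r = 2m/p contributes a
# nearly constant extra amount of proper time
ok = all(gp > 1 for gp in gaps) and taus[-1] > taus[0]
```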
If we consider the scalar field $\phi$ as a perfect fluid then we find that its ’true’ energy density (the (0,0) component of $\frac{\phi}{8\pi}$ times the second term in the r.h.s. of eq.(3.5)) as measured by a comoving observer is given by:
$$\mu^\phi=\frac{2m^2 q^2 (\omega+\frac{3}{2})}{8\pi p^2 r^4}(1-\frac{2m}{pr})^{p+2q-2},$$
while its ’effective’ energy density (the (0,0) component of $\frac{\phi}{8\pi}$ times the sum of the second and third terms in the r.h.s. of eq.(3.1)) is found to be:
$$\mu^\phi_{eff}=\frac{2m^2 q(q(\omega+1)-p)}{8\pi p^2 r^4}(1-\frac{2m}{pr})^{p+2q-2}.$$
These are everywhere nonsingular for $q\geq\frac{2-p}{2}$ ($0<p\leq 1$) in the range $2m\leq r<\infty$. The ’true’ BD scalar field energy density $\mu^\phi$ is everywhere positive definite for $\omega>-\frac{3}{2}$, for all $q$ and $0<p\leq 1$. This means that the scalar matter is non-exotic and shows nonsingular behaviour everywhere in the given range of the parameters involved. The scalar field ’effective’ energy density $\mu^\phi_{eff}$ is everywhere positive definite only for $q>\frac{p}{\omega+1}$.
Summing up: with the help of time-like test particles that are acted on by both the metric field and the scalar field, we can test the absence of singularities (and black holes) in Weyl-type spacetimes of the class {$M, g_{ab}^{(q)}, \phi^{(q)}/ q\geq 2$}. These are dual to the Riemannian (singular) spacetimes ($M, \hat g_{ab}$) given by (4.2). Pictures with and without a singularity are different, but physically equivalent (dual), geometrical representations of the same physical situation. Experimental evidence for the existence of a black hole (enclosing a singularity), obtained when experimental data are interpreted on the grounds of Riemann geometry (naturally linked with Einstein frame GR theory with $\omega=-\frac{3}{2}$), can serve, at the same time, as evidence for the existence of a wormhole when the same experimental data are interpreted on the grounds of the Weyl-type geometry (linked with Jordan frame GR) dual to it.
Although the wormhole picture is no simpler than its conformal black hole counterpart, it is more viable because these geometrical objects (Jordan frame wormholes) are invariant under the transformations (3.6-3.12), which can be interpreted as particular transformations of physical units. As noted by Dicke[@dk], these transformations should not influence the physics if the theory is correct. The Einstein frame Schwarzschild black hole, for its part, does not possess this invariance. More discussion of this point will be given in section VI.
Geometrical duality in cosmology
================================
Other illustrations of the notion of geometrical duality come from cosmology. In the Einstein frame the FRW line element for flat space can be written as:
$$d\hat s^2=-dt^2+\hat a(t)^2(dr^2+r^2d\Omega^2),$$
where $\hat a(t)$ is the EF scale factor. Suppose the universe is filled with a perfect-fluid-type matter with the barotropic equation of state (in the EF): $\hat p=(\gamma-1)\hat \mu$, $0 < \gamma < 2$. Taking into account the line element (5.1) and the barotropic equation of state, the field equation (1.6) can be simplified to the following equation for determining the EF scale factor:
$$(\frac{\dot {\hat a}}{\hat a})^2=\frac{8\pi}{3} \frac{(C_2)^2}{\hat a^{3\gamma}},$$
while, after integrating eq.(1.7) once, we obtain for the EF scalar:
$$\dot {\hat \phi}=\frac{C_1}{\hat a^3},$$
where $C_1$ and $C_2$ are arbitrary integration constants. The solution to eq.(5.2) is found to be:
$$\hat a(t)=(A)^{\frac{2}{3\gamma}} t^{\frac{2}{3\gamma}},$$
where $A\equiv \sqrt{6\pi}\gamma C_2$. Integrating eq.(5.3) gives:
$${\hat \phi}^{\pm}(t)=\hat \phi_0 \mp B t^{1-\frac{2}{\gamma}},$$
where $B\equiv \frac{\gamma C_1}{(2-\gamma)A^{\frac{2}{\gamma}}}$.
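That (5.4) and (5.5) indeed solve (5.2) and (5.3) can be checked symbolically. The sympy sketch below verifies the ’+’ branch for two sample values of the barotropic index (dust, $\gamma=1$, and radiation, $\gamma=\frac{4}{3}$, chosen only for illustration):

```python
import sympy as sp

t, C1, C2, phi0 = sp.symbols('t C_1 C_2 phi_0', positive=True)

def residuals(g):
    # g is the barotropic index gamma
    A = sp.sqrt(6*sp.pi)*g*C2
    B = g*C1/((2 - g)*A**(2/g))
    a_hat = A**(2/(3*g))*t**(2/(3*g))    # eq. (5.4)
    phi_hat = phi0 - B*t**(1 - 2/g)      # the '+' branch of eq. (5.5)
    # residual of the Friedmann equation (5.2)
    fried = sp.simplify((sp.diff(a_hat, t)/a_hat)**2
                        - sp.Rational(8, 3)*sp.pi*C2**2/a_hat**(3*g))
    # residual of the once-integrated scalar equation (5.3)
    scal = sp.simplify(sp.diff(phi_hat, t) - C1/a_hat**3)
    return fried, scal

# dust (gamma = 1) and radiation (gamma = 4/3) as sample equations of state
ok = all(res == 0 for g in (sp.Integer(1), sp.Rational(4, 3))
         for res in residuals(g))
```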
The JF scale factor $a^{\pm}(t)=\hat a(t) \exp[-\frac{1}{2}\hat \phi^\pm (t)]$ is given by the following expression:
$$a^{\pm}(t)=\frac{A^\frac{2}{3\gamma}}{\sqrt{\phi_0}} t^\frac{2}{3\gamma} \exp[\pm \frac{B}{2} t^{1-\frac{2}{\gamma}}].$$
The proper time $t$ in the EF and $\tau$ in the JF are related through:
$$(\tau-\tau_0)^{\pm}=\frac{1}{\sqrt{\phi_0}}\int \exp[\pm\frac{B}{2} t^{1-\frac{2}{\gamma}}] dt.$$
For large $t$ ($t\rightarrow +\infty$) this gives $(\tau-\tau_0)^{\pm}\sim t$. Hence $t\rightarrow +\infty$ implies $\tau\rightarrow +\infty$ for both the ’+’ and ’-’ branches of our solution, defined by the choice of the ’+’ or ’-’ sign in eq.(5.5).
For $t\rightarrow 0$, the r.h.s. of eq.(5.7) can be transformed into:
$$-\frac{\gamma}{\sqrt{\phi_0}(2-\gamma)}\int \frac{\exp[\pm Dx]}{x^\frac{2}{2-\gamma}} dx,$$
where we have defined $x\equiv t^{1-\frac{2}{\gamma}}$ and $D\equiv \frac{\gamma C_1}{2(2-\gamma)(\sqrt{6\pi}\gamma C_2)^\frac{2}{\gamma}}$. If we take the ’-’ sign in the exponent under the integral (5.8) then, for $t\rightarrow 0$ ($x\rightarrow\infty$), $\tau\rightarrow\tau_0$. If we take the ’+’ sign, for its part, the integral (5.8) diverges as $t\rightarrow 0$, so $\tau\rightarrow -\infty$ in this last case.
In the ’-’ branch of our solution the evolution of the universe in the Jordan frame is basically the same as in the Einstein frame. The flat FRW perfect-fluid-filled universe evolves from a cosmological singularity at the beginning of time $t=0$ ($\tau=\tau_0$ in the JF) into an infinite size universe at the infinite future $t=+\infty$ ($\tau=+\infty$ in the JF). It is the usual picture in canonical general relativity where the cosmological singularity is unavoidable.
However, in the ’+’ branch of the solution the JF flat FRW perfect-fluid-filled universe evolves from an infinite size in the infinite past ($\tau=-\infty$) into an infinite size in the infinite future ($\tau=+\infty$) through a bounce at $t^*=[\frac{3}{4} \frac{\gamma C_1}{(\sqrt{6\pi}\gamma C_2)^{\frac{2}{\gamma}}}]^{\frac {\gamma}{2-\gamma}}$, where it reaches its minimum size $a^*=\frac{1}{\sqrt{\phi_0}}[\sqrt{\frac{3}{32\pi}}\frac{C_1}{C_2} e]^{\frac{2}{3(2-\gamma)}}$. Hence the Jordan frame universe is free of the cosmological singularity, unlike the Einstein frame universe, where the cosmological singularity is unavoidable. The more general case of arbitrary $\omega > -\frac{3}{2}$ is studied in [@qbc].
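The bounce can be checked numerically. The mpmath sketch below uses illustrative sample values ($\gamma=1$, $C_1=2$, $C_2=1$, $\phi_0=1$; an assumption for the sketch) and the stationary time obtained by differentiating (5.6) directly; it verifies that $a^+$ has a minimum there, equal to the quoted $a^*$:

```python
from mpmath import mp, mpf, sqrt, exp, pi, diff
mp.dps = 30

# sample values (illustrative assumption): dust, gamma = 1
g, C1, C2, phi0 = mpf(1), mpf(2), mpf(1), mpf(1)
A = sqrt(6*pi)*g*C2
B = g*C1/((2 - g)*A**(2/g))
# JF scale factor, '+' branch of eq. (5.6)
a_plus = lambda t: A**(2/(3*g))/sqrt(phi0)*t**(2/(3*g))*exp(B/2*t**(1 - 2/g))

# stationary point obtained by setting d(ln a_plus)/dt = 0
tstar = (3*g*C1/(4*A**(2/g)))**(g/(2 - g))
# the minimum size a* quoted in the text
astar = (sqrt(mpf(3)/(32*pi))*C1/C2*exp(1))**(mpf(2)/(3*(2 - g)))/sqrt(phi0)

ok = (abs(diff(a_plus, tstar)) < mpf('1e-12')
      and a_plus(tstar/2) > a_plus(tstar) < a_plus(2*tstar)
      and abs(a_plus(tstar) - astar) < mpf('1e-12'))
```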
If we model the JF scalar field $\phi$ as a perfect fluid then, in the Jordan frame, its ’true’ energy density (as measured by a comoving observer) is given by the following expression:
$$\mu^\phi_\pm=\frac{(\omega+\frac{3}{2}) (C_1 \phi_0)^2}{16\pi A^\frac{4}{\gamma} t^\frac{4}{\gamma}} \exp[\mp 2B t^{1-\frac{2}{\gamma}}],$$
while the ’effective’ energy density of $\phi$ as seen by a cosmological observer is given by:
$$\mu^\phi_{eff,\pm}=\frac{(\omega+3) (C_1 \phi_0)^2}{16\pi A^\frac{4}{\gamma} t^\frac{4}{\gamma}} \exp[\mp 2B t^{1-\frac{2}{\gamma}}](1-\frac{4A^\frac{2}{\gamma}t^{\frac{2}{\gamma}-1}}{(\omega+3)\gamma C_1}).$$
In the ’+’ branch of the JF solution both $\mu^\phi$ and $\mu^\phi_{eff}$ are finite for all times. In this case the ’true’ energy density (equation (5.9)) evolves from zero at $t=0$ ($\tau=-\infty$) to zero in the infinite future ($\tau=+\infty$), through a maximum (finite) value at some intermediate time. It is positive definite for all times. This means that the scalar matter is non-exotic and non-singular for all times. The ’effective’ scalar field energy density (5.10) evolves from zero at $t=0$ ($\tau=-\infty$) to zero at $t^*=[(\omega+1)(2-\gamma)]^\frac{\gamma}{2-\gamma}$, through a maximum (finite) value at some prior time. In this range of times $\mu^\phi_{eff}$ is positive definite. It then evolves from zero at $t^*$ to zero at $t=+\infty$ ($\tau=+\infty$), through a maximum absolute value at some time $t^*<t<+\infty$. In this range of times $\mu^\phi_{eff}$ is negative definite.
For the perfect fluid barotropic ordinary matter we found that the energy density in the ’plus’ branch of the Jordan frame solution is given by:
$$\mu=(\frac{C_2}{A})^2 \frac{\phi_0}{t^2} \exp[-B t^{1-\frac{2}{\gamma}}].$$
It evolves from zero at $t=0$ ($\tau=-\infty$) into zero density at $t=+\infty$ ($\tau=+\infty$), through a maximum value $\mu^*= e^{-2}[A^{6-\gamma}(\frac{2(2-\gamma)}{\gamma C_1})^{2\gamma}]^\frac{1}{2-\gamma}$ at $t^*=(\frac{\gamma C_1}{2(2-\gamma)})^\frac{\gamma}{2-\gamma}\frac{1}{A^\frac{2}{2-\gamma}}$, i.e. it is bounded for all times. This means that the energy density as measured by a comoving observer is never singular (this is true for the sum of (5.9) and (5.11) as well as for the sum of (5.10) and (5.11)).
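As a quick consistency check (our own computation, not part of the original text), the location of this maximum follows from setting the logarithmic derivative of (5.11) to zero:

```latex
\frac{d}{dt}\ln\mu
  = -\frac{2}{t} - B\Bigl(1-\frac{2}{\gamma}\Bigr)\, t^{-\frac{2}{\gamma}} = 0
\quad\Longrightarrow\quad
(t^*)^{\,1-\frac{2}{\gamma}} = \frac{2\gamma}{B\,(2-\gamma)} ,
```

which is finite and positive for $0<\gamma<2$; matching this to the explicit expressions for $t^*$ and $\mu^*$ quoted above requires the definition of $B$ in terms of $A$, $C_1$ and $\gamma$.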
The Jordan frame or the Einstein frame after all?
================================================
In this section we discuss the physical implications of the viewpoint developed in the present paper. Our proposal is based on the postulate that the different conformal formulations of general relativity are physically equivalent. Among these conformal representations of GR, the Jordan frame and the Einstein frame formulations are distinguished.
For the purpose of the present discussion we shall take the static, spherically symmetric solution presented in section IV. In the Einstein frame for $\omega=-\frac{3}{2}$ this is the typical Schwarzschild black hole solution. A time-like singularity at the origin of the Schwarzschild radial coordinate is enclosed by an event horizon at $r=2m$. To a distant observer, an observer falling into the black hole asymptotically approaches the event horizon but never crosses the surface $r=2m$. The same is true for a distant observer in the Jordan frame, because spacetime coincidences are not affected by a conformal transformation of the metric. This means that, to a distant observer, the black hole never forms, neither in the EF nor in the JF. The situation is dramatically different for an observer falling into the black hole (in the JF we have a wormhole instead of a black hole). In the Einstein frame he crosses the event horizon and inevitably hits the singularity at $r=0$ in a finite proper time. By contrast, in the Jordan frame this observer never sees any singularity (he never crosses the surface $r=2m$). For $\omega>-\frac{3}{2}$, in the Einstein frame, a distant observer finds that an observer falling into the singular point $r=\frac{2m}{p}$ will reach it in a finite time. In this case the singularity at $\frac{2m}{p}$ is naked, so it is seen by the distant observer. The same is true for a distant observer in the Jordan frame: he finds that the observer falling into the surface $r=\frac{2m}{p}$ (the beginning of the JF proper radial coordinate $\rho=0$) will reach it in a finite time. However, in this case, the surface $r=\frac{2m}{p}$ is non-singular for $q\geq2-p$ ($0<p\leq1$), because the curvature scalar $R=-\frac{6m^2 q^2}{p^2 r^4}(1-\frac{2m}{pr})^{p+q-2}$ is finite at $\frac{2m}{p}$. In the Einstein frame, the observer falling into the singular point $r=\frac{2m}{p}$ hits the singularity in a finite proper time.
In the Jordan frame the falling observer never meets any singularity. Moreover, it takes the falling observer an infinite proper time to reach the surface $r=\frac{2m}{p}$.
Summing up: the physics as seen by a distant observer is the same in the Einstein and in the Jordan frames, since spacetime coincidences are unchanged under the conformal rescaling of the metric. By contrast, the physics seems dramatically different to the falling observer. In the Einstein frame he hits the singularity in a finite proper time, while in the Jordan frame he never meets any singularity. This is very striking because, according to our proposal, both situations are physically equivalent (they are equally consistent with the observational evidence). However, the falling observer is part of the physical reality, and the physical reality is unique (a prime postulate of physics).
We cannot pretend to give a final answer to this paradoxical situation since we feel it is a very deep question. We shall, however, conjecture on this subject. Two explanations of this striking situation are possible. The first one is based on the fact that Einstein’s theory is a classical theory of spacetime, and near the singular point we need a quantum theory (such a theory has not been well established at present). Once a viable quantum gravity theory is worked out, it may be that this singularity is removed. In the Jordan frame no singularity occurs (for $q\geq2$) and, consequently, we do not need any quantum theory for describing gravitation. This explanation is in agreement with a point of view developed in reference [@shojai]. According to this viewpoint, to bring the quantum effects into the classical gravity theory one needs to make only a conformal transformation. If we start with Einstein’s classical theory of gravitation, then we can bring in the quantum effects of matter by simply making a conformal transformation into, say, the Jordan frame. In this sense the Jordan frame formulation of general relativity already contains the quantum effects (Jordan frame GR represents a unified description of both gravity and the quantum effects of matter).
The second possibility is more radical and has already been outlined above in this paper. The Einstein frame formulation is not invariant under the particular transformations of the units of time, length and mass studied in section III. This is striking, since the physical laws should be invariant under transformations of units. By contrast, the Jordan frame formulation of general relativity is invariant with respect to these transformations. This means that the picture without singularities is more viable than the one with them, i.e. spacetime singularities are not physical. They are fictitious entities due to a wrong choice of the formulation of the theory.
We recall that these are just conjectures and we hope to discuss further on this point in future works.
[**ACKNOWLEDGEMENTS**]{}
We thank A. G. Agnese and D. Wands for helpful comments. We also acknowledge the anonymous referees for their recommendations and criticism, and the MES of Cuba for financing.
[99]{} C. Brans and R. H. Dicke, Phys. Rev. **124**, 925 (1961). R. H. Dicke, Phys. Rev. **125**, 2163 (1962). A. Albrecht and J. Magueijo, Phys. Rev. D **59**, 043516 (1999), astro-ph/9811018. G. Magnano and L. M. Sokolowski, Phys. Rev. D **50**, 5039 (1994); G. Magnano, ’Talk given at the XII Italian conference on general relativity and gravitation’, Trieste, Sep. 26-30, 1994, gr-qc/9511027; L. M. Sokolowski in ’Proceedings of the 14th International Conference on General Relativity and Gravitation’, Firenze, Italy 1995, M. Francaviglia, G. Longhi, L. Lusanna, E. Sorace (eds.), 337 (World Scientific, 1997), gr-qc/9511073. I. Quiros, gr-qc/9904004. V. Faraoni, E. Gunzig and P. Nardone, IUCAA 24/98, gr-qc/9811047 (to appear in ’Fundamentals of Cosmic Physics’). A. Eddington, ’Space, time and gravitation’, Chapter Prologue: What is geometry? (Cambridge at the University Press, 1953). Y. M. Cho, Phys. Rev. Lett. **68**, 3133 (1992); M. Rainer and A. Zhuk, Phys. Rev. D **54**, 6186 (1996); Y. Gong and Y. Z. Zhang, Europhys. Lett. **31**, 7 (1995); Y. Gong, gr-qc/9809015. V. Faraoni, Phys. Lett. A **245**, 26 (1998). D. I. Santiago and A. S. Silbergleit, gr-qc/9904003. A. Shapere, S. Trivedi and F. Wilczek, Mod. Phys. Lett. A **6**, 2677 (1991); A. Sen, Mod. Phys. Lett. A **6**, 2023 (1991). E. J. Copeland, R. Easther and D. Wands, Phys. Rev. D **58**, 374 (1997), hep-th/9701082. T. Fujiwara, Y. Igarashi and J. Kubo, Int. J. Mod. Phys. A **9**, 4811 (1994). A. Agnese and M. La Camera, Phys. Rev. D **31**, 1280 (1985); Phys. Rev. D **51**, 2011 (1995); A. Tomimatsu in H. Sato and T. Nakamura (eds.), ’Gravitational collapse and relativity’, 417 (World Scientific, 1986). R. Penrose, Nuovo Cimento **1**, 533 (1965). M. Visser and D. Hochberg, gr-qc/9710001. I. Quiros, R. Bonal and R. Cardenas, gr-qc/9908075. F. Shojai, A. Shojai and M. Golshani, Mod. Phys. Lett. A **13**, No. 34, 2725 (1998); Mod. Phys. Lett. A **13**, No. 36, 2915 (1998).
[^1]: israel@uclv.etecsa.cu
---
abstract: 'A conjecture by Aharoni and Berger states that every family of $n$ matchings of size $n+1$ in a bipartite multigraph contains a rainbow matching of size $n$. In this paper we prove that matching sizes of $\left(\frac 3 2 + o(1)\right) n$ suffice to guarantee such a rainbow matching, which is asymptotically the same bound as the best known one in case we only aim to find a rainbow matching of size $n-1$. This improves previous results by Aharoni, Charbit and Howard, and Kotlar and Ziv.'
author:
- Dennis Clemens
- Julia Ehrenmüller
bibliography:
- 'rainbowmatchings.bib'
title: An improved bound on the sizes of matchings guaranteeing a rainbow matching
---
Introduction
============
In this paper we are concerned with the question of which sizes of $n$ matchings in a bipartite multigraph suffice to guarantee a rainbow matching of size $n$.
One motivation for considering these kinds of problems comes from some well-known conjectures on Latin squares. A *Latin square* of order $n$ is an $n \times n$ matrix in which each symbol appears exactly once in every row and exactly once in every column. A *partial transversal* in a Latin square is a set of entries with distinct symbols such that from each row and each column at most one entry is contained in this set. We call a partial transversal of size $n$ in a Latin square of order $n$ simply a *transversal*. A famous conjecture of Ryser [@ryser1967] states that for every odd integer $n$ any Latin square of order $n$ contains a transversal. The conjecture is known to be true for $n \leq 9$. Omitting the restriction to odd numbers yields a false statement. Brualdi [@brualdi1991; @denes1974] and Stein [@stein1975] independently formulated the following conjecture for all orders $n$.
\[conj:brualdi\] For every $n \geq 1$ any Latin square of order $n$ has a partial transversal of size $n-1$.
A natural way to transfer this problem to graphs is the following. Let $L=(\ell_{i,j})_{i,j\in[n]}$ be a Latin square of order $n$. We define $G_{L}:=(A\cup B, E)$ as the complete bipartite edge-coloured graph with partite sets $A=\{a_1,\ldots,a_n\}$ and $B=\{b_1,\ldots,b_n\}$, where $a_ib_j$ is coloured $\ell_{i,j}$. That is, $A$ and $B$ represent the columns and rows of $L$, respectively. Moreover, a transversal of $L$ corresponds to a perfect matching in $G_L$ that uses each edge colour exactly once, which we call a *rainbow matching* of size $n$. Using this notion, Conjecture \[conj:brualdi\] is equivalent to the following: For every $n\geq 1$ any complete bipartite edge-coloured graph, the colour classes of which are perfect matchings, contains a rainbow matching of size $n-1$. One may wonder whether this might even be true in the more general setting of bipartite edge-coloured multigraphs.
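To make this correspondence concrete, here is a small illustrative Python sketch (ours, not the paper's; the function name and encoding are our own choices). It decides by brute force whether a Latin square has a transversal, i.e. whether the associated edge-coloured graph $G_L$ has a rainbow matching of size $n$:

```python
from itertools import permutations

def has_transversal(L):
    """Brute force: does the Latin square L (an n x n list of symbols)
    have a transversal, i.e. a rainbow perfect matching in G_L?

    A perfect matching in G_L pairs column a_i with row b_{s(i)} for a
    permutation s; it is rainbow iff the chosen symbols L[s(i)][i] are
    all distinct."""
    n = len(L)
    return any(len({L[s[i]][i] for i in range(n)}) == n
               for s in permutations(range(n)))

# Order 3 (odd): Ryser's conjecture predicts a transversal, and indeed:
L3 = [[0, 1, 2],
      [1, 2, 0],
      [2, 0, 1]]
assert has_transversal(L3)

# Order 2: the cyclic square has no transversal, illustrating why
# Conjecture 1 only asks for a partial transversal of size n - 1:
L2 = [[0, 1],
      [1, 0]]
assert not has_transversal(L2)
```

The exhaustive search over all $n!$ permutations is of course only feasible for very small orders; it is meant purely as an executable restatement of the definitions.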
Following Aharoni, Charbit and Howard [@aharoni2015], we define $f(n)$ to be the smallest integer $m$ such that every bipartite edge-coloured multigraph with exactly $n$ colour classes, each being a matching of size at least $m$, contains a rainbow matching of size $n$. Aharoni and Berger [@aharoni2009] conjectured the following generalization of Conjecture \[conj:brualdi\].
\[conj:aharoni\] For every $n\geq 1$ we have $f(n) = n+1$.
The first approaches towards this conjecture are given by the bounds $f(n)\leq \left\lfloor \frac 7 4 n\right\rfloor$ due to Aharoni, Charbit and Howard [@aharoni2015] and $f(n) \leq \left\lfloor \frac 5 3 n\right\rfloor$ due to Kotlar and Ziv [@kotlar2014]. Here, we give an improved bound, which is asymptotically the same as the best known bound on the sizes of the colour classes in case we aim to find a rainbow matching of size $n-1$ [@kotlar2014]. In particular, we prove the following.
\[thm:main\] For every ${\varepsilon}>0$ there exists an integer $n_0\geq 1$ such that for every $n \geq n_0$ we have $f(n)\leq \left(\frac{3}{2} + {\varepsilon}\right)n$.
Subsequently, we use the following notation. Let $G$ be a bipartite multigraph with partite sets $A$ and $B$ and let $R$ be a matching in $G$. For a set $X \subseteq A$ we denote by $N_G(X|R):= \{y \in B: \exists xy\in R \text{ with } x \in X\}$ the neighbourhood of $X$ with respect to $R$. For the sake of readability, we omit floor and ceiling signs and do not intend to optimize constants in the proofs.
Proof of Theorem \[thm:main\]
=============================
In this section we give a proof of Theorem \[thm:main\], the idea of which can be summarized as follows. We start by assuming for a contradiction that a maximum rainbow matching in the given graph $G=(A\cup B,E)$ is of size $n-1$. A rainbow matching of this size is known to exist [@kotlar2014]. We fix such a matching $R$ and find two sequences $e_1,\ldots,e_k$ and $g_1,\ldots,g_k$ of edges, the first consisting of edges from $R$ and the second consisting of edges outside $R$. We then show that either we can switch between some of the edges from the edge sequences to produce a rainbow matching of size $n$ (see the proofs of Claims \[claim1\], \[claim2\] and \[claim3\]), or the matchings represented by the edges $e_1,\ldots,e_k$ need to touch at least $n$ vertices in $B$ that are saturated by $R$, both leading to a contradiction. To make the second case more precise, we additionally introduce in the proof certain sequences $X_1,\ldots,X_k\subseteq A$ and $Y_1,\ldots,Y_k\subseteq B$.
Let ${\varepsilon}>0$ be given and whenever necessary we may assume that $n$ is large enough. Let ${{\mathcal F}}=\{F_0,\ F_1,\ \ldots,\ F_{n-1}\}$ be a family of $n$ matchings of size at least $(3/2 +\varepsilon)n$ in a bipartite multigraph $G=(A\cup B,E)$ with partite sets $A$ and $B$. We aim to find a rainbow matching of size $n$.
For a contradiction, let us assume that there is no such matching. As shown in [@kotlar2014], there must exist a rainbow matching $R$ of size $n-1$. We may assume without loss of generality that none of the edges of $F_0$ appears in $R$. Let $t$ be the smallest positive integer with $1/(2t-1)\leq \varepsilon$. Moreover, let $X\subseteq A$ and $Y\subseteq B$ be the sets of vertices that are saturated by $R$, i.e. incident with some edge of $R$.
In the following we show that for every $k \in [t]$ we can construct sequences
1. $e_1,\ldots,e_k$ of $k$ distinct edges $e_i=x_iy_i$ in $R$ with $x_i\in X$ and $y_i\in Y$,
2. $g_1,\ldots,g_k$ of $k$ distinct edges $g_i=z_iy_i$ with $z_i\in A\setminus X$,
3. $ X_1, \ldots, X_k$ of subsets of $X$,
4. $ Y_1, \ldots, Y_k$ of subsets of $Y$,
and an injective function $\pi: \{0,1,\ldots,k\} \rightarrow \{0,1,\ldots,n-1\}$ with $\pi(0):=0$ such that the following properties hold:
1. \[color\_e\] for each $ i\in [k]$ we have $e_i\in F_{\pi(i)}$,
2. \[color\_g\] for each $ i\in [k]$ we have $g_i\in \bigcup_{j=0}^{i-1} F_{\pi(j)}$,
3. \[disjoint\] $(e_1\cup\ldots\cup e_k) \cap (X_k\cup Y_k)=\varnothing$,
4. \[sizes\] $|X_k|=|Y_k|=s_k:=2k\varepsilon n + k(7-3k)/2$,
5. \[many\_colors\] for each $ i\in [k]$ and each $j\in\{0,\ldots,n-1\}$ it holds that if $R$ contains an edge of the matching $F_j$ between $X_i$ and $Y_i$, then there is also an edge of $F_j$ between $x_i$ and $B\setminus Y$,
6. \[good\_edges\] for each $i\in [k]$ and each $w\in Y_i\setminus Y_{i-1}$ there exists a vertex $v\in A\setminus (X\cup \{z_1,\ldots,z_{i-1}\})$ such that $vw\in F_{\pi(i-1)}$ (where $Y_0 := \varnothing$), and
7. \[different\_endpoints\] for each $i\in [k]$ and each $j\in [i-1]$ it holds that if $g_i\in F_{\pi(j)}$, then $z_i\in A\setminus (X\cup \{z_1,\ldots,z_j\})$.
Before we start with the construction, let us first observe that by Property \[sizes\] we have a set $Y_t\subseteq Y$ which satisfies $2t\varepsilon n + t(7-3t)/2= |Y_t| \leq |Y| <n$. However, for large enough $n$ and by the choice of $t$ we have that $2t\varepsilon n + t(7-3t)/2 > n$, a contradiction.
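To spell out the final inequality (our own one-line check, using $\varepsilon\geq\frac{1}{2t-1}$, which holds by the minimality of $t$):

```latex
2t\varepsilon n + \frac{t(7-3t)}{2}
  \;\geq\; \frac{2t}{2t-1}\,n + \frac{t(7-3t)}{2}
  \;=\; n + \frac{n}{2t-1} + \frac{t(7-3t)}{2}
  \;>\; n
```

for sufficiently large $n$, since the last summand is a constant depending only on $\varepsilon$.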
In order to find the sequences described above, we proceed by induction on $k$. For the base case, let us argue why we find edges $e_1$, $g_1$, sets $X_1$, $Y_1$, and an injective function $\pi$ with Properties \[color\_e\]-\[different\_endpoints\]. First observe that $F_0$ does not have any edges between $A\setminus X$ and $B \setminus Y$, by assumption on $R$. As $|F_0| \geq(3/2+{\varepsilon})n$, there are at least $(1/2+{\varepsilon})n+1$ edges of $F_0$ between $A\setminus X$ and $Y$. Let $N_0 \subseteq Y$ denote a set of size $(1/2+{\varepsilon})n+1$ such that for every vertex $w \in N_0$ there exists a vertex $v\in A\setminus X$ such that $vw \in F_0$. Furthermore, let $X_1':= N_G(N_0|R)$ and let ${{\mathcal R}}_1 := \{F_j \in {{\mathcal F}}: F_j \cap R[N_0, X_1']\neq \varnothing\}$.
Let $F$ be any matching in ${{\mathcal R}}_1$, let $vw$ be the unique edge in $R[N_0, X_1'] \cap F$ and let $z \in A\setminus X$ be the unique vertex such that $zw \in F_0$. Notice that there cannot be any edge $g$ of $F$ between $A\setminus (X \cup \{z\})$ and $B\setminus Y$, since otherwise $(R\setminus \{vw\})\cup \{zw,g\}$ would give a rainbow matching of size $n$, in contradiction with $R$ being a maximum rainbow matching. Therefore, there are at least $(1/2+{\varepsilon})n + 1$ edges of $F$ between $B \setminus Y$ and $X \cup \{z\}$. Since $|X_1'| = (1/2+{\varepsilon})n+1$, there are at least $2{\varepsilon}n +2$ edges of $F$ between $B\setminus Y$ and $X_1'$. Since this is true for any $F \in {{\mathcal R}}_1$, we know by the pigeonhole principle that there is a vertex $x_1 \in X_1'$ and a subset $X_1 \subseteq X_1'$ of size $2{\varepsilon}n +2$ such that, for every $F_j\in {{\mathcal F}}$, if $F_j \cap R[X_1, N_G(X_1|R)]\neq \varnothing$ then $F_j$ has an edge between $x_1$ and $B\setminus Y$. Note that $x_1\notin X_1$. Let $e_1= x_1y_1$ be the unique edge in $R$ incident with $x_1$ and let $g_1 = z_1y_1$ be the unique edge of $F_0$ incident with $y_1 \in N_0$. Set $\pi(1)$ to the unique index $j\in [k]$ such that $e_1\in F_j$. One can easily verify that $e_1 = x_1y_1$, $g_1 = z_1y_1$, $X_1$, $Y_1 := N_G(X_1|R)$, and $\pi$ satisfy Properties \[color\_e\]-\[different\_endpoints\].
For the induction hypothesis let us assume that for some $k \in [t-1]$ the above sequences are given with Properties \[color\_e\]-\[different\_endpoints\]. We now aim to extend these by edges $e_{k+1}, g_{k+1}$, sets $X_{k+1}, Y_{k+1}$, and a value $\pi(k+1)$ while maintaining Properties \[color\_e\]-\[different\_endpoints\]. We start with some useful claims.
\[claim1\] $F_{\pi(k)}$ has no edge between $A\setminus (X\cup \{z_1,\ldots,z_k\})$ and $B\setminus Y$.
Assume for a contradiction that there exists an edge $g\in F_{\pi(k)}$ between the sets $A\setminus (X\cup \{z_1,\ldots,z_k\})$ and $B\setminus Y$. (See Figure \[fig:cl1\] for an illustration.) By Property \[color\_g\] we find a sequence $k>j_1>j_2>\ldots>j_s=0$ with $1\leq s\leq k$ such that $$\begin{aligned}
g_k & \in F_{\pi(j_1)}\ , \\
g_{j_i} & \in
F_{\pi(j_{i+1})} \text{\ \ for }i< s.\end{aligned}$$ Moreover, according to Property \[different\_endpoints\] we know that $z_k,z_{j_1},\ldots,z_{j_{s-1}}$ are distinct, and thus, also using Property \[color\_e\], we conclude that $$(R\setminus \{e_k,e_{j_1},\ldots,e_{j_{s-1}}\})\cup \{g_k,g_{j_1},\ldots,g_{j_{s-1}},g\}$$ forms a rainbow matching which is larger than $R$, a contradiction.
\[htbp\] ![*Example with $g_{j_2}\in F_{\pi(0)}$ ($s=3$). The dotted edges $\{e_k,e_{j_1},e_{j_2}\}$ are replaced by the edges $\{g_k,g_{j_1},g_{j_2},g\}$ to obtain a larger rainbow matching.*](cl1 "fig:")\[fig:cl1\]
\[claim2\] $F_{\pi(k)}$ has no edge between $A\setminus (X\cup \{z_1,\ldots,z_k\})$ and $Y_k$.
Assume for a contradiction that there is an edge $g\in F_{\pi(k)}$ between the sets $A\setminus (X\cup \{z_1,\ldots,z_k\})$ and $Y_k$. (See Figure \[fig:cl2\] for an illustration.) Let $e$ be the unique edge in $R$ which is adjacent to $g$. Observe that $e$ lies between $X_k$ and $Y_k$ by assumption. Let $j\in [n-1]$ be such that $e\in F_j$. By Property \[disjoint\] we have $e\notin \{e_1,\ldots,e_k\}$. Thus, using Property \[color\_e\] and the fact that $R$ is a rainbow matching, we can conclude that $j\notin \{\pi(i):1\leq i\leq k\}$. Now, by Property \[many\_colors\] it holds that there is an edge $\overline{e}\in F_j$ between $x_k$ and $B\setminus Y$. Moreover, by Properties \[color\_g\] and \[different\_endpoints\], we find a sequence $k>j_1>j_2>\ldots>j_s=0$ with $1\leq s\leq k$ such that $$\begin{aligned}
g_k & \in F_{\pi(j_1)}\ , \\
g_{j_i} & \in
F_{\pi(j_{i+1})} \text{\ \ for }i< s\end{aligned}$$ and all vertices $z_k,z_{j_1},\ldots,z_{j_{s-1}}$ are distinct. Therefore, using Property \[color\_e\], we conclude that $$(R\setminus \{e_k,e_{j_1},\ldots,e_{j_{s-1}},e\})\cup \{g_k,g_{j_1},\ldots,g_{j_{s-1}},\overline{e},g\}$$ forms a rainbow matching which is larger than $R$, a contradiction.
\[htbp\] ![*Example with $g_{j_2}\in F_{\pi(0)}$ ($s=3$). The dotted edges $\{e_k,e_{j_1},e_{j_2},e\}$ are replaced by the edges $\{g_k,g_{j_1},g_{j_2},\overline{e},g\}$ to obtain a larger rainbow matching.*](cl2 "fig:")\[fig:cl2\]
\[existence\_Nk\] The matching $F_{\pi(k)}$ has at least $\left(\frac{1}{2}+\varepsilon\right)n+1-2k$ edges between $A\setminus (X\cup \{z_1,\ldots,z_k\})$ and $Y\setminus (Y_k\cup \{y_1,\ldots,y_k\})$.
As $|F_{\pi(k)}|\geq (3/2+\varepsilon)n$ and $|X\cup \{z_1,\ldots,z_k\}|\leq n-1+k$, we conclude that at least $(1/2+\varepsilon)n+1-k$ edges of $F_{\pi(k)}$ are incident with vertices in $A\setminus (X\cup \{z_1,\ldots,z_k\})$. Each of these edges intersects $Y\setminus Y_k$ by the previous claims and thus the statement follows.
In the following, let $N_k\subseteq Y\setminus (Y_k\cup \{y_1,\ldots,y_k\})$ be a set of size $(1/2+\varepsilon)n+1-2k$ such that for each vertex $w\in N_k$ there is a vertex $v\in A\setminus (X\cup \{z_1,\ldots,z_k\})$ with $vw\in F_{\pi(k)}$. Such a set exists by the previous corollary. Moreover, let $$Y_{k+1}':=Y_k\cup N_k$$ and let $X_{k+1}':= N_G(Y_{k+1}'|R)$ be the neighbourhood of $Y_{k+1}'$ with respect to $R$. By Property \[sizes\], and as $N_k\cap Y_k=\varnothing$, we obtain $$\begin{aligned}
|X_{k+1}'|=|Y_{k+1}'|&=2k\varepsilon n + \frac{k(7-3k)}{2} + \left(\frac{1}{2}+\varepsilon\right)n+1-2k \nonumber \\
& = \frac{1}{2}n + (2k+1)\varepsilon n + \frac{-3k^2+3k+2}{2}\ . \label{sizeXY}\tag{$\ast$}\end{aligned}$$
We now look at all matchings that have an edge in $R$ between $X_{k+1}'$ and $Y_{k+1}'$. Formally, we consider $${{\mathcal R}}_{k+1}:=\big\{F_j\in {{\mathcal F}}: F_j\cap R[X_{k+1}',Y_{k+1}']\neq \varnothing\big\}\ .$$
\[claim3\] Every $F_j\in {{\mathcal R}}_{k+1}$ has at least $s_{k+1}$ edges between $X_{k+1}'$ and $B\setminus Y$.
The main argument is similar to that of Claim \[claim1\] - Corollary \[existence\_Nk\]. For $F_j\in {{\mathcal R}}_{k+1}$ let $f=vw$, with $v\in X_{k+1}',\ w\in Y_{k+1}'$, denote the unique edge in $F_j\cap R[X_{k+1}',Y_{k+1}']$. Since $Y_{k+1}':=Y_k\cup N_k$, we either have $w\in Y_k$ or $w\in N_k$. In particular, by Property \[disjoint\] from the hypothesis and by the definition of $N_k$, we know that $w\notin \{y_1,\ldots,y_k\}$, and therefore $j\notin \{\pi(i): 0\leq i\leq k\}$.
If $w\in Y_k$, then we find an integer $j_1\in [k]$ such that $w\in Y_{j_1}\setminus Y_{j_1-1}$ since $Y_k=\bigcup_{i\in [k]} Y_i\setminus Y_{i-1}$, and by Property \[good\_edges\] there is a vertex $z\in A\setminus (X\cup \{z_1,\ldots,z_{j_1-1}\})$ such that $zw\in F_{\pi(j_1-1)}$.\
If otherwise $w\in N_k$, we find a vertex $z\in A\setminus (X\cup \{z_1,\ldots,z_k\})$ such that $zw\in F_{\pi(k)}$, by construction of $N_k$. In either case, let us fix this particular vertex $z$. We now prove the claim by showing first that (i) $F_j$ has no edge between $A\setminus (X\cup \{z_1,\ldots,z_k,z\})$ and $B\setminus Y$, and then we conclude that (ii) the statement holds for $F_j$.
We start with the discussion of (i). So, assume that $F_j$ has an edge $\overline{f}$ between $A\setminus (X\cup \{z_1,\ldots,z_k,z\})$ and $B\setminus Y$.
If $w\in Y_k$, then by the definition of $z$ we have $zw\in F_{\pi(j_1-1)}$, with $j_1$ being defined above. We can assume that $j_1> 1$, as otherwise $zw\in F_0$ and thus $(R\setminus \{f\})\cup \{\overline{f},zw\}$ forms a full rainbow matching, in contradiction to our main assumption. But then, using Property \[color\_g\], we find a sequence $j_1-1>j_2>\ldots >j_s=0$ with $2\leq s<k$ such that $$\begin{aligned}
g_{j_1-1} & \in F_{\pi(j_2)}\ , \\
g_{j_i} & \in F_{\pi(j_{i+1})} \text{\ \ for }2\leq i\leq s-1\end{aligned}$$ and, by Property \[different\_endpoints\] and since $z\in A\setminus (X\cup \{z_1,\ldots,z_{j_1-1}\})$, all the vertices $z,z_{j_1-1},z_{j_2},\ldots,z_{j_{s-1}}$ are distinct. We thus find the rainbow matching $$(R\setminus \{e_{j_1-1},e_{j_2},\ldots,e_{j_{s-1}},f\})\cup \{g_{j_1-1},g_{j_2},\ldots,g_{j_{s-1}},\overline{f},zw\}$$ which is larger than $R$, a contradiction.
\[htbp\] ![*Example with $g_{j_2}\in F_{\pi(0)}$, in case $w\in Y_k$. The dotted edges $\{e_{j_1-1},e_{j_2},f\}$ are replaced by the edges $\{g_{j_1-1},g_{j_2},\overline{f},zw\}$ to obtain a larger rainbow matching.*](cl3a "fig:")\[fig:cl3a\]
If otherwise $w\in N_k$, then $zw\in F_{\pi(k)}$. Analogously we find a sequence $k>j_1>j_2>\ldots>j_s=0$ with $1\leq s\leq k$ such that $g_{k} \in F_{\pi(j_1)}$ and $g_{j_i} \in F_{\pi(j_{i+1})}$ for $i<s$, and we obtain a contradiction as $$(R\setminus \{e_{k},e_{j_1},\ldots,e_{j_{s-1}},f\})\cup \{g_{k},g_{j_1},\ldots,g_{j_{s-1}},\overline{f},zw\}$$ forms a rainbow matching which is larger than $R$. Thus, we are done with part (i).
Let us proceed with (ii): $F_j$ needs to saturate at least $(1/2 + \varepsilon )n+1$ vertices of $B\setminus Y$, as $|F_j|\geq (3/2 + \varepsilon )n$ and $|Y|\leq n-1$. Thus, by part (i), we have at least $(1/2 + \varepsilon )n+1$ edges of $F_j$ between $X\cup \{z_1,\ldots,z_k,z\}$ and $B\setminus Y$. Using (\[sizeXY\]), we further calculate that $$\begin{aligned}
|X\cup \{z_1,\ldots,z_k,z\}|-|X_{k+1}'| & \leq (n+k) - \left(\frac{1}{2}n + (2k+1)\varepsilon n + \frac{-3k^2+3k+2}{2} \right)\\
& = \frac{1}{2}n - (2k+1)\varepsilon n + \frac{3k^2-k-2}{2} \ .\end{aligned}$$ Thus, the number of edges in $F_j$ between $X_{k+1}'$ and $B\setminus Y$ needs to be at least $$\begin{aligned}
\left( \frac{1}{2} + \varepsilon \right)n+1 - \left( \frac{1}{2}n - (2k+1)\varepsilon n + \frac{3k^2-k-2}{2} \right) = s_{k+1}\ ,
\end{aligned}$$ as claimed.
We now proceed with the construction of the edges $e_{k+1}, g_{k+1}$ and the sets $X_{k+1}, Y_{k+1}$, and afterwards we show that all required properties are maintained. The next corollary is, by the pigeonhole principle, an immediate consequence of Claim \[claim3\].
\[sequence\] There exists a vertex $x_{k+1}\in X_{k+1}'$, a set $X_{k+1}\subseteq X_{k+1}'$ of size $s_{k+1}$ and its neighbourhood $Y_{k+1}\subseteq Y_{k+1}'$ with respect to $R$ such that the following holds for every $j\in[n-1]$: If $F_j\cap R[X_{k+1},Y_{k+1}]\neq \varnothing$, then $F_j$ has an edge between $x_{k+1}$ and $B\setminus Y$.
To extend the sequences, choose $X_{k+1}$ and $Y_{k+1}$ according to Corollary \[sequence\], and let $e_{k+1}=x_{k+1}y_{k+1}$ be the unique edge in $R$ that is incident with $x_{k+1}$. Note that $x_{k+1}\notin X_{k+1}$, as otherwise $x_{k+1}$ would need to be incident to two edges of the same matching $F_j$.
Observe that $y_{k+1}\notin \{y_1,\ldots,y_k\}$. Indeed, $y_{k+1}\in Y_{k+1}'=Y_k\cup N_k$, and by construction we have $N_k\cap \{y_1,\ldots,y_k\}=\varnothing$, while $Y_k\cap \{y_1,\ldots,y_k\}=\varnothing$ holds by Property \[disjoint\].
Now, let $e_{k+1}\in F_j$. As $e_{k+1}\in R\setminus \{e_1,\ldots,e_k\}$, we have $j\notin \{\pi(i):\ 0\leq i\leq k\}$. We extend the injective function $\pi$ with $\pi(k+1)=j$.
Finally, we choose $g_{k+1}$ as follows: If $y_{k+1}\in N_k$, then by construction of $N_k$ there is a vertex $z_{k+1}\in A\setminus (X\cup \{z_1,\ldots,z_k\})$ with $z_{k+1}y_{k+1}\in F_{\pi(k)}$. Otherwise, if $y_{k+1}\in Y_k$, then there is an $i\in [k]$ with $y_{k+1}\in Y_i\setminus Y_{i-1}$, and by Property \[good\_edges\] there is a vertex $z_{k+1}\in A\setminus (X\cup \{z_1,\ldots, z_{i-1}\})$ such that $z_{k+1}y_{k+1}\in F_{\pi(i-1)}$. In any case, we set $g_{k+1}:=z_{k+1}y_{k+1}$.
\[claim:properties\] The extended sequences satisfy Properties \[color\_e\]-\[different\_endpoints\].
Properties \[color\_e\] and \[color\_g\] follow immediately from the induction hypothesis and from the definition of $\pi(k+1)$ and $g_{k+1}$. By construction, we have $Y_{k+1}\subseteq Y_{k+1}' = Y_k \cup N_k$. By Property \[disjoint\] of the induction hypothesis and by the definition of $N_k$, we have $\{y_1,\ldots,y_k\} \cap Y_{k+1} = \varnothing$. It follows from the construction of $X_{k+1}$ (Corollary \[sequence\]) that $y_{k+1} \notin Y_{k+1}$. By symmetry, we have $\{e_1, \ldots, e_{k+1}\}\cap(X_{k+1}\cup Y_{k+1}) = \varnothing$, which shows Property \[disjoint\]. Properties \[sizes\] and \[many\_colors\] hold by Corollary \[sequence\] and by Property \[many\_colors\] of the induction hypothesis. Recall that $Y_{k+1} \setminus Y_{k} \subseteq N_k$. This means that for every $w\in Y_{k+1}\setminus Y_k$ there exists a vertex $v \in A\setminus (X\cup \{z_1,\ldots, z_{k}\})$ such that $vw \in F_{\pi(k)}$, proving Property \[good\_edges\]. Finally, Property \[different\_endpoints\] holds by the induction hypothesis and since we chose $z_{k+1}$ from a set $A\setminus (X\cup \{z_1,\ldots, z_{i-1}\})$ such that $z_{k+1}y_{k+1}\in F_{\pi(i-1)}$ for the appropriate $i \in [k+1]$. Consequently, all Properties \[color\_e\]-\[different\_endpoints\] are fulfilled by the extended sequences.
Claim \[claim:properties\] concludes the induction and thus the proof of Theorem \[thm:main\].
Open problems and concluding remarks
====================================
In this paper we proved that a collection of $n$ matchings of size $\left(3/2 + o(1)\right)n$ in a bipartite multigraph guarantees a rainbow matching of size $n$. One of the obstacles why our proof does not work for smaller values is that it is not clear which matching sizes are sufficient for guaranteeing a rainbow matching of size $n-1$. More generally, as suggested by Tibor Szabó (private communication), it would be interesting to determine upper bounds on the smallest integer $\mu(n,\ell)$ such that every family of $n$ matchings of size $\mu(n,\ell)$ in a bipartite multigraph guarantees a rainbow matching of size $n-\ell$. One can verify that $\mu(n,\ell) \leq \frac{\ell+2}{\ell+1} n$. Moreover, it holds that $\mu(n, \sqrt{n})\leq n$, which is a generalization (see e.g. [@aharoni2013]) of a result proved in the context of Latin squares by Woolbright [@woolbright], and independently by Brouwer, de Vries and Wieringa [@brouwer1978]. In order to approach Conjecture \[conj:aharoni\], one can also increase the number of matchings and fix their sizes to be equal to $n$, instead of considering families of $n$ matchings of sizes greater than $n$. Drisko [@drisko1998] proved that a collection of $2n-1$ matchings of size $n$ in a bipartite multigraph with partite sets of size $n$ guarantees a rainbow matching of size $n$. He also showed that this result is sharp. This problem can be further investigated in the following two directions. Does the statement also hold if we omit the restriction on the sizes of the vertex classes? And how many matchings do we need to find a rainbow matching of size $n-\ell$ for every $\ell \geq 1$?
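The sharpness of Drisko's bound can be checked by brute force for small $n$. The following Python sketch is our own illustration; it assumes the standard extremal family, namely $n-1$ copies of each of the two perfect matchings of a cycle of length $2n$, and verifies the case $n=3$:

```python
from itertools import combinations, product

def has_rainbow_matching(matchings, size):
    """Brute force: is there a set of `size` pairwise-disjoint edges,
    each taken from a distinct colour class (matching) of the family?"""
    for idxs in combinations(range(len(matchings)), size):
        for choice in product(*(matchings[i] for i in idxs)):
            vertices = [v for edge in choice for v in edge]
            if len(set(vertices)) == 2 * size:  # edges pairwise disjoint
                return True
    return False

# n = 3: the two perfect matchings of the 6-cycle a1 b1 a2 b2 a3 b3.
M1 = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]
M2 = [("a2", "b1"), ("a3", "b2"), ("a1", "b3")]

# 2n - 2 = 4 colour classes of size n, yet no rainbow matching of size n:
assert not has_rainbow_matching([M1, M1, M2, M2], 3)
# With 2n - 1 = 5 classes Drisko's theorem applies, and indeed:
assert has_rainbow_matching([M1, M1, M2, M2, M1], 3)
```

As with the transversal check, this exhaustive search is only meant as an executable restatement of the statement for tiny instances.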
Finally, in case Conjecture \[conj:aharoni\] turns out to be true, it is of interest to see how sharp it is. As shown by Barát and Wanless [@barat2014], one can find constructions of $n$ matchings with $\left\lfloor \frac{n}{2}\right\rfloor -1$ matchings of size $n+1$ and the remaining ones being of size $n$ such that there is no rainbow matching of size $n$. We wonder whether the expression $\left\lfloor \frac{n}{2}\right\rfloor -1$ above could also be replaced by $(1-o(1))n$.
---
abstract: 'Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expert feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its “wide” and “deep” parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.'
author:
- |
Huifeng Guo[^1]$^1$ , Ruiming Tang$^2$, Yunming Ye[^2]$^1$, Zhenguo Li$^2$, Xiuqiang He$^2$\
$^1$Shenzhen Graduate School, Harbin Institute of Technology, China\
$^2$Noah’s Ark Research Lab, Huawei, China\
$^1$huifengguo@yeah.net, yeyunming@hit.edu.cn\
$^2${tangruiming, li.zhenguo, hexiuqiang}@huawei.com
bibliography:
- 'complete.bib'
title: 'DeepFM: A Factorization-Machine based Neural Network for CTR Prediction'
---
[^1]: This work is done when Huifeng Guo worked as intern at Noah’s Ark Research Lab, Huawei.
[^2]: Corresponding Author.
---
abstract: 'Here we study vector bundles $E$ on the Hirzebruch surface $F_e$ such that their twists by a spanned, but not ample, line bundle $M = \mathcal {O}_{F_e}(h+ef)$ have natural cohomology, i.e. $h^0(F_e,E(tM)) >0$ implies $h^1(F_e,E(tM)) = 0$.'
address:
- |
Dept. of Mathematics\
University of Trento\
38050 Povo (TN), Italy
- |
Dip. Matematica, Università di Torino\
via Carlo Alberto 10, 10123 Torino, Italy
author:
- 'E. Ballico'
- 'F. Malaspina'
title: 'Vector bundles on Hirzebruch surfaces whose twists by a non-ample line bundle have natural cohomology'
---
[^1]
Introduction {#S1}
============
Let $F_e$, $e>0$, denote the Hirzebruch surface with a section with self-intersection $-e$. For any $L\in \mbox{Pic}(F_e)$ and any vector bundle $E$ on $F_e$ we will say that $E$ has property $\alpha\alpha$ (resp. $\alpha$) with respect to $L$ if $h^1(F_e,E\otimes L^{\otimes m})=0$ for all $m \in \mathbb {Z}$ (resp. for all $m\in \mathbb {Z}$ such that $h^0(F_e,E\otimes L^{\otimes m}) \ne 0$). We think that property $\alpha$ is nicer for reasonable $L$. We take as a basis of $\mbox{Pic}(F_e) \cong \mathbb {Z}^2$ a fiber $f$ of the ruling $\pi : F_e \to {\bf {P}}^1$ and the section $h$ of $\pi$ with negative self-intersection. Thus $h^2 = -e$, $h\cdot f = 1$ and $f^2 = 0$. We have $\omega _{F_e} \cong \mathcal {O}_{F_e}(-2h-(e+2)f)$. $\mathcal {O}_{F_e}(\alpha h+\beta f)$ is spanned (resp. ample) if and only if $\alpha \ge 0$ and $\beta \ge \alpha e$ (resp. $\alpha > 0$ and $\beta > e\alpha$). The Leray spectral sequence of $\pi$ and Serre duality give that $h^1(F_e,\mathcal {O}_{F_e}(\gamma h+\delta f))=0$ if and only if either $\gamma \ge 0$ and $\delta \ge e\gamma -1$ or $\gamma =-1$ or $\gamma \le -2$ and $-\delta - e-2 \ge e(-\gamma -2)-1$ (i.e. $\delta \le e\gamma +e-1$). We consider as the test line bundle the spanned, but not ample, line bundle $M:= \mathcal {O}_{F_e}(h+ef)$. Notice that the linear system $\vert M^{\otimes 2}\vert$ contains the sum of the effective divisor $h$ and the ample divisor $h+2ef$. Thus for every vector bundle $E$ on $F_e$ there is an integer $m_0(E)$ such that $h^0(F_e,E\otimes M^{\otimes m}) \ne 0$ for all $m \ge m_0(E)$. We will see that property $\alpha\alpha$ is too strong and not interesting (see Remarks \[a0\] and \[a1\]). We stress that property $\alpha$ with respect to $M$ is quite different from similar looking properties (e.g. natural cohomology) with respect to an ample line bundle (see Remarks \[a1\] and \[a2\] for the rank $1$ case). Obviously, properties $\alpha$ and $\alpha\alpha$ may be stated for arbitrary projective varieties. In dimension $n \ge 3$, one needs to choose between vanishing of $h^1$ or vanishing of all $h^i$, $1 \le i \le n-1$. We considered here the example $(F_e,M)$, because it is geometrically significant. Indeed, let $\phi _M$ denote the morphism associated to the base point free linear system $\vert M\vert$. If $e=1$ the morphism $\phi _M$ is the blowing up $F_1 \to {\bf {P}}^2$. If $e\ge 2$, then $\phi _M : F_e \to {\bf {P}}^{e+1}$ contracts $h$ and its image is a cone over the rational normal curve of ${\bf {P}}^e$. Moreover, for any spanned and non-trivial line bundle $L$ on $F_e$ there is an effective divisor $D$ such that $L \cong M(D)$. For any spanned, but not ample line bundle $A$ on $F_e$ there is an integer $c \ge 0$ such that $A \cong M^{\otimes c}$. We prove the following results.
\[o4\] Fix integers $e \ge 1$, $r \ge 1$, $u, v$ such that $v \le e(u-r+1)-2$. Then there is no rank $r$ vector bundle $E$ on $F_e$ with property $\alpha$ with respect to $M$ such that $c_1(E) = \mathcal {O}_{F_e}(uh+vf)$.
\[o2\] Fix integers $e, m, u, v$ such that $e\ge 1$, and $v \ge e(u-1)-1$ and $m \ge 0$. Set $\widetilde{a}:= \sum _{i=0}^{u+2m-2} (v+2m-1-ie)$ and $\widetilde{b}:= \sum _{i=0}^{u+2m-1} (v+2m -ie)$. Fix any integer $s$ such that $\widetilde{a} \le s \le \widetilde{b}$. Then there exists a rank $2$ vector bundle $E$ on $F_e$ with property $\alpha$ with respect to $M$ such that $c_1(E) = \mathcal {O}_{F_e}(uh+vf)$ and $c_2(E):=s -e(u+m-1) + (1-m)(v+em)$. Set $R:= \mathcal {O}_{F_e}(h+(e+1)f)$ and assume $m=0$, $u \ge 3$, and $v < 2eu$. Then we may find $E$ as above which is $R$-stable in the sense of Mumford-Takemoto and (under the additional condition $v \le 2eu -3$) such that $N\cdot M < c_1(E)\cdot M/2$ for all rank $1$ subsheaves $N$ of $E$.
The case $r=1$ of Theorem \[o4\] is obviously true (use the cohomology of line bundles on $F_e$, i.e. Remark \[a1\] below). In this case the converse is true, i.e. $\mathcal {O}_{F_e}(uh+vf)$ has property $\alpha$ with respect to $M$ if and only if $v \ge eu-1$ (Remark \[a1\]). We were surprised that for $r \ge 2$ there is no way to overcome this $c_1$-obstruction.
The assumptions of the last part of Theorem \[o2\] may be relaxed, and instead of $R$ we may take an arbitrary ample divisor $H$. An interesting offshoot of our proof of Theorem \[o2\] is that our examples are given by an extension (\[eqo9\]), and all locally free sheaves fitting in (\[eqo9\]) have property $\alpha$ with respect to $M$ and (under the additional conditions listed in Theorem \[o2\]) are $R$-stable and satisfy $N\cdot M < c_1(E)\cdot M/2$ for all rank $1$ subsheaves $N$ of $E$.
In the case of direct sums of line bundles we will prove the following result.
\[o3\] Fix integers $e \ge 1$, $r \ge 2$ and $L_i\in \mbox{Pic}(F_e)$, $1 \le i \le r$, say $L_i \cong \mathcal {O}_{F_e}(u_ih+v_if)$. Set $E:= L_1\oplus \cdots \oplus L_r$. Up to a permutation of the factors of $E$ we may assume $u_1 \ge \cdots \ge u_r$ and that if $u_i = u_j$ for some $i<j$, then $v_i \ge v_j$. Set $m:= -u_1$ if $v_1 \ge eu_1$ and $m:= -u_1+1$ if $v_1 = eu_1-1$. The vector bundle $E$ has property $\alpha$ with respect to $M$ if and only if $v_i \ge eu_i-1$ for all $i$, and for each $i \in \{2,\dots ,r\}$ either $u_i-m \ge -1$ or $-1 \le v_i-eu_i \le e-1$.
We raise the following question.
\[o3.1\] Assume $e=1$ or $e=2$. Is it possible to describe all invariants $r, c_1, c_2$ of vector bundles on $F_e$ with property $\alpha$ with respect to $M$?
The proofs {#S2}
==========
For any sheaf $F$ we will often write $F(mM)$ instead of $F\otimes
M^{\otimes m}$.
\[a0\] The line bundle $\mathcal {O}_{F_e}(ch+df)$ is ample if and only if $c>0$ and $d > ec$. Hence any ample line bundle is spanned. Assume that $H:= \mathcal {O}_{F_e}(ch+df)$ is ample. The cohomology of line bundles on $F_e$ shows that for every $t\in \mathbb {Z}$ the line bundle $H^{\otimes t}$ has property $\alpha\alpha$ with respect to $H$. Hence $\mathcal {O}_{F_e}$ has property $\alpha\alpha$ with respect to any ample line bundle. Set $H':= \mathcal {O}_{F_e}(ch+(d+2)f)$. Taking $m_t:= -tc$ we see that no $H^{\otimes t}$, $t > 0$, has property $\alpha\alpha$ with respect to the ample line bundle $H'$. Taking $m_t:= -tc$ we see that no $H^{\otimes t}$, $t > 0$, has property $\alpha\alpha$ with respect to $M$.
\[a1\] Here we study properties $\alpha$ and $\alpha\alpha$ with respect to $M$ for line bundles on $F_e$. Fix $L\in \mbox{Pic}(F_e)$, say $L \cong \mathcal {O}_{F_e}(u h + vf)$. First assume $v \ge eu$. We have $h^0(F_e,L(xM)) >0$ if and only if $x \ge -u$. Since $h^1(F_e,\mathcal {O}_{F_e}(ch+df)) = 0$ if $c \ge 0$ and $d \ge ec$, $L$ has property $\alpha$ with respect to $M$. Now assume $v < eu$. We have $h^0(F_e,L(xM)) > 0$ if and only if $ex \ge -v$. Since $h^1(F_e,\mathcal {O}_{F_e}(ch+df)) = 0$ for $c \ge 0$ if and only if $d \ge ec-1$, we get that $L$ has property $\alpha$ with respect to $M$ if and only if $v \ge eu-1$. Take $m:= -u$. If $v=eu$, then we saw in the introduction that $L$ has property $\alpha\alpha$ with respect to $M$ if and only if $e=1$. Notice that $h^1(F_e,\mathcal {O}_{F_e}((u-x)h + (v-ex)f)) = h^1(F_e,\mathcal {O}_{F_e}((x-u-2)h+(ex-v-e-2)f)) >0$ when $x \ge -u-2$ if and only if $-eu -2e \le -v-e-1$, i.e. if and only if $v \le eu+e-1$. Notice that $h^1(F_e,\mathcal {O}_{F_e}((u+x)h + (v+ex)f)) =0$ for $x \ge -u $ if and only if $v \ge eu-1$. Notice that $h^1(F_e,\mathcal {O}_{F_e}(-h + (v-eu-e)f)) = 0$ for every $v\in \mathbb {Z}$. Hence $L$ has property $\alpha\alpha$ with respect to $M$ if and only if $eu - 1 \le v \le eu+e-1$.
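The $h^1$ vanishing criterion used throughout (stated in the introduction) can be cross-checked against an exact computation of line bundle cohomology on $F_e$, using the pushforward $\pi_*\mathcal{O}_{F_e}(ah+bf)=\oplus_{i=0}^{a}\mathcal{O}_{{\bf P}^1}(b-ie)$, Riemann–Roch, and Serre duality. The following Python sketch is our own verification aid, not part of the paper:

```python
def h0(e, a, b):
    """h^0 of O(a h + b f) on F_e via the pushforward
    pi_* O(a h + b f) = O(b) + O(b-e) + ... + O(b-a e) on P^1 (zero for a < 0)."""
    if a < 0:
        return 0
    return sum(max(0, b - i*e + 1) for i in range(a + 1))

def chi(e, a, b):
    # Riemann-Roch on F_e: chi(O(a h + b f)) = (a+1)(b+1) - e a(a+1)/2
    return (a + 1)*(b + 1) - e*a*(a + 1)//2

def h1(e, a, b):
    # h^2 by Serre duality with omega = O(-2h - (e+2)f); then h1 = h0 + h2 - chi
    h2 = h0(e, -a - 2, -b - e - 2)
    return h0(e, a, b) + h2 - chi(e, a, b)

def h1_vanishes(e, g, d):
    """The vanishing criterion quoted in the text for h^1(O(g h + d f))."""
    return (g >= 0 and d >= e*g - 1) or g == -1 or (g <= -2 and d <= e*g + e - 1)

# cross-check the criterion against the exact computation on a range
for e in (1, 2, 3):
    for g in range(-6, 6):
        for d in range(-15, 15):
            assert (h1(e, g, d) == 0) == h1_vanishes(e, g, d)
```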
\[a2\] Here we look at property $\alpha$ with respect to the ample line bundle $R:= \mathcal {O}_{F_e}(h+(e+1)f)$ for line bundles on $F_e$. Fix $L\in \mbox{Pic}(F_e)$, say $L \cong \mathcal {O}_{F_e}(u h + vf)$. We have $h^0(F_e,L(xR)) > 0$ if and only if $x \ge -u$ and $x(e+1) \ge -v$. We immediately see that if $v \ge (e+1)u$, then $L$ has property $\alpha$ with respect to $R$. Now assume $v < (e+1)u$. Set $y:= \lceil -v/(e+1)\rceil$. We have $h^0(F_e,L(xR)) >0$ if and only if $x \ge y$. Fix an integer $x \ge y$. Since $u+x \ge u+y \ge 0$, $h^1(F_e,\mathcal {O}_{F_e}((u+x)h+(v+(e+1)x)f)) >0$ if and only if $v+(e+1)x \le eu+ ex-2$. The strongest condition is obtained when $x=y$. We get that $L$ has property $\alpha $ with respect to $R$ if and only if either $v \ge (e+1)u$ or $v+ey \ge eu-1$, where $y:= \lceil -v/(e+1)\rceil$.
\[a3\] $E_1\oplus E_2$ has property $\alpha\alpha$ with respect to $L$ if and only if both $E_1$ and $E_2$ have property $\alpha\alpha$ with respect to $L$. If $E_1\oplus E_2$ has property $\alpha$ with respect to $L$, then the same is true for $E_1$ and $E_2$. Now we check that the converse is not true. Both $\mathcal {O}_{F_e}$ and $\mathcal {O}_{F_e}(-2h+(-e+4)f)$ have property $\alpha$ with respect to $M$ (Remark \[a1\]). Since $h^1(F_e,\mathcal {O}_{F_e}(-2h+(-e+4)f)) = h^1(F_e,\mathcal {O}_{F_e}(-2f)) = 1$, $\mathcal {O}_{F_e}\oplus \mathcal {O}_{F_e}(-2h+(-e+4)f)$ does not have property $\alpha$ with respect to $M$.
\[a3.0\] The definition of property $\alpha$ may be given for an arbitrary torsion free sheaf, but not much may be said in the general case. Here we look at the rank $1$ case, because we will need it in the proofs of Theorems \[o4\] and \[o2\]. Let $A$ be a rank $1$ torsion free sheaf on $F_e$. Hence $A \cong \mathcal {I}_Z(uh+vf)$ for some zero-dimensional scheme $Z$ and some integers $u,v$. Since $Z$ is zero-dimensional, $h^1(F_e,\mathcal {O}_{F_e}((u+t)h+(v+et)f)) \le h^1(F_e,\mathcal {I}_Z((u+t)h+(v+et)f))$ for all $t\in \mathbb {Z}$. Taking $t \gg 0$ we see that if $A$ has property $\alpha$ with respect to $M$, then $v \ge eu-1$. When $v \ge eu-1$, for a general $Z$ (in the following sense) $A$ has property $\alpha$ with respect to $M$ for the following reason. Fix an integer $z>0$. Since $F_e$ is a smooth surface, the Hilbert scheme $\mbox{Hilb}^z(F_e)$ of all length $z$ zero-dimensional subschemes of $F_e$ is irreducible and of dimension $2z$ ([@f]). Take a general $S\in \mbox{Hilb}^z(F_e)$, i.e. take $z$ general points of $F_e$. Since $h^0(F_e,\mathcal {I}_S\otimes L) = \max \{0,h^0(F_e,L)-z\}$ for every $L\in \mbox{Pic}(F_e)$, it is easy to check that if $v \ge eu-1$, then $\mathcal {I}_S(uh+vf)$ has property $\alpha$ with respect to $M$. Now take $v = eu-1$, any integer $z>0$ and any zero-dimensional length $z$ subscheme $B$ of $h$. Twisting with $(-u+1)M$ we see that $\mathcal {I}_B(uh+(eu-1)f)$ does not have property $\alpha$ with respect to $M$. Now assume $v > eu$. Take a zero-dimensional length $z\ge 2$ subscheme $W$ of a fiber of $\pi$. Twisting with $-uM$, we see that $\mathcal {I}_W(uh+vf)$ does not have property $\alpha$ with respect to $M$. If $z \ge 3$ and $v = eu$, twisting with $(-u+1)M$ and using the same $W$ we get a sheaf without property $\alpha$ with respect to $M$.
Property $\alpha$ with respect to $M$ is open in the following sense.
\[a4\] Let $\{E_t\}_{t\in T}$ be a flat family of vector bundles on $F_e$ parametrized by an integral variety $T$. Assume the existence of $s\in T$ such that $E_s$ has property $\alpha$ with respect to $M$. Then there exists an open neighborhood $U$ of $s$ in $T$ such that $E_t$ has property $\alpha$ with respect to $M$ for all $t\in U$.
Let $m$ be the minimal integer such that $h^0(F_e,E_s(mM)) >0$. Thus $h^1(F_e,E_s(xM)) = 0$ for all $x \ge m$. By semicontinuity there is an open neighborhood $V$ of $s$ in $T$ such that $h^0(F_e,E_t((m-1)M)) = 0$ for all $t\in V$. By semicontinuity, for every integer $x \ge m$ there is an open neighborhood $V_x$ of $s$ in $T$ such that $h^1(F_e,E_t(xM)) = 0$ for all $t\in V_x$. Fix an irreducible $D\in \vert M\vert$. Hence $D \cong {\bf {P}}^1$. Since $D^2 >0$, there is an integer $a$ such that $h^1(D,E_s(aM)\vert D) = 0$. By semicontinuity there is an open neighborhood $V'$ of $s$ in $T$ such that $h^1(D,E_t(aM)\vert D) = 0$ for every $t\in V'$. Since $D^2>0$, $h^1(D,E_t(xM)\vert D) = 0$ for every $t\in V'$ and every integer $x \ge a$. Fix an integer $x \ge a$. From the exact sequence $$0 \to E_t((x-1)M) \to E_t(xM) \to E_t(xM)\vert D \to 0$$ we get that if $h^1(F_e,E_t((x-1)M)) = 0$, then $h^1(F_e,E_t(xM)) = 0$. Hence we may take $U:= V\cap V'\cap \bigcap _{x=m}^{\max \{a,m\}} V_x$.
If $E$ has property $\alpha$ with respect to $M$, then each $L_i$ has property $\alpha$ with respect to $M$ (Remark \[a3\]) and hence $v_i \ge eu_i-1$ for all $i$. Now we assume $v_i \ge eu_i-1$ for all $i$. Notice that $m$ is the minimal integer $t$ such that $h^0(F_e,E(tM)) \ne 0$. Since $L_1$ has property $\alpha$ with respect to $M$, $E$ has property $\alpha$ with respect to $M$ if and only if $h^1(F_e,L_i(tM)) =0$ for all $t \ge m$ and all $i=2,\cdots ,r$. If $u_i-m \ge -1$, then $h^1(F_e,L_i(tM)) =0$ for all $t \ge m$ because $v_i \ge eu_i-1$. Now assume $u_i-m \le -2$. We get $h^1(F_e,L_i(tM)) =0$ for any $t \ge m$ if and only if $-1 \le v_i-eu_i \le e-1$.
Here we discuss the set-up for the rank $2$ case. Consider an exact sequence $$\label{eqo1}
0 \to \mathcal {O}_{F_e}(D) \to E(mM) \to \mathcal {I}_Z(c_1+2mM-D) \to
0$$ in which $Z$ is a zero-dimensional scheme with length $s$ and either $D=0$ or $D = h$ or $D \in \vert zf\vert$ for some $1 \le z
\le e$ or $e \ge 2$ and $D \in \vert h+wf\vert$ for some $1 \le w \le e-1$. We have $c_1(E(mM)) = c_1+2mM$ and $c_2(E(mM))
= s + D\cdot c_1 +2mM\cdot D -D^2$. Thus $c_1(E) = c_1$ and $c_2(E) =
c_2$ by the choice of $s$ ([@h2], Lemma 2.1). Each $E$ fitting in (\[eqo1\]) is torsion free. To have some locally free $E$ fitting in (\[eqo1\]) a necessary condition is that $Z$ is a locally complete intersection. Notice that $h^1(F_e,\mathcal {O}_{F_e}(D)(-M))
= 0$ if $h$ is not a component of $D$. Hence a sufficient condition to have $h^0(F_e,E(mM)) >0$ and $h^0(F_e,E((m-1)M))=0$ is the equality $$\label{eqo2}
h^0(F_e, \mathcal {I}_Z(c_1+(2m-1)M-D))=0$$ and (\[eqo2\]) is a necessary condition if $h$ is not a component of $D$. Assume that $Z$ is locally a complete intersection. The Cayley-Bacharach condition associated to (\[eqo1\]) is satisfied if $$\label{eqo3}
h^0(F_e,\mathcal {I}_{Z'}(c_1+2mM-2D-2h -(e+2)f)) =0$$ for every length $s-1$ closed subscheme $Z'$ of $Z$ ([@c]). This condition is satisfied if $h^0(F_e, \mathcal {I}_Z(c_1+(2m-1)M-D))=0$, $Z_{red}\cap h =
\emptyset$ and no connected component of $Z$ is tangent to a fiber of $\pi$, because the line bundle $\omega _{F_e}^\ast (-M)
= \mathcal {O}_{F_e}(h+2f)$ is base point free outside $h$ and the morphism associated to $\vert f\vert$ is the ruling; if $e=1$, then (\[eqo3\]) is satisfied if (\[eqo2\]) is satisfied, because $\mathcal {O}_{F_1}(h+2f)$ is very ample; if $e=2$ it is sufficient to assume $Z_{red}\cap h =
\emptyset$, because the morphism associated to $\mathcal {O}_{F_2}(h+2f)$ is an embedding outside $h$.
If $r=1$, then use Remark \[a1\]. Assume the existence of a rank two vector bundle $E$ with property $\alpha$ with respect to $M$ and $c_1(E) = \mathcal {O}_{F_e}(uh+vf)$. Let $m$ be the first integer such that $h^0(F_e,E(mM)) >0$. We get an exact sequence (\[eqo1\]) with $D\in \vert \mathcal {O}_{F_e}(xh+yf)\vert$ with the convention $(x,y)=(0,0)$ if $D = \emptyset$. Hence either $(x,y) = (0,0)$ or $(x,y) = (1,0)$ or $x=0$ and $1 \le y \le e$ or $e \ge 2$, $x=1$, and $1 \le y \le e-1$. Since $h^2(F_e,M^{\otimes z}(D)) = 0$ for all $z \ge 0$, (\[eqo1\]) and property $\alpha$ for $E$ imply $h^1(F_e,\mathcal {I}_Z((u+2m-x+z)h+(v+2me-y+ze)f))=0$ for all $z \ge 0$. As in Remark \[a3.0\] we see that when $z \gg 0$ the last equality implies $v-y \ge e(u-x)-1$. If $v \le e(u-1)-2$ the last inequality is not satisfied for any choice of the pair $(x,y)$ in the previous list.
Fix a general $S \subset
F_e$ such that $\sharp (S)=s$. Let $E$ be any torsion free sheaf fitting in the following exact sequence: $$\label{eqo9}
0 \to \mathcal {O}_{F_e}((1-m)h-emf) \to E \to \mathcal
{I}_S((u+m-1)h+(v+em)f) \to 0$$ We have $c_1(E) = \mathcal {O}_{F_e}(uh+vf)$ and $c_2(E) = s -e(u+m-1)
+ (1-m)(v+em)$. By construction $h^0(F_e,E(mM)) \ne 0$. We have $h^0(F_e,E((m-1)M)) = 0$ if $h^0(F_e,\mathcal {I}_S((u+2m-2)h+(v+2em-e)f))=0$. Since $S$ is general, $h^0(F_e,\mathcal {I}_S((u+2m-2)h+(v+2em-e)f))=0$ if and only if $$\label{eqo10}
h^0(F_e,\mathcal {O}_{F_e}((u+2m-2)h+(v+2em-e)f))\le s$$ Since $S$ is general, every subset of it is general. Hence to check the Cayley-Bacharach condition, and hence show the local freeness of a general $E$ given by the extension (\[eqo9\]), it is sufficient to check the following inequality: $$\label{eqo11}
h^0(F_e,\mathcal {O}_{F_e}((u+2m-5)h+(v+2em-2e-2)f))\le s-1$$ This is true, because we assumed $s \ge \widetilde{a}$ and $\widetilde{a} > h^0(F_e,\mathcal {O}_{F_e}((u+2m-5)h+(v+2em-2e-2)f))$. Hence a general $E$ fitting in the extension (\[eqo9\]) is locally free. Since $\mathcal {O}_{F_e}(3h+(e+2)f)$ has as a subsheaf the very ample line bundle $\mathcal {O}_{F_e}(h+(e+2)f)$, (\[eqo11\]) is satisfied if (\[eqo10\]) is satisfied. The generality of $S$ implies that $h^1(F_e,\mathcal {I}_S((u+m-1+t)h+(v+em+et)f))= 0$ if and only if $h^1(F_e,\mathcal {O}_{F_e}((u+m-1+t)h+(v+em+et)f))=0$ and $h^0(F_e,\mathcal {O}_{F_e}((u+m-1+t)h+(v+em+et)f)) \ge s$. Notice that $\widetilde{a} = h^0(F_e,\mathcal {O}_{F_e}((u+2m-2)h + (v+2me -e)f))$ and $\widetilde{b} = h^0(F_e,\mathcal {O}_{F_e}((u+2m-1)h + (v+2me )f))$. Since $h^1(F_e,\mathcal {O}_{F_e}((u+m-1+t)h + (v+me+te)f)) = 0$ for all $t \ge 0$, $\widetilde{a} \le s \le \widetilde{b}$ and $S$ is general, any sheaf $E$ in (\[eqo9\]) has property $\alpha$ with respect to $M$. Since a general extension (\[eqo9\]) has locally free middle term $E$, the proof of the first part of Theorem \[o2\] is over. Now assume $m=0$, $u \ge 3$, $v < 2eu$, and that $E$ is not $R$-stable, i.e. assume the existence of $N\in \mbox{Pic}(F_e)$ such that $N\cdot R \ge
c_1(E)\cdot R/2$ and an inclusion $j:
N \to E$; here to have $N$ locally free we use that $E$ is reflexive. Since $m=0$ and $u \ge 3$, $c_1(E)\cdot R >
2(\mathcal {O}_{F_e}(h))\cdot R$. Hence $j$ induces a non-zero map $N \to \mathcal {I}_S((u-1)h+vf)$. Any non-zero map $N \to \mathcal
{O}_{F_e}((u-1)h + vf)$ is associated to a unique non-negative divisor $\Delta \in \vert
\mathcal {O}_{F_e}((u-1)h + vf)\otimes N^\ast \vert$. Since $j$ factors through $\mathcal {I}_S((u-1)h+vf)$, $h^0(F_e,\mathcal
{I}_S(\Delta )) >0$. We fixed $R$ and the integers $m, u, v$. There are only finitely many possibilities for the line bundle $\mathcal {O}_{F_e}(\Delta )$. Since $S$ is general, we get $h^0(F_e,\mathcal {O}_{F_e}(\Delta )) \ge s$. Write $N = \mathcal
{O}_{F_e}(\gamma h+\delta f)$ for some integers $\gamma ,\delta$. The inequality $N\cdot R \ge
c_1(E)\cdot R/2$ is equivalent to the inequality $$\label{eqo12}
2\gamma + 2\delta \ge u+v$$ We have $\mathcal {O}_{F_e}(\Delta ) = \mathcal {O}_{F_e}((u-1 -\gamma
)h+(v -\delta )f)$. Since $h^0(F_e,\mathcal {O}_{F_e}(\Delta )) \ge s $ and $s \le
\widetilde{b} = h^0(F_e,\mathcal {O}_{F_e}((u-1)h + vf))$, either $\gamma \le 0$ or $\delta \le 0$. Since $\Delta$ is effective, we also have $\gamma \le u-1$ and $\delta \le v$. First assume $\delta \le 0$. Hence $\gamma \ge (u+v)/2$. Since $ \gamma \le u-1$, we get $v \le u-2$. Since $v \ge eu -e$, we get a contradiction. Now assume $\gamma \le 0$. We get $\delta \ge (u+v)/2$. Consider the exact sequence $$\label{eqo13}
0 \to N \to E \to \mbox{Coker}(j) \to 0$$ Notice that $\mbox{Coker}(j)^{\ast \ast }\cong \mathcal
{O}_{F_e}((u-\gamma )h+(v -\delta )f)$. Since $\gamma \le 0$, $\delta \ge (u+v)/2$, and $v < 2eu$, we have $v-
\delta \le e(u -\gamma )-2$. In Remark \[a3.0\] we checked that $h^1(F_e,\mbox{Coker}(j)(tM)) > 0$ for $t \gg 0$. Since $h^2(F_e,L(tM))
=0$ for $t \gg 0$ and any $L\in \mbox{Pic}(F_e)$, the exact sequence (\[eqo13\]) gives that $E$ does not have property $\alpha$ with respect to $M$, contradicting the already proved part of Theorem \[o2\]. If instead of $R$ we use $M$ for the intersection product, instead of (\[eqo12\]) we only have the inequality $2 \delta \ge v$. Everything works in the same way with only minor numerical modifications.
\[o5\] There are at least $2$ well-known and related ways to obtain rank $r
\ge 3$ vector bundles as extensions. Instead of (\[eqo1\]) we may take the exact sequence $$\label{eqo4}
0 \to \oplus _{i=1}^{r-1} \mathcal {O}_{F_e}(D_i-m_iM) \to E \to \mathcal
{I}_Z(uh+vf) \to 0$$ In [@hl], proof of Theorem 5.1.6, the following extension is used: $$\label{eqo5}
0 \to L_1 \to E \to \oplus _{i=2}^{r} \mathcal {I}_{Z_i}(u_ih+v_if) \to
0$$ The latter extension was behind the proof of Proposition \[o3\]. Both extensions can give several examples of vector bundles with or without property $\alpha$ with respect to $M$. To prove Theorem \[o4\] we will use iterated extensions, i.e. increasing filtrations $E_i$, $1 \le i \le r$, of $E$ such that $E_1$ is a line bundle, $E_r = E$ and each $E_i/E_{i-1}$ is a rank $1$ torsion free sheaf.
Assume the existence of a rank $r$ vector bundle $E$ with property $\alpha$ with respect to $M$ and $c_1(E) = \mathcal {O}_{F_e}(uh+vf)$. Let $m_1$ be the first integer such that $h^0(F_e,E(m_1M)) >0$. Fix a general $\sigma \in H^0(F_e,E(m_1M))$. Since $h^0(F_e,E((m_1-1)M)) =0$, $\sigma$ induces an exact sequence $$\label{eqo6}
0 \to \mathcal {O}_{F_e}(-m_1M+D_1) \to E \to G_1 \to 0$$ with $G_1$ torsion free, $D_1$ of type $(x_1,y_1)$ and either $(x_1,y_1) = (0,0)$ or $(x_1,y_1) = (1,0)$ or $x_1=0$ and $1 \le y_1 \le e$ or $e \ge 2$, $x_1=1$, and $1 \le y_1 \le e-1$. Notice that $c_1(G_1) = \mathcal {O}_{F_e}((u+m_1-x_1)h + (v+em_1-y_1)f)$. Set $E_1:= \mathcal {O}_{F_e}(-m_1M+D_1)$. Since $h^2(F_e,\mathcal {O}_{F_e}((t-m_1)M+D_1)) = 0$ for all $t \ge m_1$, property $\alpha$ for $E$ with respect to $M$ implies $h^1(F_e,G_1(tM)) = 0$ for all $t \ge m_1$. Let $m_2$ be the first integer such that $m_2 \ge m_1$ and $h^0(F_e,G_1(m_2M)) >0$. A non-zero section of $H^0(F_e,G_1(m_2M))$ induces an exact sequence $$\label{eqo7}
0 \to \mathcal {I}_{Z_1}(-m_2M+D_2) \to G_1 \to G_2 \to 0$$ with $Z_1$ zero-dimensional, $G_2$ torsion free and $D_2$ an effective divisor of type $(x_2,y_2)$ and either $(x_2,y_2) =
(0,0)$ or $(x_2,y_2) = (1,0)$ or $x_2=0$ and $1 \le y_2 \le e$ or $e \ge 2$, $x_2=1$, and $1 \le y_2
\le e-1$. Here we cannot claim that $Z_1 = \emptyset$, because $G_1$ is not assumed to be locally free. Notice that $c_1(G_2) = \mathcal {O}_{F_e}((u+m_1+m_2-x_1-x_2)h +
(v+em_1+em_2-y_1-y_2)f)$. Since $Z_1$ is zero-dimensional, $h^2(F_e,\mathcal {I}_{Z_1}\otimes L)=
h^2(F_e,L)$ for every $L\in \mbox{Pic}(F_e)$. Hence as in the first step we get $h^1(F_e,G_2(tM)) = 0$ for all $t \ge m_2$. If $r=3$, we are done as in the proof of the case $r=2$. If $r \ge 4$, we iterate the last step $r-3$ times.
[99]{}
F. Catanese, Footnotes to a theorem of Reider, in: Algebraic Geometry Proceedings, L’Aquila 1988 (ed. by A. J. Sommese, A. Biancofiore, E. L. Livorni), 64–74, Lecture Notes in Math. 1417, Springer, Berlin, 1990.
J. Fogarty, Algebraic families on an algebraic surface, Amer. J. Math. 90 (1968), 511–521.
R. Hartshorne, Stable reflexive sheaves, Math. Ann. 254 (1980), no. 2, 121–176.
D. Huybrechts and M. Lehn, The geometry of moduli spaces of sheaves, Friedr. Vieweg & Sohn, Braunschweig, 1997.
[^1]: The author was partially supported by MIUR and GNSAGA of INdAM (Italy).
---
abstract: 'Properties of the pure solitonic $\tau$-function and potential of the heat equation are studied in detail. We describe the asymptotic behavior of the potential and identify the ray structure of this asymptotic behavior on the $x$-plane in dependence on the parameters of the potential.'
author:
- |
M. Boiti${}^{*}$, F. Pempinelli${}^{*}$, and A. K. Pogrebkov$
{}^{\dag}$\
${}^{*}$Dipartimento di Fisica, Università del Salento and\
Sezione INFN, Lecce, Italy\
${}^{\dag}$Steklov Mathematical Institute, Moscow, Russia
date: 'PACS: 02.30Ik, 02.30Jr, 05.45Yv'
title: Properties of the solitonic potentials of the heat operator
---
Introduction
============
The Kadomtsev–Petviashvili (KP) equation was derived as a model for small-amplitude, long-wavelength, weakly two-dimensional waves in a weakly dispersive medium [@KP1970]. This equation is a (2+1)-dimensional generalization of the celebrated Korteweg–de Vries (KdV) equation, and it has been known to be integrable since the beginning of the 1970s [@D1974; @ZS1974]. There are two inequivalent versions of the KP equation: KPI and KPII. Here we consider the KPII equation $$(u_{t}-6uu_{x_{1}}+u_{x_{1}x_{1}x_{1}})_{x_{1}}=-3u_{x_{2}x_{2}}
\label{KPII}$$ (the KPI equation has an opposite sign in the r.h.s.), where $u=u(x,t)$, $x=(x_{1},x_{2})$ and subscripts $x_{1}$, $x_{2}$ and $t$ denote partial derivatives. The KPII equation is integrable since it can be expressed as compatibility condition $[\mathcal{L},\mathcal{T}]=0$ of the Lax pair $\mathcal{L}$ and $\mathcal{T}$, where operator $$\mathcal{L}(x,\partial_{x})=-\partial_{x_{2}}+\partial_{x_{1}}^{2}-u(x)\label{heatop}$$ defines the well known equation of heat conduction, or heat equation for short, and $$\mathcal{T}(x,\partial_{x},\partial_{t})=\partial_{t}+4\partial_{x_{1}}^{3}-6u\partial_{x_{1}}-3u_{x_{1}}-
3\partial_{x_{1}}^{-1}u_{x_{2}}.\label{Lax}$$
The spectral theory of the operator (\[heatop\]) was developed in [@BarYacoov; @Lipovsky; @Wickerhauser; @Grinevich0] in the case of a real potential $u(x)$ rapidly decaying at spatial infinity, which, however, is not the most interesting case, since the KPII equation was just proposed in [@KP1970] in order to deal with two dimensional weak transverse perturbation of the one soliton solution of the KdV. In fact, if $u_{1}^{}(t,x_{1}^{})$ obeys KdV, then $u(t,x_{1}^{},x_{2}^{})=u_{1}^{}(t,x_{1}^{}+\mu x_{2}^{}-3\mu_{}^{2}t)$ solves KPII for an arbitrary constant $\mu\in\operatorname{\mathbb{R}}$. In particular, KPII admits a one soliton solution of the form $$u(x,t)=-\dfrac{(\kappa_{1}-\kappa_{2})^{2}}{2}\text{\textrm{sech}}^{2}\Biggl[\dfrac{\kappa_{1}^{}-
\kappa_{2}^{}}{2}x_{1}+\dfrac{\kappa_{1}^{2}-\kappa_{2}^{2}}{2}x_{2}-2(\kappa_{1}^{3}-\kappa_{2}^{3})t\Biggr],
\label{1-sol}$$ where $\kappa_{1}$ and $\kappa_{2}$ are real, arbitrary constants.
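The one-soliton formula (\[1-sol\]) can be verified symbolically by substituting it into (\[KPII\]). The following SymPy sketch is our own check, not part of the paper, with the sample values $\kappa_1=2$, $\kappa_2=1$:

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t', real=True)
k1, k2 = sp.Integer(2), sp.Integer(1)   # sample values for kappa_1, kappa_2

# argument and profile of the one-soliton (1-sol)
theta = (k1 - k2)/2*x1 + (k1**2 - k2**2)/2*x2 - 2*(k1**3 - k2**3)*t
u = -sp.Rational(1, 2)*(k1 - k2)**2*sp.sech(theta)**2

# residual of (u_t - 6 u u_{x1} + u_{x1 x1 x1})_{x1} = -3 u_{x2 x2}
residual = (sp.diff(sp.diff(u, t) - 6*u*sp.diff(u, x1) + sp.diff(u, x1, 3), x1)
            + 3*sp.diff(u, x2, 2))
assert sp.simplify(residual.rewrite(sp.exp)) == 0
```

Rewriting the hyperbolic functions in exponential form before simplifying makes the cancellation mechanical.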
A spectral theory of the heat operator (\[heatop\]) that also includes solitons has to be built. In the case of a potential $u(x)$ rapidly decaying at spatial infinity, according to [@BarYacoov; @Lipovsky; @Wickerhauser; @Grinevich0], the main tools in building the spectral theory of the operator (\[heatop\]), as in the one dimensional case, are the integral equations defining the Jost solutions. However, if the potential $u(x)$ does not decay at spatial infinity, as is the case when line soliton solutions are considered (see, e.g., (\[1-sol\])), these integral equations are ill-defined and one needs a more general approach. In solving the analogous problem for the nonstationary Schrödinger operator, associated to the KPI equation, the extended resolvent approach was introduced [@BPP2006b]. Accordingly, a spectral theory of the KPII equation that also includes solitons has to be investigated using the resolvent approach. In this framework it was possible to develop the inverse scattering transform for a solution describing one soliton on a generic, rapidly decaying background [@BPPP2002], and to study the existence of the (extended) resolvent for (some) multisoliton solutions [@BPPP2009].
However, the general case of $N$ solitons is still open. In particular, this is motivated by the complicated asymptotic behavior of the pure solitonic potential on the $x$-plane. Here we give a detailed description of this behavior in terms of the soliton parameters. The general form of the multisoliton potential was derived in [@BPPPr2001a] by means of the rational transformations of the scattering data and in [@BPPP2009] by means of the twisting transformation. In [@equivKPII] we proved that the results of both these constructions coincide with one another and also with the form of the multisoliton potential given in terms of the $\tau$-function by Biondini, Kodama et al., see the review article [@K] and references therein.
The paper is organized as follows. In Sec. 2 we write down the multisoliton potentials. They are labeled by two numbers (topological charges), $N_{a}$ and $N_{b}$, that obey the condition $$N_{a},N_{b}\geq 1, \label{nanb}$$ and exhibit at large distances $N_{b}$ “ingoing” rays and $N_{a}$ “outgoing” rays. By using the notations of [@equivKPII], we give for these potentials two dual representations in terms of $\tau$-functions, which depend on $$\operatorname{\mathcal{N}}=N_{a}+N_{b}, \label{Nnanb}$$ (so that $\operatorname{\mathcal{N}}\geq 2$) real parameters $\kappa_n$ and on a $N_a\times N_b$ constant matrix $d$. In Sec. 3 we study the asymptotic behavior at large $x$ of the multisoliton potential in detail and show that the round angle at the origin can be divided into $\mathcal{N}$ angular sectors such that along the directions of their bordering rays the potential has constant soliton-like behavior, while along directions inside the sectors the potential has an exponentially decaying behavior, which we derive explicitly. In particular, for a special subclass of the solitonic potentials we present explicitly the ray structure in terms of the parameters $\kappa_n$.
Multisoliton potentials
=======================
Suppose we are given $\operatorname{\mathcal{N}}$ real parameters $$\kappa_{1}<\kappa_{2}<\ldots <\kappa_{\operatorname{\mathcal{N}}}, \label{kappas}$$ and introduce the functions $$K_{n}^{}(x)=\kappa_{n}^{}x_{1}^{}+\kappa_{n}^{2}x_{2}^{},\quad n=1,\ldots,\operatorname{\mathcal{N}}. \label{Kn}$$ Let $$e^{K(x)}=\operatorname{\rm diag}\{e^{K_{n}(x)}\}_{n=1}^{\operatorname{\mathcal{N}}}, \label{eK}$$ be a diagonal $\operatorname{\mathcal{N}}\times {\operatorname{\mathcal{N}}}$-matrix, let $\operatorname{\mathcal{D}}$ be a $\operatorname{\mathcal{N}}\times {N_{b}}$ constant real matrix with at least two nonzero maximal minors, and let $\operatorname{\mathcal{V}}$ be an “incomplete Vandermonde matrix,” i.e., the $N_{b}\times\operatorname{\mathcal{N}}$-matrix $$\operatorname{\mathcal{V}}=\left(\begin{array}{lll}
1 & \ldots & 1 \\
\vdots & & \vdots \\
\kappa_{1}^{N_{b}-1} & \ldots & \kappa_{\operatorname{\mathcal{N}}}^{N_{b}-1}
\end{array}\right) . \label{W}$$ Then, the soliton potential is given by $$u(x)=-2\partial_{x_{1}}^{2}\log \tau (x), \label{ux}$$ where the $\tau$-function can be expressed as $$\tau (x)=\det \bigl(\operatorname{\mathcal{V}}e^{K(x)}\operatorname{\mathcal{D}}\bigr), \label{tau}$$ see the review paper [@K] and references therein, and [@equivKPII], where the same notation is used.
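Formulas (\[tau\]) and (\[ux\]) are easy to evaluate numerically. The following sketch (in Python with NumPy, a choice of convenience, since no code accompanies the text) builds $\tau$ from the incomplete Vandermonde matrix and approximates $u$ by a central finite difference in $x_{1}$; for the simplest case $\operatorname{\mathcal{N}}=2$, $N_a=N_b=1$, it reproduces the one-soliton value $u(0)=-(\kappa_2-\kappa_1)^2/2$.

```python
import numpy as np

def tau(x1, x2, kappas, D):
    """tau(x) = det(V e^{K(x)} D), with V the incomplete Vandermonde matrix."""
    kappas = np.asarray(kappas, dtype=float)
    Nb = D.shape[1]
    V = np.vander(kappas, N=Nb, increasing=True).T   # Nb x N, rows k^0..k^{Nb-1}
    K = kappas * x1 + kappas**2 * x2                 # K_n(x) = k_n x1 + k_n^2 x2
    return np.linalg.det(V @ np.diag(np.exp(K)) @ D)

def u(x1, x2, kappas, D, h=1e-4):
    """u = -2 d^2/dx1^2 log tau, via a central finite difference."""
    lt = lambda s: np.log(tau(s, x2, kappas, D))
    return -2.0 * (lt(x1 + h) - 2.0 * lt(x1) + lt(x1 - h)) / h**2

kappas = [-1.0, 1.0]              # N = 2: the one-soliton case
D = np.array([[1.0], [1.0]])      # maximal minors positive -> regular potential
print(u(0.0, 0.0, kappas, D))     # ~ -2.0 = -(k2 - k1)^2 / 2
```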
There exists (see [@BC2], [@BPPP2009], and [@equivKPII]) a dual representation for the potential in terms of the $\tau$-function $$\tau'(x)=\det \left(\operatorname{\mathcal{D}}^{\,\prime}e^{-K(x)}\gamma \operatorname{\mathcal{V}}^{\,\prime}\right) , \label{tau'}$$ where $\operatorname{\mathcal{D}}^{\,\prime}$ is a constant real $N_{a}\times\operatorname{\mathcal{N}}$-matrix that, like the matrix $\operatorname{\mathcal{D}}$, has at least two nonzero maximal minors and that is orthogonal to the matrix $\operatorname{\mathcal{D}}$ in the sense that $$\operatorname{\mathcal{D}}^{\,\prime}\operatorname{\mathcal{D}}=0, \label{d12}$$ the zero on the r.h.s. being an $N_{a}\times {N_{b}}$-matrix, and where $\operatorname{\mathcal{V}}^{\,\prime}$ is the $\operatorname{\mathcal{N}}\times {N_{a}}$-matrix $$\operatorname{\mathcal{V}}^{\,\prime}=\left(\begin{array}{lll}
1 & \ldots & \kappa_{1}^{N_{a}-1} \\
\vdots & & \vdots \\
1 & \ldots & \kappa_{\operatorname{\mathcal{N}}}^{N_{a}-1}\end{array}\right) , \label{sol'}$$ and $\gamma$ the constant, diagonal, real $\operatorname{\mathcal{N}}\times\operatorname{\mathcal{N}}$-matrix $$\gamma=\operatorname{\rm diag}\{\gamma_n\}_{n=1}^{\operatorname{\mathcal{N}}},\qquad
\gamma_n=\prod_{n'=1,n'\neq{n}}^{\operatorname{\mathcal{N}}}(\kappa_{n}-\kappa_{n'})^{-1}.\label{gamma}$$ This dual representation follows if one notices that $$\tau (x)=(-1)^{N_{a}N_{b}+N_{a}(N_{a}-1)/2}\left(\prod_{n=1}^{\operatorname{\mathcal{N}}}e^{K_{n}(x)}\right)
V(\kappa_{1},\ldots,\kappa_{N_{a}+N_{b}})\tau'(x), \label{tautau}$$ where $V$ denotes the Vandermonde determinant $$V(\kappa_{1},\ldots ,\kappa_{\operatorname{\mathcal{N}}})=\prod_{1\leq m<n\leq \operatorname{\mathcal{N}}}(\kappa_{n}-\kappa_{m}). \label{V}$$
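Since, by (\[tautau\]), $\tau$ and $\tau'$ differ only by a constant and an exponential of a function linear in $x$, the two representations must produce the same potential under (\[ux\]). This normalization-independent statement can be spot-checked numerically; the sketch below (Python/NumPy, with an illustrative choice $\operatorname{\mathcal{N}}=3$, $N_a=1$, $N_b=2$ of our own) compares the second logarithmic derivatives of the two $\tau$-functions.

```python
import numpy as np

kappas = np.array([-1.0, 0.0, 1.0])                    # N = 3, Na = 1, Nb = 2
D  = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])    # N x Nb
Dp = np.array([[1.0, 1.0, -1.0]])                      # Na x N, satisfies Dp @ D = 0

# gamma_n = prod_{n' != n} (k_n - k_{n'})^{-1}
gamma = np.diag([1.0 / np.prod([k - kp for kp in kappas if kp != k])
                 for k in kappas])

def K(x1, x2):
    return kappas * x1 + kappas**2 * x2

def tau(x1, x2):
    V = np.vander(kappas, N=D.shape[1], increasing=True).T   # Nb x N
    return np.linalg.det(V @ np.diag(np.exp(K(x1, x2))) @ D)

def tau_dual(x1, x2):
    Vp = np.vander(kappas, N=Dp.shape[0], increasing=True)   # N x Na
    return np.linalg.det(Dp @ np.diag(np.exp(-K(x1, x2))) @ gamma @ Vp)

def u(f, x1, x2, h=1e-3):
    """-2 d^2/dx1^2 log |f|; insensitive to the normalization of f."""
    lt = lambda s: np.log(abs(f(s, x2)))
    return -2.0 * (lt(x1 + h) - 2.0 * lt(x1) + lt(x1 - h)) / h**2

print(u(tau, 0.3, -0.2), u(tau_dual, 0.3, -0.2))   # the two values coincide
```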
The matrices $\operatorname{\mathcal{D}}$ and $\operatorname{\mathcal{D}}^{\,\prime}$ enjoy rather interesting properties [@equivKPII]. Taking into account that the matrix $\operatorname{\mathcal{D}}$ has nonzero maximal minors, we can always write it in the form $$\operatorname{\mathcal{D}}=\pi\left(\begin{array}{c}
d_1\\d_{2}\end{array}\right),\label{block1}$$ where $\pi$ is a $\operatorname{\mathcal{N}}\times\operatorname{\mathcal{N}}$-permutation matrix such that the $N_b\times{N_b}$-matrix $d_2$ is nonsingular. Then, we can rewrite this equality in the form $$\operatorname{\mathcal{D}}=\pi\left(\begin{array}{c}
d \\E_{N_{b}}\end{array}\right)d_2\label{block}$$ where $d=d_{1}^{}d^{-1}_{2}$ is a constant real $N_{a}\times {N_{b}}$-matrix and $E_{N_{b}}$ is the $N_{b}\times N_{b}$ identity matrix.
Now, we notice (see [@K]) that the expression for the potential in (\[ux\]), as well as condition (\[d12\]), are left unchanged if in (\[tau\]) and (\[tau’\]), respectively, we multiply the $\operatorname{\mathcal{N}}\times {N_{b}}$-matrix $\operatorname{\mathcal{D}}$ from the right by a nonsingular constant $N_b\times{N_b}$-matrix and the $N_{a}\times\operatorname{\mathcal{N}}$-matrix $\operatorname{\mathcal{D}}^{\,\prime}$ from the left by a nonsingular constant $N_a\times{N_a}$-matrix. We conclude that the matrix $d_2$ is inessential and that the matrix $\operatorname{\mathcal{D}}$ can be chosen to have a simple block structure. It is clear, however, that the choice of the matrices $\pi$ and $d$ is not unique, as the matrix $\operatorname{\mathcal{D}}$ can have different nonzero maximal minors.
Let us now write $\operatorname{\mathcal{D}}'\pi=(d'_{1},d'_{2})$, where $d_1'$ is an $N_a\times{N_a}$-matrix and $d'_2$ an $N_a\times{N_b}$-matrix. Then by (\[d12\]) we get $d_{2}'=-d'_{1}d$, where $d$ is the same $N_{a}\times {N_{b}}$-matrix as in (\[block\]). Thus, for $\operatorname{\mathcal{D}}'$ we get the block structure $$\operatorname{\mathcal{D}}^{\,\prime}=d'_{1}( E_{N_{a}},-d)\pi^{\dag},\label{block'}$$ and, thanks to the equivalence of $\tau$ and $\tau'$, we get that $\det{d'_1}\neq0$; hence $\operatorname{\mathcal{D}}^{\,\prime}$, too, can be chosen to have a simple block structure.
Moreover, by means of the block structures given in (\[block\]) and (\[block’\]) we get $${\operatorname{\mathcal{D}}}^{\dag}\operatorname{\mathcal{D}}=d^{\dag}_2(E_{N_{b}}+d^{\dag}d)d_2,\qquad \operatorname{\mathcal{D}}^{\,\prime}{\operatorname{\mathcal{D}}^{\,\prime}}^{\dag}=d'_{1}(E_{N_{a}}+dd^{\dag}){d'_1}^{\dag}, \label{d14}$$ where $\dag $ denotes the Hermitian conjugation of a matrix (in fact, transposition here). So both matrices ${\operatorname{\mathcal{D}}}^{\dag}\operatorname{\mathcal{D}}$ and $\operatorname{\mathcal{D}}^{\,\prime}{\operatorname{\mathcal{D}}^{\,\prime}}^{\dag}$ are positive definite, and therefore $\operatorname{\mathcal{D}}$ and $\operatorname{\mathcal{D}}^{\,\prime}$ admit, respectively, a left and a right inverse (see [@G1990]). Precisely, we have $$\bigl(\operatorname{\mathcal{D}}\bigr)^{(-1)}\operatorname{\mathcal{D}}=E_{N_{b}},\qquad \operatorname{\mathcal{D}}^{\,\prime}\bigl(\operatorname{\mathcal{D}}^{\,\prime}\bigr)^{(-1)}=E_{N_{a}}, \label{d17}$$ with $$\bigl(\operatorname{\mathcal{D}}\bigr)^{(-1)}=({\operatorname{\mathcal{D}}}^{\dag}\operatorname{\mathcal{D}})^{-1}{\operatorname{\mathcal{D}}}^{\dag}, \qquad
\bigl(\operatorname{\mathcal{D}}^{\,\prime}\bigr)^{(-1)}=
{\operatorname{\mathcal{D}}^{\,\prime}}^{\dag}(\operatorname{\mathcal{D}}^{\,\prime}{\operatorname{\mathcal{D}}^{\,\prime}}^{\dag})^{-1}. \label{d15}$$ Products of these matrices in the opposite order give the real Hermitian $\operatorname{\mathcal{N}}\times\operatorname{\mathcal{N}}$-matrices $$\begin{aligned}
& P=\operatorname{\mathcal{D}}\bigl(\operatorname{\mathcal{D}}\bigr)^{(-1)}=\operatorname{\mathcal{D}}({\operatorname{\mathcal{D}}}^{\dag}\operatorname{\mathcal{D}})^{-1}\bigl(\operatorname{\mathcal{D}}\bigr)^{\dag}, \label{d19} \\
& P^{\,\prime}=\bigl(\operatorname{\mathcal{D}}^{\,\prime}\bigr)^{(-1)}\operatorname{\mathcal{D}}^{\,\prime}=
\bigl(\operatorname{\mathcal{D}}^{\,\prime}\bigr)^{\dag}(\operatorname{\mathcal{D}}^{\,\prime}{\operatorname{\mathcal{D}}^{\,\prime}}^{\dag})^{-1}\operatorname{\mathcal{D}}^{\,\prime},\label{d18}\end{aligned}$$ which are orthogonal projectors, i.e., $$P^{2}=P,\qquad (P^{\,\prime})^{2}=P^{\,\prime},\qquad PP^{\,\prime}=0=P^{\,\prime}P, \label{d22}$$ and complementary in the sense that $$P+P^{\,\prime}=E_{\operatorname{\mathcal{N}}}. \label{d23}$$ Orthogonality of the projectors follows from (\[d12\]) and the last equality from obvious relations of the kind $(E_{N_{b}}+d^{\dag}d)^{-1}d^{\dag}=d^{\dag}(E_{N_{a}}+dd^{\dag})^{-1}$.
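Properties (\[d22\]) and (\[d23\]) are easily verified numerically for any concrete pair of matrices satisfying (\[d12\]); a small sketch (Python/NumPy, with example matrices chosen purely for illustration):

```python
import numpy as np

D  = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # N x Nb (N = 3, Nb = 2)
Dp = np.array([[1.0, 1.0, -1.0]])                     # Na x N,  Dp @ D = 0

P  = D @ np.linalg.inv(D.T @ D) @ D.T                 # projector (d19)
Pp = Dp.T @ np.linalg.inv(Dp @ Dp.T) @ Dp             # projector (d18)

assert np.allclose(P @ P, P) and np.allclose(Pp @ Pp, Pp)   # idempotent
assert np.allclose(P @ Pp, 0)                               # mutually orthogonal
assert np.allclose(P + Pp, np.eye(3))                       # complementary
```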
Finally, let us mention that if all maximal minors of the matrix $\operatorname{\mathcal{D}}$ are nonzero, the permutation $\pi$ in (\[block\]) and (\[block’\]) can be chosen arbitrarily. In particular, we can choose it equal to the identity, getting the simplified representations $$\operatorname{\mathcal{D}}=\left(\begin{array}{c}
d \\E_{N_{b}}\end{array}\right)d_2,\qquad
\operatorname{\mathcal{D}}^{\,\prime}=d'_{1}( E_{N_{a}},-d),\label{blockreg}$$ so that in this special case without loss of generality we choose $$\operatorname{\mathcal{D}}=\left(\begin{array}{c}
d \\E_{N_{b}}\end{array}\right),\qquad
\operatorname{\mathcal{D}}^{\,\prime}=( E_{N_{a}},-d).\label{block2}$$
In order to study the properties of the potential, it is convenient to use an explicit representation for the determinant (\[tau\]). By using the Binet–Cauchy formula for the determinant of a product of matrices we get $$\tau (x)=\sum_{1\leq n_{1}<n_{2}<\cdots <n_{N_{b}}\leq {\operatorname{\mathcal{N}}}}f_{n_{1},\ldots,n_{N_{b}}}
\prod_{l=1}^{N_{b}}e^{K_{n_{l}}(x)},\label{tauf}$$ with coefficients $f_{n_{1},n_{2},\ldots ,n_{N_{b}}}$ given by $$f_{n_{1},n_{2},\ldots ,n_{N_{b}}}=V(\kappa_{n_{1}}^{},\ldots ,\kappa_{n_{N_{b}}}^{})\operatorname{\mathcal{D}}(n_{1},\ldots ,n_{N_{b}}). \label{f}$$ Here we used notation (\[V\]) for the Vandermonde determinant and notation $$\operatorname{\mathcal{D}}(n_{1},\ldots ,n_{N_{b}})=\det \left(\begin{array}{lll}
\operatorname{\mathcal{D}}_{n_{1},1} & \dots & \operatorname{\mathcal{D}}_{n_{1},N_{b}} \\
\vdots & & \vdots \\
\operatorname{\mathcal{D}}_{ n_{N_{b}},1} & \dots & \operatorname{\mathcal{D}}_{n_{N_{b}},N_{b}}\end{array}\right) ,\label{Do}$$ for the maximal minors of the matrix $\operatorname{\mathcal{D}}$. Notice that these coefficients are invariant under permutations of the indices.
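The Binet–Cauchy expansion (\[tauf\]) can be checked directly against the determinant representation (\[tau\]); in the sketch below (Python/NumPy, with an arbitrary illustrative choice of the $\kappa_n$ and of $\operatorname{\mathcal{D}}$) the sum over maximal minors reproduces the determinant to machine precision.

```python
import numpy as np
from itertools import combinations

kappas = np.array([-1.0, 0.0, 1.0, 2.0])                    # N = 4
D = np.array([[1., 0.], [1., 1.], [0., 1.], [1., 2.]])      # N x Nb, Nb = 2

def vandermonde(ks):
    """V(k_1,...,k_p) = prod_{m<n} (k_n - k_m)."""
    return np.prod([ks[n] - ks[m] for m, n in combinations(range(len(ks)), 2)])

def tau_expansion(x1, x2):
    """Sum over maximal minors, Eqs. (tauf) and (f)."""
    K = kappas * x1 + kappas**2 * x2
    total = 0.0
    for idx in combinations(range(len(kappas)), D.shape[1]):
        idx = list(idx)
        f = vandermonde(kappas[idx]) * np.linalg.det(D[idx, :])
        total += f * np.exp(K[idx].sum())
    return total

def tau_det(x1, x2):
    """Determinant form, Eq. (tau)."""
    V = np.vander(kappas, N=D.shape[1], increasing=True).T
    K = kappas * x1 + kappas**2 * x2
    return np.linalg.det(V @ np.diag(np.exp(K)) @ D)

print(tau_expansion(0.5, 0.1), tau_det(0.5, 0.1))   # agree to machine precision
```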
From (\[tauf\]) it follows directly that the condition $$f_{n_{1},\ldots ,n_{N_{b}}}\geq 0,\quad \text{for all }1\leq n_{1}<n_{2}<\cdots
<n_{N_{b}}\leq {\operatorname{\mathcal{N}}}, \label{reg}$$ is sufficient (see [@K]) for the regularity of the potential $u(x)$, i.e., for the absence of zeros of $\tau(x)$ on the $x$-plane. Thanks to (\[kappas\]), (\[V\]) and (\[f\]) this condition is equivalent to the condition that all maximal minors of the matrix $\operatorname{\mathcal{D}}$ are nonnegative. In [@equivKPII] it was mentioned that condition (\[reg\]) is also necessary for the regularity of a potential under evolution with respect to an arbitrary number of higher times of the KP hierarchy. In [@K] it is suggested to decompose the soliton solutions of KPII into subclasses associated to the Schubert cells of a Grassmannian, and it is proved there that condition (\[reg\]) is necessary for the regularity of all solutions associated to a cell. However, the problem of finding the conditions necessary for the regularity of a multisoliton solution under the evolution with respect to KPII alone is still open.
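As an illustration of the sufficiency of (\[reg\]): choosing a matrix $\operatorname{\mathcal{D}}$ with all maximal minors positive, all coefficients $f_{n_1,\ldots,n_{N_b}}$ are positive and $\tau(x)$ becomes a sum of positive exponentials, hence zero-free. The sketch below (Python/NumPy; the grid check is a spot check of this fact, not a proof) verifies positivity on a sample of the $x$-plane.

```python
import numpy as np
from itertools import combinations

kappas = np.array([-1.5, -0.5, 0.5, 1.5])                # N = 4, Nb = 2
D = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])   # N x Nb

# all maximal minors of D are positive ...
minors = [np.linalg.det(D[list(idx), :]) for idx in combinations(range(4), 2)]
assert min(minors) > 0

def tau(x1, x2):
    V = np.vander(kappas, N=2, increasing=True).T
    K = kappas * x1 + kappas**2 * x2
    return np.linalg.det(V @ np.diag(np.exp(K)) @ D)

# ... hence all f > 0 and tau is a sum of positive exponentials: no zeros
grid = np.linspace(-5.0, 5.0, 41)
assert all(tau(a, b) > 0 for a in grid for b in grid)
```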
If the matrix $\operatorname{\mathcal{D}}$ can be chosen as in (\[block2\]), then, introducing a new $N_a\times{N_b}$-matrix $\widetilde{d}$ as $$\widetilde{d}_{n,l}=d_{N_a+1-n,l}(-1)^{l+1}, \quad n=1,\ldots,N_a,\quad l=1,\ldots,N_b,\label{dd2}$$ we easily get by (\[Do\]) that $$\begin{aligned}
&\operatorname{\mathcal{D}}(n_{1},\ldots ,n_{N_{b}})=\det\Vert{\widetilde{d}_{n_i,l}}\Vert,\nonumber\\
&\text{where }i=1,\ldots,k,\quad l=1,\ldots,N_b,\quad
l\neq n_j-N_a,\quad j=k+1,\ldots,N_b,\label{dd1}\end{aligned}$$ and where the number $k$, $1\leq{k}\leq{N_b}$, is defined by the condition $n_k\leq{N_a}<n_{k+1}$. This proves that if we require condition (\[reg\]) on the coefficients $f_{n_{1},\ldots ,n_{N_{b}}}$, then the matrix $\widetilde{d}$ is totally nonnegative (totally positive, if all inequalities in (\[reg\]) are strict), see [@ando; @G2002].
In what follows it is convenient to extend the indices of $\kappa_{n}$, $f_{n_1,\ldots,n_{N_b}}$, etc., periodically with period $\operatorname{\mathcal{N}}$ to the whole of $\operatorname{\mathbb{Z}}$ by means of the condition $$n\to n\,(\text{mod}\operatorname{\mathcal{N}}). \label{modN}$$ Then Eq. (\[tauf\]) can be written as $$\tau (x)=\sum_{n\leq n_{1}<n_{2}<\cdots <n_{N_{b}}\leq {\operatorname{\mathcal{N}}+n-1}}f_{n_{1},\ldots,n_{N_{b}}}
\prod_{l=1}^{N_{b}}e^{K_{n_{l}}(x)},\label{taufn}$$ where the r.h.s. is independent of $n$, and where the coefficients $f_{n_1,\ldots,n_{N_b}}$ are defined by (\[f\]) taking (\[modN\]) into account. Thanks to (\[kappas\]) and (\[modN\]) we can prove the following lemma.
\[lemma1\] Let $$g_{l,m,n}=(\kappa_{l}-\kappa_{m})(\kappa_{n}-\kappa_{n+N_b})(\kappa_{l}+\kappa_{m}-\kappa_{n}-\kappa_{n+N_b}).
\label{l11}$$ Then $$g_{l,m,n}\geq0\quad\text{for any}\quad n\in\operatorname{\mathbb{Z}},\quad l=n,\ldots,n+N_b,\quad m=n+N_b,\ldots,\operatorname{\mathcal{N}}+n,\label{l12}$$ and equality takes place only when $l$ and $m$ independently take values $n$ or $n+N_b$ by mod$\,\operatorname{\mathcal{N}}$.
[*Proof.*]{} Thanks to (\[modN\]) it is enough to prove the lemma for $n=1,\ldots,\operatorname{\mathcal{N}}$. For these values of $n$ the $\kappa_{n}$’s are ordered as in (\[kappas\]); for other values the ordering is obtained by reducing, by means of (\[modN\]), to values belonging to the interval $n=1,\ldots,\operatorname{\mathcal{N}}$. Thus, when $n=1,\ldots,N_a$, the indices $n$, $n+N_b$ and $l$ are not larger than $\operatorname{\mathcal{N}}$ (see (\[Nnanb\])), but instead of the interval for $m$ given in (\[l12\]) we get two intervals: $m=n+N_b,\ldots,\operatorname{\mathcal{N}}$ and $m=1,\ldots,n$. For given values of $n$ and $l$, and for $m$ in the first interval, the lemma follows because the first factor in (\[l11\]) is nonpositive, the second one negative and the third one nonnegative. A zero of the first factor is possible only for $l=m$, and then $l=m=n+N_b$, and a zero of the third factor only when both $l=n$ and $m=n+N_b$ (all equalities by mod$\,\operatorname{\mathcal{N}}$). In the second case the first factor is nonnegative (zero only at $l=m=n$ (mod$\,\operatorname{\mathcal{N}}$)), the second one is negative and the third one is nonpositive (zero only at $l=n+N_b$, $m=n$). In the interval $n=N_a+1,\ldots,\operatorname{\mathcal{N}}$ we write by (\[modN\]) $\kappa_{n+N_b}=\kappa_{n-N_a}$, so that again $n$ and $n-N_a$ belong to the interval $1,\ldots,\operatorname{\mathcal{N}}$. Now the interval of $l$ given in (\[l12\]) decouples into the two intervals $n,\ldots,\operatorname{\mathcal{N}}$ and $1,\ldots,n-N_a$, while $m=n-N_a,\ldots,n$, so it is positive and less than $\operatorname{\mathcal{N}}$. Then for these values of $n$, $m$ and $l$ in the first interval, thanks to (\[kappas\]), the first factor in (\[l11\]) is nonnegative (zero only when $l=m=n$ (mod$\,\operatorname{\mathcal{N}}$)), the second factor is positive, and the third one is nonnegative (zero only when $l=n$ and $m=n+N_b$).
In the case of the second interval the first factor is nonpositive (zero only when $l=m=n-N_a=n+N_b$ (mod$\,\operatorname{\mathcal{N}}$)), the second factor is positive and the third factor is nonpositive (zero only when $l=n-N_a=n+N_b$ (mod$\,\operatorname{\mathcal{N}}$) and $m=n$). $\blacksquare$
Asymptotic behavior of the $\tau$-function and potential $u(x)$
===============================================================
Here we study the asymptotic behavior of the function $\tau(x)$ that by (\[tauf\]) is determined by the interrelations between the functions $K_{n}(x)$ for different $n$. Since by (\[Kn\]) $$K_{l}(x)-K_{m}(x)=(\kappa_{l}-\kappa_{m})(x_{1}+(\kappa_{l}+\kappa_{m})x_{2}),\qquad
\text{for any }l,m\in \mathbb{Z},\label{linen}$$ i.e., since these differences are linear with respect to the space variables, the asymptotic behavior must have a sectorial structure on the $x$-plane. In order to describe this structure we introduce the rays $r_{n}$ given by $$r_{n}=\{x:x_{1}+(\kappa_{n+N_{b}}+\kappa_{n})x_{2}=0,\,(\kappa_{n+N_{b}}-\kappa_{n})x_{2}<0\},\qquad
n=1,\ldots ,\operatorname{\mathcal{N}}, \label{rayn}$$ that intersect the lines $x_{2}=\pm 1$, respectively, at the points $$\begin{aligned}
&(\kappa_{n+N_b}+\kappa_n,-1) \qquad \text{for\ } n=1,\dots,N_a\notag\\
&(-\kappa_{n+N_b}-\kappa_n,1) \qquad \text{for\ } n=N_a+1,\dots,\operatorname{\mathcal{N}}.\label{y0n}\end{aligned}$$ Thanks to (\[kappas\]) and (\[modN\]), $$\begin{aligned}
&\kappa_{n+N_b}-\kappa_{n}> 0, &n = 1,\ldots, N_{a} , \label{ineq1}\\
&\kappa_{n+N_b}-\kappa_{n}<0, & n = N_{a} +1,\ldots,\operatorname{\mathcal{N}},\label{ineq2}\end{aligned}$$ therefore, as $n$ increases from $1$ to $N_{a}$, the ray rotates anticlockwise in the lower half $x$-plane, crosses the positive part of the $x_{1}$-axis in passing to $n=N_{a}+1$, then rotates anticlockwise in the upper half $x$-plane up to $n=\operatorname{\mathcal{N}}$ and, finally, crossing the negative part of the $x_{1}$-axis, at $n=\operatorname{\mathcal{N}}+1$ comes back to the ray $r_{1}$ thanks to (\[modN\]), see Fig. \[fig\].
[Figure \[fig\]: the rays $r_{1},\ldots,r_{\operatorname{\mathcal{N}}}$ emanating from the origin of the $(x_{1},x_{2})$-plane, with $r_{1},\ldots,r_{N_{a}}$ in the lower half-plane and the remaining rays in the upper half-plane, bounding the sectors $\sigma_{1},\ldots,\sigma_{\operatorname{\mathcal{N}}}$.]
Let us assume that some rays, say, $r_{m}$ and $r_{n}$, where for definiteness $m<n$, are parallel. By (\[rayn\]) this means that $\kappa_{m+N_{b}}+\kappa_{m}=\kappa_{n+N_{b}}+\kappa_{n}$, i.e., that $\kappa_{n}-\kappa_{m}=\kappa_{m+N_{b}}-\kappa_{n+N_{b}}$, where the l.h.s. is positive thanks to (\[kappas\]). Then, because of (\[modN\]) it is easy to see that the r.h.s. can be positive only if $1\leq{m}\leq{N_a}<{n}\leq\operatorname{\mathcal{N}}$, so that by (\[rayn\]) ray $r_m$ is in the bottom halfplane and $r_n$ in the upper one. Thus rays can be parallel only if they belong to different halfplanes, but in the case $N_a\neq N_b$ this is possible for a special choice of the parameters $\kappa_{n}$ only. On the contrary, in the case $N_a=N_b$ all pairs $r_{n}$ and $r_{n+N_a}$, $n=1,\ldots,N_a$, and only these pairs give parallel rays. In the special case $N_{a}=N_{b}=1$ we get two rays producing the straight line $x_{1}+(\kappa_{1}+\kappa_{2})x_{2}=0$ that divides the $x$-plane in two halfplanes.
Now we introduce sectors $\sigma_n$, which are subsets of the $x$-plane characterized as $$\sigma_{n}=\{x:K_{n-1}(x)<K_{n+N_{b}-1}(x)\text{ and }K_{n}(x)>K_{n+N_{b}}(x)\},\quad
n=1,\dots ,\mathcal{N}.\label{sigman}$$ Thanks to (\[linen\]) and the discussion above it follows that the sectors $\sigma_{n}$ are sharp (for $\operatorname{\mathcal{N}}>2$) angular sectors with vertices at the origin of coordinates, bounded from the right (looking from the origin) by the ray $r_{n-1}$ and from the left by the ray $r_{n}$. As $n$ increases, the sectors $\sigma_{n}$ are ordered anticlockwise, starting “from the left” with the sector $\sigma_{1}$, which includes the negative part of the $x_{1}$-axis, then the sectors $\sigma_{n}$ ($n=2,\ldots ,N_{a}$) in the bottom half-plane, the sector $\sigma_{N_{a}+1}$ “to the right,” which includes the positive part of the $x_{1}$-axis, and the sectors $\sigma_{n}$ ($n=N_{a}+2,\ldots ,\operatorname{\mathcal{N}}$) in the upper half-plane, finishing with the sector $\sigma_{\operatorname{\mathcal{N}}}$ adjacent to the sector $\sigma_{1}$; in this way the whole $x$-plane is covered, with the exception of the bordering rays $r_n$. Therefore, the sectors $\sigma_{n}$ define an $\mathcal{N}$-fold discretization of the round angle at the origin, with $n$ playing the role of a discrete angular variable, see Fig. \[fig\]. It is clear that in studying the asymptotic behavior of the $\tau$-function we consider the $x$-plane as a vector space, since any finite part of $x$ is irrelevant when $x\to\infty$. In order to determine the directions of the rays $r_{n}$ and sectors $\sigma_{n}$, we introduce the vectors $$y_{n}=(\kappa_{n+N_{b}}^{2}-\kappa_{n}^{2},\kappa_{n}-\kappa_{n+N_{b}}),\qquad n=1,\dots ,\operatorname{\mathcal{N}}, \label{pointn'}$$ which enable us to give the following definition.
\[def1’\] We say that $x\to\infty$ along the ray $r_n$ if $x\to\infty$ and there exists a function $\alpha(x)\to+\infty$ such that $x-\alpha(x)y_{n}$ is bounded. This will be denoted as $x\stackrel{r_{n}}{\longrightarrow}\infty$.
We say that $x\to\infty$ in the sector $\sigma_{n}$, $n=1,\ldots,\operatorname{\mathcal{N}}$, if $x\to\infty$ and there exist functions $\alpha(x)\to+\infty$ and $\beta(x)\to+\infty$ such that $x-\alpha(x)y_{n-1}-\beta(x)y_{n}$ is bounded. This will be denoted as $x\stackrel{\sigma_{n}}{\longrightarrow}\infty$.
Notice that for $x\stackrel{r_{n}}{\longrightarrow}\infty$ $$\label{def12}
K_{n}(x)-K_{n+N_b}(x)\quad\text{is bounded and}\qquad (\kappa_{n+N_b}-\kappa_{n})x_2\to-\infty,$$ and for $x\stackrel{\sigma_{n}}{\longrightarrow}\infty$ $$\label{def11}
K_{n+N_b-1}(x)-K_{n-1}(x)\to+\infty,\qquad K_{n}(x)-K_{n+N_b}(x)\to+\infty,$$ as follows directly from the definition. In fact we can prove a more general statement.
\[lemma3\] Let $N_{a},N_{b}\geq 1$, and $n\in\operatorname{\mathbb{Z}}$ arbitrary. Then
1. if $x\stackrel{r_{n}}{\longrightarrow}\infty$ then $K_{l}(x)-K_{m}(x)\to+\infty$ or stays bounded for any $l=n,\ldots,n+N_{b}$ and $m=n+N_{b},\ldots,\operatorname{\mathcal{N}}+n$, where boundedness takes place if and only if $(l,m)=(n,n)$, $(n,n+N_b)$, $(n+N_b,n)$, $(n+N_b,n+N_b)$;
2. if $x\stackrel{\sigma_{n}}{\longrightarrow}\infty$ then $K_{l}(x)-K_{m}(x)\to+\infty$ for any $l=n,\ldots,n+N_{b}-1$ and $m=n+N_{b},\ldots,\operatorname{\mathcal{N}}+n-1$;
where addition of indices is always understood mod$\,\operatorname{\mathcal{N}}$.
*Proof.* Thanks to (\[linen\]) and (\[pointn’\]) we have that $K_{l}(y_{n})-K_{m}(y_n)=g_{l,m,n}$. Then the first statement of the lemma follows directly from Definition \[def1’\] and Lemma \[lemma1\]. In the same way we get that $K_{l}(x)-K_m(x)=\alpha g_{l,m,n-1}+\beta g_{l,m,n}$ under the substitution $x=\alpha y_{n-1}+\beta y_n$. Then the second statement of the lemma follows from Lemma \[lemma1\], while the intervals for $l$ and $m$ appear as intersections of the corresponding intervals in (\[l12\]) for $n$ and $n\to{n-1}$. $\blacksquare $
Now we can prove the following Theorem.
\[th1\] The asymptotic behavior of $\tau(x)$ for $x\rightarrow\infty$ is given by $$\begin{aligned}
&x\stackrel{r_{n}}{\longrightarrow}\infty:&&
\tau (x)=\bigl(z_{n}+z_{n+1}e_{}^{K_{N_{b}+n}(x)-K_{n}(x)}+o(1)\bigr)\exp\left(\sum_{j=n}^{n+N_{b}-1}K_{j}(x)
\right), \label{3:232}\\
&x\stackrel{\sigma_{n}}{\longrightarrow}\infty:&&\tau (x)=\bigl(z_{n}+o(1)\bigr)\exp\left(\sum_{l=n}^{n+N_{b}-1}K_{l}(x)\right), \label{tauasympt}\end{aligned}$$ for any $n\in\mathbb{Z}$, where the coefficients $z_n$ are defined as $$z_{n}=f_{n,n+1,\ldots ,n+N_{b}-1}\equiv V(\kappa _{n}^{},\ldots ,\kappa_{n+N_{b}-1}^{})\operatorname{\mathcal{D}}(n,\ldots ,n+N_{b}-1), \label{zn}$$ (cf. (\[f\])). The $o(1)$ terms decay exponentially.
*Proof.* Using representation (\[taufn\]) we can write $$\begin{aligned}
&\exp\left(-\sum_{l=n}^{n+N_{b}-1}K_{l}(x)\right) \tau (x)=\nonumber\\
&\qquad=\sum_{n\leq n_{1}<n_{2}<\cdots <n_{N_{b}}\leq {\operatorname{\mathcal{N}}+n-1}}f_{n_{1},\ldots ,n_{N_{b}}}\exp\left( \sum_{j=1}^{N_{b}}K_{n_{j}}(x)-\sum_{l=n}^{n+N_{b}-1}K_{l}(x)\right) .\end{aligned}$$ Let us consider first the asymptotics when $x\stackrel{\sigma_{n}}{\longrightarrow}\infty$. Then each term $K_{n_{j}}(x)$ either cancels with some term in the last sum, when the index $n_{j}$ equals mod$\,\operatorname{\mathcal{N}}$ one index $l$ in the interval $\{n,\ldots ,n+N_{b}-1\}$, i.e., an interval of the index $l$ in Lemma \[lemma3\], or belongs to the interval $\{n+N_{b},\ldots ,\operatorname{\mathcal{N}}+n-1\}$. Thus by Lemma \[lemma3\] all these exponents at infinity are negative, with the only exception of the case where $\{n_{1},\ldots ,n_{N_{b}}\}=\{n,\ldots ,n+N_{b}-1\}(\text{mod}\operatorname{\mathcal{N}})$. This proves (\[tauasympt\]). The proof of (\[3:232\]) goes in the same way, with the only difference that we use the first statement of Lemma \[lemma3\]. The statement on the asymptotic behavior of the $o(1)$ terms is obvious by construction. $\blacksquare $
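The limit (\[tauasympt\]) can be observed numerically: moving to infinity inside a sector $\sigma_n$ along $x=t(y_{n-1}+y_n)$, the normalized $\tau$-function approaches $z_n$. The sketch below (Python/NumPy; the parameters $N_a=N_b=2$ and the matrix $\operatorname{\mathcal{D}}$ are an illustrative choice of our own) checks this for the sector $\sigma_2$.

```python
import numpy as np

kappas = np.array([-1.5, -0.5, 0.5, 1.5])                # N = 4, Na = Nb = 2
D = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])   # all maximal minors > 0

def K(x1, x2):
    return kappas * x1 + kappas**2 * x2

def tau(x1, x2):
    V = np.vander(kappas, N=2, increasing=True).T
    return np.linalg.det(V @ np.diag(np.exp(K(x1, x2))) @ D)

def y(n):
    """Direction vectors (pointn'), 1-based n, with n + Nb taken mod N."""
    kn, km = kappas[n - 1], kappas[(n + 2 - 1) % 4]      # Nb = 2, N = 4
    return np.array([km**2 - kn**2, kn - km])

# x -> infinity inside sigma_2 along x = t (y_1 + y_2)
t = 10.0
x1, x2 = t * (y(1) + y(2))

z2 = (kappas[2] - kappas[1]) * np.linalg.det(D[[1, 2], :])    # z_2 = f_{2,3}
val = tau(x1, x2) * np.exp(-(K(x1, x2)[1] + K(x1, x2)[2]))    # normalized tau
print(val, z2)   # val has converged to z2 = 1.0
```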
As we see from (\[3:232\]) and (\[tauasympt\]) the leading asymptotic behavior is fixed by this theorem only if all $$z_{n}>0,\qquad n=1,\ldots ,\operatorname{\mathcal{N}}. \label{zn0}$$ This condition is sufficient for the regularity of the potential at large $x$, while specific examples show that it is not sufficient for the regularity of the potential on the whole $x$-plane. We also see that the asymptotics along the ray direction $r_{n}$ is given by the sum of the leading terms obtained for $x\stackrel{\sigma_{n}}{\longrightarrow}\infty$ and $x\stackrel{\sigma_{n+1}}{\longrightarrow}\infty$. Since the exponential factor cancels out when expansion (\[3:232\]) of $\tau (x)$ is inserted in (\[ux\]), the factor in parentheses gives the ray behavior of the potential $u(x)$ at infinity. Thus, if condition (\[zn0\]) is imposed, the potential $u(x)$ has exactly $\operatorname{\mathcal{N}}$ rays $r_n$, along which it exhibits the nontrivial asymptotic behavior of a one-soliton potential (cf. (\[1-sol\])) with parameters $\kappa_n$ and $\kappa_{n+N_b}$. Explicitly, taking into account (\[modN\]) we get the following ray structure and asymptotic behavior of the potential along rays: $$\begin{aligned}
&x_{2}\rightarrow-\infty,\qquad x_{1}+(\kappa_{n}+\kappa_{n+N_{b}})x_{2}\text{ being bounded:}\nonumber\\
&u(x)\cong-2\partial_{x_{1}}^{2}\log\bigl(z_{n}+z_{n+1}e_{}^{K_{n+N_{b}}(x)-K_{n}(x)}\bigr),
\quad n=1,\ldots ,N_{a}.\label{down}\\
\intertext{and}
&x_{2}\rightarrow+\infty,\qquad x_{1}+(\kappa_{n}+\kappa_{n+N_{a}})x_{2}\text{ being bounded:}\nonumber\\
&u(x)\cong-2\partial_{x_{1}}^{2}\log\bigl(z_{n+N_{a}}+z_{n+N_{a}+1}e_{}^{K_{n}(x)-K_{n+N_{a}}(x)}\bigr),
\quad n=1,\ldots ,N_{b},\label{up}\end{aligned}$$ i.e., $N_a$ asymptotic rays in the bottom half-plane and $N_b$ asymptotic rays in the upper half-plane. The asymptotic behavior inside the sectors $\sigma_{n}$ is given by (\[tauasympt\]), where the only $x$-dependent term is the exponential factor. Taking into account its linear dependence on $x$ and (\[zn0\]), we get that $u(x)$ decays exponentially in all directions inside all sectors.
Concluding remarks
==================
The asymptotic behavior of the $\tau$-function and of the potential derived here is based on condition (\[zn0\]), which allows vanishing or negative values of the coefficients in (\[tauf\]) different from the $z_n$ (see (\[zn\])). Thanks to (\[zn0\]) we were able to find explicitly that the ray directions are given by the sums $\kappa_{n}+\kappa_{n+N_b}$, with $n=1,\ldots, \operatorname{\mathcal{N}}$, and to derive the exact asymptotic behavior of the $\tau$-function on the $x$-plane. Let us mention that an analogous result was derived in [@BK] for a very special subclass of solitonic potentials, given by a matrix $\operatorname{\mathcal{D}}$ in (\[tau\]) equal to the product of a diagonal matrix by the transpose of the matrix $\operatorname{\mathcal{V}}$. As we mentioned above, condition (\[zn0\]) does not guarantee (in contrast to (\[reg\])) the absence of singularities of the potential $u(x)$. But (\[3:232\]) and (\[tauasympt\]) prove that, thanks to (\[zn0\]), singularities of the potential (\[ux\]) cannot appear in the asymptotic region, as inside the sectors $u(x)\to0$ and along the rays $r_n$ it is given by (\[down\]) and (\[up\]), which are also finite thanks to (\[zn0\]). Let us emphasize that the parameters $N_a$ and $N_b$, giving the numbers of rays of the $\tau$-function and potential in the bottom and upper half-planes, respectively, appeared in [@BPPP2009] as the numbers of poles of the Jost and dual Jost solutions. Finally, we mention that Theorem \[th1\] is also valid when all inequalities in the regularity requirement (\[reg\]) are strict, i.e., when all maximal minors of the matrix $\operatorname{\mathcal{D}}$ are positive, as this condition is stronger than (\[zn0\]), which was enough to prove the theorem.
Acknowledgments
===============
This work is supported in part by the grant RFBR \# 08-01-00501, grant RFBR–CE \# 09-01-92433, Scientific Schools 8265.2010.1, by the Program of RAS “Mathematical Methods of the Nonlinear Dynamics,” by INFN, by the grant PRIN 2008 “Geometrical methods in the theory of nonlinear integrable systems,” and by Consortium E.I.N.S.T.E.IN. AKP thanks Department of Physics of the University of Salento (Lecce) for kind hospitality.
[99]{} B. B. Kadomtsev and V. I. Petviashvili, “On the stability of solitary waves in weakly dispersive media,” *Sov. Phys. Dokl.* **192** (1970) 539–541
V. S. Dryuma, “Analytic solution of the two-dimensional Korteweg–de Vries (KdV) equation,” *Sov. JETP Lett.* **19** (1974) 387–388
V. E. Zakharov and A. B. Shabat, “A scheme for integrating the non-linear equations of mathematical physics by the method of the inverse scattering problem,” *Func. Anal. Appl.* **8** (1974) 226–235
M. J. Ablowitz, D. Bar Yacoov and A. S. Fokas, “On the inverse scattering transform for the Kadomtsev-Petviashvili equation,” *Stud. Appl. Math.* **69** (1983) 135–143
V. G. Lipovsky, “Hamiltonian structure of the Kadomtsev–Petviashvili–II equation in the class of decreasing Cauchy data,” *Funct. Anal. Appl.* **20** (1986) 282–291
M. V. Wickerhauser, “Inverse scattering for the heat operator and evolutions in 2+1 variables,” *Commun. Math. Phys.* **108** (1987) 67–89
P. G. Grinevich and S. P. Novikov, “Two-dimensional inverse scattering problem at negative energy and generalized analytic functions. I. Energy below a ground state,” *Func. Anal. Appl.* **22** (1988) 19–27
M. Boiti, F. Pempinelli and A. K. Pogrebkov, “Scattering Transform for the nonstationary Schrödinger equation with a bidimensionally perturbed $N$-soliton potential,” *J. Math. Phys.* **47** 123510 (2006) 1–43
M. Boiti, F. Pempinelli, A.K. Pogrebkov and B. Prinari, “Inverse scattering theory of the heat equation for the perturbed 1-soliton potential,” *J. Math. Phys* **43** (2002) 1044–1062
M. Boiti, F. Pempinelli, A.K. Pogrebkov and B. Prinari, “Building extended resolvent of heat operator via twisting transformations,” *Theor. Math. Phys.* **159** (2009) 721–733
M. Boiti, F. Pempinelli, A. Pogrebkov and B. Prinari, “Towards an Inverse Scattering theory for non decaying potentials of the heat equation,” *Inverse Problems* **17** (2001) 937–957
M. Boiti, F. Pempinelli, A.K. Pogrebkov and B. Prinari, “The equivalence of different approaches for generating multisoliton solutions of the KPII equation,” *Theor. Math. Phys.* **165** (2010) 1237–1255
Y. Kodama, “KP solitons in shallow water,” *J. Phys. A: Math. Theor.* **43** (2010) 434004 (54pp)
G. Biondini and S. Chakravarty, “Elastic and inelastic line-soliton solutions of the Kadomtsev–Petviashvili II equation,” *Math. Comp. Simul.* **74** (2007) 237–250
F. R. Gantmacher, *Théorie des matrices*, Editions Jacques Gabay (1990)
T. Ando, “Totally Positive Matrices,” *Linear Algebra and Its Applications* **90** (1987) 165–219
F.R. Gantmacher and M.G. Krein, *Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems,* AMS, Providence (2002), *Oszillationsmatrizen, Oszillationskerne und kleine Schwingungen Mechanischer Systeme*, Akademie-Verlag, Berlin (1960)
G. Biondini and Y. Kodama, “On a family of solutions of the Kadomtsev–Petviashvili equation which also satisfy the Toda lattice hierarchy,” *J. Phys. A: Math. Gen.* **36** (2003) 10519–10536
---
author:
- 'C.M. Persson , M. De Luca, B. Mookerjea, A.O.H. Olofsson, J.H. Black, M. Gerin, E. Herbst, T.A. Bell, A. Coutens B. Godard, J.R. Goicoechea, G.E. Hassel, P. Hily-Blant, K.M. Menten, H.S.P. Müller, J.C. Pearson, S. Yu'
-
bibliography:
- 'references.bib'
date: 'Received Dec 20, 2011 / Accepted May 30, 2012'
subtitle: 'II. Analysis of [*Herschel*]{}[^1]/HIFI observations towards W49N and G10.6$-$0.4 (W31C)'
title: Nitrogen hydrides in interstellar gas
---
[As a part of the *Herschel* key programme PRISMAS, we have used the *Herschel*-HIFI instrument to observe interstellar nitrogen hydrides along the sight-lines towards eight high-mass star-forming regions in order to elucidate the production pathways leading to nitrogen-bearing species in diffuse gas. Here, we report observations towards W49N of the NH , , and , , and , transitions, and unsuccessful searches for NH$^+$. All detections show absorption by foreground material over a wide range of velocities, as well as absorption associated directly with the hot-core source itself. As in the previously published observations towards G10.6$-$0.4, the NH, NH$_2$ and NH$_3$ spectra towards W49N show strikingly similar and non-saturated absorption features. We decompose the absorption of the foreground material towards W49N into different velocity components in order to investigate whether the relative abundances vary among the velocity components, and, in addition, we re-analyse the absorption lines towards G10.6$-$0.4 in the same manner. Abundances, with respect to molecular hydrogen, in each velocity component are estimated using CH, which is found to correlate with H$_2$ in the solar neighbourhood diffuse gas. The analysis points to a co-existence of the nitrogen hydrides in diffuse or translucent interstellar gas with a high molecular fraction. Towards both sources, we find that NH is always at least as abundant as both and , in sharp contrast to previous results for dark clouds. We find relatively constant and ratios with mean values of 3.2 and 1.9 towards W49N, and 5.4 and 2.2 towards G10.6$-$0.4, respectively. The mean abundance of o-NH$_3$ is $\sim$2$\times$10$^{-9}$ towards both sources. The nitrogen hydrides also show linear correlations with CN and HNC towards both sources, and looser correlations with CH. 
The upper limits on the NH$^+$ abundance indicate column densities of $N$(NH), which is in contrast to the behaviour of the abundances of CH$^+$ and OH$^+$ relative to the values determined for the corresponding neutrals CH and OH. Surprisingly low values of the ammonia ortho-to-para ratio are found in the strongest absorption components towards both sources. This result cannot be explained by current models, as we had expected to find a value of unity or higher. ]{}
Introduction
============
Nitrogen is among the six most abundant elements in the universe and, despite its fundamental role in the chemistry of molecules connected with life, the chemical network of nitrogen in the interstellar medium is still poorly understood, owing to the severe difficulty of observing key molecules from the ground. Today about 55 molecules containing nitrogen have been discovered in interstellar space and a few more in circumstellar envelopes[^2]. The major reservoir of nitrogen is believed to be in the form of atomic N and molecular N$_2$. The latter is extremely difficult to observe since it has no permanent dipole moment [@2001ApJ...548..836S], and remained undetected until @2004Natur.429..636K reported far-ultraviolet observations towards HD124314. In dense molecular clouds, N$_2$H$^+$ is instead often used as a tracer of N$_2$. The total abundance of nitrogen therefore still relies, to a high degree, on chemical modelling and observations of nitrogen-bearing species other than N$_2$.
In order to constrain the nitrogen formation pathways, observations of *nitrogen hydrides* are crucial since they are at the root of the nitrogen chemical network, appearing in its first steps in chains of reactions that lead to other more complex species. The abundances of these species are thus key diagnostics for the nitrogen chemistry. The nitrogen hydrides are, however, also problematic to observe since their ground state rotational transitions lie at sub-mm wavelengths and are thus very difficult, or impossible, to reach from the ground. Key species, such as imidogen (NH) and amidogen (NH$_2$), have therefore previously not been widely observed, and there is still no detection of the NH$^+$ radical.
Although very few observations exist of NH and NH$_2$ in interstellar space, they are well known in comets [e.g. @1941ApJ....94..320S; @1998Icar..136..268M; @1993ApJ...404..348F], and have been observed in stellar photospheres [e.g. @1969PASP...81..657S; @1989hra1.book.....F] via their electronic, vibration-rotation, and high rotational transitions. The first detection of interstellar NH was made by @1991ApJ...376L..49M by optical absorption spectroscopy. Subsequent observations by @1997MNRAS.291L..53C and @2009MNRAS.400..392W have yielded several lines of sight in diffuse and translucent gas where column densities of NH, CH, CN, and H$_2$ have been directly measured. The average value of the column density ratio in these diffuse and translucent sight-lines is $N$(NH)/$N$(H$_2$)=3$\times$10$^{-9}$.
Interstellar NH$_2$ was first detected by @1993ApJ...416L..83V in absorption towards SgrB2 in three fine-structure components of the para-symmetry form of NH$_2$, the transition, with partially resolved hyperfine structure at frequencies 461 to 469 GHz. The Infrared Space Observatory (ISO) was later used to observe unresolved absorption lines of both ortho and para symmetry forms of NH$_2$, as well as NH, towards this source through the use of the long-wavelength spectrometer [@2000ApJ...534L.199C; @2004ApJ...600..214G; @2007MNRAS.377.1122P].
In contrast to NH and NH$_2$, ammonia (NH$_3$) has been extensively observed for more than 40 years and was in fact the first polyatomic molecule to be identified in interstellar space [@1968PhRvL..21.1701C] by means of its $J\!=\!K$ inversion transitions at cm wavelengths ($K$ is the quantum number of the projection of the total angular momentum $J$ on the molecule’s symmetry axis). Like NH$_2$, the ammonia molecule has two symmetries, which arise from the possible orientations of the hydrogen spins and behave like two distinct species: ortho (all H spins parallel, $K$=3$n$ where $n$ is an integer $\geq 0$) and para (not all H spins parallel, $K\neq3n$). Important information about the ammonia formation pathways could be inferred from observations of both symmetries. Unfortunately, only the para form has relatively low-excitation transitions accessible from the ground, since the $K$=0 ladder of energy levels has no inversion splitting and the lower energy level of the $J_K\!=\!3_3$ inversion transitions lies 122 K above ground. Ortho inversion lines ($K$=3$n$, $n \ge 1$) can thus only be observed in relatively warm molecular gas. Only a few ammonia observations of the cold diffuse interstellar gas exist, using para inversion lines, which leaves the ammonia formation mechanism poorly constrained in diffuse gas. The (0,0) ammonia ground state, with ortho symmetry, can only be studied by rotational transitions. Observations of the fundamental rotational transition of ortho-NH$_3$ at 572 GHz, which has a similar upper state energy to the inversion lines but several orders of magnitude higher critical density, thus require space telescopes.
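The $K$-based spin-symmetry rule quoted above can be sketched as a one-line classifier (an illustrative toy of ours, not code from the paper):

```python
# Toy sketch of the symmetry rule above: NH3 levels are ortho when K = 3n
# (n an integer >= 0, all H spins parallel) and para otherwise.
def nh3_symmetry(k: int) -> str:
    """Spin symmetry of an NH3 level with projection quantum number K."""
    return "ortho" if k % 3 == 0 else "para"

print([(k, nh3_symmetry(k)) for k in range(5)])
# → [(0, 'ortho'), (1, 'para'), (2, 'para'), (3, 'ortho'), (4, 'para')]
```

This makes explicit why the (0,0) ground state ($K$=0) belongs to the ortho species.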
The Kuiper Airborne Observatory [@1983ApJ...271L..27K] performed the first observations of the transition using heterodyne receivers, and later the Odin satellite continued such observations in a number of environments, for instance in photo-dissociation and star-forming regions, circumstellar envelopes [@2006ApJ...637..791H], diffuse clouds in absorption towards SgrB2, and in comets.
[lrccrrrllcc]{} Species & Frequency & Transition & Band & $T_\mathrm{sys}$ & $t$ (G10.6) & $t$ (W49N) & $T_\mathrm{C}$ (G10.6) & $T_\mathrm{C}$ (W49N) & rms (G10.6) & rms (W49N)\
& (GHz) & & & (K) & (s) & (s) & (K) & (K) & &\
NH & 946.476 & $N_J\!=\!1_0\leftarrow0_1$ & 3b & 416& 234 & 116& 2.5 &3.4 & 0.021 & 0.027\
&974.478 &$N_J\!=\!1_2\leftarrow0_1$ & 4a & 338 &186 & 103 &2.6& 3.9 & 0.018 & 0.018\
o-NH$_2$ & 952.578 & $N_{K_a,K_c} J = 1_{1,1} 3/2 \gets 0_{0,0} 1/2$ & 3b& 230 &92 &68& 2.6 & 3.6 &0.018 & 0.017\
o-NH$_3$ & 572.498& $J_K$=1$_0 \leftarrow 0_0$ & 1b & 87 & 1024 &94& 0.61 & 0.93 & 0.013 & 0.025\
& 1214.859 & $J_{K}$=2$_{0} \leftarrow 1_{0}$ & 5a & 1024 & 293 & 196 & 3.4 & 5.2 & 0.032 & 0.025\
p-NH$_3$ & 1215.246 &$J_{K}$= 2$_{1} \leftarrow 1_{1}$ &5a & 1024 &293& 196& 3.4 & 5.2 &0.032 & 0.025\
NH$^{+}$ & 1012.540& $^2\Pi_{1/2}$ $N$=1$\leftarrow$1 $J$=3/2$\leftarrow$1/2 & 4a & 327 &171& 91& 3.0 & 4.4 &0.013& 0.013\
\[Table: transitions\]
With the launch of the *Herschel* Space Observatory [@Pilbratt2010] in May 2009, unique opportunities to perform observations of transitions between 157 and 625 $\mu$m (0.48–1.9 THz) became feasible with the Heterodyne Instrument for the Far-Infrared [HIFI; @Graauw2010], owing to its very high sensitivity and spectral resolution. This allowed, for the first time, searches for spectrally resolved rotational transitions involving the ground states of NH$^+$, NH, NH$_2$, and NH$_3$ with the same instrument. Several observations have already been reported: for instance, *Herschel*-HIFI observations have revealed very high column densities of NH and ND in the cold envelope of the class 0 protostar IRAS16293-2422 (2$\times$10$^{14}$ cm$^{-2}$ and $\sim$1.3$\times$10$^{14}$ cm$^{-2}$, respectively), as well as NH:NH$_2$:NH$_3$ abundance ratios of $\sim$5:1:300 towards the same source.
The PRISMAS[^3] key programme (PRobing InterStellar Molecules with Absorption line Studies) is targeting absorption lines in the line-of-sight towards eight bright sub-millimetre-wave continuum sources using *Herschel*-HIFI: G10.6$-$0.4 (W31C), W49N, W51, G34.3+0.1, DR21(OH), SgrA (+ 50 km s$^{-1}$ cloud), G005.9-0.4 (W28A) and W33A. High-resolution absorption line spectroscopy is generally a very sensitive and model-independent method for measuring column densities of interstellar molecules, and a powerful tool to probe the diffuse interstellar gas clouds with no or little excitation.
The first results and analysis of absorption lines of nitrogen hydrides along the sight-line towards the massive star-forming region G10.6$-$0.4 (W31C) have already been presented in paper I. Similar abundances with respect to the total amount of hydrogen, $N_\mathrm{H}$=2$N(\mathrm{H_2}$)+$N(\mathrm{H}$), were found for all three species: approximately $6\times 10^{-9}$, $3\times 10^{-9}$, and $3\times 10^{-9}$ for NH, NH$_2$, and NH$_3$, respectively. They were estimated across the whole line-of-sight, using the high-temperature ortho-to-para limits of three and one for NH$_2$ and NH$_3$, respectively. NH$^+$ was not detected at a 1$\sigma$ rms level of 74 mK with a resolution of 1.1 MHz. The abundance patterns that we see in diffuse molecular gas are thus clearly very different from those in IRAS16293-2422 and Sgr B2, where NH:NH$_2$:NH$_3\sim$1:10:100 and the fractional abundance of NH is a few times 10$^{-9}$. The Sgr B2 results may, however, not be representative of cold dark clouds since this source is very complex and atypical.
The unexpectedly high NH abundance has been difficult to explain with chemical models, and the production of both NH and NH$_2$ by purely gas-phase processes is inhibited by the lack of a sufficient source of N$^+$. The models fail to simultaneously predict the absolute and relative abundances of the nitrogen hydrides. Typical steady-state *dark cloud* models ($n$=1$\times$10$^{3}$–5$\times$10$^{4}$ cm$^{-3}$, $T$=10–40 K, $A_\mathrm{V}\gtrsim10$) predict an NH$_2$ abundance of , an NH abundance 10 times lower, and an NH$_2$/NH$_3$ ratio of for a wide range of assumptions, but these are not directly applicable to diffuse molecular gas. Examples of chemical models for diffuse cloud conditions are found in Figs. A.1 and A.2 in paper I. These models were also unable to explain the observed abundances and ratios. Processes on dust grains have previously been proposed as a way to increase the NH production [@1991ApJ...376L..49M; @1993MNRAS.260..420W]. Such models, however, often predict up to 1000 times more NH$_3$ than NH$_2$ [@1993MNRAS.263..589H paper I]. The importance of grain-surface chemistry in diffuse clouds is also unclear, since water ice mantles have not been detected in diffuse gas, and strong ultraviolet (UV) radiation fields counteract molecular formation on grains. Grain-surface production of NH is therefore still debated, and would, if confirmed, change our understanding of surface chemistry in diffuse gas. On the other hand, if grains were indeed unimportant in diffuse-gas nitrogen chemistry, it would imply either that key gas-phase reactions have been overlooked, or that the uncertainty of some reaction rates could make a difference. Both additional high-quality observations and chemical modelling are needed to solve this problem.
In this paper, we present new observations and analyses of absorption lines in the line-of-sight towards the high-mass star-forming region W49N, and we also re-analyse the absorption towards G10.6$-$0.4 in more detail. W49N is one of the most luminous high-mass star-forming regions in the Galaxy ($\sim$10$^7$ L$_\odot$) with a core that contains more than a dozen ultra-compact regions [@1984ApJ...283..632D; @1990ApJ...351..189D; @2000ApJ...540..308D]. It is located on the far side of the Galaxy at a distance of 11.4 kpc, with Galactic coordinates $l$=43.17$^\circ$ and $b$=0.012$^\circ$, in one of the most massive giant molecular clouds ($\sim$10$^6$ M$_\odot$) in the Milky Way. The source velocity is about +8 km s$^{-1}$ and the foreground gas along the line-of-sight is detected at , revealing gas at two locations in the near and far side of the Sagittarius spiral arm around 40 and 60 km s$^{-1}$ [@1985ApJ...297..751D]. The ultra-compact region G10.6$-$0.4 in the star-forming W31 complex is an extremely luminous sub-millimetre and infrared continuum source. The source is located within the so-called 30 km s$^{-1}$ arm at a kinematic distance of 4.8 kpc [@2003ApJ...587..701F]. The gas associated directly with G10.6$-$0.4 is detected at a systemic source velocity of , determined from OH maser emission observations, while the foreground gas is detected at .
Section \[observations\] summarises the observations and data reduction, and the results from our *Herschel* observations are found in Sect. \[section: results\]. The hyperfine structure (hfs) components of the nitrogen hydrides are discussed in Sect. 4. In Sect. \[Section: Analysis of abundances in different velocity components\] we use three different methods to decompose the absorption lines along the sight-lines towards both sources in different velocity components, and estimate column densities, $N$, and relative abundances, $X$, in each component. We also compare the column densities of the nitrogen hydrides, both with each other to investigate possible correlations, and with other species tracing regions with both low and high molecular fractions. Section \[OPR ammonia\] presents our estimates of the ortho-to-para ratio (OPR) of NH$_3$. We end this paper with a summary and outlook in Sect. \[section summary\]. Note that the analysis of the *background* source emissions and absorptions is left for a future paper.
Observations and data reduction {#observations}
===============================
All the reported observations are part of the more extended PRISMAS programme towards G10.6$-$0.4 and W49N. The nitrogen observations, which took place between March 2 and April 18, 2010, are summarised in Table \[Table: transitions\]. All *Herschel* identification numbers (OBSIDs) are found in Table \[Table: obsid\] (on-line material). Note that NH was observed together with CH$_2$ at 946 GHz, and ortho-NH$_3$ was observed together with para-NH$_3$ (the absorptions from the source are 97 km s$^{-1}$ apart). Before the launch of *Herschel*, no observations of these NH$_3$ transitions, nor of the NH$^+$ transition, had been performed. We do not, however, analyse the transition in this paper since this line shows absorption only at the background source velocities.
We used the dual beam switch mode and the wide band spectrometer (WBS) with a bandwidth of 4$\times$1 GHz and an effective spectral resolution of 1.1 MHz. The corresponding velocity resolution is about 0.3 km s$^{-1}$ at the highest frequencies and 0.6 km s$^{-1}$ at 572 GHz. In addition, simultaneous observations were performed using the high resolution spectrometer (HRS) with an effective spectral resolution of 0.25 MHz and a bandwidth of 240–340 MHz, except for the 1214 and 1215 GHz lines which had an effective resolution of 0.5 MHz and a bandwidth of 780 MHz in order to cover both lines in the same band. Note that the channel separation of the observations is 0.5 MHz in the WBS and 0.12 MHz in the HRS (0.24 MHz for the 1215 GHz line). Each line was observed with three different frequency settings of the local oscillator (LO), corresponding to a change of approximately 15 km s$^{-1}$, to determine the sideband origin of the lines. During all observations two orthogonal polarisations were used.
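The quoted velocity resolutions follow directly from the spectral resolution via $\Delta v = c\,\Delta\nu/\nu$. A minimal sketch of this conversion (our own illustration, not PRISMAS code):

```python
# Sketch: converting the WBS effective spectral resolution of 1.1 MHz into a
# velocity resolution via dv = c * dnu / nu.
C_KMS = 299_792.458  # speed of light in km/s

def velocity_resolution(delta_nu_hz: float, nu_hz: float) -> float:
    """Velocity resolution (km/s) for channel width delta_nu at frequency nu."""
    return C_KMS * delta_nu_hz / nu_hz

# o-NH3 ground-state line at 572.498 GHz -> ~0.6 km/s
print(round(velocity_resolution(1.1e6, 572.498e9), 2))
# p-NH3 line at 1215.246 GHz -> ~0.3 km/s
print(round(velocity_resolution(1.1e6, 1215.246e9), 2))
```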
The pointings were centred at $\alpha$=19$^\mathrm{h}$10$^\mathrm{m}$13$\fs$2, $\delta$=09$^\circ$06$\arcmin$12$\arcsec$ ($J$2000) for W49N, and $\alpha$=18$^\mathrm{h}$10$^\mathrm{m}$28$\fs$7, $\delta$=$-$19$^\circ$55$\arcmin$5$\arcsec$ ($J$2000) for G10.6$-$0.4. The recommended values for half-power beam width of the telescope are $44\farcs 2$, $22\farcs 1$, and $18\farcs 9$ at 480, 960 and 1120 GHz, respectively. The reference beams were located within 3 on either side of the source. The total calibration uncertainties are $\lesssim$9% for band 1 and $\lesssim$13% for band 5, including the sideband gain ratio uncertainty which is 3–4% for band 1b (ortho-NH$_3$), and 4–6% for band 5a (para-NH$_3$). All errors are added in quadrature. Detailed information about the HIFI calibration including beam efficiency, mixer sideband ratio, pointing, etc., can be found on the Herschel internet site[^4]. The in-flight performance is described by . Since we are interested in absorption lines and their relative strength compared to the continuum in this paper, the forward efficiency of 96% and the main beam-efficiency of 64–76% (between 1120 and 480 GHz) do not affect our results and we have therefore used a $T_\mathrm{A}^*$ intensity scale throughout this paper.
The data were processed using the standard *Herschel* Interactive Processing Environment [HIPE,[^5] @2010ASPC..434..139O], version 5.1, up to level 2 which provides fully calibrated spectra. The data quality is excellent with very low intensity ripples in most cases, typically below a few percent of the double sideband continuum. For NH$^+$ in both sources and both polarisations, and at 572 GHz in the vertical polarisation towards G10.6$-$0.4, we corrected the data from ripples using the tool in HIPE (in paper I we only used one polarisation for these two transitions). The FITS files were then exported to the spectral line software package [xs]{}[^6] which was used in the subsequent data reduction.
The two polarisations are generally in agreement to better than 10%. The three LO-tunings are also in very good agreement, without contamination from the image sideband, with one exception in W49N around the o-NH$_3$ line at 572 GHz (see Sect. \[subsubsection: SO2 in NH3\]). In all other spectra we averaged the three LO-tunings and both polarisations.
For the above line identification work we used the Cologne Database for Molecular Spectroscopy[^7] [CDMS, @2005JMoSt.742..215M], the Jet Propulsion Laboratory catalogue [^8] [JPL, @1998JQSRT..60..883P], and Frank Lovas’ Spectral Line Atlas of Interstellar Molecules[^9] (SLAIM). Laboratory measurements of NH transitions have been performed by e.g. and @2004JMoSp.225..189F, and measurements of o-NH$_2$ by e.g. @1999JMoSp.195..177M. A review on ammonia is found in , and an investigation of the hyperfine structure components of ortho-NH$_3$ at 572 GHz was recently performed by . NH$^+$ measurements have been made by @1986CPL...132..213V and @2009JChPh.131c4311H.
![*Nitrogen hydrides towards W49N*: Double sideband WBS spectra of NH, ortho-NH$_2$, ortho- and para-NH$_3$, and NH$^+$ over the LSR velocity range -50 to 85 km s$^{-1}$. Quantum numbers are found in Table \[Table: transitions\]. Note that we only analyse the absorption in the velocity range 30–75 km s$^{-1}$ in this paper and leave the source absorption and emission for a future paper.[]{data-label="Fig: DSB W49N hydrides"}](18686Fg01.eps)
Observational results {#section: results}
=====================
Figures \[Fig: DSB W49N hydrides\] and \[Fig: W31C All original hydrides\] (on-line material) show the averaged double sideband WBS spectra of the transitions listed in Table \[Table: transitions\] towards W49N and G10.6$-$0.4, respectively (the special treatment of o-NH$_3$ 1$_0$–0$_0$ in W49N is described in Sect. \[subsubsection: SO2 in NH3\]). All detected species show very similar absorption patterns over a wide range of velocities, although W49N has fewer absorbing velocity components in the line-of-sight. No detections of NH$^+$ have been made towards either source.
The resulting noise and single sideband (SSB) continuum levels for each transition are found in Table \[Table: transitions\]. Note that since HIFI uses double sideband (DSB) receivers, the observed continuum has to be divided by two to obtain the SSB continuum. The sideband gain ratio is assumed to be unity throughout this paper. This has proven to be a good assumption, based on PRISMAS observations of saturated absorption lines such as HF that establish the true zero levels.
Background source emission
--------------------------
Even though we only analyse the absorption lines in the foreground clouds in this paper we still have to take the source emission into account in the analysis. Several spectra show broad emission features from the background sources which extend far into the velocity range pertaining to the line-of-sight clouds.
Ammonia in particular shows strong emission at the background source velocities, with self-absorption around +13 and 0 km s$^{-1}$ for W49N and G10.6$-$0.4, respectively. A large part of the NH 974 GHz emission at the source velocities in both sources is most likely not from NH, but from HCN at 974.487 GHz, only 9 MHz above the NH line, corresponding to 2.5 km s$^{-1}$. The PRISMAS observations of these sources have detected three additional HCN emission lines, at 532 GHz, at 620 GHz, and at 886 GHz, which strengthens our identification. In addition, we detect no NH emission in the 946 GHz transition, which further confirms that the emission is due to another species. HCN in foreground gas is unlikely to produce measurable absorption in the $J=11-10$ transition because the excitation energy of the lower state, $E_{10}/k$=234 K, is much too high for this level to be populated at the density and temperature of diffuse or translucent gas.
In the analysis performed in Sect. \[Section: Analysis of abundances in different velocity components\] using three different methods, we have removed the broad emission lines that overlap with the absorption in the line-of-sight by means of Gaussian fits.
### Emission line contamination {#subsubsection: SO2 in NH3}
In W49N, close to the o-NH$_3$ line at 572 GHz, an emission line from the lower sideband, identified as the SO$_2$ line, moves across the ammonia absorption lines in the different LO-tunings. Thus for the o-NH$_3$ 572 GHz transition in W49N we were only able to use the A and C LO-settings in Table \[Table: obsid\], which also had to be cut to remove the parts contaminated by the SO$_2$ line.
The o-NH$_2$ absorption towards W49N is contaminated by an emission line in the same sideband from the source. We identify the emission line as a blend of three unresolved hfs components of NO $^2\Pi_{1/2}\;J=9.5^f\to 8.5^f$, $F=10.5-9.5$, $9.5-8.5$, and $8.5-9.5$, at 952.464201 GHz. The NO emission line is removed from the o-NH$_2$ absorption (described in Sect. \[NO removal of NH2\], on-line material), and in all following figures in this paper we show o-NH$_2$ in W49N with the NO emission removed. The NO identification is strengthened by the PRISMAS observations of three additional NO lines towards W49N shown in Fig. \[Fig: W49N NO\] (on-line material). No emission of NO is found towards G10.6$-$0.4.
### The $^{15}$NH$_3$ isotopologue: only in G10.6$-$0.4
Since our observations also covered the frequency of o-$^{15}$NH$_3$ at 572.113 GHz, we have checked the averaged spectra for emission and/or absorption features of this isotopologue. In Figs. \[Fig: W31C NH3 and 15NH3\] and \[Fig: W31C NH3 and 15NH3 normalised\] (on-line material) we show a 5$\sigma$ emission feature in G10.6$-$0.4 with an amplitude of 40 mK and a line-width of about 4 km s$^{-1}$, which we identify as o-$^{15}$NH$_3$ at a source velocity of $\sim -$4 km s$^{-1}$. Checking the possibility of absorption lines of o-$^{15}$NH$_3$ in the line-of-sight towards G10.6$-$0.4, we note that there seems to be an absorption feature corresponding to the strongest velocity components at $\sim$27–32 km s$^{-1}$. This feature is, however, probably not a real detection, since it would otherwise imply a $^{14}$N/$^{15}$N ratio about 20 times lower than that in the local ISM, which is about $\sim$450, or 441$\pm$5 in the Solar nebula [@2011Sci...332.1533M]. Odin observations of absorption towards Sgr B2 show $^{14}$NH$_3$/$^{15}$NH$_3 > 600$. Lower ratios have been found, for instance, in low-mass cores, 334$\pm$50 [@2010ApJ...710L..49L], and on Earth, $\sim$270. @2011IAUS..280P..76A find values of using CN and HNC in ten dense molecular clouds located at various galactic distances from the Galactic Centre, but this is still several times higher than inferred from a possible o-$^{15}$NH$_3$ absorption. The $^{14}$N/$^{15}$N ratio is also difficult to determine, and the results using cyanides may differ from the ratio inferred from other species such as N$_2$H$^+$, and may not reflect the true $^{14}$N/$^{15}$N abundance ratio. More observations are needed to lower the noise in order to obtain a real detection of o-$^{15}$NH$_3$ and to determine the true $^{14}$N/$^{15}$N ratio towards G10.6$-$0.4.
In W49N, no o-$^{15}$NH$_3$ detection is made due to blending SO$_2$ lines from the upper sideband which are both stronger and broader than in G10.6$-$0.4. In addition, the o-$^{14}$NH$_3$ emission is more than two times weaker than in G10.6$-$0.4, and the spectrum has a three times higher noise level which does not allow a detection of o-$^{15}$NH$_3$ emission from the source.
Hyperfine structure components {#section: hfs}
==============================
Both NH and NH$_2$ have numerous hyperfine structure (hfs) components, which require a high resolution spectrometer, such as HIFI, to be spectrally resolved. The on-line Tables \[Table: NH hfs transitions\]–\[Table: NH2 hfs transitions\] list the hfs components of the NH 974 and 946 GHz, and the 953 GHz transitions, which have 21, 9, and 30 hfs components, respectively. The observed NH 974 GHz and o-NH$_2$ transitions in the +39 km s$^{-1}$ velocity component towards W49N are shown in Figs. \[Fig: NH hfs lines in 39 kms in W49N\] and \[Fig: NH2 hfs lines in 39 kms in W49N\] together with models of the respective transitions, including hfs components, using Gaussian opacity profiles and a line width of 1 km s$^{-1}$. Here, the intensities have been normalised to the continuum in single sideband as $T_\mathrm{A}/T_\mathrm{C}-1$, assuming a sideband gain ratio of unity, where $T_\mathrm{A}$ is the observed intensity and $T_\mathrm{C}$ is the SSB continuum as measured in line-free regions in the spectra. The large number of hfs components, in addition to the many different, overlapping, velocity components in the line-of-sight, complicates the analysis substantially. In particular, the o-NH$_2$ hfs components in the line-of-sight absorption overlap with the source emission/absorption, thus requiring a source model in the analysis.
![*NH hfs components*: Normalised WBS spectrum towards W49N. Also shown are Gaussian fits with a line width of 1 km s$^{-1}$ of the NH 974 GHz hfs components in the +39 km s$^{-1}$ velocity component.[]{data-label="Fig: NH hfs lines in 39 kms in W49N"}](18686Fg03.eps)
![*Ortho-NH$_2$ hfs components*: Normalised WBS spectrum towards W49N. Also shown are Gaussian fits with a line width of 1 km s$^{-1}$ of the o-NH$_2$ 953 GHz hfs components in the +39 km s$^{-1}$ velocity component.[]{data-label="Fig: NH2 hfs lines in 39 kms in W49N"}](18686Fg04.eps)
The much simpler hyperfine structure of the ortho- and para-NH$_3$ ground state rotational transitions had never been resolved in space before *Herschel*. The Odin satellite was able to observe the 572 GHz ammonia transition, but its velocity resolution of about 0.5 km s$^{-1}$ was not sufficient to spectrally resolve the hfs components, even though the asymmetric line shapes hinted at them. Figure \[Fig: NH3 hfs lines\] shows an example of our observations of the 572 GHz o-NH$_3$ transition in the +39 km s$^{-1}$ absorption velocity component towards W49N with the HRS at a velocity resolution of 0.13 km s$^{-1}$. Three hfs components are seen at 38.4, 39.4 and 40.0 km s$^{-1}$, although the line width of 1.0 km s$^{-1}$ prevents a detailed check of the relative intensities. In the Gaussian fit we therefore use 0.2, 1 and 0.6 as relative intensities, with corresponding relative offsets at $-$1.05, 0.0, and 0.52 km s$^{-1}$ found by (Table \[Table: NH3 hfs components\], on-line material). The positions and relative intensities of the hfs components and the fitted opacity profiles are also shown in Fig. \[Fig: NH3 hfs lines\]. The six hfs components in the para-NH$_3$ line at 1215 GHz are even more closely spaced and not resolved (Table \[Table: NH3 1215 hfs components\]).
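The kind of Gaussian-opacity hfs model described above can be sketched as follows. This is our own illustration (not the fitting code used in the paper), using the o-NH$_3$ 572 GHz relative intensities and offsets quoted in the text:

```python
# Illustrative sketch: a normalised absorption profile T_A/T_C - 1 =
# exp(-tau(v)) - 1 built from Gaussian opacity components at the o-NH3
# 572 GHz hfs offsets and relative intensities quoted in the text.
import numpy as np

# (offset in km/s, relative intensity) of the three resolved hfs components
HFS = [(-1.05, 0.2), (0.0, 1.0), (0.52, 0.6)]

def absorption_profile(v, v_lsr, tau_peak, fwhm=1.0):
    """Normalised spectrum exp(-tau)-1 for one velocity component with hfs."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    tau = np.zeros_like(v)
    for dv, w in HFS:
        tau += tau_peak * w * np.exp(-0.5 * ((v - v_lsr - dv) / sigma) ** 2)
    return np.exp(-tau) - 1.0

v = np.arange(30.0, 50.0, 0.05)        # LSR velocity grid (km/s)
spec = absorption_profile(v, v_lsr=39.0, tau_peak=1.0)
print(round(spec.min(), 2))            # deepest absorption near +39 km/s
```

The overlap of the 0.0 and +0.52 km s$^{-1}$ components for a 1 km s$^{-1}$ line width is what produces the slightly asymmetric, broadened line shape mentioned in the text.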
The WBS ammonia observations do not spectrally resolve the NH$_3$ hfs components ($\Delta v$=0.6 km s$^{-1}$ at 572 GHz and $\Delta v$=0.3 km s$^{-1}$ at 1215 GHz), but we still take the hfs components into account in the following NH$_3$ modelling since they produce a slightly asymmetric line shape and a systematic line broadening.\
\
Analysis: Velocity decomposition, column densities, and abundances {#Section: Analysis of abundances in different velocity components}
==================================================================
In paper I we estimated the abundances towards G10.6$-$0.4 simply by evaluating the integrated opacity of each line over the velocity range . This approach gave a first estimate of the average abundance in the full line-of-sight, but did not take into account possible abundance variations in the different velocity components. In addition, no source model was included, and thus the blend of the NH and o-NH$_2$ hfs components from the foreground absorptions with the source absorption was neglected.
In this paper we use three different approaches to decompose the absorption into separate velocity components, and to obtain column densities and abundances in each component. Each method has its own strengths and weaknesses, and differences in the results can be considered an estimate of the errors of the methods. If all three methods point to the same result, it will be considered robust. We note, however, that the uncertainties of the data are higher towards W49N than towards G10.6$-$0.4, since we have removed an SO$_2$ line from the upper sideband in the ortho-NH$_3$ absorption, and an NO line in the o-NH$_2$ absorption.
In this work we also only estimate $N$ and $X$ for the ortho symmetries of NH$_2$ and NH$_3$, since our own measurements of the ammonia OPR in Sect. \[OPR ammonia\] do not point to the high-temperature limit, but instead suggest an OPR *lower* than unity, which cannot be straightforwardly explained. If this unexpectedly low ammonia OPR is real, then the processes that affect the ammonia OPR could perhaps also affect the OPR of NH$_2$ in diffuse clouds, about which we have no information. Since we want to compare relative abundances, we choose to focus on the ortho symmetries, both because the ortho forms are observed in both NH$_2$ and NH$_3$, and because these observations have higher S/N than the para observations.
To obtain integrated opacities of each transition, in each velocity component, the first method uses Gaussian fitting (Sect. \[henriks method\]), while the second one uses the observed spectra of ortho-NH$_3$ and CH as templates for other species (Sect. \[massimos method\]). Both methods calculate the opacity from the line-to-continuum ratio as $\tau(v) = -\ln\,(T_\mathrm{A}(v)/T_\mathrm{C})$, where $T_\mathrm{C}$ is the SSB continuum. The derived integrated opacities are then used to estimate column densities by means of the non-LTE (non-local thermodynamic equilibrium) [RADEX]{} code (Sect. \[Section: RADEX\]). As a third method, we have used XCLASS (Sect. \[Bhaswatis method\]) which, in contrast to [RADEX]{}, assumes fixed excitation temperatures, to obtain column densities in each velocity component.
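The step from an observed DSB spectrum to an opacity profile can be sketched as follows. This is our own minimal illustration under the stated assumption of a unit sideband gain ratio (all variable names are ours):

```python
# Minimal sketch: deriving tau(v) = -ln(T_line / T_C) from a DSB spectrum,
# where the SSB continuum T_C is half the observed DSB continuum for a
# sideband gain ratio of unity.
import numpy as np

def opacity(t_a_dsb, t_c_dsb, sideband_ratio=1.0):
    """Opacity from an absorption spectrum T_A (line + continuum, DSB scale)."""
    t_c_ssb = t_c_dsb * sideband_ratio / (1.0 + sideband_ratio)  # = T_C/2 here
    # Absorption removes signal only from the signal sideband; the image
    # sideband continuum (equal to T_C for unit gain ratio) remains:
    t_line = t_a_dsb - t_c_ssb
    return -np.log(np.clip(t_line / t_c_ssb, 1e-6, None))

# Example: a saturated line (cf. HF) bottoms out at T_A = T_C/2, the true
# zero level; half-depth absorption gives tau = ln 2.
t_c_dsb = 6.8                          # observed DSB continuum (K), e.g. 2 x 3.4 K
print(opacity(np.array([6.8, 5.1, 3.4]), t_c_dsb))
```

The clipping simply caps the opacity where the line is saturated and the ratio is consistent with zero.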
In order to be able to use our results as comparisons with chemical models, we need to derive the abundance of the species with respect to the total amount of gas. The difficulty is to obtain reliable estimates of $N$(H) and/or $N$(H$_2$). In paper I, we compared the nitrogen hydride column densities to the total amount of hydrogen in the line-of-sight, $N_\mathrm{H}$=$N$(H)+2$N$(H$_2$). This cannot be done in the present work since the neutral hydrogen absorption shows too broad and overlapping absorption components, over much larger velocity ranges than the nitrogen hydrides. In order to estimate the amount of molecular hydrogen in each (narrow) velocity component, we therefore consider the comparison with CH as an abundance determination. This is done using the CH/H$_2$ correlation (CH/H$_2$=3.5$\times$10$^{-8}$) observed by @2008ApJ...687.1075S in the solar neighbourhood diffuse medium. This correlation is assumed to be valid in regions which are dominated by ultraviolet radiation, where $N$(CH)$\lesssim$2$\times$10$^{14}$ cm$^{-2}$. In this way we can use our own PRISMAS observations of CH and measure abundance ratios obtained with the same instrument.
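With the CH proxy, the abundance determination reduces to simple arithmetic. A sketch of ours, with hypothetical column densities for illustration:

```python
# Sketch: estimating N(H2) in a velocity component from N(CH) via the
# diffuse-gas correlation CH/H2 = 3.5e-8 (Sheffer et al. 2008), then an
# abundance X = N(species)/N(H2). The input values below are hypothetical.
CH_TO_H2 = 3.5e-8  # N(CH)/N(H2) in solar-neighbourhood diffuse gas

def n_h2_from_ch(n_ch: float) -> float:
    """Molecular hydrogen column density (cm^-2) implied by N(CH)."""
    return n_ch / CH_TO_H2

def abundance(n_species: float, n_ch: float) -> float:
    """Abundance of a species with respect to H2, via the CH proxy."""
    return n_species / n_h2_from_ch(n_ch)

# Hypothetical component: N(CH) = 1e13 cm^-2, N(o-NH3) = 6e11 cm^-2
print(f"{abundance(6.0e11, 1.0e13):.1e}")  # → 2.1e-09
```

Note that the method inherits the validity range of the correlation, $N$(CH)$\lesssim$2$\times$10$^{14}$ cm$^{-2}$.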
The physical properties of the bulk of the absorbing gas in our sight-lines are, according to the definitions of , a mixture of diffuse ($n_\mathrm{H}\!\lesssim\!500$ cm$^{-3}$ ) and translucent gas ( and a shielding ). Since the comparison later in this section of the nitrogen hydrides with other species, tracing both high- and low-density gas, points to their presence in the denser parts of the gas, molecular hydrogen is most likely the dominant form of hydrogen in these components.
Assuming that the nitrogen hydrides co-exist with HF and H$_2$O in the line-of-sight material we use the fact that the saturated absorptions of the latter two species reach the zero level, for a sideband ratio of unity, to support our assumption that the foreground absorbing material completely covers the continuum within the beam. Comparison spectra of o-NH$_3$ and o-H$_2$O are shown in Figs. \[Fig: W49N comparison species\]–\[Fig: W31C comparison species\], and with HF, ortho- and para-H$_2$O in Figs. \[fig: comparison of W31C-W49N NH, NH2 vs 572, 572 and 1215, CH\]–\[fig: comparison of W31C-W49N 572 vs CH, water, h2o+\] (on-line material).
[lrccl ]{} Species $x$ & Transition & $T_\mathrm{ex}$ & $N(x)$($\int \tau\, \mathrm{d} v$=1.0 km s$^{-1}$) & Ref.\
& (GHz) & (K) & (cm$^{-2}$) &\
NH & 946.476& 3.9 & 3.65$\times$10$^{13}$ & $a$\
& 974.478&4.0 & 7.47$\times$10$^{12}$ &$a$\
o-NH$_2$ & 952.578& 3.9& 3.62$\times$10$^{12}$ &$b$\
o-NH$_3$ & 572.498 & 2.8 & 3.92$\times$10$^{12}$ & ${b}$\
p-NH$_3$ & 1215.245 &4.9 & 1.65$\times$10$^{13}$ & ${b}$\
NH$^+$ & 1012.524& 4.2 & 6.48$\times$10$^{12}$ &\
CH & 532.724&2.9 & 2.28$\times$10$^{13}$ &$b$\
HNC & 90.664 &2.8 & 1.80$\times$10$^{12}$ &${b}$\
CN & 113.169 &2.8 & 1.90$\times$10$^{13}$ &${b}$\
\[Table: columns opacity one\]
[RADEX]{} {#Section: RADEX}
---------
Observations of absorption lines can provide accurate determinations of molecular column densities if the observed transitions trace all of the populated states. In the case of NH$_3$ we observe two transitions of the ortho (A-symmetry) form, arising in the $J_K=0_0$ and $1_0$ states, and one transition of the para (E-symmetry) form arising in the lower $1_1$ rotation-inversion level. Although the upper $1_1$ inversion level is less metastable than the lower one, it may still be significantly populated. The metastable $2_2$ para levels and $3_3$ ortho levels may also have significant populations in diffuse molecular clouds at low density, , owing to infrared pumping and formation. Because we are directly sensitive to three lower states and lack direct information about the excitation temperatures of the observed transitions, we use the non-equilibrium [[RADEX]{}]{} code to relate the total column density of ammonia to the observed integrated optical depths. The models provide integrated opacities of both observed and unobserved excited levels and quantify possible effects of stimulated emission, chemistry, and electron collisions. Results for one reference model are summarised in Table \[Table: columns opacity one\], where conversion factors for all nitrogen hydrides and comparison species are listed in terms of the total column density $N(x)$ of molecule $x$ that is needed to achieve an integrated optical depth $\int \tau dv = 1.0$ km s$^{-1}$ in the specified transition. In this model we treat ortho- and para-symmetries separately. The integrated optical depth is a sum over all hyperfine structure in the transition.
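Applying these conversion factors is a simple linear scaling, which can be sketched as follows (an illustration only; the dictionary keys are our own shorthand for the transitions listed in Table \[Table: columns opacity one\]):

```python
# Sketch: each RADEX conversion factor is the total column density that
# yields an integrated optical depth of 1.0 km/s in the given transition,
# so N(x) scales linearly with the measured integrated opacity.

# cm^-2 per (km/s) of integrated opacity, reference model of the table
CONVERSION = {
    "NH_946":    3.65e13,
    "NH_974":    7.47e12,
    "oNH2_953":  3.62e12,
    "oNH3_572":  3.92e12,
    "pNH3_1215": 1.65e13,
    "NHp_1013":  6.48e12,
    "CH_533":    2.28e13,
}

def column_density(transition, integrated_tau):
    """N(x) [cm^-2] from an integrated optical depth [km/s]."""
    return CONVERSION[transition] * integrated_tau

n_onh3 = column_density("oNH3_572", 0.5)  # half a km/s of opacity
```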
We have assumed diffuse molecular cloud conditions with a kinetic temperature of 30 K and a density of molecular hydrogen , which corresponds to . The resulting conversion factors for the rotational ground state transitions are not very sensitive to density, temperature or electron collisions. The critical densities of the fundamental rotational transitions of NH, NH$_2$, and NH$_3$ are high, $n_\mathrm{crit} \! \sim$10$^8$ cm$^{-3}$. This means the upper energy levels of these transitions must be excited almost entirely radiatively at the low densities in diffuse gas. We find, however, that electron collisions can be responsible for excitation temperatures of $\sim\!4$ K in the cm-wave inversion transitions rather than the $\lesssim\!3$ K that would be found in conventional excitation analyses at low densities, $n({\rm H}_2)\lesssim 10^3$ cm$^{-3}$. Where collision rates are unknown, we have made guesses scaled in proportion to radiative line strengths. The background radiation field includes both the 2.725 K cosmic microwave background and a model of the Galactic infrared radiation in the solar neighbourhood. The resulting excitation temperatures of the observed sub-millimetre transitions are typically , which are small enough compared to $h\nu/k$ that no correction for emission is required. Note that the lower state of the 572 GHz transition contains $95\%$ of the ortho molecules while the lower state of the 1215.2 GHz transition contains only $43\%$ of the para molecules.
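As a quick check of the statement that no emission correction is needed, one can compare $h\nu/k$ of the observed transitions with the few-K excitation temperatures (a sketch, not part of the published analysis):

```python
# Sketch: h*nu/k of the ground-state transitions is tens of K, i.e. much
# larger than the typical few-K excitation temperatures, so stimulated
# emission is negligible when converting absorption depth to column density.

H_OVER_K = 4.7992e-11  # Planck/Boltzmann constant ratio, K per Hz

def hnu_over_k(freq_ghz):
    """Equivalent temperature h*nu/k [K] of a transition at freq_ghz."""
    return H_OVER_K * freq_ghz * 1e9

t_572 = hnu_over_k(572.498)    # o-NH3 1_0-0_0: ~27 K >> Tex of a few K
t_1215 = hnu_over_k(1215.245)  # p-NH3 2_1-1_1: ~58 K
```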
Method I: Multi-Gaussian fit of the nitrogen hydrides simultaneously {#henriks method}
--------------------------------------------------------------------
We have modelled the observed spectra of NH, , and using Gaussian optical depth profiles. These were generated for each hfs component of each species in separate velocity intervals, and were made to fit the observations under the condition that the LSR velocity and width in each velocity component must be the same for all molecules. This is justified by assuming that all species co-exist in the same velocity ranges and therefore show absorption at the same velocities. This assumption is supported by the striking similarities of the strongest NH, and absorptions, despite the complicated hyperfine structure patterns of NH and , shown in normalised comparison spectra in Fig. \[fig: comparison of W31C-W49N NH, NH2 vs 572, 572 and 1215, CH\] (on-line material). We note that a good fit is therefore an indication that this assumption is valid.
The observed line profiles are thus modelled by Gaussian components for both the hfs components and all velocity components according to $$\frac{T_\mathrm{A}}{T_\mathrm{C}} = \exp\biggl[-\sum_{i=1}^{N_{v}} \sum_{j=1}^{N_\mathrm{hfs}}
\tau_\mathrm{hfs}(j) \, \tau_0(i) \, \exp\biggl(-4 \ln(2)\,\biggl[\frac{v-v_\mathrm{0}(i)-v_\mathrm{hfs}(j)}
{\Delta v}
\biggr]^2\biggr) \biggr]\ ,$$ where $N_v$ is the number of modelled velocity components (the same for all species), $N_\mathrm{hfs}$ is the number of hfs components (which differs between species), $\tau_\mathrm{hfs}$ is the theoretical relative line strength of each hfs component, $\tau_0$ is the opacity of the strongest hfs component in each velocity component, $v_\mathrm{0}$ is the LSR velocity of each velocity component, $v_\mathrm{hfs}$ is the relative velocity offset of each hfs component with respect to the strongest one, and $\Delta v$ is the FWHM (full width at half maximum). We assume that for the hfs components of NH and o-NH$_2$, since the many overlapping velocity components prevent a check of the relative intensities of these components.
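A minimal numerical sketch of this opacity model (our own illustration, with one hypothetical velocity component and a trivial single-component "hfs pattern") could look like:

```python
import numpy as np

def normalized_spectrum(v, comps, hfs, dv_fwhm):
    """T_A/T_C from summed Gaussian opacity profiles.

    v        : LSR velocity grid [km/s]
    comps    : list of (tau0, v0) per velocity component
    hfs      : list of (relative_strength, velocity_offset) per hfs component
    dv_fwhm  : FWHM shared by all components [km/s]
    """
    tau = np.zeros_like(v)
    for tau0, v0 in comps:
        for s, dv in hfs:
            tau += s * tau0 * np.exp(-4 * np.log(2) * ((v - v0 - dv) / dv_fwhm) ** 2)
    return np.exp(-tau)  # normalised absorption spectrum

# One made-up component at 39.5 km/s with peak opacity 1.0 and FWHM 1.1 km/s
v = np.linspace(30.0, 50.0, 401)
spec = normalized_spectrum(v, comps=[(1.0, 39.5)], hfs=[(1.0, 0.0)], dv_fwhm=1.1)
```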
We have used both ortho- and para-NH$_{3}$, which are not significantly complicated by hfs splitting, to determine the minimum number of velocity components needed, and also to set up reasonable initial guesses for the line properties, necessary for the fitting procedure to converge at all. The fitting was done one velocity component at a time in a loop until all fits had converged. The fits are most likely not unique. Provided that the fits are good and that we use the same velocity parameters for all species, this is, however, not considered important, as we do not ascribe any physical meaning to the individual Gaussian components.
The results from Method I are peak opacities, line widths and centre velocities for each velocity component and species. The integrated opacity in each velocity component was assumed to be well represented by $\int\tau \,\mathrm{d} v = 1.06 \,\Delta v \, \tau_\mathrm{peak}$. The non-detections of NH$^+$ are used to put upper limits on its integrated opacities (3$\sigma$). We then used the conversion factors in Table \[Table: columns opacity one\] to calculate the column densities tabulated in Table \[Table: Method I resulting ratios and columns\]. Typical errors in the G10.6$-$0.4 resulting column densities for NH, o-NH$_2$ and o-NH$_3$ are between 7 and 15%. In the $v_\mathrm{LSR}$=45 km s$^{-1}$ component, the errors are 22% for all three species, and 23% in the $v_\mathrm{LSR}$=39 km s$^{-1}$ component for NH. The errors in W49N vary between 8 and 11% for all three species, except in the 33 km s$^{-1}$ component where the uncertainties are 37, 26 and 31% for NH, o-NH$_2$ and o-NH$_3$, respectively. The errors include calibration uncertainties (see Sect. \[OPR ammonia\]) as well as uncertainties from the fitting procedure.
In addition to the nitrogen hydrides, we also modelled the PRISMAS CH 532 GHz transition, and the CN 113 GHz and HNC 91 GHz transitions observed with the IRAM 30 m antenna [@2010Godard]. These species were not included in the fitting procedure described above. Instead we have used the resulting velocity parameters from the fits of the nitrogen hydrides ($v_\mathrm{LSR}$ and line widths) since we want to investigate possible correlations and make abundance determinations only in the same parts of velocity space as the nitrogen hydrides. The resulting CH column densities are found in Table \[Table: Method I resulting ratios and columns\], and the CN and HNC column densities in the on-line Table \[Table: Method I, III CN and HNC columns\].
Figures \[Fig: W49N N-hydrides normalised\] and \[Fig: W31C N-hydrides normalised\] show the fits and residuals of the nitrogen hydrides together with CH. The resulting fits reproduce the observations well for all species except CH, suggesting that CH exists in more widely distributed gas than the nitrogen hydrides.
The CH column densities are then used together with the relation \[CH\]/\[H$_2$\]=3.5$\times$10$^{-8}$ [@2008ApJ...687.1075S] to estimate the abundances listed in Table \[Table: Method I resulting ratios and columns\].
[lc ccc ccc c llll ]{}\
$v_\mathrm{LSR}$ & $\Delta v$ & $N$(NH) & $N$(oNH$_2$) & $N$(oNH$_3$) &$N$(NH$^+$) &$N$(CH) & ${{\rm NH}\over{{\rm oNH}_3}}$ &${{\rm oNH}_2\over{{\rm oNH}_3}}$ & $X$(NH) & $X$(oNH$_2$) & $X$(oNH$_3$) & $X$(NH$^+$)\
(kms$^{-1}$) &(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$)\
33.3 & 2.1 & 1.5e12 & 2.1e12 & 4.6e11 &$\lesssim$3.3e11 &1.6e13& 3.3 & 4.6 & 3.3e-9& 4.6e-9 & 1.0e-9 & $\lesssim$7.2e-10\
39.5 & 1.1 & 1.0e13 & 7.3e12 & 4.3e12 &$\lesssim$2.4e11 &2.3e13& 2.3 & 1.7 & 1.5e-8& 1.1e-8 & 6.5e-9 & $\lesssim$3.7e-10\
59.2 & 2.9 & 9.2e12 & 4.1e12 & 2.1e12 &$\lesssim$7.5e11 &6.0e13& 4.4 & 2.0 & 5.4e-9& 2.4e-9 & 1.2e-9 & $\lesssim$4.4e-10\
62.7 & 2.3 & 8.0e12 & 3.9e12 & 2.2e12 &$\lesssim$6.3e11 &5.1e13& 3.6 & 1.8 & 5.5e-9& 2.7e-9 & 1.5e-9 & $\lesssim$4.3e-10\
Total: &…&2.9e13 & 1.7e13 & 9.1e12 &$\lesssim$2.0e12& 1.5e14& 3.2 & 1.9 & 6.7e-9 & 4.1e-9 & 2.1e-9 & $\lesssim$4.6e-10\
Mean: &…&7.2e12& 4.4e12 & 2.3e12 &$\lesssim$4.9e11& 3.8e13& 3.2 & 1.9 & 6.7e-9 & 4.1e-9 & 2.1e-9 & $\lesssim$4.6e-10\
Median: &…&8.6e12& 4.0e12 & 2.2e12 &$\lesssim$4.8e11& 3.7e13& 4.0 & 1.9 & 8.1e-9 & 3.8e-9 & 2.0e-9 & $\lesssim$4.5e-10\
\
$v_\mathrm{LSR}$ & $\Delta v$ & $N$(NH) & $N$(oNH$_2$) & $N$(oNH$_3$) &$N$(NH$^+$) &$N$(CH) & ${{\rm NH}\over{{\rm oNH}_3}}$ &${{\rm oNH}_2\over{{\rm oNH}_3}}$ & $X$(NH) & $X$(oNH$_2$) & $X$(oNH$_3$) & $X$(NH$^+$)\
(kms$^{-1}$) &(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$)\
16.2 & 1.7 & 2.0e13 & 9.6e12 & 4.2e12 &$\lesssim$4.7e11 & 5.8e13 & 4.8 & 2.3 & 1.2e-8 & 5.8e-9 & 2.5e-9 & $\lesssim$2.8e-10\
18.8 & 1.5 & 6.1e12 & 3.1e12 & 8.7e11 &$\lesssim$4.1e11 &2.6e13 & 7.0 & 3.6 & 8.2e-9 & 4.2e-9 & 1.2e-9 & $\lesssim$5.5e-10\
22.1 & 4.4 & 1.6e13 & 6.9e12 & 1.8e12 & $\lesssim$1.2e12 &7.7e13 & 8.9 & 3.8 & 7.3e-9 & 3.1e-9 & 8.2e-10 & $\lesssim$5.5e-10\
22.8 & 1.0 & 1.0e13 & 3.6e12 & 1.5e12 & $\lesssim$2.7e11 & 7.1e12 & 6.7 & 2.4 & 4.9e-8 & 1.8e-8 & 7.4e-9 & $\lesssim$1.3e-9\
24.8 & 3.1 & 6.2e12 & 4.3e12 & 1.4e12 &$\lesssim$8.4e11 & 3.7e13 & 4.4 & 3.1 & 5.9e-9 & 4.1e-9 & 1.3e-9 & $\lesssim$8.0e-10\
27.8 & 1.9 & 2.7e13 & 1.2e13 & 7.3e12 &$\lesssim$5.0e11 & 7.4e13 & 3.7 & 1.6 & 1.3e-8& 5.7e-9 & 3.5e-9 & $\lesssim$2.4e-10\
29.9 & 1.7 &2.9e13 & 1.3e13 & 7.9e12 & $\lesssim$4.6e11 &5.6e13 & 3.7 & 1.6 & 1.8e-8 & 8.1e-9 & 4.9e-9 & $\lesssim$2.9e-10\
32.1 & 1.5 &9.0e12 & 3.5e12 & 1.3e12 & $\lesssim$4.1e11 &3.5e13 & 6.9 & 2.7 & 9.0e-9 & 3.5e-9 & 1.3e-9 & $\lesssim$4.1e-10\
36.1 & 4.0 & 2.0e13 & 6.5e12 & 3.2e12 & $\lesssim$1.1e12 &7.8e13 & 6.3 & 2.0 & 9.0e-9 & 2.9e-9 & 1.4e-9 & $\lesssim$4.9e-10\
39.0 & 1.2 & 1.1e13 & 3.0e12 & 4.7e11 & $\lesssim$3.1e11 & 3.3e13 & 23 & 6.4 & 1.2e-8 & 3.2e-9 & 5.0e-10 & $\lesssim$3.3e-10\
40.9 & 1.7 &3.0e13 & 1.1e13 & 4.1e12 & $\lesssim$4.7e11 &5.4e13 & 7.3 & 2.7 & 1.9e-8 & 7.1e-9 & 2.7e-9 & $\lesssim$3.0e-10\
45.0 & 1.4 & 3.0e12 & 1.1e12 & 5.9e11 &$\lesssim$3.6e11 & 1.1e13 & 5.1 & 1.9 & 9.6e-9 & 3.5e-9 & 1.9e-9 & $\lesssim$1.1e-9\
Total: & …&1.9e14 & 7.8e13 & 3.5e13 &$\lesssim$6.8e12 & 5.5e14 & 5.4 & 2.2 & 1.2e-8 & 5.0e-9 & 2.2e-9 & $\lesssim$4.4e-10\
Mean: &…&1.6e13& 6.5e12 & 2.9e12 &$\lesssim$5.7e11 &4.6e13 & 5.4 & 2.2 & 1.2e-8 & 5.0e-9 & 2.2e-9 & $\lesssim$4.4e-10\
Median: &…&1.4e13& 5.4e12 & 1.7e12 &$\lesssim$4.7e11 &4.6e13 & 8.2 & 3.3 & 1.0e-8 & 4.2e-9 & 1.3e-9 & $\lesssim$3.6e-10\
\[Table: Method I resulting ratios and columns\]
Method II: ortho-NH$_3$ and CH as templates for other species {#massimos method}
-------------------------------------------------------------
This method directly compares the opacity line profile of the non-saturated o-NH$_3$ line at 572 GHz with the line profiles of NH and o-NH$_2$, convolved with the respective hyperfine structures. This assumes that all species co-exist in the same velocity ranges, and that the opacity ratios depend only on their column density ratios, which may vary from one velocity component to another. No opacity profile is assumed for a given absorbing cloud so, unlike Methods I and III, we do not attempt to decompose the velocity structure of the template line into a number of Gaussian curves. We perform, instead, simple cuts of the template profile into large velocity bins.
The hfs of the o-NH$_3$ line is not resolved in the WBS data and introduces only a small broadening of the lines that is dealt with by smoothing the NH and o-NH$_2$ spectra. Before comparing the opacity profiles, we mask out those portions of the spectra where a simple scaling between template and target is not expected, such as the velocity range associated with the background sources, and remove the emission line wing by Gaussian fitting. We note that the removal of the line wing introduces very large uncertainties towards W49N around 33 km s$^{-1}$, where a weak absorption component is visible. We therefore disregard the Method II results in this velocity component.
The velocity bins into which the o-NH$_3$ opacity profile is split are chosen to be significantly larger than the original resolution, but narrow enough to separate the most prominent velocity features (typically a few km s$^{-1}$, see Table \[Table: Model II ratios results\]). Each velocity bin of the o-NH$_3$ template is then convolved, channel by channel, with the NH and o-NH$_2$ hfs components. We thus obtain an intermediate opacity model of how the NH and o-NH$_2$ spectra would appear if absorption were limited to the selected velocity bin. The observed NH and o-NH$_2$ opacities are then fitted by the IDL Least Squares Fitting routine MPFIT [@2009ASPC..411..251M] with a linear combination of the intermediate models. The output of the fits consists of the opacity ratios, between the o-NH$_3$ and the modelled NH and o-NH$_2$ spectra, in the various velocity bins.
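A toy sketch of this template-fitting scheme (our own illustration, with a fabricated template, an invented two-component hfs pattern, and plain least squares in place of MPFIT):

```python
import numpy as np

# Sketch of Method II: cut the template opacity into velocity bins,
# convolve each bin with the target's hfs pattern, then fit the observed
# target opacity with a linear combination of the per-bin models.

def hfs_convolve(tau_template, v, hfs):
    """Spread a template opacity over hfs components (strength, offset)."""
    out = np.zeros_like(tau_template)
    for s, off in hfs:
        # shift the template by the hfs velocity offset, scale by strength
        out += s * np.interp(v, v + off, tau_template, left=0.0, right=0.0)
    return out

v = np.linspace(-10.0, 10.0, 201)
template = np.exp(-0.5 * (v / 1.5) ** 2)   # made-up o-NH3-like template
hfs = [(0.6, 0.0), (0.4, 3.0)]             # hypothetical hfs pattern

bins = [(v < 0.0), (v >= 0.0)]             # two illustrative velocity bins
models = np.array([hfs_convolve(np.where(b, template, 0.0), v, hfs)
                   for b in bins]).T
observed = 2.0 * hfs_convolve(template, v, hfs)  # target = 2x template opacity

ratios, *_ = np.linalg.lstsq(models, observed, rcond=None)  # per-bin ratios
```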
To obtain column densities in each velocity bin, we sum the opacities of all channels of the template transition, and then apply the [RADEX]{} conversion factors from Table \[Table: columns opacity one\]. The resulting abundance ratios are finally used to obtain column densities of the other species. Typical errors (including both calibration and fitting uncertainties) are except for the velocity bin in G10.6$-$0.4 and the velocity bin in W49N which both have an uncertainty of approximately 25%.
The main advantage of this method is that a small number of free parameters is needed to obtain a reasonable fit since it uses, as much as possible, the information carried by the template spectrum. The model also avoids the problem of non-uniqueness of Gaussian decompositions of the absorption profiles and allows a straightforward comparison of column densities evaluated for exactly the same velocity intervals. On the other hand, such rigidity in the definition of the velocity bins does not allow us to reliably retrieve the column density ratios if the absorption arises from very different gas volumes, resulting in significant velocity shifts of the absorption features between the two species. We note that the fit quality, estimated with $\int |\mathrm{residuals}|\,\mathrm{d} v \, / \int \tau\,\mathrm{d} v$, is a good indicator of such cases. Note also that residual baseline structures, like those produced by standing waves or by an imperfect removal of ammonia emission line wings, may introduce artificial discrepancies between the template spectra and the modelled species.
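The quoted fit-quality measure can be computed as follows (a sketch with made-up opacity arrays):

```python
import numpy as np

# Sketch of the fit-quality figure of merit used in the text:
# integral(|residuals|) dv / integral(tau_obs) dv.
# Values near zero mean the template reproduces the target profile well.

def fit_quality(tau_obs, tau_model, dv):
    """Smaller is better; ~0 for a perfect template match."""
    resid = np.sum(np.abs(tau_obs - tau_model)) * dv
    total = np.sum(tau_obs) * dv
    return resid / total

# Made-up five-channel opacity profiles, channel width 0.5 km/s
q_good = fit_quality(np.array([0.0, 1.0, 2.0, 1.0, 0.0]),
                     np.array([0.0, 1.1, 1.9, 1.0, 0.0]), dv=0.5)
```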
![*W49N: Fits of Method I* and the SSB normalised spectra of the nitrogen hydrides and CH. The black lines are the observations, and the coloured lines are the model fits. In the bottom the residuals are plotted on top of each other with respective colour. The three transitions used in Method I to determine the $v_\mathrm{LSR}$ and line widths of the velocity components are marked in bold.[]{data-label="Fig: W49N N-hydrides normalised"}](18686Fg06.eps)
![*G10.6$-$0.4: Fits of Method I* and the SSB normalised spectra of the nitrogen hydrides and CH. Same notation as in Fig. \[Fig: W49N N-hydrides normalised\].[]{data-label="Fig: W31C N-hydrides normalised"}](18686Fg07.eps)
In addition to the above modelling, we use the o-NH$_3$ as a template for CN and HNC [@2010Godard], and also use the deconvolved CH spectrum at 532.724 GHz as a template to model all three nitrogen hydrides, in order to obtain an estimate of the abundance with respect to molecular hydrogen.
The resulting column densities and relative abundances in different velocity bins are given in Table \[Table: Model II ratios results\], except for the CN and HNC results which are found in on-line Table \[Table: Method II CN and HNC columns\].
The model fits compared to the normalised spectra in both sources are shown in Figs. \[Fig: W49N method I NH and NH2 from NH3\]–\[Fig: W31C method I NH, NH2, NH3 from CH\] (on-line material). The fit qualities of the comparisons between o-NH$_3$, NH and o-NH$_2$ are excellent, except in the narrow and deep +39 km s$^{-1}$ component towards W49N, which was very difficult to model. The modelling of the nitrogen hydrides using CH as a template shows, on the other hand, rather low fit qualities, suggesting that the assumption that the nitrogen hydrides and CH trace exactly the same gas is less justified, which is also supported by Method I.
Method III: XCLASS {#Bhaswatis method}
------------------
Finally, we have generated synthetic spectra separately for NH, o-NH$_2$, o-NH$_3$, CH, CN, and HNC using the software XCLASS[^10] [created by P. Schilke, and described by @2005ApJS..156..127C and references therein], considering for each species all hfs components and velocity components under the assumption of a fixed excitation temperature. The synthetic spectra generated with XCLASS were fitted to the observed spectra using MAGIX, an iterating engine that allows automatic minimization to constrain model parameters. XCLASS accesses the CDMS and JPL molecular databases, and models each molecule with the following free parameters: source size, temperature, column density, line width and velocity offset relative to the systemic velocity of the source, and derives the column density corresponding to the different velocity components detected in absorption and emission in the observed spectra. The source size refers to the relative size of the absorbing cloud vs. the continuum source, which is assumed to be fully covered within the beam. We have assumed a Gaussian profile for each hfs component, and an excitation temperature of 4 K. Since Method III explicitly assumes a fixed excitation temperature for the estimate of column densities, it is significantly different from Methods I and II.
Figures \[Fig: w49n guass fits n-hydrides method III\] and \[Fig: w31c guass fits n-hydrides method III\], in the on-line material, show the model fits together with the observed SSB normalised spectra. Resulting column densities and abundance ratios of the nitrogen hydrides and CH are found in the on-line Table \[Table: Method III resulting ratios and columns\], and the CN and HNC column densities in on-line Table \[Table: Method I, III CN and HNC columns\].
Comparison of results {#Subsection: comparison of results}
---------------------
The results from all three methods agree very well, especially considering that the velocity bins used are not the same.
The total $N$(CH) from all methods are lower than found by , which is expected since we only use parts of the CH absorptions. Comparisons of CH in different velocity bins give much more similar results in both sources. Since we use CH as a tracer of H$_2$, we also compare our estimate of $N$(H$_2$) with towards W49N: for the +39 km s$^{-1}$ component they suggest and they also quote which gives using a standard conversion factor of 10$^{-6}$. They also suggest column densities towards W49N between and from $^{13}$CO observations. All these suggestions for $N$(H$_2$) are several times higher than inferred from our CH observations. This is, however, not surprising because the $^{13}$CO conversion factor is rather uncertain at such low column densities. @2004ApJ...605..247P also obtain higher $N$(H$_2$) than we do towards W49N. The total amount of $N$(H$_2$) in the sight-lines towards our sources can also be compared with estimates from the K-band extinction : and 7.5$\times$10$^{21}$ cm$^{-2}$ towards W49N and G10.6$-$0.4, respectively.
The resulting NH mean abundances of all three methods in all velocity components are 5.5$\times$10$^{-9}$ and 1.1$\times$10$^{-8}$ in W49N and G10.6$-$0.4, respectively. This can be compared to the average $N$(NH)/$N$(H$_2$) value of 3$\times$10$^{-9}$ in diffuse and translucent sight-lines found by @2009MNRAS.400..392W. The mean values of o-NH$_2$ are 3.1$\times$10$^{-9}$ and 4.5$\times$10$^{-9}$ in W49N and G10.6$-$0.4, respectively, and 1.5$\times$10$^{-9}$ and 1.9$\times$10$^{-9}$ for o-NH$_3$.
Our upper limits on the NH$^+$ abundances, relative to molecular hydrogen, are similar in both sources with mean value $N$(NH$^+$)/$N$(H$_2$) $\lesssim$4$\times$10$^{-10}$. This is orders of magnitude lower than previous findings from ultraviolet observations towards $\rho$ Oph , but still much higher than the predictions from the chemical models presented in Paper I: approximately 10$^{-13}$–10$^{-14}$. The upper limits of the NH$^+$ abundance compared to NH are $\lesssim$2–14% in the different velocity components (except in the uncertain velocity feature around +33 km s$^{-1}$ in W49N), with a mean ratio of $\lesssim$6%. This is in contrast to the behaviour of CH$^+$, and marginally to that of OH$^+$, with respect to the corresponding neutrals. The CH$^+$ radical reaches column densities comparable to those of CH [e.g. @1995ApJS...99..107C], and $N$(OH$^+$) is a factor of 30 below $N$(OH) in the visible data [@2010ApJ...719L..20K].
Since our different approaches to deconvolve the velocity components for the observed species agree well, we estimate that the uncertainties in our derived absolute column densities are $\lesssim$20–50%. The column densities for CH have larger uncertainties, but we estimate that the absolute $N$(CH) results are correct within a factor of approximately two. Furthermore, the scatter in the CH to H$_2$ relationship is estimated by @2008ApJ...687.1075S to be a factor of 1.6. In summary, we believe that the abundance determinations from CH are correct within a factor of a few. The abundance ratios of the nitrogen hydrides, relative to each other, are on the other hand more accurately determined.
The results of all three methods confirm the conclusion from our analysis of G10.6$-$0.4 in Paper I that NH is more abundant than NH$_2$ and NH$_3$ by a factor of approximately , assuming the high temperature ortho-to-para limits of three and one, respectively. Note that the mean abundances of are similar towards both sources, $\sim$2$\times$10$^{-9}$, in contrast to , and, in particular, NH, which have higher mean abundances towards G10.6$-$0.4 than towards W49N.
Comparison and correlations of species {#Comparison and correlations of species}
--------------------------------------
Figures \[Fig: W49N comparison species\] and \[Fig: W31C comparison species\] show five SSB normalised comparison spectra, and in the on-line material, Figs. \[fig: comparison of W31C-W49N NH, NH2 vs 572, 572 and 1215, CH\]–\[fig: comparison of W31C-W49N 572 vs CH, water, h2o+\], we show additional comparisons of the nitrogen hydrides with the deconvolved CH absorption, the H 21 cm line, H$_2$O$^+$, OH$^+$, HNC, CN, ortho- and para-H$_2$O, HF, and HCO$^+$.
The CH spectra show absorption over a much wider range of velocities in both sources than the nitrogen hydrides. Ammonia largely follows the CH absorptions, but there are also large differences in some parts of the spectra where there is no or very little absorption of NH$_3$, while CH shows a much broader and stronger absorption, for instance at , and towards W49N. Ammonia seems to trace CH slightly better towards G10.6$-$0.4 than W49N. This comparison suggests that CH exists in both relatively low and high density gas, while the nitrogen hydrides only exist in the parts of the interstellar gas with a relatively high density.
These differences are even more pronounced when comparing with neutral hydrogen as observed by the VLA $\lambda$21 cm absorption by @2003ApJ...587..701F, which shows absorption over a very wide range of velocities with a resolution of 2.5 km s$^{-1}$. The much more extended velocity space coverage of the HI absorption is expected since not all foreground clouds have molecular gas.
The ammonia absorption also follows similar trends to HCO$^+$ $J$=1$\leftarrow$0 [@2010Godard] and H$_2$O, which is known to trace clouds of high molecular fraction . HCO$^+$ and H$_2$O seem to have a rather constant abundance ratio in these sight-lines, in contrast to their abundances with respect to ammonia. The comparison of ammonia to H$_2$O$^+$ and OH$^+$, which mostly reside in lower density gas containing considerable amounts of atomic hydrogen , shows no similarities.
When we compare ammonia with CN and HNC we find very similar absorption patterns. The CN single hyperfine component and the HNC $J$=1$\leftarrow$0 line were observed with the IRAM 30 m antenna [@2010Godard]. Note that HNC has three hfs components which, however, lie very close in velocity (0.2 and 0.5 km s$^{-1}$), which mostly leads to a broadening of the absorption features. These species mainly reside in denser gas than CH, and both are closely connected to the NH and NH$_2$ chemistry .
### Column density correlations
In order to quantitatively examine abundance correlations of the nitrogen hydrides, CH, HNC and CN, we show column density plots in Figs. \[fig: column density plots 1\]–\[fig: column density plots 2\]. We have here plotted the results from Method I. The parameters of the linear least square fits to the data are found in the figures in addition to the correlation coefficient $R$. In the on-line material we also show column density plots with all three methods (Figs. \[Fig: column density plots 1\]–\[Fig: column density plots 4\]).
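The linear least-squares fits and correlation coefficients can be reproduced schematically as follows (a sketch; for illustration, the arrays hold the four W49N Method I values of $N$(NH) and $N$(o-NH$_2$) from Table \[Table: Method I resulting ratios and columns\]):

```python
import numpy as np

# Sketch of the column density correlation analysis: a linear
# least-squares fit N(y) = a*N(x) + b, plus the Pearson correlation
# coefficient R, computed per pair of species.

def linear_fit(nx, ny):
    """Slope, intercept and Pearson R for column densities of two species."""
    a, b = np.polyfit(nx, ny, 1)
    r = np.corrcoef(nx, ny)[0, 1]
    return a, b, r

nx = np.array([1.5e12, 1.0e13, 9.2e12, 8.0e12])  # N(NH), W49N, Method I
ny = np.array([2.1e12, 7.3e12, 4.1e12, 3.9e12])  # N(o-NH2), W49N, Method I
a, b, r = linear_fit(nx, ny)
```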
Column density plots of the nitrogen hydrides are shown on the left hand side in Fig. \[fig: column density plots 1\]. We note that the scatter is larger for $N$(NH) vs. than for vs. , which show a rather tight correlation. This may indicate that NH does not entirely exist in the same gas as and . There is also a possibility that the NH and o-NH$_3$ correlation may only be valid for $N$()$\lesssim$5$\times$10$^{12}$ cm$^{-2}$. $N$(NH) towards G10.6$-$0.4 appears to increase up to a maximum of $\sim$3$\times$10$^{13}$ cm$^{-2}$. $N$() also shows a slight tendency towards this behaviour towards G10.6$-$0.4, with a maximum of $\sim$1.2$\times$10$^{13}$ cm$^{-2}$. This possible “chemical saturation” corresponds to a few $\mathrm{A}_{V}$ (estimated from our CH measurements and ). This may be explained by a more efficient ammonia production at higher column densities, or by NH, and , somehow being consumed in the ammonia formation.
On the right hand side in Fig. \[fig: column density plots 1\] we show column density plots of the nitrogen hydrides vs. CH. The spread of the data points reflects the difficulty of finding a velocity structure common to hydrides and CH.
In Fig. \[fig: column density plots 2\] we show column density plots of the nitrogen hydrides vs. CN on the left and vs. HNC on the right. All three nitrogen hydrides show linear correlations with both CN and HNC. In dark cloud chemistry, CN and HNC are closely related to NH and NH$_2$ through the reactions of , , and [@1990MNRAS.246..183N]. @2009MNRAS.400..392W also found a slightly better correlation of NH with CN than with CH, and no correlations with species such as CH$^+$.
Ortho-to-para ratio of NH$_3$ {#OPR ammonia}
=============================
There are two ways to produce molecules in interstellar space: in the gas-phase or on grain surfaces. If ammonia is formed in the gas-phase by highly exoergic processes, its ortho-to-para ratio is expected to be very close to the statistical equilibrium value of 1.0 (the nuclear spin statistical weights are 4 and 2 for ortho and para, respectively, but the number of para states is, on the other hand, almost a factor of two larger). If ammonia is instead formed at temperatures lower than 40 K on cold dust grains, and then desorbed when the grains are heated above 100 K, the OPR may differ from unity since the lowest ortho level lies 22 K below the lowest para level. If no conversion processes between the two symmetries exist, the OPR of ammonia is expected to increase *above* unity at low formation temperatures.
Prior to *Herschel*, ortho-to-para ratios had been derived from measurements of inversion transitions: the $(1,1)$ and $(2,2)$ transitions involving para states, and the $(3,3)$ and $(6,6)$ transitions probing highly excited, metastable ortho states. For example, @1999ApJ...525L.105U found OPR=1.3–1.7 in the L 1157 outflow from observations of six inversion lines, $(J,K)=(1,1)$ to $(6,6)$. Similarly, @2009PASJ...61.1023N inferred an even higher value of OPR=1.5–3.5 in the central molecular zone of the Galaxy.
Using *Herschel*-HIFI observations of the fundamental rotational transitions of both ortho- and para-ammonia, it is for the first time possible to estimate the ammonia OPR in cold and diffuse interstellar gas of low excitation. Our results, however, point to the surprising result of an OPR *lower* than unity.
This is shown in Figs. \[Fig: W49N OPR NH3\] and \[Fig: W31C OPR NH3\] where the upper panels show the normalised 1$_0$–0$_0$ ortho-NH$_3$ and 2$_1$–1$_1$ para-NH$_3$ spectra, and the lower panels show the corresponding optical depth ratios for absorptions larger than 3$\sigma$ as a function of LSR velocity towards both sources (channel widths are 0.26 km s$^{-1}$). An ortho-to-para optical depth ratio of 4.2 corresponds to a column density ratio of unity using the [RADEX]{} conversion factors in Table \[Table: columns opacity one\]. The resulting column density ratios are given by the right hand y-axis in both figures. The para line has lower S/N than the ortho line in both sources and several velocity components are weak. Still, most column density ratios are found to be below unity within the uncertainties.
![[*W*49N.]{} *(Upper)* WBS spectra of ortho-NH$_3$ and para-NH$_3$ normalised to SSB continuum. *(Lower)* The optical depth ratios are shown for absorptions larger than 3$\sigma$ as a function of LSR velocity. The column density ratios (OPR), estimated with [RADEX]{}, are given by the right hand y-axis. The horizontal dashed line marks an ortho-to-para optical depth ratio of 4.2 corresponding to a column density ratio of unity. []{data-label="Fig: W49N OPR NH3"}](18686Fg22.eps)
![[*G*10.6$-$0.4 (W31C).]{} Notation as in Fig. \[Fig: W49N OPR NH3\]. []{data-label="Fig: W31C OPR NH3"}](18686Fg23.eps)
We have also used Methods I and II to estimate the OPR. Method I gives and in the strongest velocity components towards W49N (+39 km s$^{-1}$ component) and G10.6$-$0.4 (+16, 28, 30 and 41 km s$^{-1}$ components), respectively. Method II gives similar results towards G10.6$-$0.4 in the velocity bins , , and , but a lower value towards W49N in $v_\mathrm{LSR}$=36–42 km s$^{-1}$, 0.4$\pm$0.2.
The errors are dominated by the noise and the uncertainty in the sideband gain ratio. We have used the errors in sideband gain ratios stated on the *Herschel* internet site (4% for ortho and 6% for para). There is also an additional error arising from the rather large sideband separation of 12 GHz, which means that the assumption of equal continuum temperature is not fully valid. This effect mimics a sideband gain ratio different from unity and is also taken into account (4 and 2% for ortho and para, respectively, estimated by $T_\mathrm{C,L}/T_\mathrm{C,U} = (\nu_\mathrm{L}/\nu_\mathrm{U})^\beta$ with $\beta = 2$). The errors from the noise are estimated by $\delta \tau$=$\exp(\tau) \times \delta I /I_0$, and the errors from the sideband gain ratio and differences in the continuum in the sidebands are estimated by $\delta \tau = \epsilon - \ln(1 + \epsilon \exp(\tau))$, where $\epsilon$ is both errors added in quadrature. The derivation of this formula assumes that the depth of the absorption line is correct, which means that $T_\mathrm{C}$(SSB)–$T_\mathrm{A}$ is conserved but the continuum level is not. The noise and calibration errors are finally added in quadrature. The error estimates for Methods I and II add the respective method uncertainties to the calibration errors.
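The error budget above can be sketched numerically as follows. The input values below (optical depth, fractional noise, gain and continuum errors) are illustrative only, not the actual measurement uncertainties:

```python
import math

def dtau_noise(tau, dI_over_I0):
    # Noise contribution: delta_tau = exp(tau) * delta_I / I_0
    return math.exp(tau) * dI_over_I0

def dtau_calibration(tau, eps):
    # Sideband-gain and continuum-slope contribution:
    # delta_tau = eps - ln(1 + eps * exp(tau)),
    # where eps is the two calibration errors added in quadrature.
    return eps - math.log(1.0 + eps * math.exp(tau))

def dtau_total(tau, dI_over_I0, gain_err, cont_err):
    # Noise and calibration terms are finally added in quadrature;
    # the calibration term enters via its magnitude.
    eps = math.hypot(gain_err, cont_err)
    return math.hypot(dtau_noise(tau, dI_over_I0), dtau_calibration(tau, eps))
```

For instance, an optical depth of 1.0 with 5% fractional noise and 4% each for the gain and continuum errors gives a total uncertainty of about 0.16 in optical depth.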
Some previous measurements of ammonia have resulted in an OPR lower than unity, although not in diffuse gas. In Orion KL @1988AA...201..285H obtained OPR(NH$_3$)=0.5 from inversion emission lines, and @1985AA...146..134H found OPR($^{15}$NH$_3$)=0.7. The latter low value was suggested to be caused by an excitation effect: a possible over-abundance of the unobserved $K$=0 state. This explanation cannot be applied in our case, since we actually observe the $K$=0 ortho state. modelled several NH$_3$ inversion transitions in the frame of C-shock models towards the warm and dense SgrB2 envelope, and their best-fit ammonia OPR was $\sim$0.5, also lower than the statistical value of unity. , however, observed 21 high-excitation ortho- and para-ammonia lines, both metastable and non-metastable levels, towards SgrB2 using ISO, and derived an ammonia OPR of unity.
There is, to our knowledge, no previous estimate of OPR(NH$_3$) in *diffuse* gas, and it may very well be very different from the OPR in dense gas. We can, however, compare our strong absorption of the para-NH$_3$ line to the observations by who observed the two lowest inversion lines of para-NH$_3$ towards W49N. Their estimate of is 2.5$\times 10^{12}$ cm$^{-2}$, which is less than half our value, $\sim$6$\times 10^{12}$ cm$^{-2}$. If this column density is correct, it would imply an OPR higher than unity in this velocity component in W49N. In our excitation model, however, the integrated optical depth observed by in the $1_1$ inversion line at $+39$ km s$^{-1}$ implies a in good agreement with our value, since we obtain a higher excitation temperature than the 2.7 K used by for the $1_1$ inversion transition.
We have no clear explanation yet of our surprising result and can only speculate about its origin. Either there are instrumental effects unknown to us, or our assumption that the line-of-sight gas completely covers the background continuum within the beam is incorrect, or there exist physical or chemical processes that affect the OPR in diffuse gas. In general, the fact that interstellar clouds are weak plasmas allows for conversion processes at low temperatures that would not be present in a purely neutral gas. What OPR these processes lead to must be determined by careful modelling. In diffuse molecular clouds of relatively low density, $n({\rm H}_2) \lesssim 10^3$, but relatively high ionisation fraction, $n(e)/n({\rm H}_2)\ga10^{-4}$, the rate of destruction of NH$_3$ by reactions with C$^+$, H$^+$, and H$_3^+$, can be as high as $10^{-9}$ s$^{-1}$ at kinetic temperatures $T\!\sim\!30$ K. The latter two ions can also interchange ortho and para states directly by proton substitution, perhaps at a comparable rate. These rates of destruction and interchange can approach the rates of radiative and collisional excitation out of the metastable levels, which means that the excitation and chemistry of NH$_3$ should be treated together in a self-consistent fashion. Under these conditions, NH$_3$ is a non-equilibrium system and a “spin temperature” derived from an ortho-to-para ratio is not expected to be meaningful.
The NH$_4^+$ ion could also affect the ammonia OPR in a similar manner. The fastest gas-phase formation sequence of ammonia is $$\mathrm{N^+} \xrightarrow{\mathrm{H_2}} \mathrm{NH^+} \xrightarrow{\mathrm{H_2}} \mathrm{NH_2^+} \xrightarrow{\mathrm{H_2}} \mathrm{NH_3^+} \xrightarrow{\mathrm{H_2}} \mathrm{NH_4^+} \xrightarrow{\mathrm{e^-}} \mathrm{NH_3}, \, \mathrm{NH_2},$$ where N$^+$ is formed by cosmic ray ionisation, or by reactions of He$^+$ with N$_2$ or CN, which are formed by neutral-neutral reactions. Whether ammonia is produced on grains or in the gas, its destruction in the gas by protonating ions such as H$_3^+$ also leads to NH$_4^+$, which can have a variety of nuclear spin states depending on the overall nuclear spin of H$_3^+$. The ammonia OPR could then simply reflect the spin states of NH$_4^+$.
In addition, some laboratory evidence exists that dissociative recombination reactions have different rate coefficients depending upon the nuclear spin configuration of the molecular ion.
Observations of the ortho-to-para ratio of NH$_3$ could thus provide a valuable insight into the competing processes of formation and destruction, radiative and collisional excitation, and reactive interchange processes.
Summary {#section summary}
=======
Our spectrally resolved rotational transitions of NH, o-NH$_2$, and ortho- and para-NH$_3$ along the sight-lines towards the high-mass star-forming regions W49N and G10.6$-$0.4 show remarkable similarities of line profiles and abundances. We find similar abundances of all three species and a co-existence in diffuse or translucent interstellar gas with a high molecular fraction. The mean abundance of is towards both sources. The mean ratios from all three methods of the nitrogen hydrides in all velocity components are , and , and 2.0, towards G10.6$-$0.4 and W49N, respectively. This is in sharp contrast to previous observations of the nitrogen hydrides in dark clouds, where the ammonia abundances are found to be $\sim$100 times higher than $X$(NH), and $\sim$10–300 times higher than $X$(NH$_2$). NH and are found to be linearly correlated with , at least for $N$()$\lesssim$5$\times$10$^{12}$ cm$^{-2}$, which corresponds to a few $A_\mathrm{V}$. Upper limits of $N$(NH$^+$) in both sources indicate a $N$(NH$^+$)/$N$(NH) ratio of $\lesssim$2–14%, with a mean of $\lesssim$6%.
Linear correlations are also found for all three nitrogen hydrides with respect to CH, CN and HNC, although CH displays a looser correlation than the latter two species. The nitrogen hydrides also largely follow the absorption pattern in Doppler velocity space of HCO$^+$ and water, a species also known to trace regions of a high molecular fraction.
We have obtained a surprisingly low ortho-to-para ratio of ammonia, , in the strongest velocity components, which is below the high-temperature limit of unity. No clear explanation has been found. More observations are needed, of both the rotational transitions and of the inversion lines with ground-based facilities, to draw firm conclusions about the ammonia OPR in diffuse gas.
We will continue to investigate the absorption lines in the sight-lines towards the other six PRISMAS sources. This will allow an analysis of the nitrogen chemistry at various galactocentric distances. We will also use new Open Time 1 (OT1) *Herschel*-HIFI data of higher excitation lines to analyse the hot core sources, which will be compared and contrasted with the diffuse interstellar gas. The ortho-to-para ratio of NH$_3$ will also be further investigated both in the sources and in the diffuse gas, in addition to the OPR of NH$_2$, for which new OT1 data in four of the PRISMAS sources will be analysed and compared to the ammonia OPR.
The *Herschel* spacecraft was designed, built, tested, and launched under a contract to ESA managed by the Herschel/Planck Project team by an industrial consortium under the overall responsibility of the prime contractor Thales Alenia Space (Cannes), and including Astrium (Friedrichshafen) responsible for the payload module and for system testing at spacecraft level, Thales Alenia Space (Turin) responsible for the service module, and Astrium (Toulouse) responsible for the telescope, with in excess of a hundred subcontractors. HIFI has been designed and built by a consortium of institutes and university departments from across Europe, Canada and the United States under the leadership of SRON Netherlands Institute for Space Research, Groningen, The Netherlands and with major contributions from Germany, France and the US. Consortium members are: Canada: CSA, U.Waterloo; France: CESR, LAB, LERMA, IRAM; Germany: KOSMA, MPIfR, MPS; Ireland, NUI Maynooth; Italy: ASI, IFSI-INAF, Osservatorio Astrofisico di Arcetri- INAF; Netherlands: SRON, TUD; Poland: CAMK, CBK; Spain: Observatorio Astronómico Nacional (IGN), Centro de Astrobiología (CSIC-INTA). Sweden: Chalmers University of Technology - MC2, RSS & GARD; Onsala Space Observatory; Swedish National Space Board, Stockholm University - Stockholm Observatory; Switzerland: ETH Zurich, FHNW; USA: Caltech, JPL, NHSC. CP and JHB acknowledge generous support from the Swedish National Space Board. MdL and MG acknowledge funding by CNES and by the ANR SCHISM project (ANR-09-BLAN-0231-01). T.A.B, B.G. and J.R.G. thank the Spanish MICINN for funding support through grants AYA2009-07304 and CSD2009-00038. H.S.P.M. is very grateful to the Bundesministerium für Bildung und Forschung (BMBF) for financial support aimed at maintaining the Cologne Database for Molecular Spectroscopy, CDMS. This support has been administered by the Deutsches Zentrum für Luft- und Raumfahrt (DLR). 
We also thank the referee Harvey Liszt whose constructive comments led to a significant improvement of the paper.
Herschel observations
=====================
[llrccccc ]{} Source & Species & Frequency & Band & LO-setting$^a$ & Date & OBSID\
& & (GHz)\
W49N & NH$^b$ & 946.476 & 3b & A &2010-04-13 & 1342194700\
& & & & B & & 1342194701\
& & & & C & & 1342194702\
& NH & 974.478 & 4a & A &2010-04-18 & 1342195004\
& & & & B & & 1342195005\
& & & & C & & 1342195006\
& o-NH$_2$ & 952.578 & 3b & A &2010-04-13 & 1342194706\
& & & & B & & 1342194707\
& & & & C & & 1342194708\
& o-NH$_3$ & 572.498 & 1b & A & 2010-04-11 & 1342194517\
& & & & B & & 1342194518\
& & & & C & & 1342194519\
& p-NH$_3^c$ & 1215.246 & 5a & A &2010-04-18 & 1342195067\
& & & & B & & 1342195068\
& & & & C & & 1342195069\
& NH$^+$ & 1012.540 & 4a & A & 2010-04-18 & 1342194998\
& & & & B & & 1342194999\
& & & & C & & 1342195000\
G10.6-0.4 & NH$^b$ & 946.476 & 3b & A &2010-03-18 & 1342192316\
& & & & B & & 1342192317\
& & & & C & & 1342192318\
& NH & 974.478 & 4a & A &2010-03-03 & 1342191620\
& & & & B & & 1342191621\
& & & & C & & 1342191622\
& o-NH$_2$ & 952.578 & 3b & A &2010-03-18 & 1342192319\
& & & & B & & 1342192320\
& & & & C & & 1342192321\
& o-NH$_3$ & 572.498 & 1b & A &2010-03-02 & 1342191578\
& & & & B & & 1342191579\
& & & & C & & 1342191580\
& p-NH$_3^c$ & 1215.246 & 5a & A &2010-03-05 & 1342191697\
& & & & B & & 1342191698\
& & & & C & & 1342191699\
& NH$^+$ & 1012.540 & 4a & A & 2010-03-03 & 1342191623\
& & & & B & & 1342191624\
& & & & C & & 1342191625\
\[Table: obsid\]
Emission line contamination of o-NH$_2$ towards W49N {#NO removal of NH2}
====================================================
The o-NH$_2$ absorption towards W49N is contaminated by an emission line in the same sideband from the source, a more complicated situation than an emission line from the other sideband. In Fig. \[Fig:W49N NH2 and NO\] an emission line is clearly visible around 47 km s$^{-1}$ in the o-NH$_2$ spectrum. Here, the intensities have been normalised to the continuum in single sideband as $T_\mathrm{A}$/$T_\mathrm{C}$-1 assuming a sideband gain ratio of unity, where $T_\mathrm{A}$ is the observed intensity and $T_\mathrm{C}$ is the SSB continuum as measured in line-free regions in the spectra. We identify the emission line as a blend of three unresolved hfs components of NO $^2\Pi_{1/2}\;J=9.5^f\to 8.5^f$, $F=10.5-9.5$, $9.5-8.5$, and $8.5-9.5$, at 952.464201 GHz [weighted mean frequency, cf. @1999JMoSp.196....5V]. The emission line appearing near 142 km s$^{-1}$ in this figure is consequently identified as the hfs blend in the lower half of the same spin-rotation doublet of NO $^2\Pi_{1/2}\;J=9.5^e\to 8.5^e$ at an average rest frequency of 952.145441 GHz. The PRISMAS observations have also found three additional NO lines in W49N shown in Fig. \[Fig: W49N NO\], each one consisting of unresolved hfs components, while no emission of NO is found in G10.6$-$0.4.
Since the two NO lines seen in Fig. \[Fig:W49N NH2 and NO\] have almost equal line strength, and are also observed with the same instrument in the same band, we have used the observed 952.145 GHz transition as a template to remove the interfering NO line at 952.464 GHz from the o-NH$_2$ absorption. In order to do this we use $$\label{NO removal}
T_\mathrm{norm} = \frac{T_\mathrm{A} - T_\mathrm{C}} {T_\mathrm{C} + T_\mathrm{NO} },$$ to calculate the normalised SSB intensity $T_\mathrm{norm}$ in K, where $T_\mathrm{C}$ is the SSB continuum, and $T_\mathrm{NO}$ is the intensity of the NO line. The model NO line, shown in Fig. \[Fig: W49N: NO model\], is then moved to the velocity of the emission line. The resulting absorption line spectrum of o-NH$_2$ towards W49N with the NO emission removed is shown in green in Fig. \[Fig:W49N NH2 and NO\].
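A minimal sketch of Eq. \[NO removal\], assuming the NO template has already been shifted to the velocity of the interfering line and is evaluated channel by channel:

```python
def remove_no_emission(T_A, T_C, T_NO):
    """Normalised SSB intensity with the interfering NO emission divided out:
    T_norm = (T_A - T_C) / (T_C + T_NO), all intensities in K."""
    return (T_A - T_C) / (T_C + T_NO)
```

With $T_\mathrm{NO}=0$ this reduces to the standard continuum normalisation $T_\mathrm{A}/T_\mathrm{C}-1$, so channels free of NO emission are left unchanged.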
![*W49N*: Double sideband WBS spectra of four NO emission lines from the source itself. Each NO line consists of three hfs components, with quantum numbers $^2\Pi_{1/2}\;J=6.5^e\to 5.5^e$ (651 GHz), $J=6.5^f\to 5.5^f$ (652 GHz), $J=7.5^e\to 6.5^e$ (752 GHz), and $J=9.5^e\to 8.5^e$ (952 GHz). The emission line at $-$18 km s$^{-1}$ blended with NO 651.433 GHz is SO 11$_{11}$–11$_{10}$. []{data-label="Fig: W49N NO"}](18686Fg25.eps)
Hyperfine structure components {#hyperfine-structure-components}
==============================
[cccc ]{}
Frequency & $A_{ul}$& $\Delta v $ & Rel. Intensity\
(MHz) &(s$^{-1}$)& (kms$^{-1}$) & $\frac {A_{ul}\times g_u} {A_{ul}\mathrm{(main)}\times g_u \mathrm{(main)}}$\
974315.58 & 1.78e-6 & 50.08& 0.0001\
974342.57 & 6.02e-6 & 41.78 & 0.0007\
974354.64 & 4.54e-6 & 38.07& 0.0003\
974410.56 & 8.91e-5 & 20.87& 0.006\
974411.39 & 5.13e-4 & 20.61 & 0.018\
974436.35 & 2.28e-3 & 12.93 & 0.164\
974437.54 & 1.27e-3 & 12.56 & 0.137\
974444.04 & 1.81e-3 & 10.56 & 0.130\
974450.44 & 4.80e-3 & 8.60 & 0.173\
974462.22 & 5.04e-4 & 4.97 & 0.363\
974471.00 & 5.67e-3 & 2.27 & 0.612\
974475.41 & 3.38e-3 & 0.91 & 0.243\
974478.38 & 6.94e-3 & 0 & 1.0\
974479.34 & 6.01e-3 & -0.30 & 0.649\
974531.32 & 2.59e-4 &-16.29 & 0.019\
974539.82 & 6.46e-4 &-18.90 & 0.023\
974558.07 & 9.82e-4 &-24.52& 0.035\
974564.78 & 7.67e-4 &-26.58 & 0.055\
974574.43 & 8.15e-4 &-29.55& 0.088\
974583.03 & 2.59e-4 &-32.20& 0.019\
974607.78 & 1.21e-4 &-39.81 & 0.013\
\[Table: NH hfs transitions\]
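The velocity offsets $\Delta v$ in these hfs tables follow from the rest frequencies via the non-relativistic Doppler relation. A small sketch (frequencies in MHz; offsets relative to the main NH component at 974478.38 MHz):

```python
C_KMS = 299792.458  # speed of light in km/s

def velocity_offset(freq_mhz, main_freq_mhz):
    # Doppler offset of an hfs component relative to the main component:
    # dv = c * (f_main - f) / f_main
    return C_KMS * (main_freq_mhz - freq_mhz) / main_freq_mhz
```

For the 974436.35 MHz component this reproduces the tabulated offset of 12.93 km s$^{-1}$, and for 974531.32 MHz the offset of $-$16.29 km s$^{-1}$.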
[cccc ]{}
Frequency & $A_{ul}$& $\Delta v $ & Rel. Intensity\
(MHz) &(s$^{-1}$)& (kms$^{-1}$) & $\frac {A_{ul}\times g_u} {A_{ul}\mathrm{(main)}\times g_u \mathrm{(main)}}$\
946380.79 & 2.40e-3 & 30.10 & 0.369\
946380.79 & 9.58e-4 & 30.10 & 0.295\
946419.79 & 3.66e-4 & 17.75 & 0.056\
946419.98 & 8.99e-4 & 17.69 & 0.276\
946475.82 & 3.25e-3 & 0 & 1.0\
946509.24 & 1.93e-3 & -10.59 & 0.297\
946509.24 & 1.21e-3 & -10.59 & 0.372\
946527.48 & 1.81e-3 & -16.36 & 0.278\
946527.56 & 1.84e-4 & -16.39 & 0.057\
\[Table: NH 946 hfs transitions\]
[cccc ]{}
Frequency & $A_{ul}$& $\Delta v $ & Rel. Intensity\
(MHz) &(s$^{-1}$)& (kms$^{-1}$) & $\frac {A_{ul}\times g_u} {A_{ul}\mathrm{(main)}\times g_u \mathrm{(main)}}$\
952435.66 & 4.86e-6& 44.91 & 0.0002\
952446.99 & 1.33e-5 &41.34 & 0.0006\
952463.69 & 3.07e-5 & 36.09 & 0.002\
952490.73 & 3.5e-3 & 27.58 & 0.079\
952502.06 & 1.28e-3 & 24.01 & 0.029\
952503.09 & 1.79e-3 & 23.69 & 0.081\
952514.42 & 3.65e-3 & 20.12 & 0.164\
952528.90 & 2.03e-3 & 15.56 & 0.046\
952533.03 & 2.23e-4 & 14.27 & 0.010\
952540.23 & 6.53e-3 & 12.00 & 0.147\
952542.21 & 7.02e-3 & 11.37 & 0.474\
952549.73 & 2.63e-3 & 9.01 & 0.178\
952560.41 & 3.32e-3 & 5.65 & 0.149\
952562.12 & 6.42e-3 & 5.11 & 0.289\
952571.74 & 7.55e-3 & 2.08 & 0.340\
952573.46 & 3.88e-3 & 1.54 & 0.174\
952577.11 & 8.47e-3 & 0.39 & 0.570\
952578.35 & 1.11e-2 & 0 & 1.0\
952600.46 & 1.59e-3 & -6.96 & 0.072\
952615.49 & 3.58e-3 & -11.69 & 0.081\
952626.82 & 2.73e-3 & -15.25 & 0.061\
952627.84 & 2.67e-3 & -15.57 & 0.120\
952628.25 & 3.35e-3 & -15.70 & 0.226\
952639.17 & 1.40e-3 & -19.14 & 0.063\
952653.66 & 9.66e-3 & -23.70 & 0.022\
952655.64 & 7.43e-4 & -24.32 & 0.050\
952659.49 & 4.40e-4 & -25.54 & 0.020\
952664.99 & 1.58e-3 & -27.27 & 0.036\
952686.88 & 2.97e-4 & -34.16 & 0.013\
952698.21 & 7.06e-5 & -37.72 & 0.003\
\[Table: NH2 hfs transitions\]
[ccc ]{}
Frequency & $\Delta v$ & Rel. Intensity\
(MHz) & (kms$^{-1}$) &\
572.4971 & -1.0473 & 0.1999\
572.4984 & 0.0 & 1.0\
572.5002 & 0.5237 & 0.6\
\[Table: NH3 hfs components\]
[cccc ]{}
Frequency & $A_{ul}$& $\Delta v $ & Rel. Intensity\
(MHz) &(s$^{-1}$)& (kms$^{-1}$) & $\frac {A_{ul}\times g_u} {A_{ul}\mathrm{(main)}\times g_u \mathrm{(main)}}$\
1215.2443 &1.02e-2 & 0.31 & 0.536\
1215.2449 &3.39e-3 & 0.16 & 0.179\
1215.2453 &5.65e-3 & 0.06 & 0.179\
1215.2456 &1.36e-2 & 0 & 1.0\
1215.2459 &3.77e-4 & -0.09 & 0.012\
1215.2468 &7.53e-3 & -0.32 & 0.238\
\[Table: NH3 1215 hfs components\]
Method II results
=================
[ll c ccc c ccc cc ]{}
\
$v_\mathrm{LSR}$ & $N$(NH) & $N$(o-NH$_2$) & $N$(o-NH$_3$) &$N$(CH) & NH/o-NH$_3$ & o-NH$_2$/o-NH$_3$ & $X$(NH)& $X$(o-NH$_2$) & $X$(o-NH$_3$)\
(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) &\
36.0 - 42.0 & 1.1e13 & 8.3e12 &4.1e12 & 6.3e13 &2.7 & 2.0 & 6.1e-9 & 4.6e-9 & 2.3e-9\
50.0 - 57.0 & 1.4e12&1.4e12&1.2e12 & 6.6e13 & 1.2 & 1.2 & 7.4e-10 & 7.4e-10 & 6.4e-10\
57.0 - 61.0 & 7.8e12&3.5e12 & 1.9e12 & 7.3e13 &4.1 & 1.8 &3.7e-9 & 1.7e-9 & 9.1e-10\
61.0 - 70.0 & 1.0e13&4.8e12 & 3.0e12 & 9.1e13 & 3.3 & 1.6 & 3.8e-9 & 1.8e-9 & 1.2e-9\
Total &3.0e13 &1.8e13& 1.0e13 & 2.9e14 & 3.0 & 1.8 & 3.6e-9 & 2.2e-9 & 1.2e-9\
Mean & 7.6e12 &4.5e12 & 2.6e12 & 7.3e13 & 3.0 & 1.8 & 3.6e-9 & 2.2e-9 & 1.2e-9\
Median & 9.0e12&4.2e12 & 2.5e12 & 7.0e13& 3.6 & 1.7 & 4.5e-9 & 2.1e-9 & 1.2e-9\
\
$v_\mathrm{LSR}$ & $N$(NH) & $N$(o-NH$_2$) & $N$(o-NH$_3$) &$N$(CH) & NH/o-NH$_3$ & o-NH$_2$/o-NH$_3$ & $X$(NH)& $X$(o-NH$_2$) & $X$(o-NH$_3$)\
(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) &\
12.5 - 20.0 & 2.9e13 & 1.4e13 &5.4e12 & 1.2e14 & 5.4 & 2.6 & 8.5e-9 & 4.1e-9 & 1.6e-9\
20.0 - 25.0 & 2.7e13 &1.2e13 & 3.8e12 &9.3e13 &7.1 & 3.2 & 1.0e-8 &4.5e-9 & 1.4e-9\
25.0 - 29.0 & 3.0e13 &1.5e13 & 8.4e12 & 1.2e14 &3.6 & 1.8 & 8.8e-9 &4.4e-9 & 2.5e-9\
29.0 - 31.0 & 2.8e13 &1.2e13 & 6.9e12 & 5.4e13 &4.1 & 1.7 & 1.8e-8 &7.8e-9 & 4.5e-9\
31.0 - 35.0 & 1.4e13 &5.0e12 & 2.8e12 & 7.1e13 & 5.0 & 1.8 & 6.9e-9 &2.5e-9 & 1.4e-9\
35.0 - 39.5 & 2.6e13 & 8.2e12 & 3.0e12 & 1.1e14 & 8.7 & 2.7 & 8.3e-9 &2.6e-9 & 9.6e-10\
39.5 - 43.5 & 3.2e13 &1.1e13 & 3.9e12 & 6.3e13 &8.2 & 2.8 & 1.8e-8 &6.1e-9 & 2.2e-9\
43.5 - 50.0 & 1.8e12 &1.0e12 & 8.0e11 & 1.9e13 & 2.3 & 1.3 & 3.3e-9 & 1.8e-9 & 1.5e-9\
Total & 1.9e14 & 7.8e13 & 3.5e13 & 6.5e14 & 5.4 & 2.2& 1.0e-8 & 4.2e-9 & 1.9e-9\
Mean & 2.3e13 &9.8e12 & 4.4e12& 8.1e13 & 5.4 & 2.2 & 1.0e-8 & 4.2e-9 & 1.9e-9\
Median & 2.7e13& 1.1e13 & 3.9e12& 8.2e13 & 7.1 & 3.0 &1.2e-8 & 4.9e-9 & 1.6e-9\
\[Table: Model II ratios results\]
Method III results
==================
[lc ccc ccc ccc ]{}\
$v_\mathrm{LSR}$ & $\Delta v$ & $N$(NH) & $N$(o-NH$_2$) & $N$(o-NH$_3$) & $N$(CH) & NH/o-NH$_3$ & o-NH$_2$/o-NH$_3$ & $X$(NH)& $X$(o-NH$_2$)& $X$(o-NH$_3$)\
(kms$^{-1}$) &(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) &\
33.5 & 1.9 & 1.1e12 & 2.2e12 & 3.6e11 & 3.2e13 & 3.1 & 6.1 & 1.2e-9 & 2.4e-9 & 3.9e-10\
39.6 & 1.1& 1.2e13 & 7.4e12 & 3.2e12 &3.3e13& 3.8 & 2.3 & 1.3e-8 & 7.8e-9 & 3.4e-9\
59.4 & 3.0 & 1.1e13 & 3.5e12 & 1.7e12 & 7.0e13 & 6.5 & 2.1 & 5.5e-9 & 1.8e-9 & 8.5e-10\
62.8 & 2.4& 9.7e12 & 3.2e12 & 1.9e12 & 5.8e13 & 5.1 & 1.7 & 5.9e-9 & 1.9e-9 & 1.1e-9\
Total: &…&3.4e13 & 1.6e13 & 7.2e12 & 1.9e14 & 4.7 & 2.3 & 6.1e-9 & 3.0e-9 & 1.3e-9\
Mean: &…& 8.5e12 & 4.1e12 & 1.8e12 & 4.8e13 & 4.7 & 2.3 & 6.1e-9 & 3.0e-9 & 1.3e-9\
Median: &…& 1.0e13 & 3.4e12 & 1.8e12 & 4.6e13 & 5.8 & 1.9 & 8.0e-9 & 2.6e-9 & 1.4e-9\
\
$v_\mathrm{LSR}$ & $\Delta v$ & $N$(NH) & $N$(o-NH$_2$) & $N$(o-NH$_3$) & $N$(CH) & NH/o-NH$_3$ & o-NH$_2$/o-NH$_3$ & $X$(NH)& $X$(o-NH$_2$)& $X$(o-NH$_3$)\
(kms$^{-1}$) &(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) &\
16.3 & 1.9 & 2.4e13 & 9.6e12 & 3.4e12 & 7.7e13 & 7.1 & 2.8 & 1.1e-8 & 4.4e-9 & 1.5e-9\
19.0 & 1.3 & 6.8e12 & 2.3e12 & 6.0e11 & 2.6e13 & 11 & 3.8 & 9.2e-9 & 3.1e-9 & 8.1e-10\
22.2 & 4.2 & 1.9e13 & 7.4e12 & 1.7e12 & 9.0e13 & 11 & 4.4 & 7.4e-9 & 2.9e-9 & 6.6e-10\
22.9 & 1.0 & 1.1e13 & 3.0e12 & 1.0e12 & 1.2e13 & 11 & 3.0 & 3.2e-8 & 8.8e-9 & 2.9e-9\
25.0 & 3.3 & 8.3e12 & 4.8e12 & 1.1e12 & 4.0e13 & 7.5 & 4.4 & 7.3e-9 & 4.2e-9 & 9.6e-10\
28.0 & 1.9 & 3.2e13 & 1.2e13 & 6.4e12 & 9.8e13 & 5.0 & 1.9 & 1.1e-8 & 4.3e-9 & 2.3e-9\
30.0 & 1.5 & 3.1e13 & 1.4e13 & 6.8e12 & 6.1e13 & 4.6 & 2.1 & 1.8e-8 & 8.0e-9 & 3.9e-9\
32.2 & 1.7 & 1.2e13 & 3.1e12 & 8.9e11 & 2.5e13 & 13 & 3.5 & 1.7e-8 & 4.3e-9 & 1.2e-9\
36.2 & 3.8 & 2.2e13 & 7.3e12 & 2.5e12 & 9.4e13 & 8.8 & 2.9 & 8.2e-9 & 2.7e-9 & 9.3e-10\
39.1 & 1.2 & 1.4e13 & 3.2e12 & 5.5e11 & 4.6e13 & 25 & 5.8 & 1.1e-8 & 2.4e-9 & 4.2e-10\
41.0 & 1.7 & 3.5e13 & 1.1e13 & 3.1e12 & 8.5e13 & 11 & 3.5 & 1.4e-8 & 4.5e-9 & 1.3e-9\
45.1 & 1.3 & 3.7e12 & 1.6e12 & 3.8e11 & 1.4e13 & 9.7 & 4.2 & 9.3e-9 & 4.0e-9 & 9.5e-10\
Total: & …&2.2e14 & 7.9e13 & 2.8e13 & 6.7e14& 7.7 & 2.8 & 1.1e-8 & 4.2e-9 & 1.5e-9\
Mean: &…&1.8e13& 6.6e12 & 2.4e12 & 5.6e13& 7.7 & 2.8 &1.1e-8 & 4.2e-9 & 1.5e-9\
Median: &…&1.7e13& 6.1e12 & 1.4e12 & 5.4e13& 12 & 4.3 & 1.1e-8 & 4.0e-9 & 9.2e-10\
\[Table: Method III resulting ratios and columns\]
Results for CN and HNC
======================
[l c | cc cc | c ccc ]{} W49N && &\
$v_\mathrm{LSR}$ & $\Delta v$ & $N$(CN) & $N$(HNC) & CN/o-NH$_3$ & HNC/o-NH$_3$ & $N$(CN) & $N$(HNC) & CN/o-NH$_3$ & HNC/o-NH$_3$\
(kms$^{-1}$) &(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) & && (cm$^{-2}$) &(cm$^{-2}$)\
33.5 & 1.9 & 8.1e12 & 4.3e11 & 18 & 0.93 &7.5e12& 4.5e11 & 21 & 1.3\
39.6 & 1.1 & 6.6e13 & 2.9e12 & 15 & 0.67 &6.3e13& 2.9e12 & 20 & 0.91\
59.4 & 3.0 & 3.5e13 & 2.1e12 & 17 & 1.0 &3.3e13& 2.2e12 & 19 & 1.3\
62.8 & 2.4 & 2.8e13 & 1.5e12 & 13 & 0.68 &2.6e13& 1.7e12 & 14 & 0.89\
Total: && 1.4e14 & 6.9e12 & 15 & 0.76 &1.3e14 & 7.3e12 & 18 & 1.0\
Mean: && 3.4e13 & 1.7e12 & 15 & 0.76 &3.2e13 & 1.8e12 & 18 & 1.0\
Median: && 3.2e13 & 1.8e12& 15 & 0.84 &3.0e13 & 2.0e12 & 16 & 1.1\
G10.6$-$0.4 && &\
$v_\mathrm{LSR}$ & $\Delta v$ & $N$(CN) & $N$(HNC) & CN/o-NH$_3$ & HNC/o-NH$_3$ & $N$(CN) & $N$(HNC) & CN/o-NH$_3$ & HNC/o-NH$_3$\
(kms$^{-1}$) &(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) & && (cm$^{-2}$) &(cm$^{-2}$)\
16.3 & 1.9 & 1.0e14 & 3.6e12 & 24 & 0.86 & 9.0e13 &4.1e12 &26 & 1.2\
19.0 & 1.3 & 3.3e13 & 1.3e12 & 38 & 1.5 & 2.9e13 &1.5e12 &48 &2.5\
22.2 & 4.2 & 5.7e13 & 2.7e12 & 32 & 1.5 & 5.1e13 &3.1e12 &30 & 1.8\
22.9 & 1.0 & 6.6e13 & 2.5e12 & 44 & 1.7 & 5.8e13 &2.9e12 &58 & 2.9\
25.0 & 3.3 & 9.5e12 & …& 6.8 & …& 8.4e12 &1.7e12 &7.6 &1.5\
28.0 & 1.9 & 1.2e14 & 4.1e12 & 16 & 0.56 & 1.1e14 &4.7e12 &17 &0.73\
30.0 & 1.5 & 1.4e14 & 5.8e12 & 18 & 0.73 & 1.3e14 &6.7e12 &19 &1.0\
32.2 & 1.7 & 2.5e13 & 1.0e12 & 19 & 0.77 & 2.3e13 &1.2e12 &26 &1.3\
36.2 & 3.8 & 1.0e14 & 5.1e12 & 31 & 1.6 & 9.0e13 &5.9e12 &36 &2.4\
39.1 & 1.2 & 3.7e13 & 1.1e12 & 79 & 2.3 & 3.3e13 &1.3e12 &60 &2.4\
41.0 & 1.7 & 1.6e14 & 5.9e12 & 39 & 1.4 & 1.4e14 &6.8e12 &45 &2.2\
45.1 & 1.3 & 1.2e13 & 8.5e11 & 20 & 1.4 & 1.1e14 &9.7e12 &289 &26\
Total: & & 8.6e14& 3.4e13 & 25 & 1.0 & 8.7e14 & 5.0e13 & 31 & 1.7\
Mean: && 7.2e13 & 3.1e12 & 25 & 1.0 & 7.4e13 &3.6e12 & 53 & 2.6\
Median: && 6.2e13 & 2.7e12 & 37 & 1.5 & 7.3e13 & 4.1e12& 31 & 1.7\
\[Table: Method I, III CN and HNC columns\]
[l c ccc]{}
\
$v_\mathrm{LSR}$ & $N$(CN) & $N$(HNC) & CN/o-NH$_3$ & HNC/o-NH$_3$\
(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) &\
36.0 - 42.0 & 5.3e13 & 2.9e12 & 13 & 0.71\
50.0 - 57.0 & 4.4e12 & 5.7e11 & 3.7 & 0.48\
57.0 - 61.0 & 2.1e13 & 1.7e12 & 11 & 0.89\
61.0 - 70.0 & 2.6e13 & 2.0e12 & 8.7 & 0.67\
Total & 1.0e14 & 7.2e12 & 10 & 0.70\
Mean & 2.6e13 & 1.8e12 & 10 & 0.70\
Median & 2.4e13 & 1.9e12 & 9.6 & 0.76\
\
$v_\mathrm{LSR}$ & $N$(CN) & $N$(HNC) & CN/o-NH$_3$ & HNC/o-NH$_3$\
(kms$^{-1}$) & (cm$^{-2}$) &(cm$^{-2}$) &\
12.5 - 20.0 &1.0e14 &5.8e12 & 19 & 1.1\
20.0 - 25.0 &9.1e13 &5.2e12 & 24 & 1.4\
25.0 - 29.0 &9.6e13 &5.6e12 & 11 & 0.67\
29.0 - 31.0 &8.8e13 &4.9e12 & 13 & 0.71\
31.0 - 35.0 &4.2e13 &2.3e12 & 15 & 0.82\
35.0 - 39.5 &8.0e13 &5.3e12 & 27 & 1.8\
39.5 - 43.5 &1.4e14 &7.4e12 & 36 & 1.9\
43.5 - 50.0 &9.8e12 &1.1e12 & 12 & 1.4\
Total & 6.5e14 & 3.8e13 & 18 & 1.1\
Mean &8.1e13 & 4.7e12 & 18 & 1.1\
Median & 9.0e13 & 5.3e12 & 23 & 1.4\
\[Table: Method II CN and HNC columns\]
Figures
=======
![*W49N:* Fits of method II to NH and o-NH$_2$, and their observed spectra. Residuals are found at the bottom. Template spectrum is o-NH$_3$ at 572GHz.[]{data-label="Fig: W49N method I NH and NH2 from NH3"}](18686Fg29.eps)
![*W49N:* Fits of method II to NH, o-NH$_2$ and o-NH$_3$, and their observed spectra. Residuals are found at the bottom. Template spectrum is CH at 532GHz.[]{data-label="Fig: W49N method I NH, NH2, NH3 from CH"}](18686Fg30.eps)
![*G10.6$-$0.4 (W31C):* Fits of method II to NH and , and their observed spectra. Residuals are found at the bottom. Template spectrum is o-NH$_3$ at 572GHz.[]{data-label="Fig: W31C method I NH and NH2 from NH3"}](18686Fg31.eps)
![*G10.6$-$0.4 (W31C):* Fits of method II to NH, and o-NH$_3$, and their observed spectra. Residuals are found at the bottom. Template spectrum is CH at 532GHz.[]{data-label="Fig: W31C method I NH, NH2, NH3 from CH"}](18686Fg32.eps)
![*W49N:* Method III (XCLASS) fits to the nitrogen hydrides and CH. []{data-label="Fig: w49n guass fits n-hydrides method III"}](18686Fg33.eps)
![*G10.6$-$0.4 (W31C):* Method III (XCLASS) fits to the nitrogen hydrides and CH. []{data-label="Fig: w31c guass fits n-hydrides method III"}](18686Fg34.eps)
Comparison plots of different methods
=====================================
In Figs. \[Fig: column density plots 1\]–\[Fig: column density plots 4\] we plot the resulting column densities from our three different methods.
Note that since Methods I and III use Gaussians with smaller line widths than the velocity bins of Method II towards G10.6$-$0.4, the number of velocity components is not the same in this source. The +33 km s$^{-1}$ component towards W49N is not used in Method II, which on the other hand models a velocity bin that is difficult to fit using Gaussians in Methods I and III. The comparison of the nitrogen hydrides vs. CH in W49N, in two of the four velocity bins, shows the only cases in which the results do not agree reasonably well between the methods. The resulting CH column densities also show a larger spread between the methods than the other species. The reason is not fully understood, but is probably caused by the different approaches to the CH modelling. Method II uses the deconvolved CH spectra as a template for the nitrogen hydrides, and thereby tries to fit the broader CH absorption to the narrower features of the nitrogen hydrides, with moderate success. The other two methods use the opposite approach: they use the output of the fitting of the nitrogen hydrides as an input to CH, fitting Gaussians only in the same parts of velocity space, and thereby try to fit narrow features to the broader CH absorption. The fit is good in some parts of velocity space, and poor where CH has absorption but the nitrogen hydrides do not, as expected.
[^1]: *Herschel* is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
[^2]: www.astrochymist.org, www.astro.uni-koeln.de/cdms/molecules
[^3]: http://astro.ens.fr/?PRISMAS
[^4]: http://herschel.esac.esa.int/Docs/HIFI/html/ch5.html
[^5]: HIPE is a joint development by the *Herschel* Science Ground Segment Consortium, consisting of ESA, the NASA *Herschel* Science Centre, and the HIFI, PACS and SPIRE consortia.
[^6]: Developed by Per Bergman at Onsala Space Observatory, Sweden; [http://www.chalmers.se/rss/oso-en/observations/data-reduction-software]{}
[^7]:
[^8]:
[^9]:
[^10]:
---
abstract: 'The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment.'
author:
- |
Alexander Mathews$^*$, Lexing Xie$^{*\dag}$, Xuming He$^{\dag *}$\
$^*$The Australian National University, $^\dag$NICTA\
alex.mathews@anu.edu.au, lexing.xie@anu.edu.au, xuming.he@nicta.com.au
title: 'SentiCap: Generating Image Descriptions with Sentiments'
---
Introduction {#sec:intro}
============
Automatically describing an image by generating a coherent sentence unifies two core challenges in artificial intelligence – vision and language. Despite being a difficult problem, the research community has recently made headway into this area, thanks to large labeled datasets and progress in learning expressive neural network models. In addition to composing a factual description of the objects, scene, and their interactions in an image, there are richer variations in language, often referred to as styles [@crystal1969investigating]. Take emotion, for example: it is such a common phenomenon in our day-to-day communications that over half of the text accompanying online pictures contains an emoji (a graphical alphabet for emotions) [@instagram2015emoji]. How well emotions are expressed and understood [influences]{} decision-making [@lerner2015emotion] – from the mundane (e.g., making a restaurant menu appealing) to the major (e.g., choosing a political leader in elections). While recognizing [sentiment and opinions]{} from written communications has been an active research topic for the past decade [@pang2008opinion; @socherrecursive], the synthesis of text with sentiment that is relevant to a given image is still an open problem. In Figure \[fig:intro\], [each]{} image is described with a factual caption, and with positive or negative emotion[, respectively]{}. One may argue that the descriptions with sentiments are more likely to pique interest about the subject being pictured (the dog and the motorcycle), or about their background settings (interaction with the dog at home, or how the motorcycle came about).
![Example images with neutral, positive () and negative () captions, by crowd workers in MSCOCO dataset [@chen2015microsoft] and this work (Section \[sec:mturk\]). []{data-label="fig:intro"}](fig/intro_example){width=".4\textwidth"}
In this paper, we describe a method, called SentiCap, to generate image captions with sentiments. We build upon the CNN+RNN (Convolutional Neural Network + Recurrent Neural Network) recipe that has seen many recent successes [@donahue2015long; @Karpathy2015CVPR; @mao2014deep; @vinyals2015show; @xu2015show]. In particular, we propose a switching Recurrent Neural Network (RNN) model to represent sentiments. This model consists of two parallel RNNs – one represents a general background language model; another specialises in descriptions with sentiments. We design a novel word-level regularizer, so as to emphasize the sentiment words during training [and to optimally combine the two RNN streams]{} (Section \[sec:model\]). We have gathered a new dataset of several thousand captions with positive and negative sentiments by re-writing factual descriptions (Section \[sec:mturk\]). [Trained on 2000+ sentimental captions and 413K neutral captions, our switching RNN out-performs a range of heuristic and learned baselines in the number of emotional captions generated, and in a [variety]{} of subjective and human evaluation metrics. In particular, SentiCap has the highest success rate in placing at least one sentiment word into the caption; 88% of positive (and 72% of negative) captions are perceived by crowd workers as more positive (or negative) than the factual caption, with a similar descriptiveness rating. ]{}
Related Work
============
[Recent]{} advances in visual recognition have made “an image is a thousand words” much closer to reality, [largely due to the advances in Convolutional Neural Networks (CNN) [@Simonyan2015; @Szegedy_2015_CVPR]]{}. [A related topic also advancing rapidly is image captioning, where most early systems were]{} [based on similarity retrieval using objects and attributes [@farhadi2010every; @kulkarni2011baby; @hodosh2013framing; @Gupta2012a], and assembling sentence fragments such as object-action-scene [@farhadi2010every], subject-verb-object [@rohrbach2013translating], object-attribute-prepositions [@kulkarni2011baby] or global image properties such as scene and lighting [@Nwogu2011].]{} [Recent systems model richer language structure, such as formulating an integer linear program to map visual elements to the parse tree of a sentence [@kuznetsova2014treetalk], or embedding [@Xu2015] video and compositional semantics into a joint space.]{}
Word-level language models [such as RNNs [@mikolov2011strategies; @sutskever2011generating] and maximum-entropy (max-ent) language models [@mikolov2011strategies] have improved with the aid of]{} significantly larger datasets and more computing power. Several research teams independently proposed image captioning systems that combine CNN-based image representation and [such]{} language models. Fang [et al.]{} used a cascade of word detectors from images and a [max-ent]{} model. The Show and Tell [@vinyals2015show] system used an RNN as the language model, seeded by CNN image features. Xu [et al.]{} estimated spatial attention as a latent variable, to make the Show and Tell system aware of local image information. Karpathy and Li[ ]{} used an [RNN to generate a sentence from the alignment between objects and words]{}. [Other work has]{} [employed multi-layer RNNs [@Chen_2015_CVPR; @donahue2015long]]{} [for image captioning.]{} [Most]{} RNN-based multimodal language models [use]{} the Long Short Term Memory (LSTM) unit that preserves long-term information [and prevents]{} overfitting [@hochreiter1997long]. We adopt one of the competitive systems [@vinyals2015show] – CNN+RNN with LSTM units as our basic multimodal sentence generation engine, due to its simplicity and computational efficiency.
[Researchers have modeled]{} how [an]{} image is presented, and what kind of response it is likely to elicit from viewers, such as analyzing the aesthetics and emotion in images [@murray2012ava; @joshi2011aesthetics]. More recently, the Visual SentiBank [@borth2013sentibank] system constructed a catalogue of Adjective-Noun-Pairs (ANPs) that are frequently used to describe online images. We build upon Visual SentiBank to construct sentiment vocabulary, but to the best of our knowledge, no existing work tries to compose image descriptions with desired sentiments. [Identifying sentiment in text is an active area of research[ [@pang2008opinion; @socherrecursive]]{}. Several [teams]{} [@Nakagawa2010; @Mcdonald2011] [designed sentence models with latent variables representing the sentiment.]{} [Our work focuses on]{} generating sentences and not explicitly modelling sentiment using hidden variables.]{}
Describing an Image with Sentiments {#sec:model}
===================================
Given an image $I$ and its $D_x$-dimensional visual feature ${\mathbf{x}}\in{{\mathbb R}}^{D_x}$, our goal is to generate a sequence of words (i.e. a caption) ${\mathbf{Y}}=\{{\mathbf{y}}_1,\cdots,{\mathbf{y}}_T\}$ to describe the image with a specific style, such as expressing sentiment. Here ${\mathbf{y}}_t\in\{0,1\}^V$ is a 1-of-$V$ encoded indicator vector for the $t$-th word; $V$ is the size of the vocabulary; and $T$ is the length of the caption.
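As a concrete illustration of this notation, the sketch below builds the $T \times V$ matrix of 1-of-$V$ indicator vectors for a short caption; the toy vocabulary is hypothetical (the actual $V$ in this work is around 8.8K).

```python
import numpy as np

def one_hot_caption(caption, vocab):
    """Encode each word y_t of a caption as a 1-of-V indicator vector.

    `vocab` is a hypothetical word-to-index map; returns a (T x V)
    binary matrix, one row per word."""
    Y = np.zeros((len(caption), len(vocab)), dtype=int)
    for t, word in enumerate(caption):
        Y[t, vocab[word]] = 1
    return Y

vocab = {"a": 0, "dog": 1, "happy": 2, "runs": 3}
Y = one_hot_caption(["a", "happy", "dog"], vocab)
```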
We assume [that]{} sentence generation involves two underlying mechanisms, one of which focuses on the factual description of the image while the other describes the image content with sentiments. [We formulate this caption generation process using a switching multi-modal language model, which sequentially generates words in a sentence.]{} [Formally]{}, we introduce a binary [sentiment]{} variable $s_t\in\{0,1\}$ for every word ${\mathbf{y}}_t$ to indicate which mechanism is used. At each time step $t$, our model produces the probability of ${\mathbf{y}}_t$ and the current sentiment variable $s_t$ given the image feature ${\mathbf{x}}$ and the previous words ${\mathbf{y}}_{1:t-1}$, denoted by $p({\mathbf{y}}_t,s_t|{\mathbf{x}},\mathbf{y}_{1:t-1})$. We generate the word probability by marginalizing [over]{} the sentiment variable $s_t$: $$p({\mathbf{y}}_t|{\mathbf{x}},{\mathbf{y}}_{1:t-1})=\sum_{s_t}p({\mathbf{y}}_t|s_t,{\mathbf{x}},{\mathbf{y}}_{1:t-1})\,p(s_t|{\mathbf{x}},{\mathbf{y}}_{1:t-1})
\label{eq:bayesrule_s}$$ Here $p({\mathbf{y}}_t|s_t,\cdot)$ is the caption model conditioned on the sentiment variable and [$p(s_t|\cdot)$ is the probability of the word sentiment]{}. The rest of this section will introduce these components [and model learning]{} in detail.
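The marginalization in Equation (\[eq:bayesrule\_s\]) is a two-component mixture of word distributions at each time step. A minimal numerical sketch, with made-up distributions for a toy three-word vocabulary:

```python
import numpy as np

def marginal_word_prob(p_word_given_s, p_s):
    """p(y_t | x, y_{1:t-1}) = sum_s p(y_t | s, .) p(s | .).

    p_word_given_s: (2 x V) array, one word distribution per stream;
    p_s: length-2 switch distribution [p(s_t=0), p(s_t=1)]."""
    return p_s @ p_word_given_s

# toy V = 3: the background stream favours word 0, the sentiment stream word 2
p_word = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.1, 0.8]])
p = marginal_word_prob(p_word, np.array([0.5, 0.5]))
```

Note that the result is automatically a valid distribution because it is a convex combination of two distributions.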
Switching RNNs for Sentiment Captions {#ssec:sentimodel}
-------------------------------------
We adopt a joint CNN+RNN architecture [@vinyals2015show] in the conditional caption model. Our full model combines two CNN+RNNs running in parallel: one capturing the [factual word generation (referred to as the background language model)]{}, the other specializing in words with sentiment. [The full model is a switching RNN, in which the variable $s_t$ functions as a switching gate.]{} This model design aims to learn sentiments well, despite data sparsity – using only a small dataset [of]{} image descriptions with sentiments (Section \[sec:mturk\]), with the help of millions of image-sentence pairs that factually describe pictures [@chen2015microsoft].
Each RNN stream [consists of]{} a series of LSTM units. Formally, we denote the $D$-dimensional hidden state of an LSTM as ${\mathbf{h}}_t \in {{\mathbb R}}^D$, its memory cell as ${\mathbf{c}}_t\in {{\mathbb R}}^D$, the input, output, forget gates as ${\mathbf{i}}_t,~{\mathbf{o}}_t,~{\mathbf{f}}_t \in {{\mathbb R}}^D$, [respectively.]{} [With $k$ indexing the RNN stream,]{} the LSTM can be implemented as: $$\begin{aligned}
&\begin{pmatrix} {\mathbf{i}}_t^k \\ {\mathbf{f}}_t^k \\ {\mathbf{o}}_t^k \\ {\mathbf{g}}_t^k \end{pmatrix} =
\begin{pmatrix} \sigma \\ \sigma \\ \sigma \\\tanh \end{pmatrix} \mathlarger{{\mathbf{T}}}^k{}
\begin{pmatrix} {\mathbf{E}}^k{\mathbf{y}}_{t-1} \\ {\mathbf{h}}^k_{t-1} \end{pmatrix}
\label{eq:lstm}\\
&{\mathbf{c}}_t^k = {\mathbf{f}}_t^k \odot {\mathbf{c}}_{t-1}^k + {\mathbf{i}}_t^k \odot {\mathbf{g}}_t^k, \quad
{\mathbf{h}}_t^k = {\mathbf{o}}_t^k \odot {{\mathbf{c}}_t^k}. \nonumber
\end{aligned}$$ Here $\sigma(\chi)$ is the sigmoid function $1/(1+e^{-\chi})$; $\tanh$ is the hyperbolic tangent function; [${\mathbf{T}}^k\in{{\mathbb R}}^{4D\times2D}$]{} is a set of learned weights; ${\mathbf{g}}^k_t\in{{\mathbb R}}^D$ is the input to the memory cell; [${\mathbf{E}}^k \in {{\mathbb R}}^{D\times V}$]{} is a learned embedding matrix [in model $k$]{}, and ${\mathbf{E}}^k{\mathbf{y}}_t$ is the embedding vector of the word ${\mathbf{y}}_t$.
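A sketch of a single LSTM step of one stream, implementing Eq (\[eq:lstm\]) exactly as written (including the output ${\mathbf{h}}_t^k = {\mathbf{o}}_t^k \odot {\mathbf{c}}_t^k$, i.e. without the $\tanh$ squashing used in some other LSTM variants); the dimensions and all-zero weights are toy values for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(T, E, y_prev, h_prev, c_prev):
    """One LSTM step of stream k per Eq. (2): gates from T [E y; h],
    then c_t = f*c + i*g and h_t = o*c_t (no output tanh here).

    T: (4D x 2D) weights, E: (D x V) embedding, y_prev: one-hot word."""
    D = h_prev.shape[0]
    z = T @ np.concatenate([E @ y_prev, h_prev])        # 4D pre-activations
    i = sigmoid(z[:D])
    f = sigmoid(z[D:2 * D])
    o = sigmoid(z[2 * D:3 * D])
    g = np.tanh(z[3 * D:])
    c = f * c_prev + i * g
    h = o * c
    return h, c

# toy check with D = 3, V = 5 and all-zero weights: every gate is 0.5 and
# g = 0, so c = 0.5 * c_prev and h = 0.25 * c_prev
D, V = 3, 5
h, c = lstm_step(np.zeros((4 * D, 2 * D)), np.zeros((D, V)),
                 np.eye(V)[2], np.zeros(D), np.ones(D))
```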
To incorporate image information, we use an image representation $\hat{\mathbf{x}}={\mathbf{W}}_x{\mathbf{x}}$ as the word embedding ${\mathbf{Ey}}_0$ when $t=1$, where ${\mathbf{x}}$ is a high-dimensional image feature extracted from a convolutional neural network [@Simonyan2015], and ${\mathbf{W}}_x$ is a learned embedding matrix. Note that the LSTM hidden state ${\mathbf{h}}_t^k$ summarizes ${\mathbf{y}}_{1:t-1}$ and ${\mathbf{x}}$. The conditional probability of the output caption words depends on the hidden state of the corresponding LSTM, $$\begin{aligned}
p({\mathbf{y}}_t | s_t=k, {\mathbf{x}}, {\mathbf{y}}_{1:t-1}) &\propto \exp({\mathbf{W}}_y^{k} {\mathbf{h}}_t^k) \end{aligned}$$ where ${\mathbf{W}}_y^k \in {{\mathbb R}}^{V\times D}$ is a set of learned output weights.
[The]{} sentiment switching model generates the probability of switching [between the two RNN streams]{} at each time $t$, [with]{} a single layer network taking the hidden states of both RNNs as input: $$\begin{aligned}
p(s_t=1 | {\mathbf{x}}, {\mathbf{y}}_{1:t-1}) &= \sigma({\mathbf{W}}_s[{\mathbf{h}}^0_t;{\mathbf{h}}^1_t])\label{eq:switch}
\end{aligned}$$ where ${\mathbf{W}}_s$ is the weight matrix for the hidden states.
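A minimal sketch of this single-layer switching network over the concatenated hidden states of the two streams; the weights and dimensions are placeholders.

```python
import numpy as np

def switch_prob(W_s, h0, h1):
    """p(s_t = 1 | x, y_{1:t-1}) = sigmoid(W_s [h0; h1]).

    W_s is a (1 x 2D) weight matrix; h0, h1 are the hidden states of
    the background and sentiment streams at time t."""
    z = (W_s @ np.concatenate([h0, h1])).item()
    return 1.0 / (1.0 + np.exp(-z))

# with zero weights the gate is uncommitted: gamma = 0.5
gamma = switch_prob(np.zeros((1, 6)), np.ones(3), np.ones(3))
```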
[An illustration of this sentiment switching model is in Figure \[fig:rnn\].]{} In summary, the parameter set for each RNN ($k=\{0,1\}$) is [$\Theta^k=\{{\mathbf{T}}^k, {\mathbf{W}}_y^k, {\mathbf{E}}^k, {\mathbf{W}}_x^k\}$]{}, and that of the switching RNN is [$\Theta=\Theta^0\cup\Theta^1\cup {\mathbf{W}}_s$]{}. We have tried including ${\mathbf{x}}$ for learning $p(s_t|{\mathbf{x}}, {\mathbf{y}}_{1:t-1})$ but found no benefit.
![Illustration of the switching RNN model for captions with sentiment. Lines with diamonds denote projections with learned weights. LSTM cells are described in Eq \[eq:lstm\]. $\gamma_t^0$ and $\gamma_t^1$ are probabilities of sentiment switch defined in Eq (\[eq:switch\]) and act as gating functions for the two streams. []{data-label="fig:rnn"}](fig/parrallel_rnn){width=".35\textwidth"}
Learning the Switching RNN Model
--------------------------------
[One of the key challenges is to design a learning scheme for $p(s_t | {\mathbf{x}}, {\mathbf{y}}_{1:t-1})$ and two CNN+RNN components. We take a two-stage learning approach to estimate the parameters $\Theta$ in our switching RNN model based on a large dataset with factual captions and a small set with sentiment captions.]{}
**Learning a background multi-modal RNN.** We first train a CNN+RNN with a large dataset of image and caption pairs, denoted as $\mathcal{D}^0=\{({\mathbf{x}}_0^i,{\mathbf{y}}_0^i)\}_{i=1}^N$. $\Theta^0$ are learned by minimizing the negative log-likelihood of the caption words given images, [ $$\label{eqn:baselearn}
\resizebox{0.26\columnwidth}{!}{$
L^0(\Theta^0,\mathcal{D}^0)
= -$}\sum_i\sum_t \resizebox{0.5\columnwidth}{!}{$\log p({\mathbf{y}}^i_{0,t}|s_t=0,{\mathbf{x}}^i_0,{\mathbf{y}}^i_{0,1:t-1} ).$}
{\vspace{-0.mm}}$$ ]{}
**Learning from captions with sentiments.** [Based on the pre-trained CNN+RNN in Eq , we then learn the switching RNN using a small image caption dataset with a specific sentiment polarity, denoted as $\mathcal{D}=\{({\mathbf{x}}^i,{\mathbf{y}}^i,\eta^i)\}_{i=1}^M$, $M\ll N$. Here $\eta_t^i \in [0, 1]$ is the sentiment strength of the $t^{th}$ word in the $i$-th training sentence,]{} [being either positive or negative as specified in the training data.]{}
[We design a new training objective function to use word-level sentiment information for learning $\Theta^1$ and the switching weights ${\mathbf{W}}_s$, while keeping the pre-learned $\Theta^0$ fixed. For clarity, we denote the sentiment probability as: $$\begin{aligned}
\gamma^{0}_t = p(s_t = 0|{\mathbf{x}},{\mathbf{y}}_{1:t-1}), \quad\gamma^{1}_t = 1-\gamma^{0}_t; \label{eq:switch}
{\vspace{-0.mm}}\end{aligned}$$ and the log likelihood of generating a new word ${\mathbf{y}}_t$ given image and word histories $({\mathbf{x}},{\mathbf{y}}_{1:t-1})$ as $L_t(\Theta,{\mathbf{x}},{\mathbf{y}})$, which can be written as (cf. Eq ), $$\begin{aligned}
L_t&(\Theta,{\mathbf{x}},{\mathbf{y}})= \log p({\mathbf{y}}_t | {\mathbf{x}}, {\mathbf{y}}_{1:t-1})= \\ &\log [\gamma^0_t p({\mathbf{y}}_t|s_t=0,{\mathbf{x}},{\mathbf{y}}_{-t} ) +
\gamma^1_t p({\mathbf{y}}_t|s_t=1,{\mathbf{x}},{\mathbf{y}}_{-t} )].\nonumber \end{aligned}$$ The overall learning objective function for incorporating word sentiment is a combination of a weighted log likelihood and the cross-entropy between $\gamma_t$ and [$\eta_t$]{}, $$\begin{aligned}
{\cal L}(\Theta,\mathcal{D}) &= -\sum_i\sum_t (1+ \lambda_\eta \eta^i_t )
[L_t(\Theta,{\mathbf{x}}^i,{\mathbf{y}}^i) \label{eq:giantr}
\\ &+ \lambda_\gamma (\eta_t^i \log\gamma^{1,i}_t + (1-\eta_t^i) \log\gamma^{0,i}_t) ] + R(\Theta),\nonumber\\
R(\Theta)=&\frac{\lambda_\theta}{2}\|\Theta^1 - \Theta^0\|^2
\label{eq:modreg}\end{aligned}$$ where $\lambda_\eta$ and $\lambda_\gamma$ are weight parameters, and $R(\Theta)$ is the regularization term with weight parameter $\lambda_\theta$. Intuitively,]{} when $\eta_t > 0$, i.e. the training sentence encounters [a sentiment word]{}, the likelihood weighting factor $\lambda_\eta \eta^i_t$ [increases the importance of $L_t$ in the overall likelihood; at the same time, the cross-entropy term $\lambda_\gamma (\eta_t^i \log\gamma^{1,i}_t + (1-\eta_t^i) \log\gamma^{0,i}_t) $ encourages the switching variable $\gamma^1_t$ to be large, emphasizing the new model. The regularized training finds a trade-off between the [data]{} likelihood and the L2 difference between the current and base RNN, [and is one of the most competitive approaches in domain transfer [@schweikert2008empirical]]{}.]{}
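The objective in Eq (\[eq:giantr\]) can be sketched for a single sentence as follows, omitting $R(\Theta)$ and assuming the per-word log-likelihoods and switch probabilities have already been computed by the forward pass:

```python
import numpy as np

def senticap_loss(logL, gamma1, eta, lam_eta=1.0, lam_gamma=1.0):
    """Per-sentence objective of Eq. (6), without the regularizer R(Theta).

    logL[t]   : log p(y_t | x, y_{1:t-1}) under the two-stream mixture
    gamma1[t] : switch probability p(s_t = 1 | x, y_{1:t-1})
    eta[t]    : word-level sentiment strength in [0, 1]"""
    gamma0 = 1.0 - gamma1
    ce = eta * np.log(gamma1) + (1.0 - eta) * np.log(gamma0)
    return float(-np.sum((1.0 + lam_eta * eta) * (logL + lam_gamma * ce)))

# toy sentence of two neutral words (eta = 0), uniform switch probabilities:
# each word contributes -(log 0.5 + log 0.5), so the loss is 4 log 2
loss = senticap_loss(np.log([0.5, 0.5]), np.full(2, 0.5), np.zeros(2))
```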
[[**[Settings for model learning.]{}**]{}]{} We use stochastic gradient descent with backpropagation on mini-batches to optimize the RNNs. We apply dropout to the input of each step, which is either the image embedding $\hat{\mathbf{x}}$ [for $t=1$]{} or the word embedding ${\mathbf{E}}^k{\mathbf{y}}_{t-1}$ and the hidden output ${\mathbf{h}}^k_{t-1}$ from time $t-1$, for both the background and sentiment streams $k=0,1$.
We learn models for positive and negative sentiments separately, due to the observation that either sentiment could be valid for the majority of images (Section \[sec:mturk\]). [We initialize $\Theta^1$ as $\Theta^0$ and use the following gradient to minimize ${\cal L}(\Theta,\mathcal{D})$ with respect to $\Theta^1$ and ${\mathbf{W}}_s$, holding $\Theta^0$ fixed. $$\begin{aligned}
{\frac{\partial \cal L}{\partial \Theta}} = &-\sum_i\sum_t (1+ \lambda_\eta \eta^i_t ) [{\frac{\partial L_t}{\partial \Theta}} \nonumber\\&+ \lambda_\gamma ( \frac{\eta_t^i}{\gamma^{1,i}_t}{\frac{\partial \gamma^{1,i}_t}{\partial \Theta}} + \frac{1-\eta_t^i}{\gamma^{0,i}_t} {\frac{\partial \gamma^{0,i}_t}{\partial \Theta}}) ] + {\frac{\partial R(\Theta)}{\partial \Theta}}
\end{aligned}$$ ]{}Here ${\frac{\partial L_t}{\partial \Theta}},{\frac{\partial \gamma^{0,i}_t}{\partial \Theta}},\text{ and }{\frac{\partial \gamma^{1,i}_t}{\partial \Theta}}$ are computed by differentiating across Equations (\[eq:bayesrule\_s\])–(\[eq:switch\]). During training, we set $\eta_t=1$ when word ${\mathbf{y}}_t$ is part of an ANP with the target sentiment polarity, otherwise $\eta_t=0$. We also include a default L2-norm regularization term $\|\Theta\|^2$ for neural network tuning with a small weight ($10^{-8}$). We automatically search for the hyperparameters $\lambda_\theta$, $\lambda_\eta$ and $\lambda_\gamma$ on a validation set using Whetlab [@snoek2012practical].
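The rule for setting $\eta_t$ can be sketched as a simple bigram match against the ANP list of the target polarity; the ANP set and caption below are hypothetical examples, and both words of a matching Adjective-Noun-Pair receive $\eta_t = 1$.

```python
def sentiment_strengths(caption, anps):
    """Set eta_t = 1 for words inside a target-polarity ANP, else 0.

    `anps` is a set of (adjective, noun) pairs of the target polarity;
    `caption` is a tokenized training sentence."""
    eta = [0.0] * len(caption)
    for t in range(len(caption) - 1):
        if (caption[t], caption[t + 1]) in anps:
            eta[t] = eta[t + 1] = 1.0
    return eta

eta = sentiment_strengths(["a", "happy", "dog", "runs"], {("happy", "dog")})
```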
An Image Caption Dataset with Sentiments {#sec:mturk}
========================================
![image](fig/gt_rating){width=".9\textwidth"}
In order to learn the association between images and captions with sentiments, we build a novel dataset of image-caption pairs where the caption both describes the image and conveys the desired sentiment. [We summarize the new dataset and the crowd-sourcing task used to collect image-sentiment caption data. More details of the data collection process are included in the supplementary material.]{}
There are many ways a photo could evoke emotions. In this work, we focus on creating a collection and learning sentiments [*from an objective viewer*]{} who does not know the back story outside of the photo – a setting also used by recent collections of objectively descriptive image captions [@chen2015microsoft; @hodosh2013framing].
[[**[Dataset construction.]{}**]{}]{} We design a crowd-sourcing task to collect such objectively described emotional image captions. This is done in a caption re-writing task based upon objective captions from MSCOCO [@chen2015microsoft] by asking Amazon Mechanical Turk (AMT) workers to choose among ANPs of the desired sentiment, and incorporate one or more of them into any one of the five existing captions. Detailed design of the AMT task is in the appendix.
The set of candidate ANPs required for this task is collected from the captions of a large set of online images. We expand the Visual SentiBank [@borth2013sentibank] vocabulary with a set of ANPs from the YFCC100M image captions [@thomee2015yfcc100m], [as]{} the overlap between the original SentiBank ANPs and the MSCOCO images is insufficient. We keep ANPs with non-trivial frequency and a clear positive or negative sentiment, when rated in the same way as SentiBank. This gives us 1,027 ANPs with a positive emotion and 436 with a negative emotion. We collect at least 3 positive and 3 negative captions per image. Figure \[fig:mturk\_eval\](a) contains one example image and its respective positive and negative captions written by AMT workers. We release the list of ANPs and the captions in the online appendix.
[[**[Quality validation.]{}**]{}]{} We validate the quality of the resulting captions with another [two-question]{} AMT task as detailed in the supplement. This validation is done on [124 images with 3 neutral captions from MSCOCO]{}, and images with 3 positive and 3 negative captions from our dataset. We first ask AMT workers to rate the descriptiveness of a caption for a given image on a four-point scale [@hodosh2013framing; @vinyals2015show]. The [*descriptiveness*]{} column in Figure \[fig:mturk\_eval\](b) shows that the measure for objective descriptiveness tends to decrease when the caption contains additional sentiment. Ratings for the positive captions ([Pos]{}) have a small decrease (by 0.08, or one-tenth of the standard deviation), while those for the negative captions ([Neg]{}) have a significant decrease (by 0.73), likely because the notion of negativity is diverse.
[We also ask whether the sentiment of the sentence matches the image. Each rating task is completed by 3 different AMT workers.]{} In the [*correct sentiment*]{} column of Figure \[fig:mturk\_eval\](b), we record the number of votes each caption received for bearing a sentiment that matches the image. We can see that the vast majority of the captions are unanimously considered emotionally appropriate ($94\%$, or 315/335 for [Pos]{}; $82\%$, or 250/305 for [Neg]{}). Among the captions with less than unanimous votes received, most of them (20 for [Pos]{} and 49 for [Neg]{}) still have majority agreement for having the correct sentiment, which is on par with the level of noise (16 for [Coco]{} captions).
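The vote tallies above can be reproduced by a small aggregation routine over the 3 worker votes per caption; the input votes below are illustrative, not the actual data.

```python
from collections import Counter

def sentiment_agreement(votes_per_caption):
    """Tally 3-worker yes/no votes on whether a caption's sentiment
    matches the image.

    Returns counts of unanimous (3/3) and majority (exactly 2/3) captions."""
    tally = Counter()
    for votes in votes_per_caption:
        yes = sum(votes)
        if yes == 3:
            tally["unanimous"] += 1
        elif yes == 2:
            tally["majority"] += 1
    return tally

tally = sentiment_agreement([[1, 1, 1], [1, 1, 0], [1, 0, 0]])
```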
Experiments {#sec:exp}
===========
         Method         [sen]{}%   [B-1]{}   [B-2]{}   [B-3]{}   [B-4]{}   [Rouge]{}$_L$   [Meteor]{}   [Cide]{}$_r$   [Senti]{}   [Desc]{}        [DescCmp]{}
  ------ -------------- ---------- --------- --------- --------- --------- --------------- ------------ -------------- ----------- --------------- -------------
  Pos    CNN+RNN        1.0        48.7      28.1      17.0      10.7      36.6            15.3         55.6           –           2.90$\pm$0.90   –
         ANP-Replace    90.3       48.2      27.8      16.4      10.1      36.6            16.5         55.2           84.8%       2.89$\pm$0.92   95.0%
         ANP-Scoring    90.3       48.3      27.9      16.6      10.1      36.5            16.6         55.4           84.8%       2.86$\pm$0.96   95.3%
         RNN-Transfer   86.5       49.3      29.5      17.9      10.9      37.2            17.0         54.1           84.2%       2.73$\pm$0.96   76.2%
         SentiCap       93.2       49.1      29.1      17.5      10.8      36.5            16.8         54.4           88.4%       2.86$\pm$0.97   84.6%
  Neg    CNN+RNN        0.8        47.6      27.5      16.3      9.8       36.1            15.0         54.6           –           2.81$\pm$0.94   –
         ANP-Replace    85.5       48.1      28.8      17.7      10.9      36.3            16.0         56.5           61.4%       2.51$\pm$0.93   73.7%
         ANP-Scoring    85.5       47.9      28.7      17.7      11.1      36.2            16.0         57.1           64.5%       2.52$\pm$0.94   76.0%
         RNN-Transfer   73.4       47.8      29.0      18.7      12.1      36.7            16.2         55.9           68.1%       2.52$\pm$0.96   70.3%
         SentiCap       97.4       50.0      31.2      20.3      13.1      37.9            16.8         61.8           72.5%       2.40$\pm$0.89   65.0%

  : Automatic metrics and crowd-sourced evaluations for positive ([Pos]{}, top) and negative ([Neg]{}, bottom) sentiment captions. []{data-label="tab:senticap"}
[[**[Implementation details.]{}**]{}]{} We implement RNNs with LSTM units using the Theano package [@BastienTheano2012]. Our implementation of CNN+RNN reproduces caption generation performance in recent work [@Karpathy2015CVPR]. The visual input to the switching RNN is a 4096-dimensional feature vector from the second-to-last layer of the Oxford VGG CNN [@Simonyan2015]. These features are [linearly embedded into a $D=512$ dimensional space]{}. Our word embeddings ${\mathbf{Ey}}$ have 512 dimensions, and the hidden state ${\mathbf{h}}$ and memory cell ${\mathbf{c}}$ of the LSTM module also have 512 dimensions. The size of our vocabulary for generating sentences is 8,787, and becomes 8,811 after including additional sentiment words.
[We train the model using Stochastic Gradient Descent (SGD) with mini-batching and the momentum update rule. Mini-batches of size 128 are used with a fixed momentum of 0.99 and a fixed learning rate of 0.001. Gradients are clipped to the range $[-5, 5]$ for all weights during back-propagation. We use perplexity as our stopping criterion. The entire system has about 48 million parameters, and learning them on the sentiment dataset with our implementation takes about 20 minutes at 113 image-sentence pairs per second, while the original model on the MSCOCO dataset takes around 24 hours at 352 image-sentence pairs per second]{}. Given a new image, we predict the best caption by doing a beam search with beam size 5 for the best words at each position. We implemented the system on a multicore workstation with an Nvidia K40 GPU.
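A sketch of one parameter update under the settings described above: gradients are clipped elementwise to $[-5, 5]$, then a momentum (0.99) velocity is accumulated and applied with learning rate 0.001.

```python
import numpy as np

def momentum_step(theta, v, grad, lr=0.001, mu=0.99, clip=5.0):
    """One SGD update with momentum and elementwise gradient clipping.

    theta: parameters, v: velocity carried between updates, grad: gradient."""
    g = np.clip(grad, -clip, clip)
    v = mu * v - lr * g
    return theta + v, v

# first update from rest: velocity is just -lr * clipped gradient
theta, v = momentum_step(np.zeros(3), np.zeros(3),
                         np.array([10.0, -10.0, 1.0]))
```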
[[**[Dataset setup.]{}**]{}]{} The background RNN is learned on the MSCOCO training set [@chen2015microsoft] of 413K+ sentences on 82K+ images. We construct an additional set of captions with sentiments as described in Section \[sec:mturk\] using images from the MSCOCO validation partition. The [Pos]{} subset contains 2,873 positive sentences and 998 images for training, and another 2,019 sentences over 673 images for testing. The [Neg]{} subset contains 2,468 negative sentences and 997 images for training, and another 1,509 sentences over 503 images for testing. Each of the test images has three positive and/or three negative captions.
[[**[Systems for comparison.]{}**]{}]{} The starting point of our model is the RNN with LSTM units and CNN input [@vinyals2015show] learned on the MSCOCO training set only, denoted as [*CNN+RNN*]{}. [Two simple baselines [*ANP-Replace*]{} and [*ANP-Scoring*]{} use sentences generated by [*CNN+RNN*]{} and then add an adjective with strong sentiment to a random noun. [*ANP-Replace*]{} adds the adjective most common in the sentiment captions for the chosen noun. [*ANP-Scoring*]{} uses multi-class logistic regression to select the most likely adjective for the chosen noun, given the Oxford VGG features.]{} The next model, denoted as [*RNN-Transfer*]{}, learns a fine-tuned RNN [on the sentiment dataset]{} with additional regularization from [*CNN+RNN*]{} [@schweikert2008empirical], [as in $R(\Theta)$ (cf. Eq )]{}. We name the [full switching RNN system as]{} [*SentiCap*]{}, which jointly learns [the RNN and the switching probability with word-level sentiments from Equation (\[eq:giantr\]).]{}
![image](fig/3x4){width=".7\textwidth"}
[[**[Evaluation metrics.]{}**]{}]{} We evaluate our system both with automatic metrics and with crowd-sourced judgements through Amazon Mechanical Turk. Automatic evaluation uses the [Bleu]{}, [Rouge]{}$_L$, [Meteor]{}, [Cide]{}$_r$ metrics from the Microsoft COCO evaluation software [@chen2015microsoft].
In our crowd-sourced evaluation task AMT workers are given an image and two automatically generated sentences displayed in a random order (example provided in supplement). One sentence is from the [*CNN+RNN*]{} model without sentiment, while the other sentence is from [[*SentiCap*]{} or one of the systems being compared]{}. AMT workers are asked to rate the descriptiveness of each caption from 1 to 4 and to select the more positive or more negative image caption. A process for filtering out noisy ratings is described in the supplement. Each pair of sentences is rated by three different AMT workers; at least two must agree that a sentence is more positive/negative for it to be counted as such. The descriptiveness score uses mean aggregation.
[[**[Results.]{}**]{}]{} Table \[tab:senticap\] summarizes the automatic and crowd-sourced evaluations. We can see that [*CNN+RNN*]{} presents almost no sentiment ANPs as it is trained only on MSCOCO. [*SentiCap*]{} contains significantly more sentences with sentiment words [than [any of the three]{} baseline methods,]{} which is expected when the word-level regularization has taken effect. [That [*SentiCap*]{} has more sentiment words than the two insertion baselines [*ANP-Replace*]{} and [*ANP-Scoring*]{} shows that [*SentiCap*]{} actively drives the flow of the sentence towards using sentimental ANPs. Sentences from [*SentiCap*]{} are, on average, judged by crowd-sourced workers to have stronger sentiment than any of the three baselines. For positive [*SentiCap*]{}, 88.4% are judged to have a more positive sentiment than the [*CNN+RNN*]{} baseline. These gains are made with only a small reduction in the descriptiveness [– yet this decrease is due to a minority of failure cases, since 84.6% of captions ranked favorably in the pair-wise descriptiveness comparison.]{} [*SentiCap*]{} negative sentences are judged to have more negative sentiment 72.5% of the time. On the automatic metrics [*SentiCap*]{} generating negative captions outperforms all three baselines by a clear margin.]{} This improvement is likely due to negative [*SentiCap*]{} being able to learn more reliable statistics for the new words that only appear in negative ANPs.
[*SentiCap*]{} sentences with positive sentiment were judged by AMT workers as [*more interesting*]{} than those without sentiment in 66.4% of cases, which shows that our method improves the expressiveness of the image captions. On the other hand, negative sentences were judged to be [*less interesting*]{} than those without sentiment in 63.2% of cases. This is mostly because negativity in a sentence naturally conflicts with being [*interesting*]{}, a positive sentiment.
It has been noted by [@vinyals2015show] that RNN captioning methods tend to exactly reproduce sentences from the training set. Our [SentiCap]{} method produces a larger fraction of novel sentences than an RNN trained on a single caption domain. A sentence is novel if there is no match in the MSCOCO training set or the sentiment caption dataset. Overall, [SentiCap]{} produces 95.7% novel captions; while [CNN+RNN]{}, which was trained only on MSCOCO, produces 38.2% novel captions – higher than the 20% observed in [@vinyals2015show].
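The novelty statistic above is a simple exact-match test of each generated caption against the training captions; a sketch with toy sentences:

```python
def novel_fraction(generated, training_sentences):
    """Fraction of generated captions with no exact match in the training
    data (here, the union of factual and sentiment caption sets)."""
    train = set(training_sentences)
    novel = sum(1 for s in generated if s not in train)
    return novel / len(generated)

frac = novel_fraction(["a happy dog", "a dog"], ["a dog", "a cat"])
```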
Figure \[fig:exp\] contains a number of examples with generated sentiment captions – the left half are positive, the right half negative. We can see that the switch variable captures almost all sentiment phrases, and some of the surrounding words (e.g. [*train station*]{}, [*plate*]{}). Examples in the first two rows are generally descriptive and accurate, such as [*delicious piece of cake*]{} (2a), and [*ugly car*]{} and [*abandoned buildings*]{} (1c). The remaining examples contain some inappropriateness in the content description, the sentiment, or both. (3b) captures the [*happy*]{} spirit correctly, but the semantics of a child in a playground are mistaken for those of a man on a skateboard due to very high visual resemblance. [(3d) interestingly juxtaposes the positive ANP [*clever trick*]{} and the negative ANP [*dead man*]{}, creating an impossible yet amusing caption. ]{}
Conclusion
==========
\[sec:conclusion\]
We proposed SentiCap, a switching RNN model for generating image captions with sentiments. One novel feature of this model is a specialized word-level supervision scheme to effectively make use of a small amount of training data with sentiments. We also designed a crowd-sourced caption re-writing task to generate sentimental yet descriptive captions. We demonstrated the effectiveness of the proposed model using both automatic and crowd-sourced evaluations: the SentiCap model is able to generate an emotional caption for over 90% of the images, and the vast majority of the generated captions are rated by crowd workers as having the appropriate sentiment. Future work includes a unified model for positive and negative sentiment; models for linguistic styles (including sentiments) beyond the word level; and generative models for a richer set of emotions such as pride, shame, and anger.
[ [**Acknowledgments**]{} NICTA is funded by the Australian Government as represented by the Dept. of Communications and the ARC through the ICT Centre of Excellence program. This work is also supported in part by the Australian Research Council via the Discovery Project program. The Tesla K40 used for this research was donated by the NVIDIA Corporation. ]{}
---
abstract: 'Electric-field-controlled charge transport is a key concept of modern computers, embodied most notably in field-effect transistors. The metallic gate voltage controls the charge population, making it possible to define logical elements which are the key to computational processes. Here, we investigate a similar system defined by metallic gates inducing quasi-one-dimensional transport channels on a high-mobility electron system in the presence of a strong perpendicular magnetic field. First, we solve the three-dimensional Poisson equation self-consistently, imposing relevant boundary conditions, and use the output as an initial condition to calculate the charge density and potential distribution in the plane of a two-dimensional electron system in the presence of an external magnetic field. Subsequently, we impose an external current and obtain the spatial distribution of the transport charges, considering various magnetic field and gate voltage strengths at sufficiently low ($<$ 10 Kelvin) temperatures. We show that the magnetic field breaks the spatial symmetry of the current distribution, whereas the voltage applied to the metallic gates determines the scattering processes.'
address:
- 'Vocational School of Health, Yeni Yuzyil University, Istanbul, 34010, Turkey'
- 'Maltepe University, Faculty Engineering and Natural Sciences, Department of Electrics and Electronics, 34857 Istanbul, Turkey'
- 'Ekendiz Tanay Center for Art and Science, Department of Physics, Ula, Mugla, 48650, Turkey'
author:
- Deniz Eksi
- Afif Siddiki
title: 'Investigating the current distribution of parallel-configured quantum point contacts under quantum Hall conditions'
---
Keywords: Quantum Hall Effect, Quantum Point Contact
Introduction
============
The discovery of semiconductor-based electronics stemming from quantum mechanics revolutionized our computational abilities [@Davies]. The basic idea is to confine electrons in the growth direction ($z$) to a plane and to control their population by an electric field applied to metallic gates residing on the surface. These structures are known as field-effect transistors (FETs). The best known of these semiconductor devices are the metal-oxide-silicon (MOS) heterojunctions, which are the main ingredients of our daily-used computers. A similar heterostructure is the GaAs/AlGaAs junction, in which the electron mobility is much higher [@Datta], i.e., scattering due to impurities is reduced. Here, during the initial crystal growth the average electron density $n_{\rm el}$ is fixed by the number of homogeneously distributed silicon donors $n_0$, and the electrons are confined to a single quantum well, forming a two-dimensional electron system (2DES). In this paper, we focus on such high-mobility 2DESs, where the charge transport is also controlled by surface gates.
The above described 2DESs present peculiar transport properties when they are subject to high perpendicular magnetic fields $B$, known as the quantum Hall effects [@Girvin00:book], the study of which has produced two Nobel prizes. It is observed that the longitudinal resistance vanishes in certain $B$ intervals, whereas the transverse (Hall) resistance assumes values quantized in units of $h/e^2$, the inverse of the conductance quantum $e^2/h$ [@vK80:494]. Moreover, even in the absence of an external $B$ field, gate-voltage-induced narrow transport channels also present quantized conductance behavior [@Wees88:848]. Such devices are named quantum point contacts (QPCs) and are the main object of our investigation [@Kristensen98:180; @SiddikiMarquardt; @Arslan08:125423]. These devices are claimed to be a key element in developing quantum computers, where coherence of charge transport is significant and topologically protected information processing is required [@AdyStern:quantumcomp].
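As a brief numerical aside (not part of the original analysis), the conductance quantum $e^2/h$ mentioned above can be evaluated directly from the SI-exact values of $e$ and $h$:

```python
# Quick numerical check of the conductance quantum e^2/h and the
# corresponding von Klitzing resistance h/e^2, using the exact SI
# values of e and h.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

G0 = e**2 / h         # conductance quantum, siemens
RK = h / e**2         # von Klitzing constant, ohm

print(f"e^2/h = {G0 * 1e6:.3f} uS,  h/e^2 = {RK:.1f} ohm")
```

This reproduces the familiar resistance quantum $h/e^2 \approx 25.8$ k$\Omega$ underlying the quantized Hall plateaus.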
The scope of this paper is to provide a self-consistent calculation scheme which is able to describe electronic transport through QPCs within the local Ohm’s law. In this work, we compute the potential and current distributions of parallel-configured QPCs, starting from the calculation of the bare electrostatic potential, assuming a crystal structure which is used experimentally. Next, applying a perpendicular magnetic field to the 2DES, we obtain the spatial distribution of the current-carrying channels, depending on the field strength. In the final investigation, an external in-plane electric field is taken into account, and the current flow is obtained under certain conditions. The results of this study indicate that small variations of the magnetic field have a pronounced effect on the current distribution, whereas the gate potential $V_{\rm G}$ and the temperature $T$ dominate the scattering processes, as expected.
Model
=====
Since the main goal of our study is to obtain the current distribution through the QPCs, one should first obtain the electrostatic potential distribution $V(x,y,z)$ (and hence the charge density distribution $\rho(x,y,z)$) by solving the Poisson equation,
$$\label{key}
\nabla^{2}V(x,y,z)= - 4\pi\rho(x,y,z)$$
in 3D by imposing relevant boundary conditions and using material properties. For this purpose, we utilize a well-developed numerical method called EST3D, which is based on an iterative scheme to obtain $V(x,y,z)$ and $\rho(x,y,z)$ self-consistently [@Arslan08:125423]. An advanced 3D fast-Fourier subroutine is used to calculate the distributions layer by layer, where all the surfaces (top, side and bottom) are assumed to be under vacuum, a silicon-doped (two layers of delta-doping) GaAs/AlGaAs heterostructure is considered (see Fig. \[fig1\]) and metallic gates, kept at $V_{\rm G}$, are defined on the surface, as in our previous studies [@Salman:13; @Atci:17]. The vacuum, the heterostructure and the metallic gates are defined by their dielectric constants. Initially, the delta-doped silicon layers are charged positively with a fixed number of charges depending on the crystal growth parameters. The metallic gates are taken to be charged positively or negatively. The rest of the heterostructure, i.e., the surfaces including the vacuum, is charge neutral. Starting with these boundary conditions, one obtains $V(x,y,z)$ and $\rho(x,y,z)$ depending on the potential on the gates, the strength of the doping and the thicknesses of the GaAs and AlGaAs layers.
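To illustrate the Fourier step underlying such iterative schemes, the sketch below solves the Poisson equation of Eq. (1) spectrally on a periodic 2D grid. This is only a toy analogue, not the EST3D code itself: the actual scheme is 3D and handles open boundaries, gates and dielectrics.

```python
import numpy as np

# Toy illustration of a spectral Poisson solve:
# given rho(x,y), solve  nabla^2 V = -4*pi*rho  on a periodic 2D grid.
N, L = 128, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# a charge-neutral test density (Gaussian dipole)
rho = np.exp(-((X - 0.4)**2 + (Y - 0.5)**2) / 0.005) \
    - np.exp(-((X - 0.6)**2 + (Y - 0.5)**2) / 0.005)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                      # placeholder; the k = 0 mode is set below

rho_k = np.fft.fft2(rho)
V_k = 4.0 * np.pi * rho_k / K2      # from  -K2 * V_k = -4*pi*rho_k
V_k[0, 0] = 0.0                     # fix the arbitrary additive constant
V = np.real(np.fft.ifft2(V_k))

# residual of the spectral Laplacian: should vanish to machine precision
residual = np.real(np.fft.ifft2(-K2 * np.fft.fft2(V))) \
         + 4.0 * np.pi * (rho - rho.mean())
print(np.abs(residual).max())
```

The neutralization of the $k=0$ mode mirrors the physical requirement of overall charge neutrality in the periodic cell.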
![\[fig1\] Schematic presentation of the GaAs/AlGaAs heterostructure. The crystal is in vacuum and 2DES is formed at the interface of the junction. The number of donors and the structure geometry are taken from experimental reports [@2013NatSR; @2017NatCo].](fig1.pdf){width="5cm"}
Equipped with the self-consistently calculated potential and charge density distributions for each layer at zero temperature and magnetic field, one can obtain the density and current distributions in the presence of an external in-plane electric field and an off-plane magnetic field, using the Newton-Raphson iteration [@Eksi:10; @Yildiz:14; @Kilicoglu16:035702]. Our strategy is to use the $V(x,y,z)$ obtained in the previous step as an initial input, and to calculate the potential and charge distributions reconstructed at finite temperature and magnetic field. Considering the experimental values of energies and charge densities, our parameters are realistic: the typical charge density of the 2DES is approximately $3\times10^{15}$ m$^{-2}$, corresponding to a Fermi energy ($E_F$) of 13 meV. At $B=10$ T, the magnetic energy $\hbar \omega_{c}$ (where $\omega_{c}=eB/m^{*}$) is on the order of 17 meV, and the thermal energy ($T \leq 10$ K) is much smaller than the confinement energy (approximately 4 eV) and the potential energy on the metallic gates ($\sim -0.2$ eV). The details of the calculation procedures and the validity of the assumptions are explained in our previous studies [@Arslan08:125423; @Kilicoglu16:035702].
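These energy scales can be cross-checked with a few lines of arithmetic. The sketch below assumes the GaAs effective mass $m^{*}=0.067\,m_e$ (an assumption, not stated explicitly in the text) and a spin-degenerate 2DES:

```python
import math

# Order-of-magnitude check of the energy scales quoted above, assuming
# the GaAs effective mass m* = 0.067 m_e.
hbar = 1.054571817e-34    # J s
e    = 1.602176634e-19    # C
m_e  = 9.1093837015e-31   # kg
m_s  = 0.067 * m_e        # GaAs conduction-band effective mass

n_el = 3e15               # m^-2, typical 2DES density
B    = 10.0               # T

E_F  = math.pi * hbar**2 * n_el / m_s   # Fermi energy, spin-degenerate 2DES
hw_c = hbar * e * B / m_s               # cyclotron energy

print(f"E_F  = {E_F / e * 1e3:.1f} meV")   # ~11 meV, of the order quoted
print(f"hw_c = {hw_c / e * 1e3:.1f} meV")  # ~17 meV, matching the text
```

The cyclotron energy reproduces the quoted 17 meV; the Fermi energy comes out slightly below the quoted 13 meV, consistent with a marginally higher density in the actual device.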
While performing calculations considering an external current, we always stay in the linear-response regime, which essentially imposes that the applied in-plane electric field does not affect the density and potential distributions. This is well justified, as the current amplitudes considered are much smaller than the Fermi energy [@Guven03:115327]. In the following Sections we present our numerical results, first investigating a toy model using cosine-defined QPCs. The rationale is to clarify the effect of scattering processes without including the geometrical dependencies on them and explain the current distribution depending only on $B$ field. Next, we calculate the same quantities for higher gate voltage QPCs at various magnetic fields.
![\[fig2\] (a) The spatial distribution of the screened potential at zero temperature and vanishing magnetic field, with $V_{g}=-0.1$ V. (b) The self-consistent filling factor distribution and (c) the current distribution at $B=7.5$ T. The gray scale on the left legend denotes the potential strength, whereas the right scale shows the filling factor, with black indicating $\nu=2$. Small light blue arrows depict the local current distribution and the large black arrow shows the total current direction. In (b) and (c), the equilibrium temperature is $7.43$ K. ](fig2.pdf){width="7.5cm"}
![\[fig3\] Current distributions at (a) $8$ T and (b) $8.5$ T, resulting in $T_{\rm E}=10.15$ K and $2.0$ K, respectively. At the higher $B$ value, most of the current is flowing without scattering; hence, $T_{\rm E}$ is reduced. ](fig3a.pdf "fig:"){width="7cm"} ![\[fig3\] Current distributions at (a) $8$ T and (b) $8.5$ T, resulting in $T_{\rm E}=10.15$ K and $2.0$ K, respectively. At the higher $B$ value, most of the current is flowing without scattering; hence, $T_{\rm E}$ is reduced. ](fig3b.pdf "fig:"){width="7cm"}
Results and discussion
======================
It is instructive to compare our numerical results with existing ones, to show the consistency between them. The usual approach is to assume that the QPCs generate cosine- or Gaussian-like potentials in the plane of the 2DES [@Macucci02:39; @Igor07:qpc1; @Igor07:qpc2]. With such modeling, one can obtain reliable results without further computational complications compared to realistically modeled devices. Here, we prefer to use cosine functions because the fast-Fourier transformation processes are much faster and more precise compared to other well-defined functions. Also note that we are only interested in the transport properties of the 2DES; hence, we show our numerical results only for the $z=z_{\rm 2DES}$ layer, i.e., 284 nm below the gates. Therefore, when the electron (number) density $n_{\rm el}(x,y)$ is presented, we take the result of the 3D calculation for $\rho(x,y,z_{\rm 2DES})$. A similar path is taken for the screened potential at zero temperature and vanishing $B$ field, namely $V^{T=0,B=0}_{\rm scr}(x,y,z_{\rm 2DES})$.
As an illuminating example, we define four QPCs on the top surface of our heterostructure, as shown in Fig. \[fig2\]. The corresponding screened potential profile ($V^{T=0,B=0}_{\rm scr}$) is shown in Fig. \[fig2\]a, together with the dimensionless electron density ($\nu(x,y)$, Fig. \[fig2\]b) and the current distribution ($j(x,y)$) as a function of position in the plane of the 2DES (i.e. $z=z_{\rm 2DES}$), Fig. \[fig2\]c. It is beneficial to parametrize the density by normalizing it with the strength of the external magnetic field. The dimensionless electron density is called the filling factor and is given by $\nu(x,y)=2\pi \ell^{2}n_{\rm el}(x,y)$, where $\ell$ is the magnetic length, defined by $\ell^{2}=\hbar/(eB)$. From Fig. \[fig2\]a, one can see that the potential generated by the surface gates (both QPCs and side gates, dark blue regions in Fig. \[fig2\]b) depletes the electrons beneath them ($V_{\rm scr}(x,y)=-0.1$ eV), and that the external potential is well screened by the electrons elsewhere ($V_{\rm scr}(x,y)\simeq 0.0$ eV) when a repulsive potential is applied to the gates ($V_{\rm G}=-0.1$ V). Obviously, the QPCs constrain electron transport together with the side gates, which confine the electrons to a quasi-2D channel. The resulting density distribution is shown in Fig. \[fig2\]b, where the color gradient presents the variation, and regions without a gradient (dark blue) indicate the electron-depleted zones below the gates.
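For concreteness, the filling factor can be evaluated numerically for the typical density quoted earlier; using $\ell^{2}=\hbar/(eB)$, one has equivalently $\nu = n_{\rm el}h/(eB)$. The values below are illustrative only:

```python
import math

# Filling factor nu = 2*pi*l^2*n_el with magnetic length l^2 = hbar/(e*B),
# equivalently nu = n_el*h/(e*B).
hbar = 1.054571817e-34   # J s
h    = 6.62607015e-34    # J s
e    = 1.602176634e-19   # C
n_el = 3e15              # m^-2, typical 2DES density quoted in the text

def nu(B):
    l2 = hbar / (e * B)          # squared magnetic length, m^2
    return 2.0 * math.pi * l2 * n_el

B_nu2 = n_el * h / (2.0 * e)     # field at which nu = 2 for this density
print(f"nu(7.5 T) = {nu(7.5):.2f},  nu = 2 at B = {B_nu2:.2f} T")
```

For a homogeneous density the bulk would reach $\nu=2$ near 6 T; in the real device the local density varies, so the $\nu=2$ incompressible strips appear at position-dependent fields.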
It is significant to emphasize that integer filling factors play a distinguishing role in both the screening and the transport properties of the system at hand. Let us consider a situation where the ratio between the self-consistent electron density and the magnetic flux density assumes an integer value. In this case, the Fermi energy falls locally between the magnetic-field-quantized density of states (DOS); hence, there are no states available in these regions. This leads to areas of poor screening and constant electron density, called the incompressible strips [@Chklovskii92:4026]. On the other hand, for the very same reason, scattering is suppressed, leading to a highly reduced resistance along the current direction. Essentially, the local longitudinal resistance vanishes in the limit of zero temperature. Therefore, we depict the integer filling factor $\nu=2$ in black, to indicate the locations of the incompressible strips.
![image](fig4a.pdf){width="5cm"} ![image](fig4b.pdf){width="5cm"} ![image](fig4c.pdf){width="5cm"} ![image](fig4d.pdf){width="5cm"} ![image](fig4e.pdf){width="5cm"} ![image](fig4f.pdf){width="5cm"}
Fig. \[fig2\]c shows the distribution of the current that is imposed in the positive $y$ direction, with a normalized amplitude of $0.01$. We observe that most of the imposed current is confined to the integer-filling-factor regions, namely $\nu=2$. A closer look at the data indicates that some of the current is backscattered in the proximity of the top- and bottom-most QPCs, indicating that, because of the finite temperature, a negligible fraction of the transport electrons ($<0.01 \% $) is scattered into the compressible (metal-like, with high DOS) regions, where the longitudinal resistance is finite. A remarkable feature of the current distribution is the asymmetry between the upper and lower parts of Fig. \[fig2\]c: more current flows through the upper half. This is in agreement with the experimental [@afif:njp2] and theoretical [@SiddikiEPL:09] findings reported in the literature, justifying our results. The main reason for such behavior is the symmetry-breaking external magnetic field, namely the Lorentz force resulting in an induced Hall voltage. Keep in mind that, at relatively low magnetic fields, the edge incompressible strips are as narrow as the Fermi wavelength; hence, at this magnetic field the current is mostly driven by the drift velocity. Therefore, one can conclude that scattering is mainly dominated by impurities. At elevated field strengths (Fig. \[fig3\]a and Fig. \[fig3\]b), we observe that the current first shifts to the lower part of the sample ($B=8$ T) and is then approximately symmetrically distributed over the sample at $B=8.5$ T. This is mainly due to the enlargement of the incompressible strips with increasing magnetic field. At $B=8.0$ T, the lower incompressible strip is well developed, i.e., the width of the strip is larger than the Fermi wavelength; hence, the current is confined mainly to this scattering-free region.
Once the magnetic field strength is increased by $0.5$ T, both incompressible strips at the lower and higher parts of the sample become larger than the Fermi wavelength; therefore, current is shared between them in an approximately equal manner. We are able to confirm this behavior just by checking the convergence temperature of the system. It is observed that, for antisymmetric current distributions, the dissipation is higher; hence, the equilibrium (convergence) temperature is $7.43$ K and $10.15$ K for $7.5$ T and $8.0$ T, respectively. In accordance with our conclusion, the equilibrium temperature $T_{\rm E}$ is lower once almost all of the current is confined to well-developed incompressible strips, namely $2.0$ K at $8.5$ T.
Before presenting further results, let us summarize the main flow of our understanding: the current is confined to the scattering-free incompressible strips, where dissipation is suppressed, leading to lower equilibrium temperatures. Depending on the magnetic field, the existence, location and widths of the strips vary, such that at lower fields the upper strip, at intermediate fields the lower strip, and at higher fields both strips are well developed.
Next, we present results where the depleting gates are biased with a higher negative voltage of $-0.2$ V, yielding a steeper screened potential profile, which in turn leads to narrower incompressible strips. Fig. \[fig4\]a-f presents our numerical results considering six characteristic $B$ values, in increasing order. At the lowest field value ($6$ T), no incompressible regions are formed; therefore, the current is distributed all over the sample with high dissipation, yielding a high $T_{\rm E}$ ($= 4.55$ K). Similar to the previous situation, the first incompressible strip is formed at $6.5$ T at the upper edge of the sample, where relatively more current is confined to the strip. This yields less scattering, which ends up with the lowest $T_{\rm E}$. The current is approximately symmetric at $7.0$ T, but interestingly, $T_{\rm E}$ is higher compared to the previous case. One can interpret this behavior as follows: although the current is confined to the incompressible strips at both sides of the sample, the remaining current is strongly scattered between the QPCs, increasing the dissipation, which then elevates the temperature. Our interpretation is justified when we consider two consecutive field strengths, namely $7.5$ T and $8.0$ T. In both cases the current distribution is symmetric; however, at $7.5$ T the equilibrium temperature $T_{\rm E}$ is $5.12$ K, the highest value in our interval of interest, and then decreases to $3.48$ K at $8.0$ T, where bulk scattering is suppressed, as can be seen clearly from Fig. \[fig4\]e. At the highest $B$ strength, although bulk scattering is predominantly suppressed, dissipation is high due to strong back-scattering at the injection (bottom left of the sample) and collection (top right) regions, and as a consequence $T_{\rm E}$ increases to $4.2$ K.
The results shown above give sufficient information about the current distribution in the close vicinity of the QPCs, depending on the magnetic field. Moreover, taking into account the scattering processes, one can also comment on the variation of the equilibrium temperature, which depends both on the $B$ field strength and on the formation of the incompressible strips, the latter being strongly tied to the steepness of the potential profile determined by the gate voltage $V_{\rm G}$.
Conclusion
==========
Our investigation focuses on the current distribution of parallel-configured QPCs under QH conditions, utilizing self-consistent numerical calculation schemes. We obtain the potential and charge distribution of a GaAs/AlGaAs heterojunction by solving the Poisson equation in 3D for given boundary conditions. The obtained potential at the layer of the 2DES is used to calculate the distribution of the scattering-free incompressible strips and hence the current. We found that the location of the transporting channels depends strongly on the applied perpendicular magnetic field strength and switches between asymmetric and symmetric configurations. This behavior is also observed once the gate voltages are varied.
Our main finding is that the variation of the equilibrium temperature is due to (both back- and forward-) scattering affecting the dissipation. In conclusion, low equilibrium temperatures are obtained if the current is mainly confined to the scattering-free incompressible strips.
Our findings are in accordance with previous theoretical and experimental studies. Moreover, we are able to demonstrate that, in a parallel configuration of cosine-defined QPCs, the equilibrium temperature is a key indicator to determine coherent transport through such quantum devices. It would be desirable to obtain similar results for realistically modeled QPCs and to compare them with experimental results. It is also a challenging problem to include (Joule) heating effects, in order to investigate the dissipation processes microscopically.
Acknowledgement {#acknowledgement .unnumbered}
===============
A.S. thanks Mimar Sinan Fine Arts University Physics Department members for fruitful discussions on theoretical issues.
[99]{} J.H. Davies, [*The Physics of Low-Dimensional Semiconductors*]{} (Cambridge University Press, New York, 1998).
S. Datta, [*Electronic Transport in Mesoscopic Systems*]{} (Cambridge University Press, Cambridge, 1995).
R.E. Prange and S.M. Girvin, [*The Quantum Hall Effect*]{} (Springer, New York, 1987).
K.V. Klitzing, G. Dorda and M. Pepper, Physical Review Letters [**45**]{}, 494 (1980). doi: 10.1103/PhysRevLett.45.494
B.J. van Wees, H. van Houten, C.W.J. Beenakker, J.G. Williamson, L.P. Kouwenhoven, D. van der Marel and C.T. Foxon, Phys. Rev. Lett [**60**]{}, 848 (1988). doi: 10.1103/PhysRevLett.60.848
A. Kristensen, P.E. Lindelof, J.B. Jensen, M. Zaffalon, J. Hollingbery, S. Pedersen, J. Nygard, H. Bruus, S. Reimann, C.B. Sorensen, M. Michel and A. Forchel, Physica B [**249**]{}, 180 (1998). doi: 10.1016/S0921-4526(98)00094-5
A. Siddiki and F. Marquardt, Phys. Rev. B [**75**]{}, 045325 (2007). doi: 10.1103/PhysRevB.75.045325
S. Arslan, E. Cicek, D. Eksi, S. Aktas, A. Weichselbaum and A. Siddiki, Phys. Rev. B [**78**]{}, 125423 (2008). doi: 10.1103/PhysRevB.78.125423
C. Nayak, S.H. Simon, A. Stern, M. Freedman and S. Das Sarma, Reviews of Modern Physics [**80**]{}, 1083 (2008). doi: 10.1103/RevModPhys.80.1083
A. Salman, V. [Kotim[ä]{}ki]{}, A. Siddiki and E. [R[ä]{}s[ä]{}nen]{}, European Physical Journal B [**86**]{}, 155 (2013). doi: 10.1140/epjb/e2013-31110-9
H. Atci and A. Siddiki, Phys. Rev. B [**95**]{}, 045132 (2017). doi: 10.1103/PhysRevB.95.045132
E.M. Kendirlik, S. Sirt, S.B. Kalkan, W. Dietsche, W. Wegscheider, S. Ludwig and A. Siddiki, Scientific Reports [**3**]{}, 3133 (2013). doi: 10.1038/srep03133
E.M. Kendirlik, S. Sirt, S.B. Kalkan, N. Ofek, V. Umansky and A. Siddiki, Nature Communications [**8**]{}, 14082 (2017). doi: 10.1038/ncomms14082
D. Eksi, O. Kilicoglu, O. [G[ö]{}ktas]{} and A. Siddiki, Phys. Rev. B [**82**]{}, 165308 (2010). doi: 10.1103/PhysRevB.82.165308
A. Yildiz, D. Eksi and A. Siddiki, Journal of the Physical Society of Japan [**83**]{}, 014704 (2014). doi: 10.7566/JPSJ.83.014704
O. Kilicoglu, D. Eksi and A. Siddiki, Journal of Physics: Condensed Matter [**29**]{}, 035702 (2017). doi: 10.1088/1361-648X/29/3/035702
K. G[ü]{}ven and R.R. Gerhardts, Phys. Rev. B [**67**]{}, 115327 (2003). doi: 10.1103/PhysRevB.67.115327
G. Fiori, G. Iannaccone and M. Macucci, Journal of Computational Electronics [**1**]{}, 39 (2002). doi: 10.1023/A:1020703425018
S. Ihnatsenka and I.V. Zozoulenko, ArXiv Condensed Matter e-prints (2007). doi: 10.1103/PhysRevB.76.045338 arXiv:cond-mat/0703380
S. Ihnatsenka and I.V. Zozoulenko, ArXiv Condensed Matter e-prints (2007). doi: 10.1103/PhysRevLett.99.166801 arXiv:0706.0125
D.B. Chklovskii, B.I. Shklovskii and L.I. Glazman, Phys. Rev. B [**46**]{}, 4026 (1992). doi: 10.1103/PhysRevB.46.4026
A. Siddiki and R.R. Gerhardts, Phys. Rev. B [**70**]{}, 195335 (2004). doi: 10.1103/PhysRevB.70.195335
J.H. Oh and R.R. Gerhardts, Physica E [**1**]{}, 108 (1997). doi: 10.1016/S1386-9477(97)00024-6
A. Siddiki, J. Horas, D. Kupidura, W. Wegscheider and S. Ludwig, New Journal of Physics [**12**]{}, 113011 (2010). doi: 10.1088/1367-2630/12/11/113011
A. Siddiki, EPL [**87**]{}, 17008 (2009). doi: 10.1209/0295-5075/87/17008
---
abstract: 'Intra-cellular biochemical reactions exhibit a rich dynamical phenomenology which cannot be explained within the framework of mean-field rate equations and additive noise. Here, we show that the presence of metastable states and radically different timescales are general features of a broad class of autocatalytic reaction networks, and that this fact may be exploited to gain analytical results. The latter point is demonstrated by a treatment of the paradigmatic Togashi-Kaneko reaction, which has resisted theoretical analysis for the last decade.'
author:
- Tommaso Biancalani
- Tim Rogers
- 'Alan J. McKane'
title: 'Noise-induced metastability in biochemical networks'
---
With recent advances in experimental techniques, it is becoming increasingly clear that the dynamics of cellular biochemical reactions are subject to a great deal of noise [@Raj2009]. This poses a significant challenge to our understanding of such systems, as it has been known for some time that the effects of noise may lead to substantial differences in the macroscopic behavior [@Rao2002; @Maheshri2007]. The reactions which take place within a cell are highly interdependent, together forming biochemical networks which support the functioning of the cell. It remains a major open problem to make clear the link between the structural features of these networks and the resulting dynamics. A full understanding of the effects of noise is essential to this effort [@Kaern2005; @Shahrezaei2008].
Here, we report analytical progress on this problem made by studying a simple class of autocatalytic reaction networks whose dynamical behavior is radically affected by intrinsic stochasticity in finite volume cells. In particular, we show how networks of this type give rise to a separation of timescales between fast almost-deterministic oscillations and slow stochastic metastability. Our class includes the influential Togashi-Kaneko (TK) reaction scheme, numerical simulations of which have been found to undergo a noise-induced dynamical transition [@Togashi2001; @*Togashi2003]. Despite the importance of their work, a satisfactory analytic treatment of this effect has not been achieved in over a decade. Here we provide such a treatment as an application of our theory.
The general model we work with is composed of $n$ chemical species, denoted by $X_i$ with $i=1,\ldots,n$, residing in a cell of (non-dimensional) volume $V$. The molecules undergo autocatalytic reactions of the form $X_i + X_j \rightarrow 2 X_j$, with rate coefficients $r_{ij}$. We put $r_{ij}=0$ if that particular reaction is not possible. We also stipulate that the total rates of creation and destruction of each reactant $i$ are in balance, that is, $\sum_jr_{ij}=\sum_jr_{ji}$. Two additional reactions, $\emptyset \rightarrow X_i$ and $X_i \rightarrow \emptyset$, represent diffusion into and out of the cell, respectively. The rate of diffusion is slow compared to the internal reactions, having coefficient $D\ll1$. We will also use the symbol $X_i$ to denote the number of molecules of that type, and $\bm{x}$ to indicate the concentration vector with components $x_{i}=X_i/V$.
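The balance condition $\sum_j r_{ij}=\sum_j r_{ji}$ is easy to verify for a concrete rate matrix; below it is checked for the three-species example rates used later in the text ($r_{1,2}=1$, $r_{2,3}=4$, $r_{3,2}=3$, $r_{3,1}=1$):

```python
import numpy as np

# Verify the balance condition sum_j r_ij = sum_j r_ji (total creation
# and destruction rates of each species agree) for the example rates.
r = np.zeros((3, 3))
r[0, 1], r[1, 2], r[2, 1], r[2, 0] = 1.0, 4.0, 3.0, 1.0  # r_12, r_23, r_32, r_31

out_rates = r.sum(axis=1)   # sum_j r_ij: total destruction rate of species i
in_rates  = r.sum(axis=0)   # sum_j r_ji: total creation rate of species i
print(out_rates, in_rates)  # both equal [1. 4. 4.]
```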
The dynamics of the system defined by the above reactions are specified once the transition rates, $T(\bm{x} | \bm{x}')$, indicating the probability per unit of time that the system goes from state $\bm{x}'$ to state $\bm{x}$, are given. They are found by invoking mass action: $$\begin{split}
&T\Big(x_i-\frac{1}{V}, x_j+\frac{1}{V}\,\Big|\, x_i, x_j\Big) = Vr_{ij}x_ix_j\,,\\
&T\Big(x_i-\frac{1}{V}\,\Big|\, x_i\Big) =DVx_i\,,\quad T\Big(x_i+\frac{1}{V}\,\Big|\, x_i\Big) =DV \,.
\end{split}\label{trates}$$ The probability of finding the system in the state $\bm{x}$ at time $t$, $P({\bm{x}},t)$, then satisfies the master equation $$\frac{dP({\bm{x}},t)}{dt} = \sum_{{\bm{x}}'\neq{\bm{x}}}\big[ T(\bm{x} | \bm{x}')P({\bm{x}}',t) - T(\bm{x}' | \bm{x})P({\bm{x}},t)\big],
\label{master}$$ with the transition rates given above [@Gardiner1985].
![(Color online) Sample stochastic time series of the simple three-species reaction network described in the text, with volume $V=10^4$ and diffusion coefficient $D=10^{-4}$. The thick (blue), thin (red) and dashed (purple) lines show the concentrations of chemicals $X_1$, $X_2$ and $X_3$, respectively. The smaller figures show detail of rapid oscillations (left) and metastability (right), taken from the main plot. All simulations were performed using Gillespie’s algorithm [@Gillespie1977].[]{data-label="fig:trajectories"}](Fig1.eps "fig:"){width="40.00000%"}\
Stochastic simulations of reaction networks of the class described above display a rich phenomenology including rapid oscillations and random switching between metastable states. For example, the time series displayed in Fig. \[fig:trajectories\] were obtained from simulations of a three-species reaction with (arbitrarily chosen) non-zero reaction rates $r_{1,2}=1$, $r_{2,3}=4$, $r_{3,2}=3$, $r_{3,1}=1$. In what follows we will show how these features can be qualitatively and quantitatively understood by an analysis of the influence of noise and the separation of timescales.
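A minimal, unoptimized sketch of such a Gillespie simulation is given below, using the rates quoted above. The volume and run time are scaled down from those of the figure purely so the sketch runs quickly; it illustrates the algorithm, not the figure's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gillespie simulation of the three-species autocatalytic network
# with rates r_12 = 1, r_23 = 4, r_32 = 3, r_31 = 1.
V, D, T_end = 200, 1e-3, 20.0
r = np.zeros((3, 3))
r[0, 1], r[1, 2], r[2, 1], r[2, 0] = 1.0, 4.0, 3.0, 1.0

X = np.full(3, V)                # start from the homogeneous state x_i = 1
t = 0.0
while t < T_end:
    # propensities: autocatalysis r_ij X_i X_j / V, influx D V, outflux D X_i
    a = np.concatenate([(r * np.outer(X, X) / V).ravel(),
                        D * V * np.ones(3),
                        D * X.astype(float)])
    a_tot = a.sum()
    t += rng.exponential(1.0 / a_tot)
    k = rng.choice(a.size, p=a / a_tot)
    if k < 9:                    # autocatalysis  X_i + X_j -> 2 X_j
        i, j = divmod(k, 3)
        X[i] -= 1
        X[j] += 1
    elif k < 12:                 # influx  0 -> X_i
        X[k - 9] += 1
    else:                        # outflux  X_i -> 0
        X[k - 12] -= 1

print(X / V)                     # concentrations at t = T_end
```

With the small value of $\lambda=DV$ used here, runs typically show the concentrations far from the homogeneous state, a hint of the metastability discussed below.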
The dynamics are drastically affected by the relationship between the cell volume and the diffusion coefficient. To elucidate this, we introduce a rescaled volume $\lambda=DV$, which we treat as an $O(1)$ control parameter. Scaling $V$ and $D$ simultaneously in this way, we can rewrite the master equation (\[master\]) as a power series in a single small parameter (we choose $D$, but $V^{-1}$ is also a valid expansion parameter), leading to a Kramers-Moyal expansion [@Risken1989]. Truncating it at second order, one obtains a Fokker-Planck equation equivalent to the following stochastic differential equation (SDE), defined in the Itō sense [@Gardiner1985]:
$$\label{sdes}
\dot x_i = x_i\sum_jR_{ij}x_j + D (1 - x_i) + \sqrt{D}\,\eta_i(t)\,,$$
where $i=1,\ldots, n$, $R_{ij}=r_{ji}-r_{ij}$ and the $\eta_i$ are Gaussian noise variables with zero mean and correlator $$\begin{split}
\big\langle {\eta_i}(t)& {\eta_j}(t') \big\rangle = \\
&\quad \delta(t-t')\,\frac{1}{\lambda}\left[ \delta_{i,j} \Big( x_i\sum_kS_{ik}x_k \Big)-S_{ij}x_i x_j \right].
\end{split}
\label{noise}$$ Here the angle brackets signify an average over the noise, and $S_{ij}=r_{ij}+r_{ji}$.
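To sample this correlated, state-dependent noise numerically one needs a matrix square root of the correlator. A subtlety worth noting: the covariance rows sum to zero, so the matrix is only positive *semi*-definite and a Cholesky factorization fails; an eigendecomposition square root works instead. The sketch below (our own names, assuming no self-reactions $r_{ii}=0$) constructs the covariance exactly as written in Eq. (\[noise\]):

```python
import numpy as np

def noise_covariance(x, r, lam):
    """Covariance of the eta_i in the noise correlator:
    B_ij = (1/lam) * [delta_ij * x_i * (S x)_i - S_ij * x_i * x_j],
    with S_ij = r_ij + r_ji."""
    r = np.asarray(r, dtype=float)
    S = r + r.T
    x = np.asarray(x, dtype=float)
    return (np.diag(x * (S @ x)) - S * np.outer(x, x)) / lam

def sqrt_psd(B):
    """Symmetric square root of a positive semi-definite matrix via
    eigendecomposition (Cholesky would fail: B is singular, since each
    row of B sums to zero)."""
    w, U = np.linalg.eigh(B)
    w = np.clip(w, 0.0, None)   # clip tiny negative eigenvalues from round-off
    return U @ np.diag(np.sqrt(w)) @ U.T
```

Positive semi-definiteness follows from $v^{\mathsf T}Bv=\lambda^{-1}\sum_{i<j}S_{ij}x_ix_j(v_i-v_j)^2\ge0$ for non-negative concentrations.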
Several important facts about the dynamics can be ascertained from inspection of Eqs. (\[sdes\]) and (\[noise\]). First, we discuss the limit of large volume. The factor of $\lambda^{-1}$ in the noise correlator indicates that for finite volumes the system experiences internal fluctuations. These vanish as $\lambda\rightarrow\infty$, leaving behind a deterministic system of differential equations equivalent to those obtained from a mean-field analysis of the reaction network. For general reaction networks these equations describe simple oscillatory relaxation towards the homogeneous fixed point $x_i=1$ for all $i$. This prediction is quite at odds with the rich phenomenology which is observed in stochastic simulations (as seen in Fig. \[fig:trajectories\], for example). A proper treatment of the noise is thus necessary: from now on we keep $\lambda$ fixed and finite.
The presence of the small parameter $D$ in Eq. (\[sdes\]) implies a separation of timescales. On an $O(1)$ timescale (which we refer to as [*fast*]{}), diffusion is negligible and the system feels no noise. Setting $D=0$ in Eq. (\[sdes\]) yields a deterministic dynamical system in which the homogeneous state $x_i\equiv1$ is a center; it has Jacobian matrix $R$, which is antisymmetric and thus has all imaginary eigenvalues. We can therefore expect rapid almost-deterministic oscillations as seen, for example, in the lower left panel of Fig. \[fig:trajectories\]. On a slow $O(1/D)$ timescale, two additional factors play a role. First, the system experiences a deterministic linear drag towards the homogeneous state. Second, the effects of noise become relevant, leading to stochasticity in the trajectories.
For smaller volumes, the overall noise strength is greater, and thus the form of the noise correlator has an important role in shaping the system dynamics. In particular, since the strength of the noise is a function of the state of the system, trajectories are forced away from states giving rise to large values of noise, creating an effective attraction towards those states in which the noise vanishes. This effect is relatively well-known in the study of systems with multiplicative noise (for example, see [@Horsthemke1984] and references therein), and we will illustrate it with an explicit calculation for the TK model. Inspection of the correlator reveals that the states for which the noise vanishes are those in which no autocatalytic reaction can occur. That is, for each pair $i,j$ one of $x_i$, $x_j$ or $r_{ij}$ must be zero. The metastability of these states is further enhanced by the fact that this condition also causes the $O(1)$ term in Eq. (\[sdes\]) to vanish. An example can be seen in the lower right panel of Fig. \[fig:trajectories\], where the state $X_1=3,\,X_2=0,\,X_3=0$ is metastable.
As well as providing a qualitative picture of dynamics observed in this class of biochemical reaction networks, the mathematics we describe may also be employed to obtain precise analytical results [^1]. We now illustrate these methods in the paradigmatic case of the TK reaction [@Togashi2001; @*Togashi2003]. The model is composed of four chemical species whose reactions form a closed cycle, so that the non-zero rates are $r_{1,2}=r_{2,3}=r_{3,4}=r_{4,1}=1$. In stochastic simulations of the model, different dynamics are observed depending on the volume of the cell. For very large volumes, one finds an approximately homogeneous distribution of chemical species; at lower volumes, however, the system is typically dominated by a pair of species (either $X_1$ and $X_3$, or $X_2$ and $X_4$), with the other pair absent: these are the metastable states predicted in the earlier discussion.
To visualize this dynamical transition, TK [@Togashi2001; @*Togashi2003] introduced the quantity $z = (x_1+x_3) - (x_2+x_4)$. The pair-dominated state corresponds to $|z|\approx 4$. By measuring the stationary distribution $P(z)$ from long simulation runs, one observes a transition induced by cell volume – see Fig. \[fig:trans\]. There is a critical volume $V_c\approx 1/D$ at which $P(z)$ is flat; above $V_c$ the distribution has a single peak at $z=0$; below $V_c$ it is bimodal with peaks at $z\approx\pm 4$, indicative of the pair-dominated regime.
In large volumes the model also exhibits quasi-cycles, a second (weaker) stochastic effect whereby damped oscillations present in the deterministic dynamics are excited by the noise. Quasi-cycles are amenable to analysis using a linear noise approximation [@Dauxois2009]; it is clear, however, that the dynamical transition is related to the noise-induced metastability discussed above and requires more powerful methods. This point was elucidated by Ohkubo [*et al.*]{} [@Ohkubo2008], who investigated a simple one-dimensional model inspired by the TK reaction.
![(Color online) Stationary probability distribution for $z=(x_1+x_3) - (x_2+x_4)$ in the TK reaction. The histograms were obtained from simulation data with diffusion coefficient $D=5\times10^{-3}$ at volumes $V=10^{4}$ (red, unimodal), $V=2\times10^{3}$ (purple, flat) and $V=10^{3}$ (blue, bimodal). In each case the corresponding theoretical prediction of Eq. (\[zstaz\]) is shown with a solid line.[]{data-label="fig:trans"}](Fig2.eps "fig:"){width="40.00000%"}\
We begin our analysis of Eq. (\[sdes\]) for the TK reaction by making a change of variables which can be understood mathematically (as a real Fourier transform) or physically (as corresponding to the total concentration, the $z$ variable introduced by TK and two variables related to the $X_{1}-X_{3}$ and $X_{2}-X_{4}$ dynamics). This is $$\begin{aligned}
& & w=x_1 + x_2 + x_3 + x_4, \ z=(x_1+x_3)-(x_2+x_4), \nonumber \\
& & u=x_1-x_3, \ v=x_2-x_4.
\label{CofV}\end{aligned}$$ Applying the transformation, for the total concentration we find the closed equation $\dot{w}=D(4-w)$. For the remainder of the analysis, we fix $w$ to its fixed-point value of $4$. For the variables $z$, $u$, and $v$ we then find $$\begin{split}
\dot z &= -2 u v- D \,z + \sqrt{\frac{D}{\lambda}(16-z^2)}\,\zeta_1(t)\,\,, \\
\dot u &= - \frac{v(z+4)}{2} - D\,u +\sqrt{\frac{D\,(4-z)}{\lambda\,(4+z)}}\,\Big(u\,\zeta_1(t)+\phi\,\zeta_2(t)\Big)\,, \\
\dot v &= - \frac{u(z-4)}{2} - D\,v -\sqrt{\frac{D\,(4+z)}{\lambda\,(4-z)}}\,\Big(v\,\zeta_1(t)+\psi\,\zeta_3(t)\Big) \,,
\end{split}
\label{sdes2}$$ where $\phi = \sqrt{(z + 4)^2/4-u^2}\,$, $\psi=\sqrt{(z - 4)^2/4-v^2}$, and the $\zeta$ variables are independent Gaussian white noise.
The dynamics on the $O(1)$ timescale are solvable. In fact, $\phi$ and $\psi$ defined above are conserved quantities of the system with $D$ set to zero. Solution trajectories are therefore confined to the closed curve given by the intersection of the surfaces defined by the values of $\phi$ and $\psi$, which are determined by initial conditions. Details of the full solution will be provided in a forthcoming paper [@Biancalani2012]; for the present discussion it is sufficient to point out that the trajectories are periodic, with the period for $z$ being $$\label{o1period}
T = \frac{2}{\sqrt{16- \left( \phi^2 - \psi^2 \right)}}\,\, \text{K} \left( \frac{16 - (\phi + \psi)^2}{16 - (\phi - \psi)^2} \right),$$ where $\text{K}(\cdots)$ denotes the complete elliptic integral of the first kind. The period for $u$ and $v$ is double that of $z$. It is important to note that $\text{K}(x)$ grows without bound as $x\to1$, and thus Eq. (\[o1period\]) implies that the period of oscillation $T$ diverges as either $\phi\to0$ or $\psi\to0$. In these limits, the trajectories of the deterministic dynamics deform into a homoclinic network linking the fixed point $(u,v,z)=(0,4,-4)\,\,\textrm{to}\,\,(u,v,z)=(0,-4,-4)$ or $(-4,0,4)\,\,\textrm{to}\,\,(4,0,4)$, respectively. This fact explains the presence of both fast oscillatory dynamics and metastability in the same parameter range.
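The divergence of the period can be checked numerically. The sketch below evaluates Eq. (\[o1period\]) using SciPy's complete elliptic integral; note this assumes the argument of $\text{K}$ is the parameter $m=k^2$ (SciPy's `ellipk` convention), which may differ from the paper's modulus convention by $m\leftrightarrow\sqrt{m}$, and does not affect the divergence as $\phi\to0$ or $\psi\to0$:

```python
import numpy as np
from scipy.special import ellipk

def period_z(phi, psi):
    """Period of the fast z oscillation, Eq. (o1period). The elliptic
    integral is taken in scipy's parameter convention m = k^2; the
    argument tends to 1 (and K diverges) as phi -> 0 or psi -> 0."""
    m = (16.0 - (phi + psi) ** 2) / (16.0 - (phi - psi) ** 2)
    return 2.0 / np.sqrt(16.0 - (phi ** 2 - psi ** 2)) * ellipk(m)
```

For fixed $\psi$, shrinking $\phi$ drives the argument towards $1$, so the computed period grows without bound, consistent with the homoclinic limit described above.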
We turn now to the study of the behavior of $z$ on an $O(1/D)$ timescale. From left to right, the terms in the equation for $\dot{z}$ in system (\[sdes2\]) are responsible for the fast oscillation caused by interaction with $u$ and $v$, the linear drag towards zero, and the noise. Since the oscillations occur on a timescale faster than the other two terms, we expect that a time average on a timescale $\tau$, such that $T \ll \tau \ll 1/D$, will not affect the drag and the noise substantially. To do the averaging, we coarse-grain time by intervals with length $\tau$ in Eq. (\[sdes2\]) and replace every term with its time average over that interval [@Freidlin1984]. We write $\overline{(\cdots)} = \tau^{-1} \int_t^{t+\tau}dt \,(\cdots)$ for the time average and make use of the following assumptions: $$\overline{\vphantom{i} u v}\,\approx0\,,\quad \overline{\left(16-z^{2}\right)^\frac{1}{2}} \,\approx\,\left(16-\overline{z}^2\right)^\frac{1}{2}\,.
\label{ansatz}$$ These are justified on physical grounds: the first follows from the fact that the conserved quantities of the fast dynamics are approximately constant on intervals of length $\tau\ll 1/D$, since the average of $uv$ is a multiple of the average of $\dot{z}$, and $z$ has periodic trajectories if $\phi$ and $\psi$ are fixed; in the second approximation, we are assuming that the strength of noise is not strongly affected by fast oscillations in $z$.
The resulting so-called averaged equation for $\bar z$ is
$$\dot{\bar{z}} = - D \bar{z} + \sqrt{\frac{D}{\lambda}\,(16-\bar z^2)}\,\zeta(t)\,.
\label{zeq4}$$
This equation describes an interplay between the drag and the noise, and provides a complete picture of the dynamical transition first observed by TK. Physically, we may think of the system as gently relaxing to the origin, while being agitated by a noise term which vanishes at the metastable states $\bar z=\pm4$. Depending on the strength of the noise (controlled by the parameter $\lambda$), the system will either be attracted to zero by the linear drag, or forced to the boundaries by the noise. By varying $\lambda$ we transition between these dynamical regimes, an effect which is most clearly demonstrated by calculation of the stationary distribution $P(\bar z\,;\,\lambda)$. From Eq. (\[zeq4\]), we find
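A direct way to see this interplay is to integrate the averaged equation with an Euler-Maruyama scheme. The sketch below is a minimal illustration with arbitrarily chosen parameter values; since the discretized step can slightly overshoot the points $\bar z=\pm4$ where the noise vanishes, the state is clipped back to $[-4,4]$:

```python
import numpy as np

def simulate_zbar(D, lam, z0=0.0, dt=0.01, n_steps=200_000, seed=1):
    """Euler-Maruyama integration of the averaged SDE
    dz = -D*z dt + sqrt((D/lam)*(16 - z^2)) dW.
    The multiplicative noise vanishes at z = +-4; clipping handles
    small discretization overshoots past those points."""
    rng = np.random.default_rng(seed)
    z = np.empty(n_steps)
    z[0] = z0
    for k in range(1, n_steps):
        zc = z[k - 1]
        drift = -D * zc
        diff = np.sqrt(max((D / lam) * (16.0 - zc * zc), 0.0))
        zc = zc + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        z[k] = min(4.0, max(-4.0, zc))
    return z
```

With $\lambda>1$ the drag wins and the trajectory fluctuates around zero; with $\lambda<1$ it spends long stretches pinned near the boundaries $\pm4$.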
$$P(\bar z\,;\,\lambda) = \big(16-\bar z^2\big)^{\lambda -1} \frac{\Gamma\left(\frac{1}{2}+\lambda \right)}{\sqrt{\pi }\, 4^{2 \lambda-1}\, \Gamma(\lambda)}.
\label{zstaz}$$
Our prediction is tested against the numerics in Fig. \[fig:trans\]. This equation confirms the critical volume $V_c=1/D$ as the point of transition between a unimodal and bimodal stationary distribution. We should point out that Eq. (\[zstaz\]) is correct only up to first order in $D$; certain features of the simulation data (such as $|z|$ occasionally exceeding 4 due to variations in total concentration $w$) are not captured at this level of approximation.
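The stationary density of Eq. (\[zstaz\]) is straightforward to evaluate and check numerically; working with log-gamma avoids overflow for large $\lambda$. The sketch below verifies normalization and the change of shape at $\lambda=1$ (i.e. $V=V_c$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def P_stationary(z, lam):
    """Stationary density P(z; lam) = (16 - z^2)^(lam-1)
    * Gamma(lam + 1/2) / (sqrt(pi) * 4^(2*lam - 1) * Gamma(lam)),
    evaluated in log space for numerical stability."""
    log_norm = (gammaln(lam + 0.5) - gammaln(lam)
                - 0.5 * np.log(np.pi) - (2 * lam - 1) * np.log(4.0))
    return np.exp(log_norm + (lam - 1) * np.log(16.0 - z * z))
```

At $\lambda=1$ the density is exactly flat, $P=1/8$ on $[-4,4]$; below that it diverges (integrably) at $\pm4$, above it it peaks at zero.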
It is worth pausing a moment to discuss the relation of the noise-induced metastable states to the fast oscillatory dynamics discussed in the earlier analysis. For example, from the definition of $\psi$, we see that $z=4$ can only be obtained when $v=0$ and $\psi=0$, and thus we are in the regime in which the period of the oscillation is divergent. In this case one can expect fast periodicity to break down and the system to remain in a given metastable state for a random length of time, before being freed and proceeding along a trajectory close to the homoclinic orbit linking it to another. The lower right-hand plot of Fig. \[fig:trajectories\] shows this behavior.
Beyond determining the stationary distribution of $z$, our methods may also be used to calculate various other quantities associated with the model. For example, in [@Togashi2001] the fraction of time spent in the pair-dominated state (that is, $X_1+X_3=0$ or $X_2+X_4=0$), called the ‘rate of residence’, was measured from simulations and plotted as a function of $\lambda$. The authors noted a puzzling shift in this quantity when adjusting for different cell volumes, which we are now able to explain.
From Eq. (\[zstaz\]) we can determine a straightforward prediction for the rate of residence by computing the fraction of time that $z$ spends within $1/V$ of $\pm4$. We integrate the stationary distribution to find: $$\begin{split}
1-\int_{-4+1/V}^{4-1/V}P(z\,&;\,DV)\,dz\\
&=\Big.V^{-DV}+\text{higher order terms.}
\end{split}\label{dvlogv}$$ Therefore, to properly compare different cell volumes and diffusion coefficients, one should hold $DV\ln(V)$ constant, rather than $\lambda$. The fit between Eq. (\[dvlogv\]) and data from simulations is shown in Fig. \[fig:rateofres\].
![(Color online) Rate of residence of the pair-dominated state as a function of $DV\ln(V)$. The circles show the result measured from simulations carried out with fixed $D=10^{-3}$ and varying $V$; for each data point a single simulation of duration $t_{\max}=10^7$ was conducted and the fraction of time spent in the pair-dominated state measured. The solid line corresponds to the first-order prediction in Eq. (\[dvlogv\]).[]{data-label="fig:rateofres"}](Fig3.eps "fig:"){width="40.00000%"}\
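The left-hand side of Eq. (\[dvlogv\]) can also be evaluated exactly by numerical quadrature, which makes the $V^{-DV}=e^{-DV\ln V}$ scaling easy to explore. A small sketch (our own names; the exact value includes prefactors beyond the quoted leading order):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def residence_fraction(D, V):
    """Fraction of time with z within 1/V of +-4, computed from the
    stationary density with lam = D*V (the exact left-hand side of the
    residence-rate estimate; leading order is V**(-D*V))."""
    lam = D * V
    log_norm = (gammaln(lam + 0.5) - gammaln(lam)
                - 0.5 * np.log(np.pi) - (2 * lam - 1) * np.log(4.0))
    pdf = lambda z: np.exp(log_norm + (lam - 1) * np.log(16.0 - z * z))
    inner, _ = quad(pdf, -4.0 + 1.0 / V, 4.0 - 1.0 / V)
    return 1.0 - inner
```

Plotting this quantity against $DV\ln(V)$ for different $(D,V)$ pairs reproduces the collapse seen in Fig. \[fig:rateofres\].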
In this Rapid Communication we have examined the influence of noise on the link between structure and function in a class of biochemical networks. The consistent formulation of the problem which we provide starts from the master equation and proceeds through a well-defined approximation scheme to an SDE which correctly captures the behavior of the system. Although this equation is not exactly solvable, we are able to proceed by identifying and exploiting a separation of timescales involved in the problem. This analytical process was demonstrated explicitly for the paradigmatic TK reaction, providing an understanding of the phenomenology of the model and yielding expressions for quantities of interest which are compared to the ones obtained numerically by TK.
Since it is the discreteness of molecules which gives rise to the intrinsic noise experienced by reaction systems of this type, one might expect that such effects are only relevant in small systems, and can be neglected in general (indeed, this is a central assumption of any theory based on the study of macroscopic rate equations). In practice the situation is far more subtle; what matters more than the strength of the noise is how it interacts with other aspects of the model, such as the slow relaxation due to a small diffusion coefficient. As we have shown, this interaction gives rise to metastability in the class of autocatalytic reaction networks we investigate; moreover, it can be exploited mathematically to explain the dynamical transition observed in the TK reaction. A closely related noise effect has recently been observed in an ecological model [@Rogers2012], where it induces the spontaneous formation of species, and we expect that more surprising results of this type will come in the near future.
This work was funded (T.R. and A.J.M.) under EPSRC Grant No. EP/H02171X/1. T.B. also acknowledges partial funding from EPSRC.
[^1]: Timescale separation techniques have also been applied successfully to other models with intrinsic noise, for example, in [@Parker2009] to study the properties of stochastic extinction events in the Lotka-Volterra model
---
abstract: 'In this paper, we propose a complete and robust motion planning system for the aggressive flight of autonomous quadrotors. The proposed method is built upon on a classical teach-and-repeat framework, which is widely adopted in infrastructure inspection, aerial transportation, and search-and-rescue. For these applications, human’s intention is essential to decide the topological structure of the flight trajectory of the drone. However, poor teaching trajectories and changing environments prevent a simple teach-and-repeat system from being applied flexibly and robustly. In this paper, instead of commanding the drone to precisely follow a teaching trajectory, we propose a method to automatically convert a human-piloted trajectory, which can be arbitrarily jerky, to a topologically equivalent one. The generated trajectory is guaranteed to be smooth, safe, and kinodynamically feasible, with a human preferable aggressiveness. Also, to avoid unmapped or dynamic obstacles during flights, a sliding-windowed local perception and re-planning method are introduced to our system, to generate safe local trajectories onboard. We name our system as *teach-repeat-replan*. It can capture users’ intention of a flight mission, convert an arbitrarily jerky teaching path to a smooth repeating trajectory, and generate safe local re-plans to avoid unmapped or moving obstacles. The proposed planning system is integrated into a complete autonomous quadrotor with global and local perception and localization sub-modules. Our system is validated by performing aggressive flights in challenging indoor/outdoor environments. We release all components in our quadrotor system as open-source ros-packages[^1].'
author:
- 'Fei Gao, Luqi Wang$^*$, Boyu Zhou$^*$, Luxin Han, Jie Pan, and Shaojie Shen[^2] [^3]'
bibliography:
- 'tro2019fei.bib'
title: |
Teach-Repeat-Replan:\
A Complete and Robust System for Aggressive Flight in Complex Environments
---
Aerial Systems: Applications, Motion and Path Planning, Collision Avoidance, Autonomous Vehicle Navigation.
Introduction
============
With the development of autonomy in aerial robots, Micro Aerial Vehicles (MAVs) have become more and more involved in our daily life. Among all applications that have emerged in recent years, quadrotor teach-and-repeat has shown significant potential in aerial videography, industrial inspection, and human-robot interaction. In this paper, we investigate what is the best way to incorporate a human's intention into autonomous and aggressive flight, and what constitutes a flexible, robust, and complete aerial teach-and-repeat system.
There is a massive market for consumer drones nowadays. However, most operators of consumer drones are not professional pilots and struggle to generate their ideal trajectories. In some scenarios, such as drone racing or aerial filming, it is impossible for a beginner-level pilot to control the drone to finish the race safely or take an aerial video smoothly without months of training. There is also considerable demand for applying drones to repetitive industrial inspections or search-and-rescue missions, where a human provides a preferable routine. In these situations, demonstrating a desirable trajectory and letting the drone repeat it is a common wish. However, the taught trajectory generated by an unskilled pilot is usually extremely hard, or even dynamically infeasible, to repeat, especially in cluttered environments. Moreover, most vision-based teach-and-repeat applications [@fei2019ral], [@fehr2018visual], [@furgale2010visual], such as our previous work [@fei2019ral], are sensitive to changing environments. In [@fei2019ral], even if the environment changes only slightly, the global map has to be rebuilt and the teaching has to be redone.
Based on these observations, in this paper, instead of asking the drone to follow the human-piloted trajectory exactly, we only require the human operator to provide a rough trajectory with the expected topological structure. Such a teaching trajectory can be arbitrarily slow or jerky, but it captures the rough route the drone is expected to fly. Our system then autonomously converts this poor teaching trajectory to a topologically equivalent, energy- and time-efficient one with the expected aggressiveness. Moreover, during the repeating flight, our system locally observes environmental changes and re-plans sliding-windowed safe trajectories to avoid unmapped or moving obstacles. In this way, our system can deal with changing environments. Our proposed system extends the classical robotics teach-and-repeat framework and is named *teach-repeat-replan*. It is complete, flexible, and robust.
In our proposed system, the surrounding environment is reconstructed by onboard sensors. The user's demonstrated trajectory is then recorded by virtually controlling the drone in the map with a joystick or remote controller. Afterward, we find a flight corridor that preserves the topological structure of the teaching trajectory. The global planning is decoupled into spatial and temporal planning sub-problems. Given the flight corridor, an energy-optimal spatial trajectory, which is guaranteed to be safe, and a time-optimal temporal trajectory, which is guaranteed to be physically feasible, are generated iteratively. During repeating, while the quadrotor is tracking the global spatial-temporal trajectory, a computationally efficient local map [@han2019fiesta] is fused onboard from stereo cameras. Based on local observations, our system uses a sliding-window fast re-planning method [@boyu2019ral] to avoid possible collisions. The re-planning module utilizes gradient information to locally warp the global trajectory, generating safe and kinodynamically feasible local plans against unmapped or moving obstacles.
The concept of generating optimal topology-equivalent trajectories for quadrotor teach-and-repeat was first proposed in our previous research [@fei2019ral]. In [@fei2019ral], once the repeating trajectory is generated, the drone executes it without any other considerations: the environment must remain intact during repeating, and the localization of the drone is assumed to be perfect. These requirements are certainly not guaranteed in practice and therefore prevent the system from being applied widely. In this paper, we extend the classical teach-and-repeat framework and propose several new contributions to make our system complete, robust, and flexible. The contributions of this paper are:
1. We advance our flight corridor generation method. The flight corridor we use now provides much more optimization freedom compared to our previous work [@fei2019ral]. The improvement of the flight corridor facilitates the generation of more efficient and smooth global trajectories. Moreover, we propose methods to accelerate the corridor generation on both CPU and GPU.
2. We introduce our previous works on online mapping [@han2019fiesta] and re-planning [@boyu2019ral] into our system, to improve the robustness against errors of global maps, drifts of localization, and environmental changes and moving obstacles.
3. We present a whole set of experiments and comparisons in various scenarios to validate our system.
4. We release all components in the system as open-source packages, which include local/global planning, perception, and localization, and onboard controller.
![image](sys_architecture){width="1.8\columnwidth"}
![The hardware setting of our autonomous drone system. \[fig:sys\_hardware\] ](drone){width="0.9\columnwidth"}
In what follows, we discuss related literature in Sect. \[sec:related\_work\] and introduce our system in Sect. \[sec:system\_overview\]. Our methods for finding a flight corridor consisting of large convex polyhedrons, and spatial-temporal trajectory optimization are detailed in Sect. \[sec:corridor\_generation\] and Sect. \[sec:trajectory\_optimization\], respectively. The local re-planning is introduced in Sect. \[sec:online\_local\_replanning\]. Experimental and benchmarked results are given in Sect. \[sec:results\]. The paper is concluded in Sect. \[sec:conclusion\].
Related Works {#sec:related_work}
=============
**Robotics teach-and-repeat:** Many robotics teach-and-repeat works, especially for mobile robots, have been published in recent years. Most of them focus on improving the accuracy or robustness of repeating/following the path taught by operators, which is fundamentally different from our motivation. A lidar-based teach-and-repeat system is proposed in [@sprunk2013lidar], where laser scans are used to localize the ground vehicle against its taught path driven by the user. Furgale et al. [@furgale2014there] [@krusi2015lighting] also develop a lidar-based ground robot, which is specially designed for repeating long-term motions in highly dynamic environments. This system is equipped with a local motion planner that samples local trajectories to avoid dynamic elements during route following, and a map maintenance module that identifies moving objects and estimates their velocities. An iterative learning controller is proposed in [@ostafew2013visual] to reduce the tracking error of the robot during repeating. This controller can compensate for disturbances such as unmodelled terrains and environmental changes by learning a feedforward control policy. Vision-based teach-and-repeat systems are also proposed in several works, such as the visual localization used by the rover in [@furgale2010visual]. In this work, the authors build a manifold map during teaching and then use it for localization during repeating. In [@paton2016bridging], a multi-experience localization algorithm is proposed to address the issue of environmental changes; the ground robot is localized robustly against several past experiences. In [@paton2015s] and [@berczi2016s], to further improve the accuracy and robustness of localization, illumination and terrain appearances are considered in the proposed visual navigation systems used for teach-and-repeat. Compared to ground teach-and-repeat works, research on aerial teach-and-repeat is scarce.
In [@fehr2018visual], a vision-based drone is used to inspect infrastructure repetitively. In the teaching phase, the desired trajectory is demonstrated by the operator, and some keyframes in the visual SLAM are recorded as checkpoints. While repeating, local trajectories are generated to connect those checkpoints using minimum-snap polynomials [@MelKum1105]. To function properly, the teaching trajectory itself must be smooth, and the environment must not change at all during repeating. In contrast, our proposed method can convert an arbitrarily poor path to a safe and efficient trajectory with the expected flying aggressiveness. Also, our system is flexible: since it records the teaching path by virtually controlling the drone in simulation, a manually piloted teaching process is not necessary. Finally, our proposed system is robust to environmental changes and can even avoid moving obstacles.
**Quadrotor trajectory planning:** Trajectory optimization is essential for generating safe and executable repeating trajectories from poor teaching ones. Minimum-snap trajectory optimization is proposed by Mellinger et al. [@MelKum1105], where piecewise polynomials are used to represent the quadrotor trajectory and are optimized by quadratic programming (QP). A method for solving the minimum-snap problem in closed form is proposed in [@RicBryRoy1312]. In this work, a safe geometric path is first found to guide the generation of the trajectory; by iteratively adding intermediate waypoints to the path, a safe trajectory is finally generated after solving the minimum-snap problem several times. Our previous works [@fei2018icra] [@fei2016ssrr] [@fei2018jfr] carve a flight corridor consisting of simple convex shapes (spheres, cubes) out of a complex environment. A flight corridor constructed from a series of axis-aligned cubes or spheres can be extracted very quickly from an occupancy map or a Kd-tree. We then use the flight corridor and physical limits to constrain a piecewise Bézier curve, generating a trajectory that is guaranteed to be safe and kinodynamically feasible. Other works find general convex polyhedra for constraining the trajectory. In [@liu2017ral], a piecewise linear path is used to guide and initialize the polyhedron generation. In [@deits2015computing], by assuming all obstacles are convex, SDPs and QPs are iteratively solved to find the maximum polyhedron seeded at a random coordinate in 3-D space. Gradient information in maps is also valuable for local trajectory optimization. In CHOMP [@ratliff2009chomp], the trajectory optimization problem is formulated as a nonlinear optimization over penalties on safety and smoothness. In [@oleynikova2016continuous], [@lin2018autonomous] and [@helen2019system], gradient-based methods are combined with piecewise polynomials for local planning of quadrotors.
In this paper, we also utilize gradient-based optimization for local re-planning.
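The reason Bézier curves pair so naturally with flight corridors is the convex hull property: if every control point of a segment lies inside a convex region, the entire curve does, and the derivative of a Bézier curve is again a Bézier curve with control points $n(P_{i+1}-P_i)$, so dynamic limits can be bounded the same way. The following toy sketch illustrates this property for an axis-aligned box (it is not the cited papers' full QP formulation, and all names are ours):

```python
import numpy as np
from math import comb

def bezier_eval(ctrl, t):
    """Evaluate an n-th order Bezier curve (ctrl: (n+1, dim)) at t in [0, 1]."""
    n = len(ctrl) - 1
    basis = np.array([comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)])
    return basis @ ctrl

def inside_box(ctrl, lo, hi):
    """Convex hull property: if every control point lies in the axis-aligned
    box [lo, hi], the whole curve is guaranteed to lie in the box."""
    return bool(np.all(ctrl >= lo) and np.all(ctrl <= hi))

def velocity_bound(ctrl, duration):
    """Per-axis bound on |velocity|: the derivative curve has control points
    n * (P_{i+1} - P_i), scaled by 1/duration for the time mapping."""
    n = len(ctrl) - 1
    return np.abs(n * np.diff(ctrl, axis=0) / duration).max()
```

Checking a box-containment or velocity constraint therefore reduces to linear constraints on the control points, which is exactly what makes the corridor-constrained problem a QP.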
Time optimization, or so-called time parametrization, is used to optimize the time profile of a trajectory given the physical limits of a robot. Methods can be divided into direct methods [@choset2005principles] and indirect methods [@roberts2016generating]. Direct methods generate an optimal spatial-temporal trajectory directly in the configuration space. In indirect methods, a trajectory independent of time is first generated, and the relationship between time and the trajectory is then optimized in an additional optimization process. The method in [@jamieson2016near] finds a mapping function between the time and the trajectory by recursively adding key points into the function, squeezing the infeasibility out of the time profile. This method obtains a locally optimal solution and is computationally expensive. [@roberts2016generating] also proposes a mapping function, which maps time to a virtual parametrization of the trajectory; the mapping function is then optimized under a complicated nonlinear formulation. However, global optimality is not guaranteed, and a feasible initial solution is necessary to bootstrap the optimization. Convex optimization [@verscheure2009time] and numerical integration [@pham2014general] are two typical approaches to the robotics time-optimal path parametrization (TOPP) problem. Although numerical integration [@pham2014general] [@pham2018new] has shown superior computing efficiency, convex optimization [@verscheure2009time] has the advantage of admitting regularization terms other than total time in its objective function. This property suits our application well, where the user defines the expected aggressiveness of the drone and the drone may not always be expected to fly as fast as possible. As for efficiency, since we perform temporal optimization off-line before repeating, computing time is not critical.
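The simplest possible indirect method is uniform time scaling: leave the geometric curve $p(s)$, $s\in[0,1]$, fixed and pick the smallest total duration $T$ (with $s=t/T$) such that $|p'(s)|/T\le v_{\max}$ and $|p''(s)|/T^2\le a_{\max}$. This is only a toy stand-in for the full TOPP-style convex programs cited above, but it shows the structure of the temporal sub-problem; all names are illustrative:

```python
import numpy as np

def time_scale(coeffs, v_max, a_max, n_samples=400):
    """Uniform time scaling of a 1-D polynomial p(s), s in [0, 1]:
    with s = t/T, velocity is p'(s)/T and acceleration is p''(s)/T^2,
    so the smallest feasible duration is
    T = max(max|p'| / v_max, sqrt(max|p''| / a_max))."""
    p = np.polynomial.Polynomial(coeffs)
    s = np.linspace(0.0, 1.0, n_samples)
    v_peak = np.abs(p.deriv(1)(s)).max()
    a_peak = np.abs(p.deriv(2)(s)).max()
    return max(v_peak / v_max, np.sqrt(a_peak / a_max))
```

Unlike TOPP, which reshapes the speed profile along the path, uniform scaling slows the whole trajectory by one factor, so it is conservative but trivially feasible.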
System Overview {#sec:system_overview}
===============
System Architecture
-------------------
The overall software and hardware architecture of our quadrotor system is shown in Fig. \[fig:sys\_architecture\] and Fig. \[fig:sys\_hardware\]. The global mapping, flight corridor generation, and global spatial-temporal planning are done on an off-board computer; all other processing runs onboard the drone during the flight. Before teaching, the global map is built by onboard sensors. During teaching, a flight corridor is generated by inflating the teaching trajectory. Then the spatial and temporal trajectories are optimized iteratively within the flight corridor under a coordinate descent scheme [@wright2015coordinate]. A local planner using gradient-based optimization runs onboard to avoid unexpected obstacles observed during the repeating flights. For trajectory tracking, we use a geometric controller [@lee2010], and the attitude is stabilized by the autopilot.
Globally Consistent Localization and Mapping {#subsec:localization_mapping}
--------------------------------------------
We use VINS [@qin2018vins], a robust visual-inertial odometry (VIO) framework, to localize the drone. Loop closure detection and global pose graph optimization are used in our system to globally correct the pose estimates. The global mapping is done by fusing depth measurements from the stereo cameras with the pose estimates. Building on our previous research on deformable maps [@wang2019surfel], our global mapping module maintains a series of sub-maps, each attached to a keyframe. In this way, the map is attached to the pose graph and is therefore globally drift-free. During mapping, when a loop closure is detected, keyframes in the global pose graph are corrected and all sub-maps are deformed accordingly. The global pose graph optimization is also active during the repeating: when a loop closure is detected, the pose of the drone is corrected accordingly to eliminate the drift.
Global Spatial-Temporal Planning {#subsec:coordinate_descent}
--------------------------------
For an extremely poor teaching trajectory, both the geometric shape and the time profile are far from optimal and therefore useless, or even harmful, for conducting optimization. However, the topological information of the teaching trajectory is essential, since it reflects the human's intention. To preserve this topological information, we group the free space around the teaching trajectory to form a flight corridor (Sect. \[sec:corridor\_generation\]). The corridor contains the teaching trajectory, shares the same topological structure, and provides large freedom for optimization. It is hard to optimize a trajectory spatially and temporally at the same time within the flight corridor. However, generating a safe spatial trajectory given a fixed time allocation (Sect. \[subsec:space\_optimization\]) and optimizing the time profile of a fixed spatial trajectory (Sect. \[subsec:time\_optimization\]) are both tractable. Therefore, we iteratively optimize the trajectory in the joint space-time solution space under a coordinate descent [@wright2015coordinate] framework. An objective weighting energy against time duration is defined for the optimization. We first generate a spatial trajectory whose energy is minimized, then use the temporal optimization to obtain its optimal time profile. The optimal time profile is used to re-parametrize the trajectory for the next spatial optimization. The spatial and temporal optimizations are iterated until the total cost cannot be reduced any more.
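The coordinate-descent loop described above can be sketched as follows (a toy illustration: the stand-in `spatial_cost` and `temporal_step` below replace the actual QP and SOCP solvers of the later sections, and are chosen only so that the alternation visibly converges):

```python
# Toy coordinate descent: alternate a "spatial" cost evaluation and a
# "temporal" update of the segment durations until the joint objective
# (energy + rho * total time) stops decreasing.

def spatial_cost(times):
    # stand-in for the minimum-jerk energy: shorter segments cost more
    return sum(1.0 / t ** 2 for t in times)

def temporal_step(times):
    # stand-in temporal update: nudge durations toward their mean
    # (keeps the total time fixed, so it can only reduce the energy term)
    mean = sum(times) / len(times)
    return [0.5 * (t + mean) for t in times]

def coordinate_descent(times, rho=1.0, max_iters=50, tol=1e-9):
    cost = spatial_cost(times) + rho * sum(times)
    for _ in range(max_iters):
        times = temporal_step(times)
        new_cost = spatial_cost(times) + rho * sum(times)
        if cost - new_cost < tol:
            break
        cost = new_cost
    return times, cost
```

With `times = [1.0, 3.0]` the loop converges toward equal durations, monotonically lowering the joint cost, which mirrors how the real scheme terminates when the total cost cannot be reduced further.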
Local Collision Avoidance {#subsec:local_collision_avoidance}
-------------------------
In practice, the accumulated drift of VIO is unavoidable, and the recall rate of loop closure is unstable. Although we build a dense global map, when the drift is significant and not corrected by loop detection in time, the quadrotor may collide with obstacles. Moreover, the environment may change or contain moving obstacles. Our previous work [@fei2019ral] has to re-build the map when changes happen and cannot deal with dynamic obstacles. To resolve these issues, we integrate our previous local map fusion module [@han2019fiesta] into the system to detect collisions locally and serve the local trajectory optimization. We also propose a sliding-window local replanning method, based on our previous research on quadrotor local planning [@boyu2019ral], to avoid collisions in flight.
In the repeating phase, the drone controls its yaw angle to face its flying direction and builds a local map with the stereo cameras. We continuously check the local trajectory within a replanning time horizon. If collisions along the local trajectory are reported, replanning is triggered to deform the trajectory away from the obstacles by gradient-based optimization [@boyu2019ral].
![An illustration of free space captured by an axis-aligned cube and a general polyhedron. Obstacles are shown in dashed lines. The blue curve is the teaching trajectory of humans. The red triangle is the seed for finding local free space. The axis-aligned cube and a corresponding general convex polyhedron are shown in yellow and green, respectively. \[fig:cube\_polygon\_compare\]](cube_polygon_compare){width="0.8\columnwidth"}
Flight Corridor Generation {#sec:corridor_generation}
==========================
As stated in Sect. \[subsec:coordinate\_descent\], the first step of our global planning is to build a flight corridor around the teaching trajectory for spatial-temporal trajectory optimization. In our previous work [@fei2019ral], the flight corridor is constructed by finding a series of axis-aligned cubes, which may sacrifice much free space, especially in a highly nonconvex environment, as shown in Fig. \[fig:cube\_polygon\_compare\]. A more illustrative comparison is given in Fig. \[fig:poly\_cube\_rviz\], where the convex polyhedron captures much more free space than the simple cube. Using simple axis-aligned cubes significantly limits the solution space of trajectory optimization, which may result in a poor solution. Moreover, when the free space is very limited, such as when flying through a very narrow circle, a cube-based corridor [@fei2019ral] may even fail to cover the whole teaching trajectory, leaving no feasible solution inside the corridor. Therefore, to utilize the free space more fully and adapt to even extremely cluttered maps, we propose a method to generate general, free, large convex polyhedrons.
Since the human's teaching trajectory may be arbitrarily jerky, we cannot assume there is a piecewise linear path to initiate the polyhedron generation, as in [@liu2017ral]. We also make no requirements on the convexity of obstacles in the map, unlike [@deits2015computing]. Our method is based on convex set clustering, similar to [@blochliger2018topomap], but differs from and improves on it as follows:
1. We make no assumption on the growing directions of convex clusters and generate completely collision-free polyhedrons based on our dense occupancy map.
2. We introduce several careful engineering considerations which significantly speed up the clustering.
3. We fully utilize the parallel structure of this algorithm and accelerate it by over an order of magnitude on GPUs.
4. We introduce a complete pipeline from building the convex polyhedron clusters to establishing constraints in trajectory optimization.
Convex Cluster Inflation {#subsec:convex_polyhedron_inflation}
------------------------
The core algorithm for the construction of the flight corridor finds the largest convex free polyhedron around a given coordinate. In this paper, we use an occupancy grid map to represent the environment. Each polyhedron in the flight corridor is the convex hull of a voxel set, which is convex and contains only free voxels. The voxel set is found by clustering as many free voxels as possible around an arbitrary seed voxel. In this paper, we name this voxel set a *convex cluster*, and the process of finding such a set *convex cluster inflation*. Our method for finding a *convex cluster* is based on the definition of a convex set:
*Definition*: A set $\mathcal{S}$ in a vector space over $\mathbb{R}$ is called a convex set if the line segment joining any pair of points of $\mathcal{S}$ lies entirely in $\mathcal{S}$ [@lay2007convex].
The pipeline for iteratively inflating such a cluster while preserving convexity is stated in Alg. \[alg:polyhedron\_inflation\]. Our method operates on a 3D occupancy map $\mathcal{M}$ where voxels are labeled as *obstacle* or *free*. Three voxel sets are maintained in the algorithm: $\mathcal{C}$ is the target convex voxel cluster; $\mathcal{C}^+$ is the set of voxels that are tentatively added to $\mathcal{C}$ in the current iteration; and $\mathcal{C}^*$ contains the newly added voxels which preserve convexity. The cluster inflation starts by adding the seed voxel $p$ to $\mathcal{C}$ and adding all neighboring voxels of $p$ to $\mathcal{C}^+$. In each iteration, every voxel $p^+$ in $\mathcal{C}^+$ is checked for whether it preserves convexity using the function CHECK\_CONVEXITY($p^+, \mathcal{C}$, $\mathcal{M}$). This function, shown in Alg. \[alg:convex\_check\], casts rays from $p^+$ to each existing voxel in $\mathcal{C}$. According to the definition of the convex set, $\mathcal{C}$ together with $p^+$ remains convex if and only if all rays are collision-free. Based on this criterion, qualified voxels are considered *active* voxels and are added into $\mathcal{C}$ and $\mathcal{C}^*$. The neighboring voxels of all *active* voxels $p^*$ are then collected by the function GET\_NEIGHBORS($\mathcal{C}^*$) for the next iteration. The inflation ends when $\mathcal{C}^+$ becomes empty, which implies no additional voxels can be added into $\mathcal{C}$. Fig. \[fig:cluster\_ray\_cast\] illustrates the procedure of the *convex cluster inflation*.
**Alg. \[alg:polyhedron\_inflation\] (Convex Cluster Inflation)** — **Notation**: Seed Voxel: $p^s$, Grid Map: $\mathcal{M}$, Convex Cluster: $\mathcal{C}$, Candidate Voxel Set: $\mathcal{C}^+$, Active Voxel Set: $\mathcal{C}^*$. **Input**: $p^s$, $\mathcal{M}$.

1. $\mathcal{C} \leftarrow \{p^s\}$; $\mathcal{C}^* \leftarrow \varnothing$; $\mathcal{C}^+ \leftarrow$ GET\_NEIGHBORS($\mathcal{C}$)
2. **while** $\mathcal{C}^+ \neq \varnothing$: **for each** $p^+ \in \mathcal{C}^+$, **if** CHECK\_CONVEXITY($p^+, \mathcal{C}$, $\mathcal{M}$) **then** $\mathcal{C} \leftarrow \mathcal{C} \cup \{p^+\}$ and $\mathcal{C}^* \leftarrow \mathcal{C}^* \cup \{p^+\}$
3. $\mathcal{C}^+ \leftarrow$ GET\_NEIGHBORS($\mathcal{C}^*$); $\mathcal{C}^* \leftarrow \varnothing$; repeat from step 2
4. **Output**: $\mathcal{C}$

**Alg. \[alg:convex\_check\] (CHECK\_CONVEXITY($p^+, \mathcal{C}$, $\mathcal{M}$))** — **Notation**: Ray Cast: $l$. **For each** $p \in \mathcal{C}$: $l \leftarrow$ CAST\_RAY($p^+$, $p$); **if** $l$ hits an *obstacle* voxel in $\mathcal{M}$, **return** False. Otherwise **return** True.
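The inflation and convexity-check routines above can be sketched in Python on a 2-D grid (an illustrative reduction: the paper works on a 3-D occupancy map, and the sampled-segment ray cast here stands in for a proper voxel traversal):

```python
from itertools import product

def cast_ray_free(a, b, grid):
    """Sample the segment a -> b; return False if any sampled cell is occupied."""
    (x0, y0), (x1, y1) = a, b
    steps = 4 * max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        if grid[y][x] == 1:  # 1 = obstacle cell
            return False
    return True

def free_neighbors(p, grid):
    """8-connected free neighbors (26-connected in the 3-D version)."""
    x, y = p
    out = []
    for dx, dy in product((-1, 0, 1), repeat=2):
        if dx == 0 and dy == 0:
            continue
        nx, ny = x + dx, y + dy
        if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
            out.append((nx, ny))
    return out

def convex_cluster_inflation(seed, grid):
    """Grow a convex set of free cells around `seed` (Alg. 1 in miniature)."""
    cluster = {seed}
    candidates = set(free_neighbors(seed, grid))
    while candidates:
        active = set()
        for p in sorted(candidates):  # deterministic discovery order
            # CHECK_CONVEXITY: every ray to the current cluster must be free
            if all(cast_ray_free(p, q, grid) for q in cluster):
                cluster.add(p)
                active.add(p)
        candidates = set()
        for p in active:
            candidates |= {q for q in free_neighbors(p, grid) if q not in cluster}
    return cluster
```

On an all-free grid the cluster grows to cover every cell; a wall of obstacles splits the grid and the cluster stops at it, since rays to the far side would collide.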
Having obtained a *convex cluster* consisting of a number of voxels, we convert it to the algebraic representation of a polyhedron for the subsequent spatial trajectory optimization. The quickhull algorithm [@barber1996quickhull] is adopted here to quickly find the convex hull of all clustered voxels. The convex hull is in vertex representation (V-representation), $\{V_0, V_1, ..., V_m \}$, and is then converted to its equivalent hyperplane representation (H-representation) using the double description method [@fukuda1995double]. The H-representation of a 3-D polyhedron is a set of affine functions: $$\begin{aligned}
\label{eq:h_representation}
&a_0^x \cdot \textbf{x} + a_0^y \cdot \textbf{y} + a_0^z \cdot \textbf{z} \leq k_0, \nonumber \\
&a_1^x \cdot \textbf{x} + a_1^y \cdot \textbf{y} + a_1^z \cdot \textbf{z} \leq k_1, \nonumber \\
&\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\: \vdots \\
&a_n^x \cdot \textbf{x} + a_n^y \cdot \textbf{y} + a_n^z \cdot \textbf{z} \leq k_n, \nonumber\end{aligned}$$ where $\{a_n^x, a_n^y, a_n^z \}$ is the normal vector of the 3-D hyperplane and $k_n$ is a constant.
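For prototyping, the same V-to-H conversion can be obtained from Qhull via SciPy (a sketch, not the authors' pipeline, which uses quickhull plus the double description method): each row of `ConvexHull.equations` stores a facet as $[a, b]$ with $a \cdot x + b \leq 0$ for points inside the hull.

```python
import numpy as np
from scipy.spatial import ConvexHull

def h_representation(points):
    """V-representation -> H-representation: returns (A, k) with A @ x <= k
    for every point x inside the hull of `points`."""
    hull = ConvexHull(points)
    # each row of hull.equations is [a_x, a_y, a_z, b] with a . x + b <= 0 inside
    A = hull.equations[:, :-1]
    k = -hull.equations[:, -1]
    return A, k

def inside(A, k, x, tol=1e-9):
    """Membership test, as applied to Bezier control points for safety."""
    return bool(np.all(A @ np.asarray(x) <= k + tol))
```

For the unit cube, the center satisfies all facet inequalities while any exterior point violates at least one.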
CPU Acceleration {#subsec:cpu_acceleration}
----------------
As shown in Alg. \[alg:polyhedron\_inflation\], determining whether a voxel preserves convexity requires ray-casting to all existing voxels in the *convex cluster*. Iterating over all voxels and rays makes this algorithm impossible to run in real time, especially when the occupancy grid map has a fine resolution. To generate the polyhedrons in real time, we take careful engineering considerations in the implementation and propose several critical techniques that significantly increase the overall efficiency.
### Polyhedron Initialization
We initialize each convex cluster as an axis-aligned cube using our previous method [@fei2018icra], which can be done very fast since only index query ($\mathcal{O}$(1)) operations are necessary. After inflating the cube to its maximum volume, as in Fig. \[fig:cube\_polygon\_compare\], we switch to the convex clustering to further group convex free space around the cube.
The proposed polyhedron initialization may result in a final polyhedron different from one clustered from scratch. This is because an axis-aligned cube only inflates in the $x, y, z$ directions, while a *convex cluster* grows in all possible directions (26-connectivity in a 3D grid map). However, this initialization process is reasonable. Our purpose is not to make each polyhedron optimal, but to capture as much free space as possible beyond what a simple cube can. In practice, the initialization provides a fast discovery of nearby space which is easy to group, and does not prevent the subsequent *convex cluster inflation* from refining the polyhedron and finding sizeable free space. In Sect. \[subsubsec:compare\_corridor\], we show that the initialization process significantly improves the computing efficiency with only a negligible sacrifice in the volume of the final polyhedron.
### Early Termination
We label voxels inside the *convex cluster* as *inner* voxels and voxels on the boundary of the *convex cluster* as *outer* voxels. When traversing a ray from a candidate voxel to a voxel in the *convex cluster*, we terminate the ray casting early once it arrives at a voxel labeled *inner*.
*Theorem 1*: \[Theo: early\_termination\] The early termination at *inner* voxels is sufficient for checking convexity.
*Proof*: According to the definition of a convex set, a ray connecting an *inner* voxel to any other voxel in the *convex cluster* lies entirely in the *convex cluster*. Hence, once a ray reaches an *inner* voxel, the remainder of the ray must lie inside the *convex cluster*, and it therefore must pass the convexity check.
### Voxel Selection
To further reduce the number of voxels to which rays need to be cast, given a candidate voxel, only *outer* voxels are used to check its convexity.
*Theorem 2*: \[Theo: voxel\_selection\] Using *outer* voxels of a *convex cluster* is sufficient for checking convexity.
*Proof*: The *convex cluster* is a closed set with *outer* voxels at its boundary, and the candidate voxel is outside this set. Therefore, a ray cast from any *inner* voxel to the candidate voxel must pass through one of the *outer* voxels. According to *Theorem 1*, the convexity check of such a ray can terminate once the ray passes an *outer* voxel, which means that for a candidate voxel, checking the rays cast to *outer* voxels is sufficient.
With the above techniques, the proposed *convex cluster inflation* runs in real time on CPUs for a moderate grid resolution ($0.2m$). The efficacy of these techniques is numerically validated in Sect. \[subsubsec:compare\_corridor\].
GPU Acceleration {#subsec:gpu_acceleration}
----------------
We propose a parallel computing scheme that speeds up the inflation by one order of magnitude when a GPU is available. As shown in Sect. \[subsec:convex\_polyhedron\_inflation\], when the *convex cluster* discovers a new neighboring voxel, traversing and checking all rays is naturally parallelizable. With the help of many-core GPUs, we can cast rays and check collisions in parallel. Moreover, to fully utilize the massively parallel capability of a GPU, reduce serialized operations, and minimize the data transfer between CPU and GPU, we examine all potential voxels of the cluster in parallel in one iteration. Instead of discovering one new voxel and checking its rays, we find all neighbors of the active set $\mathcal{C}^*$ and check their rays all in parallel. The detailed procedure is presented in Alg. \[alg:parallel\_polyhedron\_inflation\], where GET\_NEIGHBORS($\mathcal{C}$) collects all neighbors of a set of voxels, and PARA\_CHECK\_CONVEXITY($\mathcal{C}^+, \mathcal{C}$, $\mathcal{M}$) checks the convexity of all candidate voxels in parallel on the GPU. Note that in the serialized version of the proposed method, a voxel discovered earlier may prevent later ones from being clustered, as illustrated in Fig. \[fig:cluster\_ray\_cast\]. In the parallel clustering, however, all voxels are examined at the same time and may add conflicting voxels to the cluster. Therefore, we introduce an additional variable $r$ to record the sequential information of the voxels. As shown in Alg. \[alg:parallel\_convex\_check\], the kernel function runs on the GPU per block. It checks the ray cast from every candidate voxel in $\mathcal{C}^+$ to each cluster voxel in $\mathcal{C}$ and to each other candidate voxel with a prior index. After that, the function CHECK\_RESULTS($r$) selects all qualified voxels and adds them into $\mathcal{C}$. First, candidate voxels that have collisions with $\mathcal{C}$ are directly excluded.
Then, candidate voxels having collisions with other candidates that have already been added into $\mathcal{C}$ are excluded. In this way, we finally get the same results as in the serialized version of the clustering. The efficacy of the parallel computing is shown in Sect. \[subsubsec:compare\_corridor\].
**Alg. \[alg:parallel\_polyhedron\_inflation\] (Parallel Convex Cluster Inflation)** — **Notation**: Parallel Raycasting Result: $r$. **Input**: $p^s$, $\mathcal{M}$.

1. $\mathcal{C} \leftarrow \{ p^s \}$; $\mathcal{C}^* \leftarrow \varnothing$; $\mathcal{C}^+ \leftarrow$ GET\_NEIGHBORS($\mathcal{C}$)
2. **while** $\mathcal{C}^+ \neq \varnothing$: upload data to the GPU; $r \leftarrow$ PARA\_CHECK\_CONVEXITY($\mathcal{C}^+, \mathcal{C}$, $\mathcal{M}$); download results from the GPU
3. $\mathcal{C}^* \leftarrow$ CHECK\_RESULTS($r$); $\mathcal{C} \leftarrow \mathcal{C} \cup \mathcal{C}^*$; $\mathcal{C}^+ \leftarrow$ GET\_NEIGHBORS($\mathcal{C}^*$); repeat from step 2
4. **Output**: $\mathcal{C}$

**Alg. \[alg:parallel\_convex\_check\] (PARA\_CHECK\_CONVEXITY($\mathcal{C}^+, \mathcal{C}$, $\mathcal{M}$), kernel function, one block per candidate $p^+_i$)**: initialize $r[i].status \leftarrow True$. For each $p \in \mathcal{C}$, cast $l \leftarrow$ CAST\_RAY($p^+_i$, $p$); if $l$ has a collision, set $r[i].status \leftarrow False$. For each candidate $p^+_j$ with $j < i$, cast $l \leftarrow$ CAST\_RAY($p^+_i$, $p^+_j$); if $l$ has a collision, set $r[i].status \leftarrow Pending$ and $r[i].list.push\_back(j)$. Return $r$.

**CHECK\_RESULTS($r$)**: traverse the candidates in index order. If $r[i].status = True$, add $p^+_i$ to $\mathcal{C}^*$; if $r[i].status = Pending$ and no candidate $p^+_j$ with $j \in r[i].list$ has been added to $\mathcal{C}^*$, also add $p^+_i$ to $\mathcal{C}^*$. Return $\mathcal{C}^*$.
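The conflict-resolution logic can be sketched serially in Python; the loop marked as conceptually parallel is what runs as the GPU kernel, and `ray_free` is a hypothetical collision-check callback:

```python
def parallel_round(candidates, cluster, ray_free):
    """One clustering iteration of the parallel scheme: every candidate is
    checked against the cluster and against lower-indexed candidates; the
    results are then resolved in index order, so the accepted set matches
    the serialized algorithm."""
    candidates = sorted(candidates)
    status, blockers = {}, {}
    for i, p in enumerate(candidates):       # conceptually a GPU kernel per block
        if not all(ray_free(p, q) for q in cluster):
            status[p] = "fail"               # collision with the cluster: excluded
            continue
        bl = [candidates[j] for j in range(i) if not ray_free(p, candidates[j])]
        status[p] = "pending" if bl else "ok"
        blockers[p] = bl
    accepted = set()                          # CHECK_RESULTS, serialized
    for p in candidates:
        if status[p] == "ok":
            accepted.add(p)
        elif status[p] == "pending" and not any(b in accepted for b in blockers[p]):
            accepted.add(p)
    return accepted
```

A pending candidate is excluded exactly when one of its lower-indexed blockers was itself accepted, reproducing the earlier-voxel-wins behavior of the serialized clustering.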
Corridor Generation and Loop Elimination {#subsec:loop_elimination}
----------------------------------------
Since the trajectory provided by a user may be arbitrarily jerky and contain local loops, we introduce a specially designed mechanism to eliminate unnecessary loops, i.e., repeated polyhedrons. Excluding repeated polyhedrons is essential, since in the following trajectory optimization (Sect. \[sec:trajectory\_optimization\]) each polyhedron is assigned one piece of the trajectory; repeated polyhedrons would make the optimized trajectory loop as the user did, which is obviously inefficient. The pipeline of the corridor generation is shown in Alg. \[alg:local\_loop\] and Fig. \[fig:corridor\_pipeline\]. At the beginning of the teaching, the flight corridor is initialized by finding the maximum polyhedron around the position of the drone. Then, as the human pilots the drone, we keep checking the drone's position. If it goes outside the last polyhedron ($\mathcal{G}$\[-1\]), we further check whether the drone has discovered new free space. If the drone is contained within the second-to-last polyhedron ($\mathcal{G}$\[-2\]), the teaching trajectory has a loop, as shown in Fig. \[fig:corridor3\]; the last polyhedron in the corridor is then regarded as repeated and is popped from the corridor. Otherwise, as shown in Fig. \[fig:corridor4\], the drone has been piloted into new space, and a new polyhedron $\mathcal{P}$ is inflated and added to the tail of the corridor. The corridor generation terminates when the teaching finishes. The final flight corridor shares the same topological structure as the teaching trajectory, since no obstacles are included in the corridor, and it has no unnecessary loops.
**Alg. \[alg:local\_loop\] (Corridor Generation with Loop Elimination)** — **Notation**: Flight Corridor $\mathcal{G}$, Drone Position $p$, Convex Polyhedron $\mathcal{P}$.

1. Initialize: $\mathcal{P} \leftarrow$ CONVEX\_INFLATION($p, \mathcal{M}$); $\mathcal{G}$.push\_back($\mathcal{P}$)
2. **while** the teaching is not finished: $p \leftarrow$ UPDATE\_POSE()
3. **if** $p$ is outside $\mathcal{G}$\[-1\] **and** inside $\mathcal{G}$\[-2\]: $\mathcal{G}$.pop\_back()
4. **else if** $p$ is outside $\mathcal{G}$\[-1\]: $\mathcal{P} \leftarrow$ CONVEX\_INFLATION($p, \mathcal{M}$); $\mathcal{G}$.push\_back($\mathcal{P}$)
5. **Output**: $\mathcal{G}$
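One update step of this corridor maintenance can be sketched as follows, with hypothetical `contains` and `inflate` callbacks standing in for polyhedron membership testing and convex cluster inflation:

```python
def update_corridor(corridor, pose, contains, inflate):
    """Append a new polyhedron when new free space is discovered, or pop the
    last one when the pilot loops back into the second-to-last polyhedron."""
    if contains(corridor[-1], pose):
        return corridor                       # still inside the last polyhedron
    if len(corridor) >= 2 and contains(corridor[-2], pose):
        corridor.pop()                        # local loop: drop the repeated one
    else:
        corridor.append(inflate(pose))        # new free space discovered
    return corridor
```

A 1-D toy with intervals as stand-in polyhedrons: leaving the last interval forward appends a new one, while re-entering the second-to-last pops the tail.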
Spatial-Temporal Global Trajectory Optimization {#sec:trajectory_optimization}
===============================================
Spatial Trajectory Optimization {#subsec:space_optimization}
-------------------------------
For the spatial optimization, we use the Bernstein basis to represent the trajectory as a piecewise Bézier curve, since the curve can easily be constrained within the flight corridor by enforcing constraints on its control points. The $i^{th}$ Bernstein basis polynomial of degree $n$ is: $$b_{n}^{i}(t) = \binom{n}{i} \cdot t^i \cdot (1-t)^{n-i},$$ where $\binom{n}{i}$ is the binomial coefficient and $t$ is the variable parameterizing the trajectory. An $N$-piece piecewise Bézier curve is written as: $$\label{eq:spatial_curve_n_piece}
\textit{f}_{\mu}\textit{(t)} =
\begin{cases}
\sum_{i=0}^n c_{\mu, 1}^ib_{n}^{i}(t / T_1), & t\in[0, T_1], \\
\sum_{i=0}^n c_{\mu, 2}^ib_{n}^{i}(t / T_2), & t\in[0, T_2], \\
\:\:\: \:\:\:\:\:\:\:\:\:\vdots &\:\:\:\:\:\:\:\:\:\vdots \\
\sum_{i=0}^n c_{\mu, N}^ib_{n}^{i}(t / T_N), &t\in[0, T_N].
\end{cases}$$ For the $m^{th}$ piece of the curve, $c_{\mu, m}^i$ is the $i^{th}$ control point and $T_m$ is the time duration. The spatial trajectory is generated in the $x, y, z$ dimensions, with $\mu \in \{x, y, z\}$; $\mu$ is omitted in the following derivation for brevity. In this equation, $t$ is scaled by $T_m$ since a standard Bézier curve is defined on $[0,1]$.
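A minimal evaluation of the basis and of one curve piece (a sketch; in the paper the control points are decision variables of the QP, here they are arbitrary numbers):

```python
from math import comb

def bernstein(n, i, s):
    """b_n^i(s) = C(n, i) * s^i * (1 - s)^(n - i), defined on s in [0, 1]."""
    return comb(n, i) * s ** i * (1 - s) ** (n - i)

def bezier_piece(ctrl, t, T=1.0):
    """One piece of the piecewise curve: t in [0, T] is scaled by the
    segment duration T, as in the piecewise definition above."""
    n = len(ctrl) - 1
    s = t / T
    return sum(c * bernstein(n, i, s) for i, c in enumerate(ctrl))
```

Two properties used later follow directly: the basis sums to one (partition of unity, which underlies the convex hull property), and the curve interpolates its first and last control points, which is what makes boundary constraints simple equalities on control points.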
Following the minimum-snap formulation [@MelKum1105], the squared jerk is minimized in this paper. Since the $3^{rd}$-order derivative of a curve corresponds to the angular velocity, minimizing jerk alleviates rotation and therefore facilitates visual tracking. The objective of the piecewise curve is: $$J = \sum_{\mu}^{x,y,z} \sum_{m=1}^N \int_{0}^{T_m} \left(\frac{d^3f_{\mu, m}(t)}{dt^3}\right)^2\, dt,$$ which is in a quadratic form denoted as $\mathbf{c}^T \mathbf{Q} \mathbf{c}$. Here $\mathbf{c}$ is composed of all control points in the *x, y, z* dimensions, and $\mathbf{Q}$ is a semi-definite Hessian matrix.
For a Bézier curve, its higher order derivatives can be represented by linear combinations of corresponding lower-order control points. For the $1^{st}$ and $2^{nd}$ order derivatives of the $m^{th}$ piece of the curve in Eq. \[eq:spatial\_curve\_n\_piece\], we have: $$\begin{aligned}
\label{eq:bezier_v_m}
& f^\prime_m(t) = \sum_{i=0}^{n-1} n (c_m^{i+1} - c_m^{i}) b_{n-1}^{i}(\frac{t}{T_m}), \\
& f^{\prime \prime}_m(t) = \sum_{i=0}^{n-2} n (n - 1) (c_m^{i+2} - 2 c_m^{i+1} + c_m^{i}) b_{n-2}^{i}(\frac{t}{T_m}). \nonumber\end{aligned}$$
### Boundary Constraints
The trajectory has the boundary constraints on the initial state ($p^0, v^0, a^0$) and the final state ($p^f, v^f, a^f$) of the quadrotor. Since a Bézier curve always passes the first and last control points, we enforce the boundary constraints by directly setting equality constraints on corresponding control points in each dimension: $$\begin{aligned}
\label{eq:boundary_constraints}
&c_0^{0} = p^0, \nonumber \\
&c_N^{n} = p^f, \nonumber \\
&n (c_0^{1} - c_0^{0}) = v^0, \\
&n (c_N^{n} - c_N^{n-1}) = v^f, \nonumber \\
&n (n - 1) (c_0^{2} - 2 c_0^{1} + c_0^{0}) = a^0, \nonumber \\
&n (n - 1) (c_N^{n} - 2 c_N^{n-1} + c_N^{n-2}) = a^f. \nonumber \end{aligned}$$
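These endpoint identities can be sanity-checked numerically for a single piece (a sketch with a unit-duration piece and arbitrary control points):

```python
from math import comb

def bezier(ctrl, t):
    """A single Bezier piece on [0, 1] with scalar control points."""
    n = len(ctrl) - 1
    return sum(comb(n, i) * t ** i * (1 - t) ** (n - i) * c
               for i, c in enumerate(ctrl))

ctrl = [0.0, 2.0, 1.0, 4.0]     # arbitrary example control points
n = len(ctrl) - 1

# boundary formulas: f'(0) = n (c1 - c0), f'(1) = n (cn - c(n-1)),
# f''(0) = n (n - 1) (c2 - 2 c1 + c0)
v0 = n * (ctrl[1] - ctrl[0])
vf = n * (ctrl[-1] - ctrl[-2])
a0 = n * (n - 1) * (ctrl[2] - 2 * ctrl[1] + ctrl[0])

# finite-difference checks
h = 1e-5
num_v0 = (bezier(ctrl, h) - bezier(ctrl, -h)) / (2 * h)
num_vf = (bezier(ctrl, 1 + h) - bezier(ctrl, 1 - h)) / (2 * h)
num_a0 = (bezier(ctrl, h) - 2 * bezier(ctrl, 0) + bezier(ctrl, -h)) / h ** 2
```

The closed-form endpoint derivatives agree with the finite differences, confirming that fixing the first and last few control points pins down position, velocity, and acceleration at the trajectory boundaries.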
### Continuity Constraints
For ensuring smoothness, the minimum-jerk trajectory must be continuous for derivatives up to $2^{nd}$-order at all connecting points on the piecewise trajectory. The continuity constraints are enforced by setting equality constraints between corresponding control points of two consecutive curves. For the $j^{th}$ and $(j+1)^{th}$ pieces of the curve, we can write the equation in each dimension as: $$\begin{aligned}
\label{eq:continunous_constraints}
& c_j^{n} = c_{j+1}^{0}, \nonumber \\
& (c_j^{n} - c_j^{n-1}) / T_j = (c_{j+1}^{1} - c_{j+1}^{0}) / T_{j+1}, \\
& (c_j^{n} - 2 c_j^{n-1} + c_j^{n-2}) / T^{2}_j = (c_{j+1}^{2} - 2 c_{j+1}^{1} + c_{j+1}^{0}) / T^{2}_{j+1}, \nonumber \end{aligned}$$
### Safety Constraints
The safety of the trajectory is guaranteed by enforcing each piece of the curve to be inside the corresponding polyhedron. Thanks to the convex hull property, an entire Bézier curve is confined within the convex hull formed by all its control points. Therefore we constrain control points using hyperplane functions obtained in Eq. \[eq:h\_representation\]. For the $i^{th}$ control point $c_{j, x}^{i}, c_{j, y}^{i}, c_{j, z}^{i}$ of the $j^{th}$ piece of the trajectory in $x, y, z$ dimensions, constraints are: $$\begin{aligned}
\label{eq:safety_constraints}
&a_0^x \cdot c_{j, x}^{i} + a_0^y \cdot c_{j, y}^{i} + a_0^z \cdot c_{j, z}^{i} \leq k_0, \nonumber \\
&a_1^x \cdot c_{j, x}^{i} + a_1^y \cdot c_{j, y}^{i} + a_1^z \cdot c_{j, z}^{i} \leq k_1, \nonumber \\
&\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\: \vdots \\
&a_n^x \cdot c_{j, x}^{i} + a_n^y \cdot c_{j, y}^{i} + a_n^z \cdot c_{j, z}^{i} \leq k_n, \nonumber\end{aligned}$$
Constraints in Eqs. \[eq:boundary\_constraints\] and \[eq:continunous\_constraints\] are affine equality constraints (*$\mathbf{A}_{eq}\mathbf{c} = \mathbf{b}_{eq}$*), and Eq. \[eq:safety\_constraints\] is an affine inequality (*$\mathbf{A}_{ie}\mathbf{c} \leq \mathbf{b}_{ie}$*). Finally, the spatial trajectory optimization problem is formulated as the following QP: $$\begin{aligned}
\label{eq:spatial_qp_program}
\text{min} \:\:\:\:\:\: &\mathbf{c}^T \mathbf{Q} \mathbf{c} \nonumber \\
\text{s.t.} \:\:\:\:\:\: &\mathbf{A}_{eq} \mathbf{c} = \mathbf{b}_{eq}, \\
&\mathbf{A}_{ie}\mathbf{c} \leq \mathbf{b}_{ie}. \nonumber\end{aligned}$$ Unlike our previous works on corridor-constrained trajectory generation [@fei2018icra; @fei2018jfr], here the kinodynamic feasibility (velocity and acceleration) is guaranteed not by adding higher-order constraints to this program, but by the temporal optimization (Sect. \[subsec:time\_optimization\]). For a rest-to-rest trajectory, the program in Eq. \[eq:spatial\_qp\_program\] is always mathematically feasible.
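Structurally, Eq. \[eq:spatial\_qp\_program\] is a QP. As a tiny self-contained illustration of such a program (not the solver used in the paper, and restricted to equality constraints), the KKT system can be solved directly:

```python
import numpy as np

def solve_eq_qp(Q, A, b):
    """Solve min x^T Q x  s.t.  A x = b  via the KKT linear system
    [2Q  A^T; A  0] [x; lam] = [0; b]."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[2 * Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    return np.linalg.solve(K, rhs)[:n]
```

With $Q = I$ and the constraint $x_1 + x_2 = 2$, the minimizer is $(1, 1)$. Handling the inequality (safety) constraints additionally requires a full QP solver with active-set or interior-point machinery.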
Temporal Trajectory Optimization {#subsec:time_optimization}
--------------------------------
![The effect of the temporal optimization. $t$ and $\tau$ are the time profile of the spatial trajectory before and after optimization. \[fig:t\_tau\_timeline\]](t_tau_timeline){width="0.99\columnwidth"}
In the spatial optimization, a corridor-constrained spatial trajectory is generated given a fixed time allocation. To optimize the trajectory temporally, we design a re-timing function $\{t(\tau): \tau \rightarrow t \}$ that maps a new variable $\tau$ to the original time variable $t$. The relation between $\tau$ and $t$ is shown in Fig. \[fig:t\_tau\_timeline\]. In this paper, the re-timing function $t(\tau)$ is named the temporal trajectory, and finding the optimal $t(\tau)$ is called the temporal optimization. For the $N$-piece spatial curve defined in Eq. \[eq:spatial\_curve\_n\_piece\], we write $t(\tau)$ as a corresponding $N$-piece formulation: $$\label{eq:piecewise_tau_function}
t(\tau) =
\begin{cases}
t_1(\tau), &t_1(0) = 0, t_1(\mathcal{T}^*_1) = T_1, t_1 \in [0, T_1] \\
t_2(\tau), &t_2(0) = 0, t_2(\mathcal{T}^*_2) = T_2, t_2 \in [0, T_2] \\
\:\:\: \vdots & \:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\vdots \\
t_N(\tau), &t_N(0) = 0, t_N(\mathcal{T}^*_N) = T_N, t_N \in [0, T_N]
\end{cases}$$ where $T_1, T_2, ... T_N$ are original time durations of the spatial curve $f_{\mu}(t)$, and $\mathcal{T}^*_1, \mathcal{T}^*_2, ... \mathcal{T}^*_N$ are time durations after temporal optimization. Since physically time only increases, $t(\tau)$ is a monotonically increasing function. Therefore we have $\dot{t}(\tau) \geq 0$. For clarity, in what follows, we use $c^\prime = dc/dt$ to denote taking derivatives with respect to $t$, and $\dot{c} = dc/d\tau$ for taking derivatives with respect to $\tau$. By substituting $t$ with $t(\tau)$ in $f_{\mu}(t)$ and taking derivatives with chain rule, we can write the velocity as: $$\label{eq:velocity}
\dot{f}(t(\tau)) = f^\prime(t) \cdot \dot{t},$$ and the acceleration as: $$\ddot{f}(t(\tau)) = f^\prime(t) \cdot \ddot{t} + f^{\prime\prime}(t) \cdot \dot{t}^2.$$ The velocity and acceleration are also piecewise functions.
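These chain-rule expressions can be verified numerically with a concrete monotone retiming (an arbitrary toy $t(\tau)$, not an optimized one):

```python
# f(t): 1-D spatial curve; t(tau): monotone retiming. Verify
#   df/dtau   = f'(t) * tdot
#   d2f/dtau2 = f'(t) * tddot + f''(t) * tdot^2
f     = lambda t: t ** 3 + 2.0 * t
fp    = lambda t: 3.0 * t ** 2 + 2.0
fpp   = lambda t: 6.0 * t
t_of  = lambda tau: 0.5 * tau ** 2 + tau   # tdot = tau + 1 >= 0 for tau >= 0
tdot  = lambda tau: tau + 1.0
tddot = lambda tau: 1.0

tau, h = 0.7, 1e-4
g = lambda s: f(t_of(s))                    # the retimed trajectory f(t(tau))
num_v = (g(tau + h) - g(tau - h)) / (2 * h)
num_a = (g(tau + h) - 2 * g(tau) + g(tau - h)) / h ** 2
ana_v = fp(t_of(tau)) * tdot(tau)
ana_a = fp(t_of(tau)) * tddot(tau) + fpp(t_of(tau)) * tdot(tau) ** 2
```

The finite-difference derivatives of the retimed curve match the analytic chain-rule expressions, which is exactly what the feasibility constraints below rely on.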
Minimum-Time Formulation {#subsec:minimum_time_form}
------------------------
### Objective {#subsubsec:objective}
The total time $\mathcal{T}$ of the temporal trajectory can be written as: $$\label{eq:objective_raw}
\mathcal{T} = \int_{0}^{\mathcal{T}} 1 d\tau = \sum_{m = 1}^{N} \int_{0}^{T_m} \frac{1}{\dot{t_m}} dt,$$ considering $\dot{t} = dt/d\tau$. We can introduce a regularization term that penalizes the changing rate of $t$, to trade off between the minimization of time and control extremeness, or so-called motion aggressiveness, in our final temporal trajectory. The objective function is then written as: $$\label{eq:objective}
\mathcal{J} = \sum_{m = 1}^{N} \int_{0}^{T_m} \Big( \frac{1}{\dot{t_m}} + \rho \cdot \ddot{t_m}^2 \Big) dt,$$ where $\rho$ weights the aggressiveness. By setting a larger $\rho$ we obtain gentler motions in the temporal trajectory; if $\rho = 0$, the temporal optimization generates motions as fast as possible. Motions generated with a large $\rho$ can be viewed in our previous work [@fei2019ral].
Following the direct transcription method in [@verscheure2009time], $\alpha(t)$ and $\beta(t)$ are introduced as two additional piecewise functions: $$\begin{aligned}
\label{eq:a_b_def}
\alpha_m(t) = \ddot{t}_m, \:\:\: \beta_m(t) = \dot{t}^2_m, \:\:\:\:\: m = 1,2,...,N.\end{aligned}$$ From the relationship between $\ddot{t}_m$ and $\dot{t}_m$, we have: $$\begin{aligned}
\beta_m(t) \geq 0, \:\:\: \beta_m^\prime(t) = 2\cdot \alpha_m(t). \label{eq:basic_con_2}\end{aligned}$$ Then the objective function in Eq. \[eq:objective\] is reformulated as: $$\label{eq:objective_ab}
\mathcal{J} = \sum_{m = 1}^{N} \int_{0}^{T_m} \Big( \frac{1}{\sqrt{\beta_m(t)}} + \rho \cdot \alpha_m(t)^2 \Big) dt.$$
### Constraints {#subsubsec:temporal_constraints}
Continuity of $t(\tau)$ is enforced by setting constraints between every two consecutive pieces. In each dimension $\mu \in \{x, y, z\}$, we have: $$\begin{aligned}
&f_{\mu, m}^\prime(T_m) \cdot \sqrt{\beta_m(T_m)} = f_{\mu, m+1}^\prime(0) \cdot \sqrt{\beta_{m+1}(0)},\\
&f_{\mu, m}^\prime(T_m) \cdot \alpha_m(T_m) + f_{\mu, m}^{\prime\prime}(T_m) \cdot \beta_m(T_m) \nonumber \\
= &f_{\mu, m+1}^\prime(0) \cdot \alpha_{m+1}(0) + f_{\mu, m+1}^{\prime\prime}(0) \cdot \beta_{m+1}(0).\end{aligned}$$ Then, to satisfy the initial and final velocities and accelerations $v_0, a_{0}, v_f, a_f$, we set boundary constraints: $$\begin{aligned}
& f_{\mu, 1}^\prime(0) \cdot \sqrt{\beta_1(0)} &= v_0, \\
& f_{\mu, N}^\prime(T_N) \cdot \sqrt{\beta_N(T_N)} &= v_f, \\
& f_{\mu, 1}^\prime(0) \cdot \alpha_1(0) + f_{\mu, 1}^{\prime\prime}(0) \cdot \beta_1(0) &= a_0, \\
& f_{\mu, N}^\prime(T_N) \cdot \alpha_N(T_N) + f_{\mu, N}^{\prime\prime}(T_N) \cdot \beta_N(T_N) &= a_f,\end{aligned}$$ Finally, kinodynamic feasibility constraints are set as: $$\begin{aligned}
& - v_{max} \leq f_{\mu, m}^\prime(t) \cdot \sqrt{\beta_m(t)} \leq v_{max}, \\
& - a_{max} \leq f_{\mu, m}^\prime(t) \cdot \alpha_m(t) + f_{\mu, m}^{\prime\prime}(t) \cdot \beta_m(t) \leq a_{max},\end{aligned}$$ where $v_{max}$ and $a_{max}$ are the physical limits of the drone.
### SOCP Re-formulation {#subsubsec:socp_reform}
The above optimization problem has a convex objective and convex constraints and is therefore a convex program. To make it easily solvable, for each piece of the trajectory, $t_m \in [0, T_m]$ is discretized into $t_m^0, t_m^1, ... t_m^{K_m}$ according to a given resolution $\delta t$, with $K_m = \lceil T_m / \delta t\rceil + 1$. Then $\alpha_m(t)$ is treated as piecewise constant between discretization points and, according to Eq. \[eq:basic\_con\_2\], $\beta_m(t)$ is piecewise linear. In this way, $\alpha_m(t)$ and $\beta_m(t)$ are modeled by a series of discrete variables $\alpha_m^k$ and $\beta_m^k$, where $\beta_m^k$ is evaluated at $t_m^k$ and $\alpha_m^k$ is evaluated at $(t_m^k + t_m^{k+1}) / 2$.
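A sketch of this discretization and of the resulting discrete time-plus-aggressiveness objective (uniform node spacing is an illustrative choice):

```python
from math import ceil, sqrt

def discretize_segment(T, dt):
    """Return (nodes, mids) for one segment: beta_m^k lives on the K_m + 1
    nodes, alpha_m^k on the K_m midpoints; K_m = ceil(T / dt) + 1."""
    K = ceil(T / dt) + 1
    nodes = [T * k / K for k in range(K + 1)]
    mids = [0.5 * (nodes[k] + nodes[k + 1]) for k in range(K)]
    return nodes, mids

def discrete_objective(alpha, beta, dt, rho):
    """sum_k ( 2 / (sqrt(beta_{k+1}) + sqrt(beta_k)) + rho * alpha_k^2 ) * dt
    for one segment: the discretized time + aggressiveness objective."""
    J = 0.0
    for k in range(len(alpha)):
        J += (2.0 / (sqrt(beta[k + 1]) + sqrt(beta[k])) + rho * alpha[k] ** 2) * dt
    return J
```

As a sanity check, choosing $\dot{t} = 1$ everywhere ($\beta \equiv 1$, $\alpha \equiv 0$) makes the objective recover the original elapsed time of the segment.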
By applying the above discretization, the objective in Eq. \[eq:objective\_ab\] is derived as: $$\begin{aligned}
\label{eq:objective_dis}
\mathcal{J} = \sum_{m = 1}^{N} \sum_{k = 0}^{K_i-1} \bigg( \frac{2}
{ \sqrt{\beta_{m}^{k+1}} + \sqrt{ \beta_m^{k} } } + \rho \cdot (\alpha_m^k)^2 \bigg) \cdot \delta t, \end{aligned}$$ which is mathematically equivalent to the affine formulation: $$\label{eq:objective_equ}
\sum_{m = 1}^{N} \sum_{k = 0}^{K_i-1} \Big( 2 \cdot \gamma_m^k + \rho \cdot (\alpha_i^k)^2 \Big) \cdot \delta t,$$ by introducing $\gamma_m^k$ and $$\label{eq:slack_equ_1}
\frac{1}{\sqrt{\beta_m^{k+1}} + \sqrt{ \beta_m^{k}}} \leq \gamma_m^k, \:\: k = 0,...K_i-1; m = 1, ... N$$ as slack variables and additional constraints.
Eq. \[eq:slack\_equ\_1\] is further derived to a quadratic form: $$\begin{aligned}
\frac{1}{\zeta_m^{k+1} + \zeta_m^k} &\leq \gamma_m^k, \:\:\:\:\: k = 0,...K_i-1; m = 1, ... N. \label{eq:slack_equ_2} \\
\zeta_m^k &\leq \sqrt{\beta_m^k}, \:\: k = 0,...K_i; \:\:\:\:\:\: m = 1, ... N. \label{eq:slack_equ_3}\end{aligned}$$ by introducing $\zeta_m^k$ as slack variables.
Eq. \[eq:slack\_equ\_2\] can be formulated as a standard rotated quadratic cone: $$\label{eq:objective_r_cone}
2 \cdot \gamma_m^k \cdot \Big( \zeta_m^{k+1} + \zeta_m^k \Big) \geq \sqrt{2}^2,$$ which is denoted as $$\label{eq:objective_r_cone_q}
(\gamma_m^k, \zeta_m^{k+1} + \zeta_m^k, \sqrt{2}) \in Q_r^3.$$ Also, Equ. \[eq:slack\_equ\_3\] can be written as a standard (non-rotated) quadratic cone: $$\label{eq:objective_cones}
\big( \beta_m^k + 1 \big) ^2 \geq \big( \beta_m^k - 1 \big) ^2 + \big( 2 \cdot \zeta_m^k \big)^2,$$ and is denoted as $$\label{eq:objective_cones_q}
(\beta_m^k + 1, \beta_m^k - 1, 2\zeta_m^k) \in Q^3.$$ Finally, a slack variable $s$ is introduced to transform the objective in Equ. \[eq:objective\_equ\] to an affine function: $$\label{eq:objective_equ_2}
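The two cone memberships above can be sanity-checked numerically. The following Python snippet (a standalone check of the algebra, not part of the solver) expands both cones and verifies that they encode exactly $1/(\zeta_m^{k+1}+\zeta_m^k) \le \gamma_m^k$ and $\zeta_m^k \le \sqrt{\beta_m^k}$:

```python
def in_rotated_cone(gamma, z_sum):
    # (gamma, z_sum, sqrt(2)) in Q_r^3  <=>  2 * gamma * z_sum >= (sqrt(2))^2 = 2,
    # with gamma, z_sum >= 0; this recovers 1 / z_sum <= gamma.
    return gamma >= 0.0 and z_sum >= 0.0 and 2.0 * gamma * z_sum >= 2.0

def in_quadratic_cone(beta, zeta):
    # (beta+1, beta-1, 2*zeta) in Q^3 expands to 4*beta >= 4*zeta^2,
    # i.e. zeta <= sqrt(beta) for beta >= 0.
    return beta + 1.0 >= 0.0 and (beta + 1.0) ** 2 >= (beta - 1.0) ** 2 + (2.0 * zeta) ** 2

assert in_quadratic_cone(beta=4.0, zeta=2.0)       # boundary: zeta == sqrt(beta)
assert not in_quadratic_cone(beta=4.0, zeta=2.1)   # zeta > sqrt(beta) is rejected
assert in_rotated_cone(gamma=0.5, z_sum=2.0)       # boundary: 1/2 == 0.5
assert not in_rotated_cone(gamma=0.4, z_sum=2.0)   # 1/2 > 0.4 is rejected
```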
\sum_{m = 1}^{N} \sum_{k = 0}^{K_i-1} (2 \cdot \gamma_m^k + \rho \cdot s) \cdot \delta t,$$ with a rotated quadratic cone: $$\label{eq:objective_cone}
2 \cdot s \cdot 1 \geq \sum_{m=1}^{N}\sum_{k=0}^{K_i-1}(\alpha_m^k)^2,$$ i.e. $$\label{eq:objective_cone_q_r}
(s, 1, \boldsymbol{\alpha}) \in Q_r^{2+\sum_{m=1}^{N}(K_i)},$$ where $\boldsymbol{\alpha}$ contains $\alpha_m^k$ in all pieces of the trajectory.
Also, the discretization is applied to $\alpha_m(t)$ and $\beta_m(t)$ in the constraints listed in Sect. \[subsubsec:temporal\_constraints\]; details are omitted for brevity. After that, we re-formulate these constraints as affine equality and inequality functions. Besides, although we assume $\alpha_m^k$ is piecewise constant, we bound the rate of change of $\alpha_m^k$ considering the response time of the actuators of our quadrotor. We also write this rate constraint in an affine form: $$\label{eq:a_d_constraints}
-\delta \alpha \leq (\alpha_m^k - \alpha_m^{k-1}) / \delta t \leq \delta \alpha,$$ where $\delta \alpha$ (not jerk) is a pre-defined bound on the rate of change of acceleration. Since the difference of $\tau$ between $t_m^k$ and $t_m^{k-1}$ cannot be determined during the optimization, we only bound the rate of change of $\alpha_m^k$ in the $t$ domain.
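As a small illustration (our own helper, not from the released code), the two-sided rate bound can be checked per discretization step as:

```python
def alpha_rate_ok(alpha_prev, alpha_curr, dt, d_alpha):
    # |alpha_m^k - alpha_m^{k-1}| / dt <= d_alpha: the two-sided bound on the
    # rate of change of acceleration in the t domain.
    return abs(alpha_curr - alpha_prev) / dt <= d_alpha

assert alpha_rate_ok(1.0, 1.2, dt=0.1, d_alpha=3.0)       # rate 2.0 <= 3.0
assert not alpha_rate_ok(1.0, 1.5, dt=0.1, d_alpha=3.0)   # rate 5.0 > 3.0
```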
The temporal optimization problem in Sect. \[subsec:minimum\_time\_form\] is formulated as a standard Second Order Cone Program (SOCP): $$\begin{aligned}
\label{eq:socp_final_form}
\text{min} \:\:\:\:\:\: & \mathbf{h}^T \boldsymbol{\gamma} + \rho \cdot s, \nonumber \\
\text{s.t.} \:\:\:\:\:\: & \mathbf{A}_{eq} \cdot \mathbf{x} = \mathbf{b}_{eq}, \nonumber \\
& \mathbf{A}_{ie} \cdot \mathbf{x} \leq \mathbf{b}_{ie}, \nonumber \\
&(s, 1, \boldsymbol{\alpha}) \in Q_r^{2+\sum_{m=1}^{N}(K_i)}, m = 1, ... N. \\
&(\gamma_m^k, \zeta_m^{k+1} + \zeta_m^k, \sqrt{2}) \in Q_r^3, k = 0,..., K_i-1,\nonumber \\
&(\beta_m^k + 1, \beta_m^k - 1, 2\zeta_m^k) \in Q^3, k = 0,..., K_i. \nonumber\end{aligned}$$
Here $\boldsymbol{\gamma}$ consists of all $\gamma_m^k$, and $\mathbf{x}$ collects all $\alpha_m^k, \beta_m^k, \zeta_m^k, \gamma_m^k$. $\delta t$ is the discretization resolution of the problem. The effect of different $\rho$ and $\delta t$ on the temporal trajectory, together with a more detailed derivation of the SOCP, can be found in [@fei2018iros].
In our *teach-repeat-replan* system, since the global repeating trajectory always has static initial and final states, Eq. \[eq:socp\_final\_form\] is always mathematically feasible regardless of the solution of the spatial optimization, because a feasible solution can always be found by enlarging the total time indefinitely. Combined with the fact that the spatial optimization also always has a solution (Sect. \[subsec:space\_optimization\]), once a flight corridor is given, a spatial-temporal trajectory must exist.
Online Local Re-planning {#sec:online_local_replanning}
========================
In our previous work [@fei2019ral], once the global planning finished, the drone executed the trajectory without further checks. This strategy rests on two assumptions: 1) the map of the environment is perfectly built and remains unchanged; 2) globally consistent pose estimation is available. We use a VIO system with loop closure to correct local pose drifts, and our dense map is globally deformed according to the global pose graph. However, the first assumption does not always hold, especially when new obstacles suddenly appear or the environment changes. As for the second assumption, our global pose estimation relies on loop closure detection, which does not guarantee an extremely high recall rate. In situations with significant pose drifts but no timely loop closure corrections, the drone may collide with obstacles, as in Fig. \[fig:drift\_crash\].
Local Re-planning Framework
---------------------------
To address the above issues fundamentally, we propose a local re-planning framework which reactively wraps the global trajectory to avoid unmodeled obstacles. A sliding local map is maintained onboard, in which obstacles are fused and an ESDF (Euclidean Signed Distance Field) [@felzenszwalb2012distance] is updated accordingly. Note that the dense global map is attached to the global pose graph, while the local map introduced here is associated with the local VIO frame and slides with the drone.
![An illustration of colliding with obstacles when there are significant pose drifts but no timely loop closure corrections. Obstacles are depicted in the global frame. The flight path of the drone in the VIO frame is shown as the red curve, but the actual trajectory in the global frame is the blue curve, which collides with obstacles on the global map. \[fig:drift\_crash\] ](drift_crash){width="0.9\columnwidth"}
### ESDF Mapping
![\[fig:esdf\_example\] The local occupancy map and its corresponding ESDF map, visualized at a given height of $0.6m$. ](esdf_local_rviz){width="0.9\columnwidth"}
We adopt our previous work FIESTA [@han2019fiesta], an advanced incremental ESDF [@schouten2010incremental] mapping framework, to build the local map for online re-planning. FIESTA fuses the depth information into a voxel-hashed occupancy map [@klingensmith2015chisel] and updates as few voxel distance values as possible using a breadth-first search (BFS) framework. It is lightweight, efficient, and produces near-optimal results; details can be found in [@han2019fiesta]. The ESDF is necessary for the following gradient-based trajectory wrapping. An example of a local occupancy map and its corresponding ESDF map is shown in Fig. \[fig:esdf\_example\]. Note that in our system the range of the local map is decided by the range of the current depth observation.
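For intuition, an (unsigned) Euclidean distance field on a small occupancy grid can be computed by brute force, as below. This is only a reference implementation: FIESTA instead updates distances incrementally with a BFS, and a true ESDF additionally carries negative distances inside obstacles.

```python
import math

def brute_force_edf(occ):
    # occ[r][c] == 1 marks an occupied voxel; the result holds, for every cell,
    # the Euclidean distance to the nearest occupied voxel. O(n^2), for testing only.
    rows, cols = len(occ), len(occ[0])
    obstacles = [(r, c) for r in range(rows) for c in range(cols) if occ[r][c]]
    return [[min(math.hypot(r - orr, c - ocl) for (orr, ocl) in obstacles)
             for c in range(cols)] for r in range(rows)]

d = brute_force_edf([[0, 0, 0],
                     [0, 1, 0],
                     [0, 0, 0]])
```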
### Sliding Window Re-planning
Due to the limited onboard sensing range and computing resources, it is impossible and unnecessary to conduct global re-planning. In this article, we maintain a temporal sliding window over the global trajectory and conduct local re-planning within it. As shown in Fig. \[fig:replan\_traj\], when obstacles are observed to block the trajectory in the sliding window, a re-planned trajectory is generated to avoid the obstacles and rejoin the global trajectory afterward.
Gradient-Based B-spline Optimization
------------------------------------
### B-spline Trajectory Formulation
A B-spline is a piecewise polynomial function defined by a series of control points $ \{ \mathbf{Q}_{0},\mathbf{Q}_{1}, \cdots, \mathbf{Q}_{N} \} $ and knot vector $ [ t_{0}, t_{1}, \cdots, t_{m} ] $. For a $p$-degree B-spline, we have $ m = N+p+1 $. Following the matrix representation of the De Boor–Cox formula [@de1971subroutine], the value of a B-spline can be evaluated as: $$\label{equ:matrix}
P(u) =
\left[
1, u, \dots, u^{p}
\right]
\cdot
\mathbf{M}_{p+1}
\cdot
\left[
\mathbf{Q}_{i-p}, \mathbf{Q}_{i-p+1}, \dots, \mathbf{Q}_{i}
\right]^T$$ where $ \mathbf{M}_{p+1} $ is a constant matrix that depends only on $ p $, and $ u = (t - t_{i})/ (t_{i+1} - t_{i})$ for $ t \in [t_{i}, t_{i+1}) $.
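For the common cubic case ($p = 3$), the matrix representation can be evaluated directly. The sketch below hard-codes the standard uniform cubic basis matrix $\mathbf{M}_4$; the function name is ours for illustration:

```python
# Uniform cubic B-spline basis matrix M_4 (De Boor-Cox matrix form, p = 3).
M4 = [[ 1/6,  4/6,  1/6, 0.0],
      [-3/6,  0.0,  3/6, 0.0],
      [ 3/6, -6/6,  3/6, 0.0],
      [-1/6,  3/6, -3/6, 1/6]]

def eval_cubic_span(Q4, u):
    # P(u) = [1, u, u^2, u^3] * M4 * [Q_{i-3}, ..., Q_i]^T with u in [0, 1).
    powers = [1.0, u, u * u, u ** 3]
    # Contract the power basis with M4 first, then with the four control points.
    coeffs = [sum(powers[r] * M4[r][c] for r in range(4)) for c in range(4)]
    return sum(coeffs[c] * Q4[c] for c in range(4))

p0 = eval_cubic_span([0.0, 1.0, 2.0, 3.0], 0.0)  # (Q0 + 4*Q1 + Q2) / 6 = 1.0
```

Note that the blending coefficients sum to one for any $u$ (partition of unity), so constant control points reproduce the constant exactly.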
### B-spline Initialization
We initialize the local trajectory optimization by re-parameterizing the trajectory in the re-planning horizon as a uniform B-spline. We use a uniform B-spline because its simple mathematical form is easy to evaluate in the following optimization. For a uniform B-spline, each knot span $ \Delta t_{i} = t_{i+1} - t_{i} $ has the identical value $ \Delta t $. The local trajectory is first discretized into a set of points according to a given $\Delta t$. These points are then fitted to a uniform B-spline by solving a linear least-squares problem.
Note that a $ p $-degree uniform B-spline is naturally $ p-1 $ order continuous between consecutive spans. Therefore, there is no need to explicitly enforce continuity constraints in the following optimization. Besides, for a $ p $-degree B-spline trajectory defined by $ N+1 $ control points, the first and last $ p $ control points are fixed due to the continuity requirements on the starting and ending states of the local trajectory.
### Elastic Band Optimization
The basic requirements on the re-planned B-spline are threefold: smoothness, safety, and dynamical feasibility. We define the smoothness cost $ J_{s} $ using a jerk-penalized elastic band cost function [@quinlan1993elastic; @zhu2015convex]: $$\begin{aligned}
\label{equ:elastic_cost}
& J_{s} = \nonumber \\
&\sum\limits_{i=1}^{N-1} \Vert \underbrace{(\mathbf{Q}_{i+2}-2\mathbf{Q}_{i+1}+\mathbf{Q}_{i})}_{\mathbf{F}_{i+1,i}} \ - \ \underbrace{(\mathbf{Q}_{i+1}-2\mathbf{Q}_{i}+\mathbf{Q}_{i-1})}_{\mathbf{F}_{i-1,i}} \Vert^{2} \nonumber \\
&= \sum\limits_{i=1}^{N-1} \Vert \mathbf{Q}_{i+2}-3\mathbf{Q}_{i+1}+3\mathbf{Q}_{i}-\mathbf{Q}_{i-1} \Vert^{2},\end{aligned}$$ which can be viewed as the sum of the squared jerks of the control points of the B-spline. Note that we use this formulation, which is independent of the time parametrization of the trajectory, instead of the traditional time-integrated cost function [@MelKum1105], because the time duration of each span of the B-spline may be adjusted after the optimization (Sect. \[subsubsec:time\_adjust\]); Eq. \[equ:elastic\_cost\] captures the geometric shape of the B-spline regardless of the time parametrization. Besides, Eq. \[equ:elastic\_cost\] bypasses the costly evaluation of the integral and is therefore more numerically robust and computationally efficient in optimization.
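The smoothness cost in Eq. \[equ:elastic\_cost\] reduces to the squared third difference of the control points, which a few lines of Python make explicit (our own helper for illustration; the index range is clipped so all four points exist):

```python
def elastic_band_smoothness(Q):
    # J_s = sum_i || Q_{i+2} - 3 Q_{i+1} + 3 Q_i - Q_{i-1} ||^2 over control
    # points; Q is a list of 2-D or 3-D points.
    J = 0.0
    for i in range(1, len(Q) - 2):
        jerk = [Q[i + 2][d] - 3.0 * Q[i + 1][d] + 3.0 * Q[i][d] - Q[i - 1][d]
                for d in range(len(Q[0]))]
        J += sum(v * v for v in jerk)
    return J

# Equally spaced collinear control points have zero third difference:
line = [[float(k), 2.0 * k] for k in range(6)]
```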
The safety and dynamical feasibility requirements of the B-spline are enforced as soft constraints and added to the cost function. The collision cost $J_c$ and the dynamical feasibility costs $J_v$ and $J_a$ are evaluated only at the control points. The collision cost $J_c$ is formulated as the accumulated L2-penalized closest distance to obstacles along the trajectory, which is written as $$\label{equ:colli}
J_{c} = \sum\limits_{i=p}^{N-p} F_{c}(d(\mathbf{Q}_{i})),$$ where $d(\mathbf{Q}_{i})$ is the distance from $ \mathbf{Q}_{i} $ to its closest obstacle, as recorded in the ESDF. $ F_{c} $ is defined as $$\label{equ:potential}
F_{c}(d) = \left\{
\begin{array}{ccl}
(d-d_{0})^{2} & & d \le d_{0} \\
0 & & d > d_{0}
\end{array}
\right.$$ where $ d_{0} $ is the expected path clearance. $J_v$ and $J_a$ penalize velocities and accelerations that exceed the physical limits. The formulations of $J_v$ and $J_a$ are similar to Eq. \[equ:colli\] and are omitted here. The overall cost function is: $$\label{equ:cost}
J_{total} = \lambda_{1} J_{s} + \lambda_{2} J_{c} + \lambda_{3} (J_{v} + J_{a}),$$ where $\lambda_{1}, \lambda_{2}, \lambda_{3}$ are weighting coefficients. $J_{total}$ can be minimized to a locally optimal solution by general optimization methods such as Gauss-Newton or Levenberg-Marquardt.
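As a small numeric illustration of the collision term (the function names are ours), $F_c$ is a one-sided quadratic penalty on the ESDF distance:

```python
def clearance_penalty(d, d0):
    # F_c(d) = (d - d0)^2 if d <= d0, else 0: penalize only points closer
    # to an obstacle than the expected clearance d0.
    return (d - d0) ** 2 if d <= d0 else 0.0

def collision_cost(esdf_dists, d0):
    # J_c accumulated over the free control points; esdf_dists holds d(Q_i)
    # looked up in the ESDF for i = p, ..., N - p.
    return sum(clearance_penalty(d, d0) for d in esdf_dists)

J_c = collision_cost([0.1, 0.5, 0.3], d0=0.4)  # only 0.1 and 0.3 are penalized
```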
### Iterative Refinement {#subsubsec:time_adjust}
In the above unconstrained optimization problem, although collisions and dynamical infeasibilities are penalized, there is no hard guarantee of generating a strictly feasible solution. To improve the success rate in practice, we add a post-process to refine the trajectory iteratively. In each iteration, we check the collisions and feasibility of all optimized control points. If collisions are detected, we increase the weight of the collision term $J_c$ by increasing $\lambda_2$ and solve the optimization problem (Eq. \[equ:cost\]) again.
Since we wrap the local trajectory to go around obstacles, the trajectory is always lengthened after the optimization. Consequently, keeping the original time parametrization would unavoidably result in a higher aggressiveness, meaning the quadrotor tends to fly faster, so its velocity and acceleration would easily exceed the predefined limits. Therefore, we adjust the time parametrization of the local trajectory to squeeze out dynamical infeasibilities. We slightly enlarge the infeasible knot spans of the B-spline by the following heuristic: $$\label{equ:adj2}
\Delta t_{i}^{'} = \min \{ \alpha, \max \{\frac{v_{m}}{v_{max}}, (\frac{a_{m}}{a_{max}})^{\frac{1}{2}}\}\} \cdot \Delta t_{i},$$ where $ \alpha $ is a constant slightly larger than $ 1 $, $ v_{m}, a_{m} $ are the infeasible velocity and acceleration, and $ v_{max}, a_{max}$ are the maximum allowed velocity and acceleration of the drone. The time durations are iteratively enlarged until a feasible solution is obtained or the maximum iteration limit is exceeded. If no feasible solution exists after the time adjustment, $\lambda_3$ is increased, and the trajectory is optimized again.
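The heuristic in Eq. \[equ:adj2\] can be written directly (a sketch; note that stretching a knot span by a factor $s$ scales the velocity on that span by $1/s$ and the acceleration by $1/s^2$, which is why the acceleration ratio enters under a square root):

```python
def adjust_knot_span(dt, v_m, a_m, v_max, a_max, alpha=1.1):
    # Stretch factor: enough to pull v_m and a_m back toward the limits,
    # but capped at alpha (a constant slightly larger than 1) per iteration.
    stretch = min(alpha, max(v_m / v_max, (a_m / a_max) ** 0.5))
    return stretch * dt

# Velocity overshoot of 1.5x would need a 1.5x stretch, but the cap alpha = 1.1
# limits each iteration's adjustment:
new_dt = adjust_knot_span(dt=0.1, v_m=3.0, a_m=4.0, v_max=2.0, a_max=4.0)
```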
Results {#sec:results}
=======
Implementation Details {#subsec:implementation_details}
----------------------
The global planning method proposed in this paper is implemented with the QP solver OOQP[^4] and the SOCP solver Mosek[^5]. The local re-planning depends on the nonlinear optimization solver NLopt[^6]. The source code of all modules in our quadrotor system, including local/global localization, mapping, and planning, is released as ROS packages[^7] for the reference of the community, so readers of this paper can easily replicate all the presented results. The state estimation, pose graph optimization, local mapping, local re-planning, and the controller run onboard a Manifold-2C[^8] mini-computer. Other modules run on an off-board laptop with a GTX 1080[^9] graphics card.
Our global map is built attached to a global pose graph, and both the map and the pose graph are saved for repeating. Before repeating, the drone is handheld to close the loop between the current VIO frame and the saved global pose graph. The relative transformation between these two frames is used to project the control commands into the VIO frame. During repeating, pose graph optimization is also activated to calculate the pose drift and compensate the control commands.
Simulated Flight Test {#subsec:simulation}
---------------------
We first test our global and local planning methods in simulations. The simulated environments are randomly deployed with various types of obstacles and circles for drone racing, as shown in Fig. \[fig:simulation\_rviz\]. The simulation tool we use is MockaFly[^10], a lightweight simulator that contains a quadrotor dynamics model, a controller, and a map generator; the simulator is also released as an open-source package with this paper. In the simulation, a drone is controlled by a joystick to demonstrate the teaching trajectory. The simulated drone is equipped with a depth camera whose depth measurements are rendered in real time on the GPU by back-projecting the drone's surrounding obstacles. We randomly add noise to the depth measurements to mimic a real sensor. The re-planning module is activated in the simulation and is triggered by the noise added to the depth. The teaching trajectory and the flight corridor are shown in Fig. \[fig:simu1\]. The global trajectory, the locally re-planned trajectory, and the depth measurements are shown in Fig. \[fig:simu6\]. More details about the simulation are presented in the attached video.
Benchmark Comparisons {#subsec:benchmark}
---------------------
### Corridor Generation {#subsubsec:compare_corridor}
We test the performance of the flight corridor generation methods (Sect. \[sec:corridor\_generation\]) to show the efficacy of the proposed techniques for CPU (Sect. \[subsec:cpu\_acceleration\]) and GPU (Sect. \[subsec:gpu\_acceleration\]) acceleration. For convenience, we denote the basic process for *convex cluster inflation* as CPU\_raw; CPU\_raw with cube initialization as CPU+; the version with cube initialization, vertex selection, and early termination as CPU++; and the parallel version of the *convex cluster inflation* as GPU. We first compare the time consumed to find the largest flight corridor with these methods, to validate the efficiency improvements from our proposed CPU and GPU acceleration techniques. Then, we compare the ratio of space captured by the methods with and without polyhedron initialization, and by our previous method [@fei2019ral]. The motivation for the latter comparison is two-fold:
1. It serves to show the superior performance obtained by replacing cubes with polyhedrons.
2. As discussed in Sect. \[subsec:cpu\_acceleration\], the initialization process results in different final clustering results compared to the pure *convex cluster inflation*. This comparison also validates that the initialization process does only negligible harm to free-space capturing.
We generate 10 random maps, with 10 $\sim$ 20 random teaching trajectories given in each map. The average length of teaching trajectories is $20 m$. Results are given in Tabs. \[tab:benchmark\_compare\_corridor\_time\] and \[tab:benchmark\_compare\_corridor\_space\].
  ----------------- ------- -------- --------- ----------
  Res. = $0.25m$      0.031    0.111    0.162      0.359
  Res. = $0.20m$      0.055    0.310    0.503      1.309
  Res. = $0.15m$      0.169    1.423    2.803      9.583
  Res. = $0.10m$      0.942   13.940   30.747    141.659
  Res. = $0.075m$     3.660   71.862  157.181    927.131
  ----------------- ------- -------- --------- ----------

  : Comparison of Computing Time of Corridor Generation[]{data-label="tab:benchmark_compare_corridor_time"}
As shown in Tab. \[tab:benchmark\_compare\_corridor\_time\], as the resolution of the map becomes finer, the computing time of the simple *convex cluster inflation* quickly becomes unacceptably large. On the CPU, with the help of *polyhedron initialization*, the computational efficiency is improved several times. Moreover, according to Tab. \[tab:benchmark\_compare\_corridor\_time\], introducing *voxel selection* and *early termination* increases the speed by more than one order of magnitude at a fine resolution. The efficacy of the GPU acceleration is even more significant: the GPU version improves the computing speed by about 30 times at a fine resolution ($0.075m$) and 10 times at a coarse resolution ($0.25m$). At a finer resolution, more candidate voxels are discovered in one iteration of Alg. \[alg:parallel\_polyhedron\_inflation\], so more computations are conducted in parallel to save time.
----------------- ------- -------- -------
Res. = $0.25m$ 99.22 100.00 82.28
Res. = $0.20m$ 99.56 100.00 82.92
Res. = $0.15m$ 98.93 100.00 81.82
Res. = $0.10m$ 97.06 100.00 82.78
Res. = $0.075m$ 97.14 100.00 83.03
----------------- ------- -------- -------
: Comparison of Space Captured of Corridor Generation[]{data-label="tab:benchmark_compare_corridor_space"}
For the second comparison, we count the number of free voxels included in the flight corridor found by each method. At each resolution, we take the result of the method without initialization as 100% and compare the others against it. Tab. \[tab:benchmark\_compare\_corridor\_space\] indicates two conclusions:
1. Using polyhedrons instead of axis-aligned cubes can significantly increase the volume of the flight corridor.
2. Using initialization only slightly sacrifices the volume of the flight corridor, and the sacrifice is negligible at medium or coarse resolutions ($0.15 \sim 0.25 m$).
The first conclusion holds because a simple cube only discovers free space along the $x, y, z$ directions and sacrifices much space in a highly nonconvex environment, as in Fig. \[fig:cube\_polygon\_compare\]. The second conclusion comes from the fact that in a highly nonconvex environment, a regularly shaped polyhedron (a cube) does not prevent the subsequent voxel clustering in its nearby space. It shows that the initialization plus the clustering refinement does not harm the volume of the final polyhedron and is acceptable in practice, especially for resolutions that are not very fine.
### Global Planning {#subsubsec:compare_planning}
![ \[fig:benchmark\_compare\_rviz\] The comparison of trajectories optimized by different methods. The manual flight trajectory is shown as the purple curve. The blue, red, green, and yellow trajectories are generated by our proposed method, our previous method [@fei2019ral], the gradient-based method [@fei2017iros], and the waypoint-based method [@RicBryRoy1312], respectively. ](rviz_benchmark_2){width="0.99\columnwidth"}
  --------------------------------- ------------ ------------ ------------
  **Proposed Method**                **84.607**   **55.154**   **83.350**
  Previous Method                      86.723       57.736       89.883
  Gradient-based [@fei2017iros]        89.622      111.398      109.575
  Waypoint-based [@RicBryRoy1312]      97.045       94.895      204.267
  --------------------------------- ------------ ------------ ------------

  : Comparison of Global Planning Methods[]{data-label="tab:benchmark_compare"}
We compare the proposed global planning method against our previous work [@fei2019ral] and other representative optimization-based trajectory generation methods, namely the waypoint-based method [@RicBryRoy1312] and the gradient-based method [@fei2017iros]. For the latter two benchmarked methods, there is no explicit way to capture the topological structure of the teaching trajectory. Therefore, we convert the teaching trajectory to a piecewise path by recursively finding collision-free straight-line segments along it. We then use this path to initialize the waypoint-based [@RicBryRoy1312] and gradient-based [@fei2017iros] methods. The benchmarked methods are also integrated into the coordinate descent framework with temporal optimization. Some parameters dominate the performance of these benchmarked methods, especially for the gradient-based method [@fei2017iros], where the trade-off between collision and smoothness is essential. For a fair comparison, parameters are tuned to achieve the best performance before the test. We randomly generate 10 simulated environments with dense obstacles, as in Sect. \[subsec:simulation\], and conduct 10 teach-and-repeat trials in each map. A sample result of the generated trajectories is shown in Fig. \[fig:benchmark\_compare\_rviz\].
As shown in Tab. \[tab:benchmark\_compare\], our proposed method outperforms the others in all length, time, and energy aspects. The waypoint-based [@RicBryRoy1312] method and the gradient-based [@fei2017iros] method both require a piecewise linear path as initialization. The waypoint-based [@RicBryRoy1312] method can only add intermediate waypoints on the initial path; it is therefore mostly dominated by its initialization and tends to output a solution of low quality. The gradient-based [@fei2017iros] method has no such restriction and can adjust the path automatically by utilizing gradient information. However, its optimization formulation is inherently non-convex, since the collision cost is defined on a non-convex ESDF. Therefore, the gradient-based [@fei2017iros] method always finds a locally optimal solution near its initial guess. Compared to these two methods, our method is initialization-free. Both the spatial and temporal optimization of our proposed method enjoy convexity in their formulations, and they are guaranteed to find the globally energy-optimal and time-optimal solutions within the flight corridor. Naturally, a smoother trajectory also tends to generate a faster time profile. Finally, under the same coordinate descent framework, our method always outperforms [@fei2017iros] and [@RicBryRoy1312]. Compared to [@fei2019ral], the advanced corridor generation proposed in this paper (Sect. \[sec:corridor\_generation\]) always captures more free space than our previous axis-aligned corridor. Naturally, it provides much more freedom for global planning and results in much better solutions.
Indoor Flight Test {#subsec:indoor_exp}
------------------
### Fast Flight in a Static Environment
Firstly, we conduct experiments in a cluttered drone racing scenario. This experiment validates the robustness of our proposed system and also pushes the boundary of aggressive flight for quadrotors. Several different types of obstacles, including circles, arches, and tunnels, are deployed randomly to compose a complex environment, as shown in Fig. \[fig:indoor\_map\]. The smallest circle has a diameter of only $0.6 m$, which is very narrow compared to the $0.3m \times 0.3m$ tip-to-tip size of our drone. The maximum velocity and acceleration of the drone are set to $3m/s$ and $3m/s^2$, respectively, and the parameter $\rho$ in Eq. \[eq:objective\] is set to 0, which means the quadrotor is expected to fly as fast as possible as long as it respects the kinodynamic limits. A dense, globally consistent map is pre-built using the method stated in Sect. \[subsec:localization\_mapping\]. During the teaching phase, the quadrotor is virtually piloted by a human to move amid the obstacles. The quadrotor then autonomously converts this teaching trajectory to a global repeating trajectory and starts to track it. Snapshots of the drone in flight are shown in Fig. \[fig:indoor\_exp\]. The teaching trajectory and the convex safe flight corridor are visualized in Fig. \[fig:indoor\_teach\_rviz\_2\], and the global repeating trajectory in Fig. \[fig:indoor\_repeat\_rviz\_2\].
### Local Re-planning Against Unknown Obstacles {#subsec:indoor_dynamic_exp}
Our system can deal with changing environments and moving obstacles. In this experiment, we test our system in the same drone racing site to validate our local re-planning module. Several obstacles are moved or added to change the drone racing environment significantly, and some others are dynamically added during the repeating flight, as shown in Fig. \[fig:indoor\_dyn\_exp\]. In this experiment, the maximum velocity and acceleration of the quadrotor are set to $2m/s$ and $2m/s^2$. The local ESDF map slides with the drone using a ring-buffered updating mechanism [@usenko2017real]. The resolution of the local perception is $0.075 m$. The size of the map is determined by the spread of the points observed in the current frame. The horizon and frequency of the local re-planning are $3.5 s$ and $15 Hz$, respectively. Re-planning is triggered 8 times during the flight in this experiment, and safe, dynamically feasible local splines are generated in time accordingly. The local trajectories, local maps, and an overview of this experiment are shown in Fig. \[fig:indoor\_dyn\_repeat\_rviz\]. We refer readers to the attached video for more details.
![ The repeating trajectory in outdoor experiments, trial 1. Marks are interpreted the same as in previous figures. \[fig:rviz\_outdoor\_1\]](rviz_outdoor_1_1){width="0.99\columnwidth"}
Outdoor Flight Test {#subsec:outdoor_exp}
-------------------
Finally, we conduct quadrotor flight experiments with a much higher aggressiveness in two different outdoor scenes, as in Fig. \[fig:outdoor\_exp\], to show the robustness of our system in natural environments. Although these experiments are conducted outdoors, GPS or other external positioning devices are not used. The teach-repeat-replan pipeline is the same as in the indoor experiments (Sect. \[subsec:indoor\_exp\]). The maximum allowed velocity and acceleration limits for these two trials are set to $5m/s$, $6m/s^2$ and $7m/s$, $6m/s^2$, respectively. The drone's desired and estimated positions and velocities in the second trial are given in Fig. \[fig:outdoor\_pv\_plot\], which shows acceptable tracking errors. Since the flight speed is significantly higher than in the indoor experiments, we set a smaller re-planning horizon of $2.0 s$. Results such as the global and local trajectories and the global map are visualized in Figs. \[fig:rviz\_outdoor\_1\] and \[fig:rviz\_outdoor\_2\]. Clearer visualizations of the outdoor experiments are given in the video.
![ Profiles of the desired and estimated position and velocity. The position and velocity are estimated by our localization module VINS [@qin2018vins]. \[fig:outdoor\_pv\_plot\] ](outdoor_pv_plot){width="0.9\columnwidth"}
Conclusion {#sec:conclusion}
==========
In this paper, we propose a framework, *teach-repeat-replan*, for aggressive quadrotor flights in complex environments. The main idea of this work is to find the free space topologically equivalent to the user's teaching trajectory, use spatial-temporal trajectory optimization to obtain an energy-efficient repeating trajectory, and incorporate online perception and re-planning to ensure safety against environmental changes and moving obstacles. The teaching process is conducted by virtually controlling the drone in simulation. The generated repeating trajectory captures the user's intention and respects an expected flight aggressiveness, enabling autonomous flights that are much more aggressive than human piloting in complex environments. The online re-planning guarantees the safety of the flight while respecting the reference of the repeating trajectory.
To capture the large free space around the teaching trajectory, we propose a GPU-accelerated convex polyhedron clustering method to find a flight corridor. The optimal global trajectory generation problem is decoupled into spatial and temporal sub-problems, which are then iteratively optimized under the coordinate descent framework. Moreover, we incorporate local perception and local trajectory re-planning modules into our framework to deal with environmental changes, dynamic obstacles, and localization drifts.
The proposed system is complete and robust. Users of our system do not have to pilot the drone carefully to give a teaching trajectory; instead, an arbitrarily jerky or poor trajectory can be converted to an efficient and safe global trajectory. Moreover, when the environment changes or the global localization drifts, the local perception and re-planning modules guarantee the safety of the drone while it tracks the global trajectory. Our system is also flexible and easily replicable, as evidenced by the various types of experiments presented in this paper and a third-party application[^11]. We release all components of our system for the reference of the community.
[^1]: <https://github.com/HKUST-Aerial-Robotics/Teach-Repeat-Replan>
[^2]: All authors are with the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, China. [$\{$fgaoaa, lwangax, bzhouai, luxin.han, jpanaa, eeshaojie$\}$@ust.hk]{}.
[^3]: $^*$These authors contributed equally to this work
[^4]: <http://pages.cs.wisc.edu/~swright/ooqp/>
[^5]: <https://www.mosek.com>
[^6]: <https://nlopt.readthedocs.io>
[^7]: <https://github.com/HKUST-Aerial-Robotics/Teach-Repeat-Replan>
[^8]: <https://store.dji.com/product/manifold-2?vid=80932>
[^9]: <https://www.nvidia.com/en-us/geforce/20-series/>
[^10]: <https://github.com/HKUST-Aerial-Robotics/mockasimulator>
[^11]: Flight demonstration at the Electrical and Mechanical Services Department (EMSD), Hong Kong government. Video: <https://youtu.be/Ut8WT0BURrM>
---
abstract: |
We prove martingale-ergodic and ergodic-martingale theorems with continuous parameter for vector valued Bochner integrable functions. We first provide almost everywhere convergence of vector valued martingales with continuous parameter. The norm as well as almost everywhere convergence of martingale-ergodic and ergodic-martingale averages are given. We also obtain dominant and maximal inequalities. Finally, we show that a.e. martingale-ergodic and ergodic-martingale theorems will coincide under certain assumptions.
[*Mathematics Subject Classification*]{}: 28D10, 46G10, 47A35, 60G44.\
[*Key words*]{}: Continuous parameter, vector valued martingale, vector valued martingale-ergodic and ergodic-martingale processes, Bochner integrable functions.
address: |
[*[Department of Mathematics,\
The Pennsylvania State University,\
University Park, 16802, PA, USA]{}*]{}
author:
- 'F.A. Shahidi [^1]'
title: Vector valued unified martingale and ergodic theorems with continuous parameter
---
Introduction
============
An interesting connection, in terms of behavior and convergence, between two fundamental mathematical objects, martingales and ergodic averages, has been known since S. Kakutani [@Kak], who asked for a possible unification of martingale convergence and ergodic theorems. Several attempts have been made since then (see [@Kach2] for a review and references), but none of them was comprehensive. Quite recently, A.G. Kachurovskii [@Kach2],[@Kach1] solved this problem by defining a martingale-ergodic process as the composition of martingales and ergodic averages. For $f\in L_p,\ p\ge 1,$ if $f_n=E(f|F_n)$ is a regular martingale, where $E(\cdot|F)$ is the conditional expectation operator and $A_mf=\frac1m\sum\limits_{i=0}^{m-1}
T^if,$ where $T$ is an $L_1-L_{\infty}$ contraction, then he proved the following
[@Kach2],[@Kach1].
1. - If $f\in L_p,\ p\ge 1,$ then $E(A_mf|F_n)$ converges in $L_p$ norm as $n,m\to\infty;$
    - If $f\in L_1$ and $\sup_n |E(f|F_n)|$ is integrable, then $E(A_mf|F_n)$ converges almost everywhere as $n,m\to\infty$.
2. - If $f\in L_p,\ p\ge 1,$ then $A_mE(f|F_n)$ converges in $L_p$ norm as $n,m\to\infty;$
    - If $f\in L_1$ and $\sup_m |A_mf|$ is integrable, then $A_mE(f|F_n)$ converges almost everywhere as $n,m\to\infty$.
While the first part of this theorem is referred to as a martingale-ergodic theorem, the second part is known as an ergodic-martingale theorem. In fact, this theorem puts martingale convergence and ergodic theorems into one superstructure, from which both martingale convergence and ergodic theorems can be obtained as degenerate cases.
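As an illustration (ours, not taken from any of the cited works), the two compositions $E(A_mf|F_n)$ and $A_mE(f|F_n)$ can be checked numerically in a finite toy model: $\O=\{0,\dots,N-1\}$ with uniform measure, $T$ the cyclic shift (a measure preserving, hence $L_1-L_{\infty}$ contracting, map), and $F_n$ the $\sigma$-algebra generated by blocks of length $N/2^n$. All names and parameter values below are our own choices.

```python
import numpy as np

# Toy model: Omega = Z_N with uniform measure, T = cyclic shift,
# F_n = sigma-algebra generated by consecutive blocks of length N / 2**n.
N = 2**10
rng = np.random.default_rng(0)
f = rng.standard_normal(N)

def A(f, m):
    # Ergodic average A_m f = (1/m) * sum_{i<m} T^i f
    return np.mean([np.roll(f, -i) for i in range(m)], axis=0)

def E(g, n):
    # Conditional expectation E(g | F_n): replace g by its block averages
    block = N // 2**n
    return np.repeat(g.reshape(-1, block).mean(axis=1), block)

me = E(A(f, N), 5)   # martingale-ergodic average E(A_m f | F_n)
em = A(E(f, 5), N)   # ergodic-martingale average A_m E(f | F_n)
# For m = N the orbit average is exact, so both equal the constant f* = mean(f)
```

For $m=N$ the cyclic orbit average is exact, so both compositions collapse to the constant limit, mirroring the degenerate case of the theorem.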
The continuous parameter analogue of the above theorem was established by I.V. Podvigin as follows
[@pod1]
1. - If $f\in L_p,\ p\ge 1$ then $E(A_tf|F_s)$ converges in $L_p$ norm as $t,s\to\infty$;
    - If $f\in L_1$ and $\sup_s |E(f|F_s)|$ is integrable, then $E(A_tf|F_s)$ converges almost everywhere as $t,s\to\infty$.
2. - If $f\in L_p,\ p\ge 1,$ then $A_tE(f|F_s)$ converges in $L_p$ norm as $t,s\to\infty$;
    - If $f\in L_1$ and $\sup_t |A_tf|$ is integrable, then $A_tE(f|F_s)$ converges almost everywhere as $t,s\to\infty$.
Here $F_s$ is an increasing family of $\sigma-$ subalgebras, $A_tf=\frac1t\int\limits_0^tT_{\tau}fd\tau$, and $\{T_t,t\ge 0\}$ is a semigroup of linear $L_1-L_{\infty}$ contractions.
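The continuous parameter average $A_tf=\frac1t\int_0^tT_{\tau}f\,d\tau$ can likewise be illustrated numerically. The sketch below (our own toy example, not part of the cited works) uses the rotation flow on the circle $[0,1)$, which preserves Lebesgue measure; its Birkhoff limit is the space average of $f$.

```python
import numpy as np

# Toy model: rotation flow on the circle [0,1),
# (T_tau f)(x) = f(x + alpha*tau mod 1), which preserves Lebesgue measure.
alpha = np.sqrt(2.0)                          # irrational speed => ergodic flow
f = lambda x: 1.0 + np.sin(2.0 * np.pi * x)   # space average over [0,1) is 1.0

def A(t, x, n=20000):
    # A_t f(x) = (1/t) * int_0^t f(T_tau x) d tau, midpoint quadrature
    tau = (np.arange(n) + 0.5) * (t / n)
    return f((x + alpha * tau) % 1.0).mean()

# As t grows, A_t f(x) approaches the space average 1.0 for every x
deviation = abs(A(500.0, 0.3) - 1.0)   # bounded by ~1/(pi*alpha*t) plus quadrature error
```

For this flow the deviation can be bounded explicitly by integrating the sine term, so the numerical average visibly settles to the invariant mean as $t$ grows.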
Note that there are many analogues and generalizations of martingale convergence and ergodic theorems. For example, a vector valued ergodic theorem for a one-parameter semigroup of operators was given by Sh. Hasegawa, R. Sato and Sh. Tsurumi in [@Sato1]. The result was also extended to the multiparameter case under suitable assumptions in [@Sato2]. Related problems are also considered in [@Sato4]. This motivates us to provide the above theorem in other settings. The purpose of this paper is to give the latter theorem in the vector valued setting. Namely, we prove martingale-ergodic and ergodic-martingale theorems with continuous parameter for vector valued Bochner integrable functions. As is done in [@Kach1], [@pod1], we also prove dominant and maximal inequalities. We also show that the condition of integrability of the supremum is not necessary under the assumption that the conditional expectation operator and the ergodic average commute. This is the vector valued analogue of the result given in [@pod2] for continuous parameter processes. We also note that the vector valued analogue of Theorem 1.1 has been considered in [@ShGa].
To our knowledge, there is no vector valued a.e. martingale convergence theorem with continuous parameter in the literature. Hence in the next section we prove this convergence. The main result of the paper is given in Section 3. We use the notation and terminology of [@pod1], [@ShGa].
Preliminaries
=============
In this section we prove a vector valued martingale convergence theorem with continuous parameter.
Throughout this paper by $X$ we mean a reflexive Banach space with the norm $||\cdot||_X$ and by $(\O ,\beta, \mu)$ a finite measure space. By $L_p(X)=L_p(\O, X), \ 1\le p<\infty$ we denote the Banach space of $X$ valued measurable functions $f$ on $\O$ with the norm defined as
$$||f||_p=\left(\int_{\O}||f(\o)||_X^pd\mu\right)^{\frac 1p}.$$
We just write $L_p$ when $X=R.$
Let $\{T_t, t\ge 0\}$ be a flow of linear $L_1-L_{\infty}$ contractions acting in $L_1(\O, X)$. That is, for any $t\ge 0,$
$||T_tf||_1\le ||f||_1$ and $||T_tf||_{\infty}\le ||f||_{\infty}$, where $$||f||_1=\int\limits_{\O}||f(\o)||_X d\mu$$ and $$||f||_{\infty}=\inf\{\lambda: ||f(\o)||_X\le\lambda \ a.e.\}.$$
A flow of linear operators $\{T_t, t\ge 0\}$ in $L_1(\O,X)$ is a *strongly continuous semigroup* if
- $T_0=id$
- $T_{t_1}T_{t_2}=T_{t_1+t_2}$ for all $t_1,t_2>0$
- $\lim\limits_{t_1\to t_2}||T_{t_1}f-T_{t_2}f||_1=0$ for any $f\in L_1(\O, X)$ and $t_2>0.$
Henceforth, $\{T_t, t\ge 0\}$ will be a strongly continuous semigroup of linear $L_1-L_{\infty}$ contractions unless otherwise mentioned.
In [@Sato1] it is shown that if $f\in L_p(\O, X),\ p\ge 1,$ then $\frac1t\int\limits_0^tT_{\tau}f(\o)d\tau\in L_p(\O, X).$ In this setting, we define the ergodic average as follows: $$A_tf(\o)=\frac1t\int\limits_0^tT_{\tau}f(\o)d\tau,\ \ f\in L_1(\O, X),\ \ t>0.$$
The following theorem is an a.e. convergence theorem for the above ergodic average.
[@Sato1] Let $X$ be a reflexive Banach space and $\{T_t, t\ge 0\}$ be a strongly continuous semigroup of linear $L_1-L_{\infty}$ contractions on $L_1(\O, X).$ If $1\le p< \infty$ and $f\in L_p(\O, X),$ then the limit $$\lim\limits_{t\to\infty}\frac1t\int\limits_0^tT_{\tau}f(\o)d\tau$$ exists for almost all $\o\in\O.$
It is worth noting that the above theorem was given for a slightly more general class of operators $\{T_t\}$: the operators $\{T_t\}$ need only be contractions with respect to the $L_1$ norm and bounded with respect to the $L_{\infty}$ norm.
Let $F$ be a $\sigma-$ algebra and $F_1$ be its $\sigma-$ subalgebra.
[@neveu]
1. There exists a linear operator $E(\cdot|F):L_1(\O, X)\rightarrow L_1(\O, X)$ such that $$\int\limits_BE(f|F)d\mu=\int\limits_Bfd\mu$$ for any $f\in L_1(\O, X)$ and $B\in F_1.$
2. For every continuous linear functional $g$ and $f\in L_1(\O, X),$ the function $g(f)$ is integrable and $$g(E(f|F))=E(g(f)|F).$$
By $E(f|F)$ we denote the conditional expectation of $f\in L_p.$ Let $F_s,\ s\in R$ be a family of monotonically increasing (decreasing) sub-$\sigma-$algebras such that $F_s\uparrow F_{\infty}$ ($F_s\downarrow F_{\infty}$) as $s\to\infty.$ Unless otherwise stated, we assume that the family of sub-$\sigma-$ algebras is increasing. We also keep in mind that the results in this section, which hold for an increasing family, also hold for a decreasing family of sub-$\sigma-$algebras. A stochastic process $f_s$ in $L_p(\O, X), \ 1\le p<\infty$ is said to be an *ordinary (reversed) martingale* if for all $s_1, s_2\in R$ with $s_1<s_2$ ($s_1>s_2$) one has $E(f_{s_2}|F_{s_1})=f_{s_1}.$ A *regular* martingale is given by $f_s=E(f|F_s),$ where $f\in L_p(\O, X), \ 1\le p<\infty.$ There is a norm convergence theorem for vector valued martingales with continuous parameter [@Vaxan]. But we were not able to find any theorem concerning a.e. convergence for them. Below we are going to provide this convergence.
Let $\{(g_s^i, s\in R), i\in I\}$ be a countable family of real valued submartingales such that $$\sup\limits_{s\in R}\int\sup\limits_{i\in I}(g_s^i)^+d\mu< \infty .$$
Then each submartingale converges a.e. to an integrable limit $g_{\infty}^i,\ i\in I,$ and $$\lim\limits_{s\to\infty}\sup\limits_{i\in I}g_s^i=\sup\limits_{i\in I}g_{\infty}^i$$ a.e.
The condition of the lemma implies that $\sup\limits_{s\in R}\int(g_s^i)^+d\mu$ is finite for all $i\in I;$ therefore by Doob’s convergence theorem for submartingales (see, for example, [@Oksendal], Appendix C) the limits $$g_{\infty}^i=\lim\limits_{s\to\infty}g_s^i$$ exist a.e. Since $g_s^i$ is a submartingale for all $i\in I,$ $\sup\limits_{i\in I}g_s^i$ is also a submartingale. Due to the condition of the lemma and Doob’s convergence theorem for submartingales (see [@Oksendal]) we conclude again that the limit
$$g_{\infty}=\lim\limits_{s\to\infty}\sup\limits_{i\in I}(g_s^i)$$ exists a.e. This limit clearly dominates each $g_{\infty}^i(i\in
I)$ and thus also their supremum, i.e. $g_{\infty}\ge\sup\limits_{i\in I}g_{\infty}^i.$
We will show that $\int g_{\infty}d\mu=\int\sup\limits_{i\in I}(g_{\infty}^i)d\mu$ in order to show that the above inequality is in fact an equality.
Let $(I_p), p\in N$ be a sequence of finite subsets of $I$ increasing to $I$ as $p\to\infty.$ Then the integral $\int\sup\limits_{i\in I_p}g_s^id\mu$ clearly increases as $p$ increases. Moreover, it also increases with $s (s\in R)$ since $(\sup\limits_{i\in I_p}g_s^i,\ s\in R)$ is a submartingale for every $p.$
Note that the expression
$$S=\sup\limits_{p\in N, s\in R}\int\sup\limits_{i\in I_p}g_s^id\mu=
\sup\limits_{s\in R}\int\sup\limits_{i\in I}g_s^id\mu$$ is dominated by $\sup\limits_{s\in R}\int\sup\limits_{i\in
I}(g_s^i)^+d\mu$ and hence is finite. Therefore, for every $\e>0$ there exists at least one pair $p_{\e}\in N,\ s_{\e}\in R^+$ such that $$\int\sup\limits_{i\in I_p}g_s^id\mu\ge S-\e$$ if $p=p_{\e},\ s=s_{\e}.$ Since the above supremum increases with $p$ as well as with $s,$ the above inequality holds for all $p\ge p_{\e},\ s\ge s_{\e}.$ Note that the function $g_{\infty}-\sup\limits_{i\in I_p}g_{\infty}^i$ is the limit of the nonnegative sequence of functions $(\sup\limits_{i\in I}g_s^i-\sup\limits_{i\in I_p}g_s^i,\ s\in R),$ so that Fatou’s lemma implies that
$$\int(g_{\infty}-\sup\limits_{i\in I_p}g_{\infty}^i)d\mu\le\liminf\limits_{s\to\infty}\int(\sup\limits_{i\in I}g_s^i-\sup\limits_{i\in I_p}g_s^i)d\mu\le S-(S-\e)=\e.$$
Therefore, $\int(g_{\infty}-\sup\limits_{i\in I_p}g_{\infty}^i)d\mu\le\e;$ since $p$ and $\e>0$ are arbitrary and the integrand is nonnegative, it follows that $g_{\infty}=\sup\limits_{i\in I}g_{\infty}^i$ a.e.
Let $X$ be a separable Banach space which is the dual of a separable Banach space and $F_s$ be an increasing family of sub-$\sigma-$algebras. Then for any $f\in L_1(\O, X)$
$$\lim\limits_{s\to\infty}E(f|F_s)= E(f|F_{\infty})$$ a.e. on $\O.$
Note that every separable reflexive Banach space satisfies the condition imposed on $X.$
Firstly, note that for any continuous linear functional $g\in X',$ the sequence $g(E(f|F_s))$ is a martingale as (2) of Theorem 2.2 shows that for $s_2>s_1$ $$g(E(f|F_{s_2}))=E(g(f)|F_{s_2})=E(g(f)|F_{s_1})=g(E(f|F_{s_1})).$$ One can also see that for any $g\in X'$
$$g(E(f|F_s))=E(g(f)|F_s)\rightarrow E(g(f)|F_{\infty})=g(E(f|F_{\infty}))$$ outside a set $\O_g$ (which depends on $g$) of zero measure as $s\to\infty,$ by the convergence theorem for real valued martingales [@Oksendal].
Now assume that the separable Banach space $X$ is the dual of a (necessarily) separable space $Y$ and let us identify this space with the subspace of $X',$ the dual of $X.$ Let us denote by $D$ a dense subset of unit ball in $Y$ which we can choose countable as $Y$ is separable. Then the equality $\sup\limits_{g\in
D}g(x)=||x||_X$ holds for all $x\in X. $ Indeed, one can see that $g(x)\le ||g||||x||_X$ implies $||x||_X\ge\frac{g(x)}{||g||},$ and so $||x||_X\ge \sup\limits_{g\in D}\frac{g(x)}{||g||}.$ Since there exist $x_0\in X$ and $g_0\in X'$ such that $g_0(x_0)=||x_0||_X||g_0||,$ then $||x||_X= \sup\limits_{g\in
D}\frac{g(x)}{||g||}.$
Further, take any fixed $a\in X,$ and consider the countable family of martingales $$\{(g(E(f|F_s)-a), s\in R), g\in D\}.$$
Since $$|g(E(f|F_s))|\le ||E(f|F_s)||_X\le E(||f||_X|F_s)$$ for all $g\in D$ by the contraction property of the conditional expectation, the above family satisfies the condition of Lemma 2.3, and hence applying it we get
$$||E(f|F_s)-a||_X\rightarrow ||E(f|F_{\infty})-a||_X$$ a.e. as $s\to\infty,$ for all $a\in X.$ From this it follows that
$$\mu\{\lim\limits_{s\to\infty}||E(f|F_s)-a||_X= ||E(f|F_{\infty})-a||_X\ \forall a\in X\}=1.$$
Since $X$ is separable and we can take $a=E(f(\o)|F_s)$ at every $\o\in\O,$ we find that $E(f|F_s)\rightarrow E(f|F_{\infty})$ a.e. as $s\to\infty.$
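The statement just proved can be checked in a discretized toy setting (our own illustrative construction, not part of the paper): take $[0,1)$ split into a dyadic grid, $F_n$ generated by dyadic intervals of length $2^{-n}$, and an $R^2$-valued step function $f$; the conditional expectations are then componentwise block averages.

```python
import numpy as np

# Toy check: [0,1) discretized into 2**12 cells; F_n is generated by
# dyadic intervals of length 2**-n; f is an R^2-valued step function.
N = 2**12
x = (np.arange(N) + 0.5) / N
f = np.column_stack([np.sin(2 * np.pi * x), x ** 2])

def cond_exp(f, n):
    # E(f | F_n): componentwise average over each dyadic block
    block = N // 2 ** n
    return np.repeat(f.reshape(2 ** n, block, 2).mean(axis=1), block, axis=0)

sup_err = [np.abs(cond_exp(f, n) - f).max() for n in (0, 6, 12)]
# sup_err decreases toward 0: E(f|F_n) -> f everywhere as F_n increases
```

The uniform error shrinks as the filtration refines, matching the a.e. convergence $E(f|F_s)\to E(f|F_{\infty})$ in the discretized model.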
Martingale-ergodic and ergodic-martingale theorems
==================================================
In this section we prove norm as well as a.e. convergence for vector valued martingale-ergodic and ergodic-martingale averages with continuous parameter; throughout this section, we consider only regular martingales.
Following Kachurovskii [@Kach2], we define martingale-ergodic and ergodic-martingale averages as follows.
A *martingale-ergodic* average is an average of the form $\{E(A_tf|F_s)\}_{t>0, s\ge 0},$ where $E(\cdot|F_s)$ is the conditional expectation operator and $A_tf$ is the ergodic average while an *ergodic-martingale* average is an average $A_tE(f|F_s).$
Let us introduce the following notations: $$f_{\infty}(\o)=\lim\limits_{t\to\infty}A_tf(\o),$$ $$f^*(\o)=\lim\limits_{s\to\infty}E(f_{\infty}|F_s), \ \ f_*(\o)=\lim\limits_{t\to\infty}A_tE(f|F_{\infty})(\o).$$ The existence of the above limits will be discussed below.
For $f\in L_p(\O, X),\ p\ge 1$ the following assertions hold.
1. $$E(A_tf|F_s)\rightarrow f^*$$ in norm as $t,s\rightarrow \infty;$
2. $$A_tE(f|F_s)\rightarrow f_*$$ in norm as $t,s\rightarrow \infty.$
The idea is the same as in the real valued case [@Kach2], [@pod1].
Note that $$||E(A_tf|F_s)-f^*||_p\le ||E(A_tf|F_s)-E(f_{\infty}|F_s)||_p+||E(f_{\infty}|F_s)-f^*||_p$$
The term $||E(f_{\infty}|F_s)-f^*||_p$ tends to zero due to the vector valued norm convergence theorem for continuous parameter martingales [@Vaxan].
Note that
$$||E(A_tf|F_s)-E(f_{\infty}|F_s)||_p=||E(A_tf-f_{\infty}|F_s)||_p\le ||A_tf-f_{\infty}||_p.$$
Since $||A_tf-f_{\infty}||_p\to 0$ according to the vector valued ergodic theorem (Theorem 2.1.5 of [@kren]), we get assertion (1).
Now, we prove the second part. According to the Riesz convexity theorem [@kren], [@Phil], an $L_1-L_{\infty}$ contraction is a contraction in the $L_p$ norm. Therefore, we have the following estimate
$$||A_tE(f|F_s)-f_*||_p\le ||A_tE(f|F_s)-A_tE(f|F_{\infty})||_p+||A_tE(f|F_{\infty})-f_*||_p\le$$ $$\le ||E(f|F_s)-E(f|F_{\infty})||_p+||A_tE(f|F_{\infty})-f_*||_p.$$
The norm $||E(f|F_s)-E(f|F_{\infty})||_p$ tends to zero due to the vector valued norm convergence theorem for continuous parameter martingales [@Vaxan], and the norm $||A_tE(f|F_{\infty})-f_*||_p$ tends to zero by Theorem 2.1.5 of [@kren].
We say that a linear operator $T$ in $L_1(\O, X)$ is *positively dominated* if there exists a positive linear contraction $T'$ in $L_1,$ called a *positive dominant* of $T,$ such that $$||Tf||_X\le T'(||f||_X).$$
Let us now provide some useful examples that we will use (see [@SuchF]).
1\. If $X=R$, then every linear contraction $T$ in $L_1$ is positively dominated by some positive linear contraction on $L_1.$ For vector valued $T$, a positive dominant may not exist in general.
2\. Let $\tau$ be a measure preserving transformation on $(\O
,\beta, \mu).$ Then the linear operator $T:L_1(\O, X)\rightarrow
L_1(\O, X)$ given by $Tf=f\circ\tau$ is said to be generated by $\tau.$ $T$ is positively dominated by $T'$ with $T'(||f||_X)=||f||_X\circ\tau.$
3\. Assume that the Banach space $X$ has the Radon-Nikodym property (a Banach space is said to have the Radon-Nikodym property with respect to $(\O, \beta, \mu)$ if any vector measure $\phi:\beta\to X$ with finite variation which is absolutely continuous with respect to $\mu$ is the integral of a Bochner integrable function $f:\O\to X$). If $X$ is reflexive, then it has the Radon-Nikodym property [@Vaxan]. Consider the conditional expectation $E(f|F)$ with respect to a $\sigma-$ subalgebra $F$ of $\beta.$ For $f\in L_1(\O, X),$ the conditional expectation $E(f|F)$ is the Radon-Nikodym density with respect to the finite measure $\mu$ on $F.$ Since $||E(f|F)||_X\le E'(||f||_X|F)$ a.e. for all $f\in L_1(\O, X),$ where $E'(\cdot|F)$ is the conditional expectation on $L_1,$ the operator $E(\cdot|F)$ is positively dominated by $E'(\cdot|F).$
We say that the flow $\{T_t, t\ge 0\}$ in $L_1(\O, X)$ is positively dominated by the flow $\{P_t, t\ge 0\}$ in $L_1$ if for any $f\in L_1(\O, X)$ and $t\ge0$ one has $||T_tf||_X\le
P_t(||f||_X)$ a.e. Now, we provide an a.e. convergence theorem.
Let $X$ be a separable Banach space. Assume that $\{T_t, t\ge 0\}$ is positively dominated by some semigroup $\{P_t, t\ge 0\}$ of strongly continuous linear $L_1-L_{\infty}$ contractions. Then for the function $f\in L_1(\O, X)$ the following assertions hold true.
1. If $\sup\limits_{t>0}||A_tf||_X\in L_1$ (this holds, for example, if $f\in L\log L(\O, X)$), then for any $t>0,\ s\ge 0,$ $E(A_tf|F_s)\rightarrow f^*$ a.e. as $t,s\rightarrow \infty.$
2. If $\sup\limits_{s\ge 0}||E(f|F_s)||_X\in L_1,$ then $A_tE(f|F_s)\rightarrow
f_*$ a.e. as $t,s\rightarrow \infty.$
We prove the first assertion. Note that
$$||E(A_tf|F_s)-f^*||_X\le ||E(A_tf|F_s)-E(f_{\infty}|F_s)||_X+||E(f_{\infty}|F_s)-f^*||_X.$$
According to martingale convergence Theorem 2.4, the norm $||E(f_{\infty}|F_s)-f^*||_X$ converges to $0$ a.e. as $s\to\infty.$
Let $0<t_1\le t.$ Since the conditional expectation operator is positively dominated,
$$||E(A_tf|F_s)-E(f_{\infty}|F_s)||_X=||E(A_tf-f_{\infty}|F_s)||_X\le E'(||A_tf-f_{\infty}||_X|F_s)\le E'(h_{t_1}|F_s),$$ where $h_{t_1}(\o)=\sup\limits_{t\ge
t_1}||A_tf(\o)-f_{\infty}(\o)||_X$ and $E'$ is a positive dominant of $E.$ Due to the condition of the theorem, we have $h_{t_1}\in L_1,$ and $h_{t_1}\to 0$ a.e. by Theorem 2.1. Now applying the first part of Theorem 1.2, we get $E'(h_{t_1}|F_s)\rightarrow 0$ a.e. as $t_1,s\to\infty.$ Therefore, $||E(A_tf|F_s)-E(f_{\infty}|F_s)||_X\rightarrow 0$ a.e., and hence $||E(A_tf|F_s)-f^*||_X\rightarrow 0$ a.e. as $t,s\to\infty.$
Now we prove the second part. We have
$$||A_tE(f|F_s)-f_*||_X\le ||A_tE(f|F_s)-A_tE(f|F_{\infty})||_X+||A_tE(f|F_{\infty})-f_*||_X$$
The norm $||A_tE(f|F_{\infty})-f_*||_X$ is a.e. convergent due to Theorem 2.1.
We have the following $$||A_tE(f|F_s)-A_tE(f|F_{\infty})||_X=||\frac1t\int\limits_0^tT_{\tau}(E(f|F_s)-E(f|F_{\infty}))d\tau||_X\le$$ $$\le \frac1t\int\limits_0^t||T_{\tau}(E(f|F_s)-E(f|F_{\infty}))||_Xd\tau\le$$$$\le \frac1t\int\limits_0^tP_{\tau}\big(||(E(f|F_s)-E(f|F_{\infty}))||_X\big)d\tau=$$ $$=A'_t\big(||(E(f|F_s)-E(f|F_{\infty}))||_X\big)$$
where $P_{t}$ is a positive dominant of $T_t$ for each $t$ and $A'_tf=\frac1t\int\limits_0^tP_{\tau}fd\tau.$ According to our assumption, the flow $\{P_t, t\ge 0\}$ is strongly continuous semigroup.
Note that the real valued function $h_s(\o)=||E(f(\o)|F_s)-E(f(\o)|F_{\infty})||_X$ is integrable according to the conditions of the theorem. Moreover, according to the martingale convergence Theorem 2.3, $h_s(\o)\rightarrow 0$ a.e. as $s\to\infty.$
Now applying the second part of Theorem 1.2, we get $A'_t(h_s)\rightarrow 0$ a.e. as $t,s\to\infty.$ Therefore, $||A_tE(f|F_s)-A_tE(f|F_{\infty})||_X\rightarrow 0$ a.e. as $s,t\to\infty.$
**Remark**. When we consider real valued functions, that is, when $X=R,$ for any semigroup $\{T_t, t\ge 0\}$ of linear $L_1-L_{\infty}$ contractions there always exists a semigroup $\{P_t, t\ge 0\}$ of positive linear $L_1-L_{\infty}$ contractions such that $|T_tf|\le P_t|f|$ a.e. However, in the vector valued case a positive dominant semigroup may not exist in general. It is also known that $\{T_t, t\ge 0\}$ is not positively dominated by its linear modulus [@Sato3]. Therefore, in the above theorem, unlike the real valued case, we need the additional assumption that $\{T_t, t\ge 0\}$ is positively dominated by $\{P_t, t\ge 0\}.$ One can of course ask whether the above theorems hold without this condition, but we were not able to answer this question.
The following theorem gives dominant and maximal inequalities for martingale-ergodic processes.
Let the assumptions of Theorem 3.2 hold, $f\in L_p(\O, X),\ p>1,$ $\sup\limits_{t>0}||A_tf||_X\in L_1$ and $F_s\downarrow F$ as $s\to\infty.$ Then the following assertions hold true.
1. $$||\sup\limits_{t,s}||E(A_tf|F_s)||_X||_p\le \big(\frac p{p-1}\big)^2||f||_p,$$
2. $$\mu\big\{\sup\limits_{t,s}||E(A_tf|F_s)||_X\ge \varepsilon\big\}\le \frac p{p-1}\frac{||f||_p}{\varepsilon}$$
We first prove the dominant inequality. Since the conditional expectation operator is positively dominated, we have
$$||\sup\limits_{t,s}||E(A_tf|F_s)||_X||_p\le||\sup\limits_{t,s}E'(||A_tf||_X|F_s)||_p\,$$
where $E'$ is a positive dominant of $E.$
Since $\{T_t,\ t\ge 0\}$ is positively dominated by $\{P_t,\ t\ge 0\},$
$$||A_tf||_X=||\frac1t\int\limits_0^tT_{\tau}f\,d\tau||_X\le \frac1t\int\limits_0^t||T_{\tau}f||_X\,d\tau\le$$ $$\le \frac1t\int\limits_0^tP_{\tau}(||f||_X)\,d\tau=A'_t(||f||_X).$$
Since $E$ is positively dominated by $E'$ and $A_t$ by $A'_t,$ we have the following inequality: $$||\sup\limits_{t,s}E'(||A_tf||_X|F_s)||_p\le ||\sup\limits_{t,s}E'(A'_t(||f||_X)|F_s)||_p.$$
Since the flow $P_t$ is a strongly continuous semigroup, applying Theorem 3 of [@pod1] for the process $E'(A'_t(||f||_X)|F_s)$, we get
$$||\sup\limits_{t,s}E'(A'_t(||f||_X)|F_s)||_p\le \big(\frac p{p-1}\big)^2||f||_p.$$
The above chain of inequalities implies part (1) of the theorem.
Now we prove part (2). Since the operator $E$ is positively dominated by some $E',$ we have the following inequalities
$$\mu\big\{\sup\limits_{t,s}||E(A_tf|F_s)||_X\ge \varepsilon\big\}\le \mu\big\{\sup\limits_{t,s}E'(||A_tf||_X|F_s)\ge \varepsilon\big\}\le$$ $$\le \mu\big\{\sup\limits_{t,s}E'(A'_t||f||_X|F_s)\ge \varepsilon\big\}$$ where $A'_t(||f||_X)=\frac1t\int\limits_0^tP_{\tau}(||f||_X)d\tau.$ Now applying the second part of Theorem 3 of [@pod1] to the process $E'(A'_t(||f||_X)|F_s)$, we get
$$\mu\big\{\sup\limits_{t,s}E'(A'_t||f||_X|F_s)\ge \varepsilon\big\}\le \frac p{p-1}\frac{||f||_p}{\varepsilon}.$$
Hence (2) is proved.
Now, we provide dominant and maximal inequalities for ergodic-martingale average.
Let the assumptions of Theorem 3.2 hold, $f\in L_p(\O, X),\ p>1,$ $\sup\limits_{s\ge0}||E(f|F_s)||_X\in L_1$ and $F_s\downarrow F$ as $s\to\infty.$ Then the following assertions hold true.
1. $$||\sup\limits_{t,s}||A_tE(f|F_s)||_X||_p\le \big(\frac p{p-1}\big)^2||f||_p,$$
2. $$\mu\big\{\sup\limits_{t,s}||A_tE(f|F_s)||_X\ge \varepsilon\big\}\le \frac p{p-1}\frac{||f||_p}{\varepsilon}.$$
This theorem can easily be proven using Theorem 4 of [@pod1] and the method of proof of Theorem 3.3, so we omit the details.
It is known that in $L_1$ the condition of integrability of the supremum cannot be omitted in the unified theorems [@ArgRos]. The following theorem is given without this assumption, but the conditional expectation operator and the ergodic average are required to commute.
Let $F_s\downarrow F$ as $s\to\infty,$ let $\{T_t\}$ be a strongly continuous semigroup of measure preserving transformations, and assume $T_tE(f|F_s)=E(T_tf|F_s)$ for all $t,s\ge 0.$ Then for any $f\in L_1(\O, X),$ the averages $A_tE(f|F_s)$ and $E(A_tf|F_s)$ converge a.e. as $t,s\to\infty.$
The idea is almost the same as in Theorem 4 of [@pod2].
Let $n=[t];$ then $t=n+\a,$ where $0\le\a<1.$ For any $t>0,\ s\ge 0$ we have
$$A_tE(f|F_s)=\frac1t\int\limits_0^tT_{\tau}E(f|F_s)d\tau=\frac1t\int\limits_0^nT_{\tau}E(f|F_s)d\tau+\frac1t\int\limits_n^{n+\a}T_{\tau}E(f|F_s)d\tau=$$ $$=\frac1t\sum\limits_{k=0}^{n-1}\int\limits_k^{k+1}T_{\tau}E(f|F_s)d\tau+\frac1t\int\limits_n^{n+\a}T_{\tau}E(f|F_s)d\tau=$$ $$=\frac1t\sum\limits_{k=0}^{n-1}\int\limits_0^1T_{\tau+k}E(f|F_s)d\tau+\frac1t\int\limits_0^{\a}T_{\tau+n}E(f|F_s)d\tau=$$ $$=\frac1t\sum\limits_{k=0}^{n-1}T_k\int\limits_0^1T_{\tau}E(f|F_s)d\tau+\frac1tT_n\int\limits_0^{\a}T_{\tau}E(f|F_s)d\tau=$$ $$=\frac1t\sum\limits_{k=0}^{n-1}(T_1)^kE(A_1f|F_s)+\frac{\a}t(T_1)^nA_{\a}E(f|F_s)=$$ $$=\frac nt[S_n(T_1)E(g_1|F_s)+\frac{\a}n(T_1)^nA_{\a}E(f|F_s)],$$ where $g_1=A_1f$ and $S_n(T)f=\frac
1n\sum\limits_{i=0}^{n-1}T^if.$
Now let us estimate the expressions $S_n(T_1)E(g_1|F_s)$ and $\frac{\a}n(T_1)^nA_{\a}E(f|F_s).$ Evidently, the former is a.e. convergent. If $P^1$ and $E'$ are positive dominants of $T_1$ and $E$ respectively, then the latter converges a.e., since $$||\frac{\a}n(T_1)^nA_{\a}E(f|F_s)||_X\le \frac 1n (P^1)^nE'(A_1||f||_X|F_s)=$$ $$=\frac{n+1}nS_{n+1}(P^1)E'(A_1||f||_X|F_s)-S_n(P^1)E'(A_1||f||_X|F_s)\rightarrow 0$$ a.e. as $s,n\to\infty$ by Theorem 1.2.
[99]{} Argiris, G., Rosenblatt, J.M. 2006. Forcing divergence when supremum is not integrable, Positivity, 10: 261–284.
Chacon, R.V. 1962. An ergodic theorem for operators satisfying norm condition, J. Math Mech, 11: 165–172.
Diestel, J. and Uhl, J.J., 1977. Vector measures, AMS, pp.322.
Doob, J.L., 1990. Stochastic processes, Wiley, pp.654.
Dunford, N., Schwartz, J.T., 1956. Convergence almost everywhere of operator averages. J. Ration. Mech. Anal. 5(1): 129–178.
Frangos, N.E., Sucheston, L. 1986. On multiparameter ergodic and martingale theorems in infinite measure spaces. Probab. Th. Rel. Fields, 71: 477–490.
Hasegawa S., Sato R., Tsurumi S, 1978, Vector valued ergodic theorem for a 1-parameter semigroup of linear operators. Tohoku Math. Journ. II(30): 95-106.
Hasegawa, S., Sato, R., 1997. On $d$-parameter ergodic theorems for continuous semigroups of operators satisfying norm conditions. Comment. Math. Univ. Carolinae, 3(38): 453–462.
Kakutani, S.,1950. Ergodic Theory. in Proc. Int. Congr. Math. Cambridge, MA, (Am. Math. Soc. Providence, 2(1952) 128–142).
Kachurovskii, A.G. 2007. General theories unifying ergodic averages and martingales, Proceeding of the Steklov Institute of Mathematics, 256: 160–187.
Kachurovskii, A.G. 1998. Martingale-ergodic theorem, Math Notes, 64: 266–269.
Krengel, U., 1985. Ergodic Theorems. Walter de Gruyter, Berlin, New York. pp. 357.
Neveu J. 1975. Discrete parameter martingales. Elsevier pp. 236.
Øksendal, Bernt K., 2003. Stochastic Differential Equations: An Introduction with Applications (Sixth ed.). Berlin: Springer. pp. 352.
Podvigin, I.V., 2009. Martingale-ergodic and ergodic-martingale processes with continuous parameter, Mat. Sb., 5(200): 55–70.
Podvigin, I.V., 2010. A martingale-ergodic theorem, Siberian Math. Jour., 6(51): 55–70.
Phillips, R.S. 1943. On weakly compact subsets of a Banach space. Amer. J. Math., 65: 108–136.
Sato, R., 1978. Contraction semigroups in Lebesgue space. Pacific Journal of Mathematics 1(78): 251–259.
Sato, R., 1994. Ergodic properties of contraction semigroups in $L_p,\ 1<p<\infty.$ Comment. Math. Univ. Carolinae. 35: 337–346.
Shahidi, F.A., Ganiev, I.G., 2012. Vector valued martingale-ergodic and ergodic-martingale theorems. Stochastic Analysis and its Applications 5(30): 916–932. arXiv: 1201.1682.
Shahidi Farruh, Ganiev Inomjon, 2012, Mean ergodic theorems in Hilbert-Kaplansky spaces, arxiv: 1208.5561.
Vakhania, N.N., Tarieladze, V.I., Chobanyan, S.A. 1987. Probability distributions on Banach spaces, Springer Science & Business, pp. 482.
[^1]: Email: farruh.shahidi@gmail.com
---
abstract: 'Superluminous supernova (SLSN) lightcurves exhibit a greater diversity than their regular luminosity counterparts in terms of rise and decline timescales, peak luminosities and overall shapes. It remains unclear whether this striking variety arises due to a dominant power input mechanism involving many underlying parameters, or due to contributions by different progenitor channels. In this work, we propose that a systematic quantitative study of SLSN lightcurve timescales and shape properties, such as symmetry around peak luminosity, can be used to characterize these enthralling stellar explosions. We find that applying clustering analysis on the properties of model SLSN lightcurves, powered by either a magnetar spin–down or a supernova ejecta–circumstellar interaction mechanism, can yield a distinction between the two, especially in terms of lightcurve symmetry. We show that most events in the observed SLSN sample with well–constrained lightcurves and early detections strongly associate with clusters dominated by circumstellar interaction models. Magnetar spin–down models also show association at a lower degree but have difficulty in reproducing fast–evolving and fully symmetric lightcurves. We believe this is due to the truncated nature of the circumstellar interaction shock energy input as compared to decreasing but continuous power input sources like magnetar spin–down and radioactive $^{56}$Ni decay. Our study demonstrates the importance of clustering analysis in characterizing SLSNe based on high–cadence photometric observations that will be made available in the near future by surveys like [*LSST*]{}, [*ZTF*]{} and [*Pan–STARRS*]{}.'
author:
- 'E. Chatzopoulos'
- Richard Tuminello
bibliography:
- 'refs.bib'
title: A SYSTEMATIC STUDY OF SUPERLUMINOUS SUPERNOVA LIGHTCURVE MODELS USING CLUSTERING
---
Introduction {#intro}
============
Superluminous supernovae (SLSNe; @2012Sci...337..927G [@2018arXiv181201428G; @2018SSRv..214...59M]) possess a striking diversity in terms of photometric and spectroscopic properties. SLSNe are often divided into two classes based on the presence of hydrogen (H) in their spectra: H–poor (SLSN–I) and H–rich (SLSN–II) events. In terms of photometry, SLSNe are characterized by reaching very high peak luminosities ($\gtrapprox 10^{44}$ erg s$^{-1}$) over timescales ranging from a few days to several months. The overall evolution and shape of SLSN lightcurves (LCs) can significantly vary from one event to another. Some SLSN LCs appear to have a symmetric, bell–like shape around peak luminosity [@2009ApJ...690.1358B; @2011Natur.474..487Q] while others are highly skewed with a fast rise followed by a slow, long–term decline [@2011ApJ...735..106D; @2016ApJ...831..144L]. Most SLSNe appear to be hosted in low–metallicity dwarf galaxies similar to long–duration Gamma–ray bursts (LGRBs) [@2011ApJ...727...15N; @2014ApJ...787..138L].
Several power input mechanisms have been proposed to interpret the extreme peak luminosities and diverse observational properties of SLSNe. Most SLSN–II show robust signs of circumstellar interaction with a hydrogen medium in their spectra indicating that effective conversion of shock heating to luminosity can reproduce their LCs [@2007ApJ...671L..17S; @2013ApJ...773...76C]. SLSN–I, on the other hand, do not show the usual signatures of circumstellar interaction and are often modelled by magneto–rotational energy release due to the spin–down of a newly–born magnetar following a core–collapse supernova (CCSN) explosion [@2010ApJ...717..245K; @2010ApJ...719L.204W; @2013ApJ...770..128I].
Nonetheless, the association between power input mechanism and SLSN type is still ambiguous. The magnetar spin–down model is occasionally invoked as an explanation for SLSN–II that exhibit P–Cygni H$\alpha$ line profiles, like SN 2008es. On the other hand, circumstellar interaction cannot be completely ruled out for SLSN–I events because H lines may be hidden due to complicated circumstellar matter geometries [@2018ApJ...856...29M; @2018MNRAS.475.3152K], details of non–local thermal equilibrium line transfer physics in non–homologously expanding shocked, dense regions yet unexplored by numerical radiation transport models [@2013ApJ...773...76C; @2015MNRAS.449.4304D] or, simply, interaction with a H–deficient medium [@2012ApJ...760..154C; @2016ApJ...828...94C; @2016ApJ...829...17S]. A sub–class of SLSNe is found to transition from SLSN–I at early times to SLSN–II of Type IIn at late times, indicating late–time interaction and adding to the complexity of the problem [@2017ApJ...848....6Y].
Breaking the degeneracy between SLSNe powered by magnetar spin–down, circumstellar interaction and other mechanisms will help address a variety of important questions surrounding massive stellar evolution and explosive stellar death: the link between LGRBs and SLSNe, the formation of extremely magnetized stars following CCSN and their effect on the dynamics of the expansion of the supernova (SN) ejecta, the mass–loss history of massive stars in the days to years prior to their explosion and how their environments affect the radiative properties of their explosion, to name a few.
The advent of automated, wide–field, high–cadence transient surveys like the [*Panoramic Survey Telescope and Rapid Response System; Pan–STARRS*]{} [@2002SPIE.4836..154K], the [*Zwicky Transient Facility; ZTF*]{} [@2019PASP..131a8002B] and, of course, the [*Large Synoptic Survey Telescope (LSST)*]{} [@2008SerAJ.176....1I] will significantly enhance the SLSN discovery rate and equip us with more complete photometric coverage that includes detections shortly after the SN explosion tightly constraining the LCs of these events.
This work aims to illustrate how well–sampled LCs can be used to unveil the power input mechanism of SLSNe. This is done by quantitatively characterizing several key properties of SLSN LCs such as rise and decline timescales [@2015MNRAS.452.3869N] and LC symmetry around peak luminosity. Using the power of machine learning and $k$–means clustering analysis we are able to distinguish between groups of LC shape parameters corresponding to different power input mechanisms, and calculate their association with the properties of observed SLSN LCs.
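For reference, the core of the $k$–means method (Lloyd's algorithm) can be sketched in a few lines of [*Python*]{}. This is a minimal, illustrative implementation operating on generic feature vectors (such as the LC shape parameters defined in the next section), not the exact clustering code used in our analysis.

```python
import numpy as np

def kmeans(x, k, n_iter=100, seed=0):
    """Minimal Lloyd's algorithm: assign points to the nearest center,
    then move each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        # distances of every point to every center, shape (n_points, k)
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        new_centers = np.array([x[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```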
Our paper is organized as follows: Section \[obs\] presents the observed SLSN LC sample that we use in this work and introduces the LC shape properties that are utilized in our analysis. Section \[mod\] introduces the SLSN power input models adopted to obtain large grids of semi–analytic LCs across the associated parameter spaces. Section \[cluster\] introduces the $k$–means clustering analysis method that we employ to characterize observed and model SLSN LCs and Section \[results\] details the results of this analysis. Finally, Section \[disc\] summarizes our discussion.
Observed SLSN Lightcurve Sample {#obs}
===============================
We use the [*Open Supernova Catalog*]{} (OSC; @2017ApJ...835...64G) to access publicly available photometric data on a sample of 126 events that are spectroscopically classified as SLSN–I (68% of the sample) or SLSN–II (32% of the sample).
For events with available redshift measurements, we compute pseudo–bolometric LCs using the [*SuperBol*]{}[^1] code [@2018RNAAS...2d.230N]. [*SuperBol*]{} is a user–friendly [*Python*]{} tool that uses the available observed fluxes in different filters to fit blackbodies to the Spectral Energy Distribution (SED) of a SN. The resulting pseudo–bolometric SN LCs can also be corrected for time dilation and distance, and converted to the rest frame (K–correction). Using extrapolation techniques, missing near–infrared (NIR) and ultraviolet (UV) flux can also be accounted for. Subsequently, all rest–frame LCs are shifted in time so that $t =$ 0 coincides with the time of peak luminosity ($t_{\rm 0} = t_{\rm max}$), and scaled by the peak luminosity ($L_{\rm max}$).
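The core idea behind such blackbody SED fitting can be sketched as follows. The filter wavelengths, temperature and scale factor in the example are illustrative assumptions, and this sketch is not [*SuperBol*]{}'s actual interface.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # Planck constant, c, k_B (cgs)

def planck_lambda(lam_cm, temp):
    """Planck function B_lambda(T) in cgs units."""
    return (2.0 * H * C**2 / lam_cm**5) / np.expm1(H * C / (lam_cm * KB * temp))

def fit_blackbody(lam_cm, flux, t0=1e4):
    """Fit a temperature and a geometric scale factor to broadband fluxes.
    The scale is fit in log10 to keep the two parameters well conditioned."""
    model = lambda lam, temp, log_s: 10.0**log_s * planck_lambda(lam, temp)
    log_s0 = np.log10(flux.max() / planck_lambda(lam_cm[np.argmax(flux)], t0))
    (temp, log_s), _ = curve_fit(model, lam_cm, flux, p0=[t0, log_s0])
    return temp, 10.0**log_s
```

The integral of the fitted blackbody over wavelength, at each epoch, then gives the pseudo–bolometric luminosity.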
For the purposes of our study, we select a sub–sample of SLSNe defined by rest–frame LCs with near–complete temporal photometric coverage, which we define as including observed data in the range $L_{\rm max}/{\rm e}<L(t)<L_{\rm max}$ (or $1/{\rm e} < L(t) < 1$ in the scaled form). Thus, we only focus on SLSN LCs with observed evolution within one ${\rm e}$–folding timescale of the peak luminosity, ensuring that our analysis relies only on real data and not approximate, often model–based, extrapolations to explosion time (see \[lcshape\]). In this regard, our sample selection criterion for LC coverage is similar to that used in (@2015MNRAS.452.3869N; hereafter referred to as N15), but our SLSN sample is larger than their “gold” sample by 8 events due to our inclusion of SLSN–II events and the availability of more SLSN discoveries since their publication. This process leaves us with a reduced sample of 25 SLSNe with well–covered LCs: 21 SLSN–I and 4 SLSN–II events. Table \[T1\] presents the details of the SLSN sample used in our analysis, including the photometric band with the longest (in time) LC coverage that was used in generating their pseudo–bolometric LC.
[*Quantitative properties of SLSNe LC shapes*]{} {#lcshape}
------------------------------------------------
In order to quantitatively constrain the shapes of SLSN LCs, we define the following scaled luminosity thresholds:
- [Primary luminosity threshold: $L_{\rm 1} =1.0/{\rm e}$ or 36.79% of the peak luminosity.]{}
- [Secondary luminosity threshold: $L_{\rm 2} = 1.0/(0.5{\rm e})$ or 73.58% of the peak luminosity.]{}
- [Tertiary luminosity threshold: $L_{\rm 3} = 1.0/(0.4{\rm e})$ or 91.97% of the peak luminosity.]{}
At each luminosity threshold we can compute a “rise–time” to peak luminosity and a “decline–time” from peak. As such, we accordingly define the primary, secondary and tertiary rise ($tr_{\rm 1}$, $tr_{\rm 2}$, $tr_{\rm 3}$) and decline ($td_{\rm 1}$, $td_{\rm 2}$, $td_{\rm 3}$) timescales. It is evident that $t[d,r]_{\rm 3} < t[d,r]_{\rm 2} < t[d,r]_{\rm 1}$ and that all of the SLSNe in our selected LC sample have observations that include these timescales. We note that our choice for the primary luminosity threshold and corresponding rise and decline timescales is the same as the one used in N15 to study how closely these timescales correlate with different power input models.
Next, to quantify how symmetric a LC is around peak luminosity, we define three corresponding “LC symmetry” parameters: $s_{\rm 1,2,3} = tr_{\rm 1,2,3}/td_{\rm 1,2,3}$. The closer these parameters are to unity, the more symmetric the LC is at the corresponding luminosity threshold. To consider a LC “fully symmetric,” all three LC symmetry parameters need to be close to unity; for the purposes of this study we define a symmetric LC as one that satisfies the criterion $0.9 < s_{\rm 1,2,3} < 1.1$. For the remainder of this paper we refer to the nine ($tr_{\rm 1,2,3}$, $td_{\rm 1,2,3}$, $s_{\rm 1,2,3}$) LC parameters as “LC shape parameters”.
[lccccccccccccc]{} & & & [*SLSN–I*]{} & & & & & & & & &\
PTF09cnd & @2011Natur.474..487Q & 0.258 & UBgRi & 29.5 & 56.3 & 0.52 & 18.9 & 26.9 & 0.7 & 10.6 & 12.9 & 0.82\
SN2011kg & @2013ApJ...770..128I & 0.192 & UBgrizJ & 20.5 & 30.0 & 0.68 & 12.5 & 15.9 & 0.79 & 6.9 & 7.9 & 0.88\
SN2010md & @2013ApJ...770..128I & 0.098 & UBgriz & 30.4 & 31.9 & 0.95 & 16.1 & 16.6 & 0.97 & 8.4 & 8.4 & 1.0\
SN2213–1745 & @2012Natur.491..228C & 2.046 & g$^{\prime}$r$^{\prime}$i$^{\prime}$ & 10.4 & 25.5 & 0.41 & 6.7 & 8.6 & 0.78 & 3.7 & 4.3 & 0.87\
PTF09atu & @2011Natur.474..487Q & 0.501 & gRi & 48.8 & 50.9 & 0.96 & 29.9 & 30.2 & 0.99 & 16.4 & 16.0 & 1.02\
iPTF13ajg & @2014ApJ...797...24V & 0.740 & uBgR$_{\rm s}$iz & 21.9 & 28.8 & 0.76 & 14.3 & 16.4 & 0.87 & 8.0 & 8.6 & 0.93\
PS1–10pm & @2015MNRAS.448.1206M & 1.206 & griz & 27.9 & 25.4 & 1.1 & 14.9 & 15.0 & 0.99 & 7.9 & 7.9 & 1.0\
PS1–14bj & @2016ApJ...831..144L & 0.522 & grizJ & 81.6 & 138.2 & 0.59 & 49.2 & 64.9 & 0.76 & 27.2 & 32.4 & 0.84\
SN2013dg & @2014MNRAS.444.2096N & 0.265 & griz & 15.6 & 29.7 & 0.52 & 10.4 & 14.0 & 0.74 & 5.9 & 6.8 & 0.87\
iPTF13ehe & @2015ApJ...814..108Y [@2017ApJ...848....6Y] & 0.343 & gri & 53.4 & 62.1 & 0.86 & 32.2 & 35.4 & 0.91 & 18.1 & 18.1 & 1.0\
LSQ14mo & @2015ApJ...815L..10L & 0.253 & Ugri & 16.2 & 25.3 & 0.64 & 10.9 & 14.0 & 0.78 & 6.2 & 7.1 & 0.87\
PS1–10bzj & @2013ApJ...771...97L & 0.650 & griz & 14.6 & 22.5 & 0.65 & 10.3 & 13.8 & 0.75 & 6.1 & 7.2 & 0.84\
DES14X3taz & @2016ApJ...818L...8S & 0.608 & griz & 31.9 & 41.8 & 0.76 & 19.9 & 23.0 & 0.87 & 11.0 & 11.7 & 0.94\
LSQ14bdq & @2015ApJ...807L..18N & 0.345 & griz & 54.6 & 90.2 & 0.61 & 37.1 & 48.8 & 0.76 & 21.7 & 24.4 & 0.89\
SNLS 07D2bv & @2013ApJ...779...98H & 1.500 & griz & 18.9 & 17.7 & 1.07 & 12.5 & 12.8 & 0.98 & 7.1 & 7.0 & 1.01\
SNLS 06D4eu & @2013ApJ...779...98H & 1.588 & griz & 15.0 & 17.6 & 0.85 & 9.4 & 10.6 & 0.89 & 5.3 & 5.7 & 0.92\
PTF12dam & @2018ApJ...860..100D & 0.107 & UBgVrizJHK & 46.2 & 75.0 & 0.62 & 28.8 & 37.5 & 0.77 & 16.6 & 18.3 & 0.91\
SN2011ke & @2018ApJ...860..100D & 0.143 & UBgVriz & 22.1 & 26.6 & 0.83 & 12.3 & 13.8 & 0.97 & 6.8 & 7.0 & 0.97\
PTF12gty & @2018ApJ...860..100D & 0.177 & gri & 46.4 & 65.9 & 0.70 & 24.9 & 27.0 & 0.92 & 14.0 & 15.2 & 0.92\
PS1–11ap & @2018ApJ...852...81L & 0.524 & grizy & 26.7 & 52.5 & 0.51 & 18.5 & 26.3 & 0.71 & 11.0 & 12.9 & 0.85\
SCP 06F6 & @2009ApJ...690.1358B & 1.189 & iz & 31.8 & 32.7 & 0.97 & 19.5 & 19.5 & 1.0 & 10.6 & 10.4 & 1.02\
& & & [*SLSN–II*]{} & & & & & & & & &\
SN2006gy & @2007ApJ...666.1116S & 0.019 & BVR & 41.0 & 54.3 & 0.76 & 24.4 & 27.8 & 0.88 & 13.3 & 14.1 & 0.94\
CSS121015:004244+132827 & @2014MNRAS.441..289B & 0.287 & UBVRGI & 20.3 & 30.9 & 0.66 & 12.5 & 15.2 & 0.82 & 7.0 & 7.6 & 0.92\
SN2016jhn & @2018arXiv180108240M & 1.965 & GI2zY & 12.4 & 27.0 & 0.46 & 10.3 & 20.7 & 0.5 & 6.3 & 10.6 & 0.6\
SDSSII SN2538 &@2018PASP..130f4002S & 0.530 & u$^{\prime}$g$^{\prime}$r$^{\prime}$i$^{\prime}$z$^{\prime}$ & 31.6 & 37.8 & 0.84 & 19.0 & 19.2 & 0.99 & 10.0 & 10.0 & 1.0\
[lccccc|cccccccc]{} &&& SLSN–I & & & & & SLSN–II & &\
$tr_{\rm 1}$ & 31.6 & 27.9 & 17.3 & 81.6 & 10.4 & 26.3 & 25.9 & 10.9 & 41.0 & 12.4\
$td_{\rm 1}$ & 45.1 & 31.9 & 28.5 & 138.2 & 17.6 & 37.5 & 34.3 & 10.4 & 54.3 & 27.0\
$s_{\rm 1}$ & 0.74 & 0.70 & 0.19 & 1.10 & 0.41 & 0.68 & 0.71 & 0.14 & 0.84 & 0.46\
$tr_{\rm 2}$ & 19.5 & 16.1 & 10.5 & 49.2 & 6.7 & 16.6 & 15.8 & 5.6 & 24.4 & 10.3\
$td_{\rm 2}$& 23.4 & 16.6 & 13.6 & 64.9 & 8.6 & 20.7 & 19.9 & 4.5 & 27.8 & 15.3\
$s_{\rm 2}$ & 0.85 & 0.87 & 0.10 & 1.00 & 0.70 & 0.80 & 0.85 & 0.18 & 0.99 & 0.50\
$tr_{\rm 3}$ & 10.9 & 8.4 & 5.9 & 27.2 & 3.7 & 9.1 & 8.5 & 2.8 & 13.3 & 6.2\
$td_{\rm 3}$ & 11.9 & 8.6 & 6.7 & 32.4 & 4.3 & 10.6 & 10.3 & 2.3 & 14.1 & 7.6\
$s_{\rm 3}$ & 0.93 & 0.92 & 0.06 & 1.02 & 0.82 & 0.86 & 0.93 & 0.16 & 1.00 & 0.60\
We have developed a [*Python*]{} script that fits a high–degree polynomial to the scaled observed LCs of the SLSNe in our sample. This provides interpolation between missing photometric data points and an accurate measurement of the LC shape parameters discussed above. An example of such a fit is shown in Figure \[Fig:06gypoly\] for SN2006gy, arguably one of the most well–observed SLSN–II of Type IIn [@2007ApJ...666.1116S]. In this figure, the light blue horizontal lines show the three luminosity thresholds that were introduced earlier. Based on these thresholds, we find $tr_{\rm 1} =$ 41.0 days and $td_{\rm 1} =$ 54.3 days for this SN, implying primary symmetry $s_{\rm 1} =$ 0.76. The rest of the LC shape parameters for SN2006gy are given in Table \[T1\]. Table \[T2\] lists the main LC shape statistical properties of the observed SLSN–I and SLSN–II in our sample. The SLSN–II sample includes only 4 events, preventing us from performing an accurate statistical comparison against the SLSN–I sample to look for potential systematic differences in the two distributions.
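A sketch of this interpolation step is shown below; the polynomial degree and the fine grid are illustrative choices, and the time axis is rescaled to keep the high–degree fit numerically well conditioned.

```python
import numpy as np

def fit_scaled_lc(t, l_scaled, deg=7, n_fine=2001):
    """Least-squares polynomial fit to a scaled LC, evaluated on a fine
    time grid for interpolation between photometric epochs."""
    # map the time axis onto [-1, 1] to avoid an ill-conditioned Vandermonde fit
    mid, half = 0.5 * (t.max() + t.min()), 0.5 * (t.max() - t.min())
    coeffs = np.polyfit((t - mid) / half, l_scaled, deg)
    t_fine = np.linspace(t.min(), t.max(), n_fine)
    return t_fine, np.polyval(coeffs, (t_fine - mid) / half)
```

The threshold-crossing timescales are then read off the densely sampled fitted curve rather than the sparse photometry.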
Our sample overlaps with that presented in Table 3 of N15 for 11 SLSNe: SN2011ke, SN2013dg, LSQ14mo, LSQ14bdq, PTF12dam, CSS121015:004244+132827, PS1–11ap, SCP 06F6, PTF09cnd, PS1–10bzj and iPTF13ajg. The limited overlap is due to the fact that, for the purposes of our study, we decided to include only events with real detections shortly after the explosion and good coverage of the LC, in order to tightly constrain their LC shape parameters. N15, on the other hand, opted to use polynomial extrapolation to earlier times for some of the SLSNe in their sample in order to obtain estimates for $tr_{\rm 1}$ and $td_{\rm 1}$. For objects where this extrapolation extends only a few days, this may not be a bad approximation; however, for cases like SN2007bi [@2009Natur.462..624G], SN2005ap [@2007ApJ...668L..99Q], and PS1-10ky [@2011ApJ...743..114C], $tr_{\rm 1}$ is poorly constrained using this method.
For the 11 events that are common between our sample and that of N15, we calculate the mean value of $tr_{\rm 1}$ to be 27.2 days versus 25.7 days in their case, and the mean value of $td_{\rm1}$ to be 42.8 days compared to 51.6 days in their case. While our results are consistent in terms of $tr_{\rm 1}$, the discrepancy in $td_{\rm 1}$ could be due to a variety of reasons, including different combinations of filters used to calculate the rest–frame pseudo–bolometric LC of each event. In our work, we have used all available filters with more than 2 data points for each event to construct LCs using [*SuperBol*]{} as described earlier. We caution that a more complete accounting of near–IR and IR fluxes may lead to flattening of the true bolometric LC at late times and therefore longer primary decline timescales.
We note that comparing the mean $tr_{\rm 1}$ and $td_{\rm 1}$ values of our entire sample ($tr_{\rm 1} =$ 30.8 days, $td_{\rm 1} =$ 43.9 days from Table \[T2\]) against those of the full SLSN sample of N15 (their Table 3; $tr_{\rm 1} =$ 22.9 days, $td_{\rm 1} =$ 46.4 days), the agreement is somewhat better, within uncertainties. We also derive a linear fit for the observed $tr_{\rm 1}$ and $td_{\rm 1}$ values of the form: $$td_{\rm 1} = \gamma_{\rm 0} + \gamma_{\rm 1} \times tr_{\rm 1}\label{Eq1},$$ where $\gamma_{\rm 0} =$ -1.962 and $\gamma_{\rm 1} =$ 1.489 (see also Figure \[Fig:tr1td1fit\]). In contrast, N15 derive a steeper correlation for their “gold” SLSN sample, with $\gamma_{\rm 0,N15} =$ -0.10 and $\gamma_{\rm 1,N15} =$ 1.96.
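This fit is an ordinary least-squares regression; for example, with numpy, shown here on a small illustrative subset of the $(tr_{\rm 1}, td_{\rm 1})$ pairs from Table \[T1\], so the coefficients differ from the full-sample values quoted above:

```python
import numpy as np

# (tr1, td1) pairs in days for a subset of the events in Table 1
tr1 = np.array([29.5, 20.5, 30.4, 48.8, 81.6, 54.6, 46.2, 41.0])
td1 = np.array([56.3, 30.0, 31.9, 50.9, 138.2, 90.2, 75.0, 54.3])

# slope (gamma_1) and intercept (gamma_0) of the linear relation
gamma1, gamma0 = np.polyfit(tr1, td1, 1)
```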
An investigation of Table \[T1\] reveals yet another interesting property of our observed SLSN sample: five SLSN–I events (SN2010md, PTF09atu, PS1–10pm, SNLS 07D2bv and SCP 06F6) or, equivalently, 23.81% of the entire SLSN–I sample, have fully symmetric LCs around peak luminosity, following the criterion we established earlier for full LC symmetry ($0.9 < s_{\rm 1,2,3} < 1.1$). This can be stated with more certainty for SN2010md and PTF09atu (with redshifts 0.098 and 0.501, respectively) than for the other three events with large redshifts ($>$ 1), because in the latter case the observed bands correspond to near–UV fluxes in the rest frame. Bias toward UV fluxes may correspond to a faster post–maximum decline rate and thus steeper, more symmetric LCs. Nevertheless, we have attempted to account for this effect by making use of approximate extrapolations to the IR flux using the techniques available in [*SuperBol*]{}.
The upper left panel of Figure \[Fig:symmLCs\] shows two examples of SLSNe with “fully–symmetric” LCs. Given that symmetric LCs are present in about a quarter of our SLSN–I sample, a considerable fraction of LC models corresponding to the proposed power input mechanisms must be able to reproduce this observation. [*This raises the question of whether LC symmetry is a property shared amongst all the proposed power input mechanisms for different combinations of model parameters or is uniquely tied to one power input mechanism. In the latter case, we can use photometry alone to characterize the nature of SLSNe*]{}.
Lastly, another LC shape property that will be interesting to constrain with future, high–cadence photometric follow–up of SLSNe would be the convexity (second derivative) of the bolometric LC during the rise to peak luminosity [@2017ApJ...851L..14W]. Given the low temporal resolution of the observed LC in our sample, we opt to not provide estimates of the percentages of concave–up and concave–down LCs, yet we briefly discuss the predictions for these parameters coming from semi–analytical models in the following section.
SLSN Power Input Models {#mod}
=======================
A number of models have been proposed to explain both the unprecedented peak luminosities but, more importantly, the striking diversity in the observed properties of SLSNe, both photometrically (LC timescales and shapes) and spectroscopically (SLSN–I versus SLSN–II class events). The three most commonly cited SLSN power input mechanisms are the radioactive decay of several masses of $^{56}$Ni produced in a full–fledged Pair–Instability Supernova explosion (PISN; @2009Natur.462..624G [@2012ApJ...748...42C; @2015ApJ...799...18C]), the magneto–rotational energy release from the spin–down of a newly born magnetar following a core–collapse SN (CCSNe) [@2010ApJ...717..245K; @2010ApJ...719L.204W] and the interaction between SN ejecta and massive, dense circumstellar shells ejected by the progenitor star prior to the explosion [@2007ApJ...671L..17S; @2008ApJ...686..467S; @2016ApJ...828...94C; @2017ApJ...851L..14W].
We have decided to leave the PISN model out of our analysis for several reasons that make it unsuitable for most contemporary SLSNe. First, given that the known hosts of SLSNe have metallicities $Z >$ 0.1 $Z_{\odot}$ [@2013ApJ...771...97L; @2014ApJ...787..138L], very massive stars formed in these environments are likely to suffer strong radiatively–driven mass–loss, preventing them from forming the massive carbon–oxygen cores ($\gtrapprox$ 40–60 $M_{\odot}$, depending on Zero Age Main Sequence rotation rate; @2012ApJ...748...42C) required to encounter the pair instability. Second, the majority of PISN models do not yield superluminous LCs, and even the PISN models that do often require total SN ejecta masses that are comparable to, or in some cases smaller than, the predicted $^{56}$Ni mass needed to explain the high peak luminosity [@2013ApJ...773...76C]. Finally, while radiation transport models of PISNe can reproduce superluminous LCs and provide good fits to the LCs of some SLSNe [@2009Natur.462..624G; @2017ApJ...846..100G], the model spectra are too red compared to the observed SLSN spectra at contemporaneous epochs [@2013MNRAS.428.3227D; @2015ApJ...799...18C]. Full–fledged PISNe may, however, still be at play in lower metallicity environments and in massive, Population III primordial stars; an alternative perspective on the viability of low–redshift full–fledged PISNe has also been presented in the literature.
We add that a model that has recently been gaining popularity is energy input by fallback accretion onto a newly–formed black hole following core collapse [@2013ApJ...772...30D]. One caveat of this model is that, in most cases, unrealistically large accretion masses are needed in order to fit the observed LCs of SLSNe, given a fiducial choice for the energy conversion efficiency [@2018ApJ...867..113M]. While the fallback accretion model is a very interesting suggestion that may be relevant to a small fraction of SLSNe, we opt to exclude it from our model LC shape analysis at least until it is further investigated in the literature. This leaves us with the two main channels to power SLSNe most often discussed today: the magnetar spin–down and the circumstellar interaction model. Hereafter, we refer to the magnetar spin–down model as “MAG” and to the SN ejecta–circumstellar interaction model as “CSM”.
For both the MAG and the CSM model, we adopt the semi–analytic formalism presented in [@2012ApJ...746..121C; @2013ApJ...773...76C] (hereafter C12, C13), based on the seminal works of [@1980ApJ...237..541A; @1982ApJ...253..785A] on modeling the LCs of Type Ia and Type II SNe. While these models invoke many simplifying assumptions (a centrally concentrated input source in terms of energy density, homologous expansion of the SN ejecta and constant, Thomson–scattering opacity for the SN ejecta, to name a few), they remain a powerful tool to study the LC shapes of SNe under different power inputs because of their ability to provide reasonable estimates of the associated physical parameters when fit to observed data. In addition, these semi–analytic models are numerically inexpensive, allowing us to compute large grids of LC models throughout the associated, multi–dimensional parameter space. As such, they remain a popular SN LC modeling tool, with a few publicly available codes to compute them such as [*TigerFit*]{} [@2017ApJ...851L..14W] and [*MOSFiT*]{} [@2018ApJS..236....6G]. We caution, however, that comparisons against rigorous, numerical radiation transport models have shown that semi–analytic SLSN LC models have their limitations, especially in regimes where the SN expansion is not homologous (for example due to circumstellar interaction) and due to the assumptions of constant opacity in the SN ejecta and constant diffusion timescale [@2013MNRAS.428.1020M; @2018arXiv181206522K]. For this reason, we include some analysis of the LC shape properties of numerically–computed SLSN LCs that are available in the literature for both the MAG and the CSM model.
[*The SN–ejecta circumstellar interaction model (CSM)*]{} {#csi}
---------------------------------------------------------
Massive stars can suffer significant mass–loss episodes, especially during the late stages of their evolution, due to a variety of mechanisms: super–Eddington strong winds during a Luminous Blue Variable (LBV) stage similar to $\eta$ Carinae [@2007ApJ...671L..17S; @2011MNRAS.415..773S; @2018arXiv180910187J; @2018MNRAS.480.1466S], gravity–wave driven mass–loss excited during vigorous Si– and O–shell burning [@2012MNRAS.423L..92Q; @2014ApJ...780...96S; @2017MNRAS.470.1642F], binary interactions [@1994ApJ...429..300W] or a softer version of a PISN that does not lead to complete disruption of the progenitor star (Pulsational Pair–Instability or PPISN; @2007Natur.450..390W [@2012ApJ...760..154C; @2017ApJ...836..244W]). PPISNe originate from less massive progenitors than full–fledged PISNe and can thus occur in the nearby Universe, offering a channel to produce a sequence of SLSN–like transients from the same progenitor, as successively ejected shells can collide with each other before the final CCSN takes place [@2016ApJ...828...94C; @2017ApJ...836..244W; @2018NatAs.tmp..125L].
As a result, both observational evidence and theoretical modeling suggest that the environments around massive stars can be very complicated with diverse geometries (circumstellar (CS) spherical or bipolar shells, disks or clumps) and, in some cases, very dense and at the right distance from the progenitor star that a violent interaction will be imminent following the SN explosion. This SN ejecta–circumstellar matter interaction (CSI) leads to the formation of forward and reverse shocks and the efficient conversion of kinetic energy into luminosity [@1994ApJ...420..268C; @2041-8205-729-1-L6] that can produce superluminous transients with immense diversity in their LC shapes and maybe even spectra [@2012ApJ...747..118M; @2013MNRAS.430.1402M; @2016MNRAS.tmp..117D; @2018MNRAS.475.3152K].
C12 combined the self–similar CSI solutions presented by @1994ApJ...420..268C with the @1980ApJ...237..541A [@1982ApJ...253..785A] LC modeling formalism to compute approximate, semi–analytical CSM models that were then successfully fit to the LCs of several SLSN–I and SLSN–II events in C13. Given a SN explosion energy ($E_{\rm SN}$), SN ejecta mass ($M_{\rm ej}$), the index of the outer (power–law) density profile of the SN ejecta ($n$, related to the progenitor radius), the distance of the CS shell ($R_{\rm CS}$), the mass of the CS shell ($M_{\rm CS}$), the (power–law) density profile of the CS shell ($s$) and the progenitor star mass–loss rate ($\dot{M}$), a model, semi–analytic CSM LC can be computed. The energy input originates from the efficient conversion of the kinetic energy of both the forward and the reverse shock to luminosity. As such, forward shock energy input is terminated when it breaks out to the optically–thin CS, while reverse shock input is terminated once it sweeps up the bulk of the SN ejecta. [*This is a property unique to the CSM model and not present in other, continuous heating sources such as radioactive decay of $^{56}$Ni and magnetar spin–down input: during CSI energy input terminates abruptly, thus affecting the shape of the LC in a way that can yield a faster decline in luminosity at late times.*]{}
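The effect of an abruptly terminated input on the LC shape can be demonstrated with the @1982ApJ...253..785A diffusion integral, $L(t) = e^{-(t/t_{\rm d})^{2}} \int_{0}^{t} 2\,P(t')\,(t'/t_{\rm d}^{2})\,e^{(t'/t_{\rm d})^{2}}\,dt'$, applied to a toy constant input $P(t)$ that is switched off: the diffused LC tracks the continuous-input case until shut-off and then declines steeply. The timescales and amplitudes below are arbitrary illustrative choices, not fits to any event.

```python
import numpy as np

def diffused_lc(t, p_in, t_d):
    """Arnett-style diffusion of an arbitrary central input p_in(t)
    through ejecta with effective diffusion timescale t_d (toy units)."""
    x = t / t_d
    integrand = 2.0 * x * p_in * np.exp(x**2) / t_d
    # cumulative trapezoidal integration of the weighted input history
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return np.exp(-x**2) * integral

t_d = 30.0                               # diffusion timescale [days]
t = np.linspace(0.0, 150.0, 20000)       # time grid [days]
p_cont = np.ones_like(t)                 # continuous central heating
p_trunc = np.where(t < 40.0, 1.0, 0.0)   # heating terminated at day 40
lc_cont = diffused_lc(t, p_cont, t_d)
lc_trunc = diffused_lc(t, p_trunc, t_d)
```

For constant continuous input the output luminosity saturates at the input value, while the truncated case drops off steeply once the shock input ends.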
While the CSM model can naturally explain the observed diversity of SLSN LCs and is consistent with observation of narrow emission lines in the spectra of SLSN–II events of IIn class, it has been challenged as a viable explanation for SLSN–I due to the lack of spectroscopic signatures associated with interaction (@2013ApJ...770..128I, N15). There is, however, a “hybrid” class of SLSNe that transition from SLSN–I to SLSN–II at late times indicating possible interaction with H–poor material early on before the SN ejecta reach the ejected H envelope and interact with it producing Balmer emission lines [@2015ApJ...814..108Y]. Another concern for the CSM model is the necessity to include many parameters in the model that can lead to overfitting observed data and to parameter degeneracy issues [@2013MNRAS.428.1020M]. Detailed radiation hydrodynamics and radiation transport modeling of the CSI process across the relevant parameter space, including in cases of H–poor CSI, is still needed in order to resolve whether SLSN–I can be powered by this mechanism.
[*The magnetar spin–down model (MAG)*]{} {#mag}
----------------------------------------
The spin–down of a newly born magnetar following a CCSN can release magneto–rotational energy that, if efficiently thermalized in the expanding SN ejecta, can produce a superluminous display [@2010ApJ...717..245K; @2010ApJ...719L.204W]. Assuming a dipole magnetic field for the magnetar, with an initial rotation period $P_{\rm mag}$ in units of 1 ms and an initial magnetic field $B_{\rm 14, mag}$ in units of $10^{14}$ G, the associated SN LC can be computed by making use of Equation 13 of C12. This model LC can also provide estimates for the SN ejecta mass, $M_{\rm ej}$, which is controlled by the diffusion timescale (Equation 10 of C12).
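A minimal numerical sketch of this model combines the @2010ApJ...717..245K spin–down input, $L_{\rm p}(t) = (E_{\rm p}/t_{\rm p})(1 + t/t_{\rm p})^{-2}$ with $E_{\rm p} \simeq 2 \times 10^{52}\,P_{\rm mag}^{-2}$ erg and $t_{\rm p} \simeq 4.75\,P_{\rm mag}^{2}\,B_{\rm 14,mag}^{-2}$ days, with an Arnett–style diffusion integral. The diffusion timescale is treated here as a free parameter rather than derived from $M_{\rm ej}$, $\kappa$ and the ejecta velocity, so this is an illustrative sketch of the formalism, not a reproduction of Equation 13 of C12.

```python
import numpy as np

DAY = 86400.0  # seconds per day

def magnetar_lc(p_ms, b14, t_d_days, t_end_days=300.0, n=30000):
    """Toy magnetar-powered LC: dipole spin-down input diffused through
    the ejecta with an effective diffusion timescale t_d."""
    e_p = 2e52 / p_ms**2                     # initial rotational energy [erg]
    t_p = 4.75 * DAY * p_ms**2 / b14**2      # spin-down timescale [s]
    t_d = t_d_days * DAY
    t = np.linspace(0.0, t_end_days * DAY, n)
    p_in = (e_p / t_p) / (1.0 + t / t_p)**2  # spin-down luminosity [erg/s]
    x = t / t_d
    integrand = 2.0 * x * p_in * np.exp(x**2) / t_d
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return t / DAY, np.exp(-x**2) * integral
```

For fiducial parameters (e.g. $P_{\rm mag} = 2$, $B_{\rm 14,mag} = 2$, a $\sim$ 36–day diffusion timescale) this yields a fast rise and slow decline with $s_{\rm 1} < 1$, consistent with the skewed LC shapes typical of the MAG model grids discussed below.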
Numerical radiation transport simulations of SNe powered by magnetars have yielded additional insights on the efficiency of this model in powering SLSNe, primarily of the hydrogen–poor (SLSN–I) type. Some observational evidence linking the host properties of SLSN–I to those of long–duration Gamma–ray bursts [@2014ApJ...787..138L] and the discovery of double–peaked SLSN LCs, a feature that can be produced by magnetar–driven shock breakout [@2015ApJ...807L..18N; @2016ApJ...821...36K], seem to strongly suggest that most, if not all, SLSN–I are powered by this mechanism. This is strengthened by the fact that many SLSN LCs can be successfully fit by a semi–analytic MAG LC model [@2017ApJ...850...55N; @2018ApJ...860..100D]. There is, however, on–going discussion on whether the MAG model is always efficient in thermalizing the magnetar luminosity in the SN ejecta, or even in allowing for the efficient conversion of the magnetar energy to radiated luminosity [@2006MNRAS.368.1717B] instead of kinetic energy for the inner ejecta [@2016ApJ...821...22W]. Recent 2D simulations of magnetar–powered SNe appear to reinforce these concerns [@2016ApJ...832...73C; @2017ApJ...839...85C].
[*Grids of Models with the TigerFit code*]{} {#tigerfit}
--------------------------------------------
We have adapted the [*TigerFit*]{} code [@2016ApJ...828...94C; @2017ApJ...851L..14W] to run grids of CSM and MAG models throughout a large parameter space in order to systematically study the statistical LC shape properties and determine their association with the observed SLSN sample presented in Section \[obs\].
![Same as Figure \[Fig:tr1td1fit\] but for $s_{\rm 1}$, $s_{\rm 2}$ and $s_{\rm 3}$.[]{data-label="Fig:s1s2s3fit"}](s1_s2_s3_num_analytical_obs.png){width="9cm"}
For the CSM model we consider cases with H–poor opacity (CSM–I; $\kappa =$ 0.2 cm$^{2}$ g$^{-1}$) and H–rich opacity (CSM–II; $\kappa =$ 0.4 cm$^{2}$ g$^{-1}$) and run two sets of grids: (a) CSM–I$\kappa$/CSM–II$\kappa$ models, where the parameter grid is identical for the two opacities and (b) CSM–I/CSM–II models, where the parameter grid is constrained in each case, motivated by assumptions about the nature of the progenitor stars of Type I versus Type II SNe that are further discussed later in this section. For case (a) the ranges used for each parameter are as follows:
- [$E_{\rm SN, 51} \in [1.0,1.2,1.5,2.0]$, where $E_{\rm SN} = E_{\rm SN,51} \times 10^{51}$ erg]{}
- [$M_{\rm ej} \in [5,8,10,15,20,25,30,40]$, where $M_{\rm ej}$ is in units of $M_{\odot}$]{}
- [$n \in [7,8,9,10,11,12]$]{}
- [$R_{\rm CS,15} \in [10^{-5},10^{-4},10^{-3},10^{-2},10^{-1}]$, where $R_{\rm CS} = R_{\rm CS,15} \times 10^{15}$ cm]{}
- [$M_{\rm CS} \in [0.1,0.2,0.5,1.0,2.0,5.0,8.0]$, where $M_{\rm CS}$ is in units of $M_{\odot}$]{}
- [$\dot{M} \in [0.001,0.01,0.05,0.1,0.2,0.5,1]$, where $\dot{M}$ is in units of $M_{\odot}$ yr$^{-1}$.]{}
For case (b) and the CSM–I subset, the ranges used are:
- [$E_{\rm SN, 51} \in [1,1.2,1.5,1.75,2]$]{}
- [$M_{\rm ej} \in [5,8,10,12,15,20,25,30]$]{}
- [$n \in [7,8,9]$]{}
- [$R_{\rm CS,15} \in [10^{-5},10^{-4},5 \times 10^{-4},10^{-3},5 \times 10^{-3},10^{-2}]$]{}
- [$M_{\rm CS} \in [0.1,0.2,0.5,0.7,1.0,2.0,5.0]$]{}
- [$\dot{M} \in [10^{-5},10^{-4},10^{-3},0.01,0.1,0.2,0.5,1.0,2.0]$,]{}
and accordingly for the CSM–II subset:
- [$E_{\rm SN, 51} \in [1,1.2,1.5,1.75,2]$]{}
- [$M_{\rm ej} \in [12,15,20,25,30,40,50,60]$]{}
- [$n \in [10,11,12]$]{}
- [$R_{\rm CS,15} \in [0.01,0.05,0.08,0.10,0.20,0.30]$]{}
- [$M_{\rm CS} \in [0.5,1.0,2.0,5.0,8.0,10.0,15.0]$]{}
- [$\dot{M} \in [10^{-5},10^{-4},10^{-3},0.01,0.1,0.2,0.5,1.0,2.0]$]{}
For all CSM models we focus on the $s =$ 0 cases, implying a fiducial, constant–density circumstellar shell. While the $s =$ 2 case is of interest, since it implies a radiatively–driven wind structure that is common around red supergiant stars (RSGs), we omit it in this work because it is inconsistent with episodic mass–loss, which is more likely to be the case for luminous SNe. Also, for the vast majority of cases where the $s =$ 2 choice yields luminous LCs, other parameters obtain unrealistic values (for example, $M_{\rm CS}$ values in excess of $\sim$ 100 $M_{\odot}$ are commonly found; C13). As a result, a total of 47,040 models were generated for each of the CSM–I$\kappa$/CSM–II$\kappa$ grids and 45,360 models for each of the CSM–I/CSM–II grids.
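The grid sizes quoted above follow directly from the Cartesian product of the parameter lists. A sketch of this bookkeeping for the case (a) grid (each parameter tuple is then passed to the LC code to generate the actual model; running [*TigerFit*]{} itself is not shown here):

```python
from itertools import product

# case (a) parameter axes, as listed above
e_sn = [1.0, 1.2, 1.5, 2.0]                   # E_SN [1e51 erg]
m_ej = [5, 8, 10, 15, 20, 25, 30, 40]         # M_ej [Msun]
n_sn = [7, 8, 9, 10, 11, 12]                  # outer ejecta density index
r_cs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]         # R_CS [1e15 cm]
m_cs = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 8.0]    # M_CS [Msun]
mdot = [0.001, 0.01, 0.05, 0.1, 0.2, 0.5, 1]  # Mdot [Msun / yr]

# 4 * 8 * 6 * 5 * 7 * 7 = 47040 models per opacity choice
grid = list(product(e_sn, m_ej, n_sn, r_cs, m_cs, mdot))
```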
[lccccc|ccccccc]{} &&& CSM–I & & & & & CSM–II & &\
$tr_{\rm 1}$ & 12.2 & 11.0 & 5.9 & 36.1 & 2.3 & 45.1 & 46.6 & 8.8 & 59.5 & 17.3\
$td_{\rm 1}$ & 29.7 & 28.6 & 13.2 & 82.8 & 4.0 & 72.6 & 69.5 & 16.1 & 101.1 & 44.9\
$s_{\rm 1}$ & 0.43 & 0.41 & 0.15 & 0.87 & 0.13 & 0.64 & 0.61 & 0.13 & 1.00 & 0.37\
$tr_{\rm 2}$ & 7.0 & 6.0 & 4.0 & 28.2 & 1.3 & 18.7 & 19.4 & 3.7 & 25.4 & 7.1\
$td_{\rm 2}$& 9.1 & 8.4 & 5.0 & 33.3 & 1.5 & 24.4 & 25.9 & 4.7 & 31.7 & 11.8\
$s_{\rm 2}$ & 0.78 & 0.77 & 0.15 & 1.15 & 0.52 & 0.77 & 0.74 & 0.14 & 1.14 & 0.60\
$tr_{\rm 3}$ & 2.9 & 2.2 & 2.6 & 18.9 & 0.5 & 6.1 & 6.1 & 1.2 & 8.3 & 2.5\
$td_{\rm 3}$ & 3.1 & 2.4 & 3.1 & 24.4 & 0.5 & 7.0 & 7.2 & 1.4 & 10.2 & 3.0\
$s_{\rm 3}$ & 0.92 & 0.93 & 0.11 & 1.10 & 0.73 & 0.88 & 0.85 & 0.10 & 1.09 & 0.73\
[lccccc|ccccccc]{} &&& CSM–I$\kappa$ & & & & & CSM–II$\kappa$ & &\
$tr_{\rm 1}$ & 15.1 & 12.6 & 8.3 & 50.3 & 2.5 & 11.9 & 11.5 & 3.5 & 22.1 & 3.3\
$td_{\rm 1}$ & 32.2 & 30.5 & 16.0 & 83.2 & 3.3 & 25.3 & 22.9 & 10.7 & 49.7 & 5.0\
$s_{\rm 1}$ & 0.50 & 0.48 & 0.18 & 1.03 & 0.16 & 0.52 & 0.48 & 0.16 & 0.86 & 0.26\
$tr_{\rm 2}$ & 7.7 & 6.1 & 5.4 & 39.3 & 1.7 & 6.8 & 6.3 & 3.5 & 20.8 & 2.2\
$td_{\rm 2}$ & 10.4 & 8.5 & 6.4 & 38.9 & 1.9 & 8.4 & 7.5 & 4.0 & 22.3 & 1.9\
$s_{\rm 2}$ & 0.75 & 0.72 & 0.26 & 1.16 & 0.53 & 0.82 & 0.80 & 0.16 & 1.15 & 0.55\
$tr_{\rm 3}$ & 3.1 & 2.3 & 3.4 & 26.3 & 0.6 & 2.6 & 2.1 & 2.4 & 15.2 & 0.9\
$td_{\rm 3}$ & 3.6 & 2.5 & 4.2 & 32.5 & 0.6 & 2.9 & 2.5 & 2.9 & 18.6 & 1.0\
$s_{\rm 3}$ & 0.90 & 0.89 & 0.10 & 1.10 & 0.74 & 0.90 & 0.89 & 0.10 & 1.09 & 0.74\
[lcccccc]{} &&& MAG & &\
$tr_{\rm 1}$ & 22.8 & 18.7 & 14.3 & 64.4 & 4.9\
$td_{\rm 1}$ & 50.8 & 43.3 & 28.4 & 123.9 & 10.7\
$s_{\rm 1}$ & 0.44 & 0.46 & 0.08 & 0.54 & 0.20\
$tr_{\rm 2}$ & 15.2 & 12.5 & 9.3 & 41.4 & 3.3\
$td_{\rm 2}$ & 22.2 & 18.4 & 13.0 & 56.4 & 4.7\
$s_{\rm 2}$ & 0.68 & 0.69 & 0.05 & 0.78 & 0.52\
$tr_{\rm 3}$ & 8.8 & 7.2 & 5.3 & 23.5 & 1.9\
$td_{\rm 3}$ & 10.5 & 8.7 & 6.4 & 27.1 & 2.06\
$s_{\rm 3}$ & 0.85 & 0.84 & 0.07 & 1.09 & 0.73\
[lccccccccccccc]{} & @2016MNRAS.tmp..117D & CSM–I & 5.9 & 43.0 & 0.14 & 4.3 & 9.8 & 0.44 & 2.7 & 3.9 & 0.70\
[T130D-b]{} & @2017ApJ...836..244W & CSM–I & 6.9 & 11.9 & 0.59 & 4.3 & 5.8 & 0.75 & 2.3 & 2.9 & 0.80\
[D2]{} & @2013MNRAS.428.1020M & CSM–II & 29.9 & 50.1 & 0.60 & 19.0 & 22.7 & 0.84 & 10.5 & 11.3 & 0.93\
[F1]{} & @2013MNRAS.428.1020M & CSM–II & 33.5 & 82.0 & 0.41 & 23.3 & 43.1 & 0.54 & 13.7 & 18.8 & 0.73\
[R3]{} & @2016MNRAS.tmp..117D & CSM–II & 5.4 & 11.4 & 0.47 & 3.7 & 5.7 & 0.65 & 2.0 & 2.7 & 0.75\
[T20]{} & @2017ApJ...836..244W & CSM–II & 10.7 & 20.0 & 0.53 & 7.0 & 9.8 & 0.71 & 3.7 & 4.7 & 0.80\
(Black curve) & @2010ApJ...717..245K & MAG & 21.4 & 38.5 & 0.56 & 13.7 & 18.8 & 0.73 & 7.7 & 9.4 & 0.82\
[KB 2]{} (Red curve) & @2010ApJ...717..245K & MAG & 38.5 & 117.9 & 0.33 & 25.37 & 40.9 & 0.62 & 14.7 & 18.0 & 0.82\
[Model 2]{} & @2016ApJ...821...36K & MAG & 48.2 & 100.3 & 0.48 & 33.6 & 49.3 & 0.68 & 20.2 & 24.1 & 0.84\
[RE3B1]{} & @dessartaudit & MAG & 58.8 & 96.7 & 0.61 & 46.5 & 43.9 & 1.06 & 31.1 & 19.2 & 1.62\
[RE0p4B3p5]{} & @dessartaudit & MAG & 57.3 & 68.0 & 0.84 & 34.8 & 35.6 & 0.97 & 19.0 & 18.6 & 1.02\
[lccccc|ccccccc]{} &&& CSM–I/CSM–II & & & & & MAG & &\
$tr_{\rm 1}$ & 15.4 & 8.8 & 33.5 & 5.4 & 11.7 & 44.8 & 48.2 & 58.8 & 22.4 & 13.8\
$td_{\rm 1}$ & 36.4 & 31.5 & 82.1 & 11.4 & 25.2 & 84.3 & 96.7 & 117.9 & 38.5 & 28.0\
$s_{\rm 1}$ & 0.46 & 0.50 & 0.60 & 0.14 & 0.16 & 0.56 & 0.56 & 0.84 & 0.33 & 0.17\
$tr_{\rm 2}$ & 10.3 & 5.7 & 23.3 & 3.7 & 7.9 & 30.8 & 33.6 & 46.5 & 13.7 & 10.9\
$td_{\rm 2}$ & 16.13 & 9.8 & 43.1 & 5.7 & 13.3 & 37.7 & 40.9 & 49.3 & 18.8 & 10.5\
$s_{\rm 2}$ & 0.66 & 0.68 & 0.84 & 0.44 & 0.133 & 0.81 & 0.73 & 1.06 & 0.62 & 0.17\
$tr_{\rm 3}$ & 5.8 & 3.2 & 13.7 & 2.7 & 4.6 & 18.5 & 19.0 & 31.1 & 7.7 & 7.7\
$td_{\rm 3}$ & 7.4 & 4.3 & 18.8 & 2.7 & 5.9 & 17.9 & 18.6 & 24.1 & 9.4 & 4.8\
$s_{\rm 3}$ & 0.79 & 0.78 & 0.93 & 0.70 & 0.07 & 1.02 & 0.84 & 1.62 & 0.82 & 0.31\
Our motivation for adopting different parameter ranges for the CSM–I and CSM–II models stems from several factors. First, larger $M_{\rm CS}$ values are possible in the CSM–II case, as suggested by spectroscopic observations of Type IIn SLSNe [@2010ApJ...709..856S], where stronger mass–loss pertains due to LBV–type or PPISN processes. That, in turn, also implies larger progenitor masses (and therefore $M_{\rm ej}$) for CSM–II, as is the case for regular-luminosity SNe, where LC fits imply larger $M_{\rm ej}$, and therefore larger diffusion timescales, for Type II events than for Type I SNe. Finally, lower values of $n$ are more typical of compact, blue supergiant (BSG) progenitors with radiative envelopes, while higher values imply extended, RSG–type convective envelopes that are more appropriate for SLSN–II [@2003LNP...598..171C]. In summary, the CSM–II parameters are associated with RSG–type progenitors with extended H–rich envelopes, while the CSM–I parameters are associated with more compact, BSG–type stars.
We caution that one potential issue with our choices of model parameter grids is that there are as yet no good observational constraints on the shape of the distribution of SN ejecta and circumstellar shell masses, so using these models in a clustering analysis (Section \[cluster\]) might be misleading: it can create dense clusters of models that might actually be very sparsely populated in nature or, conversely, an underdensity of points in regions where more MAG or CSM SNe might lie in reality. Our grid selection for $M_{\rm CS}$ is largely driven by published observations of nebular shells around massive, LBV–type stars indicating $M_{\rm CS} \simeq$ 0.1–20 $M_{\odot}$. The ranges for $M_{\rm ej}$ are within typical ranges for stars massive enough to experience a SN, and in agreement with observations of SN progenitor stars in pre–explosion images and supernova remnants ($M_{\rm ej} \simeq$ 8–25 $M_{\odot}$). Higher–mass progenitors cannot be excluded given observations of stars as massive as $>$ 150 $M_{\odot}$ in the Milky Way [@2010MNRAS.408..731C].
For the MAG model, we investigate a dense grid of models with $10^{12} < B_{\rm MAG} < 10^{15}$ G and $1.0 < P_{\rm MAG} < 50$ ms, where $B_{\rm MAG}$ and $P_{\rm MAG}$ are the magnetic field and the initial rotational period of the magnetar, respectively. We also vary the diffusion timescale, $t_{\rm d}$, which further controls the shape of MAG model LCs (Equation 13 of C12), in the range $3 < t_{\rm d} < 100$ days. The grid resolution we use for these parameters results in a total of 46,656 MAG model LCs.
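The grid construction amounts to a Cartesian product of the three parameter axes. A minimal Python sketch follows; the resolution of 36 points per axis (log-spaced in $B_{\rm MAG}$, linear in $P_{\rm MAG}$ and $t_{\rm d}$) is our assumption, chosen because $36^3$ reproduces the quoted total of 46,656 models — the actual spacing used is not specified here.

```python
import itertools

# Hypothetical resolution: 36 points per axis, since 36^3 = 46,656 matches
# the total number of MAG models quoted in the text.
N = 36
B_grid = [10 ** (12 + 3 * i / (N - 1)) for i in range(N)]        # 1e12-1e15 G, log-spaced
P_grid = [1.0 + (50.0 - 1.0) * i / (N - 1) for i in range(N)]    # 1-50 ms
td_grid = [3.0 + (100.0 - 3.0) * i / (N - 1) for i in range(N)]  # 3-100 days

# One (B_MAG, P_MAG, t_d) tuple per model LC to be generated
mag_grid = list(itertools.product(B_grid, P_grid, td_grid))
```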
A large fraction of CSM and MAG models did not produce superluminous LCs, which we take to be those reaching $L_{\rm max} \geq 10^{44}$ erg s$^{-1}$ [@2012Sci...337..927G]. These models are removed from the CSM and MAG model samples before further analysis. In addition, we exclude model LCs that result in physically inconsistent parameters, such as combinations of $B_{\rm MAG}$ and $P_{\rm MAG}$ values in the MAG model that are incompatible with the convective dynamo process in magnetars [@1992ApJ...392L...9D], and CSM models that yield $M_{\rm CS}$ too large compared to the associated $M_{\rm ej}$ values, which represent a measure of the total progenitor mass.
As a result, our original CSM–I/CSM–II, CSM–I$\kappa$/CSM–II$\kappa$ and MAG model samples are each reduced to smaller subsamples of nearly equal size that are then used in our final LC shape parameter analysis. More specifically, a total of 306 CSM–I/CSM–II, 248 CSM–I$\kappa$/CSM–II$\kappa$ and 304 MAG superluminous LC models are used in this work. The statistical properties of the LC shape parameters of all models are summarized in Tables \[T3\] through \[T5\]. Figures \[Fig:tr1td1\] and \[Fig:s1s2s3\] show the distributions of a few LC shape parameters ($tr_{\rm 1}$, $td_{\rm 1}$, $s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$) for the CSM–I/CSM–II and MAG model samples, and Figure \[Fig:symmLCs\] shows examples of some of the most symmetric LCs in these samples.
For comparison against our semi–analytical LCs, we have also included a sample of numerical CSM and MAG LCs available in the literature. Table \[T6\] lists the details of the numerical model LCs and Table \[T7\] summarizes the statistics of their shape parameters. Figure \[Fig:tr1td1fit\] is a scatter plot of $tr_{\rm 1}$ against $td_{\rm 1}$ for all samples in this work, including the numerical MAG and CSM models. A linear best–fit to the observed SLSN–I and SLSN–II data is also shown (see Equation \[Eq1\]). Although we chose not to use different symbols for the CSM models in Figure \[Fig:tr1td1fit\], it is evident from Table \[T4\] that CSM–II models occupy the upper right corner of this plot given their longer primary rise and decline timescales. A few SLSN–I thus appear to be associated with the CSM–II data, which were chosen based on assumptions for the progenitors of H–rich SLSNe. The situation is different when looking at the CSM–I$\kappa$/CSM–II$\kappa$ distribution, however, where the parameter grids are identical and the only difference is due to the different SN ejecta + CS shell opacity. In this case, the primary timescales of the models are consistent. Very slowly evolving H–poor SLSNe may be hard to produce under the assumption of H–poor CSM interaction, given the large, H–deficient CS shell mass needed to account for the long primary rise and decline timescales. Interaction with H–poor CS shells of non–spherical geometry, in combination with viewing–angle effects, may be a way out of this apparent discrepancy [@2018MNRAS.475.3152K]. Accordingly, Figure \[Fig:s1s2s3fit\] shows a 3D scatter plot of the primary, secondary and tertiary LC symmetry parameters for all samples. The superluminous LCs recovered imply the following mean values for the parameters of each model:
- [CSM–I: $E_{\rm SN, 51} =$ 1.75, $M_{\rm ej} =$ 10 $M_{\odot}$, $n =$ 8, $R_{\rm CS,15} =$ 0.006, $M_{\rm CS} =$ 1 $M_{\odot}$ and $\dot{M} =$ 0.01 $M_{\odot}$ yr$^{-1}$,]{}
- [CSM–II: $E_{\rm SN, 51} =$ 2.00, $M_{\rm ej} =$ 13 $M_{\odot}$, $n =$ 12, $R_{\rm CS,15} =$ 0.2, $M_{\rm CS} =$ 10 $M_{\odot}$ and $\dot{M} =$ 0.01 $M_{\odot}$ yr$^{-1}$,]{}
- [CSM–I$\kappa$: $E_{\rm SN, 51} =$ 1.80, $M_{\rm ej} =$ 10 $M_{\odot}$, $n =$ 9, $R_{\rm CS,15} =$ 0.08, $M_{\rm CS} =$ 2 $M_{\odot}$ and $\dot{M} =$ 0.15 $M_{\odot}$ yr$^{-1}$,]{}
- [CSM–II$\kappa$: $E_{\rm SN, 51} =$ 2.00, $M_{\rm ej} =$ 7 $M_{\odot}$, $n =$ 9, $R_{\rm CS,15} =$ 0.1, $M_{\rm CS} =$ 0.3 $M_{\odot}$ and $\dot{M} =$ 0.3 $M_{\odot}$ yr$^{-1}$,]{}
- [MAG: $B_{\rm MAG} = 1.4 \times 10^{13}$ G and $P_{\rm MAG} =$ 1.3 ms.]{}
These parameters are within the range of semi–analytical and numerical fits of the CSM and MAG models to observed SLSN LCs commonly found in the literature.
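The linear $tr_{\rm 1}$–$td_{\rm 1}$ best-fit mentioned above (Equation \[Eq1\]) is an ordinary least-squares line. As an illustration only, a minimal numpy sketch using the numerical-model timescales of Table \[T6\] as stand-in input; the actual fit quoted in the text is to the observed SLSN–I/SLSN–II sample, not to these values.

```python
import numpy as np

# Primary rise/decline timescales of the numerical models in Table [T6],
# used here purely as example input for the fitting procedure.
tr1 = np.array([5.9, 6.9, 29.9, 33.5, 5.4, 10.7, 21.4, 38.5])
td1 = np.array([43.0, 11.9, 50.1, 82.0, 11.4, 20.0, 38.5, 117.9])

slope, intercept = np.polyfit(tr1, td1, deg=1)  # ordinary least squares, degree 1
td1_pred = slope * tr1 + intercept              # best-fit line evaluated at the data
```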
A careful examination of the computed LC shape parameter distributions for the CSM and MAG models reveals several interesting insights. First, the primary rise and decline timescales show a bimodal distribution for the CSM models, with CSM–I models typically reaching shorter $tr_{\rm 1}$ and $td_{\rm 1}$ values than CSM–II models. This is due both to the physically–motivated choices for the parameter grids discussed earlier and to the opacity difference between H–rich and H–poor models. On the other hand, the MAG models show a more continuous, single–peaked distribution with typical values $tr_{\rm 1} \simeq$ 5–15 days and $td_{\rm 1} \simeq$ 20–30 days. In terms of LC symmetry, the majority of models do not produce symmetric LCs around the primary luminosity threshold, as $0.9 < s_{\rm 1} < 1.1$ values are rarely recovered. In fact, CSM is the only set of models reaching $s_{\rm 1}$ values close to unity, while MAG is unable to produce any models with symmetric LCs in terms of either $s_{\rm 1}$ or $s_{\rm 2}$. Even the most symmetric MAG LCs in our sample have this issue (Figure \[Fig:symmLCs\]). [*This is an important issue for MAG models given that a significant fraction of observed SLSN–I are symmetric around these luminosity thresholds*]{} (Section \[obs\]). This seems to be the case for numerically–computed MAG LC models as well, with the most symmetric one being model [RE0p4B3p5]{} [@dessartaudit] with $s_{\rm 1} =$ 0.84. Numerical CSM models tend to yield more rapidly–evolving LCs than their semi–analytical counterparts. The primary source of this difference is the assumption of a constant diffusion timescale in the semi–analytical CSM models [@2013MNRAS.428.1020M; @2018arXiv181206522K].
We explore the possibility that gamma–ray leakage produces faster–declining MAG LCs, thereby enhancing symmetry, by adopting the same formalism employed for LCs powered by the radioactive decay of $^{56}$Ni [@1984ApJ...280..282S; @1997ApJ...491..375C; @2008MNRAS.383.1485V; @2013ApJ...773...76C]. Using a fiducial SN ejecta gamma–ray opacity of $\kappa_{\rm \gamma} =$ 0.03 cm$^{2}$ g$^{-1}$ and the implied SN ejecta mass for the two most symmetric MAG models shown in the top right panel of Figure \[Fig:symmLCs\], we adjust the output luminosity as $L^{\prime}(t) = L(t)\,[1-\exp(-A t^{-2})]$, where $A t^{-2} = \kappa_{\rm \gamma} \rho R$. The two most symmetric MAG models with high gamma–ray leakage are then plotted as dashed curves. Allowing gamma–rays to escape can increase the decline rate of the LC at late times, leading to shorter $td_{\rm 1}$ and slightly higher $s_{\rm 1}$ values. The change, however, still falls short of producing symmetric MAG LCs, since $s_{\rm 1}$ only increases by 14–22% and its maximum value remains $\lesssim$ 0.6.
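The leakage correction can be sketched as follows. Assuming homologous expansion ($R = v_{\rm ej} t$) and uniform-density ejecta, the gamma–ray optical depth scales as $A t^{-2}$ with $A = 3\kappa_{\gamma} M_{\rm ej}/(4\pi v_{\rm ej}^{2})$; the specific value of $A$ adopted for each model depends on its implied ejecta mass and velocity, so the numbers below are illustrative.

```python
import math

KAPPA_GAMMA = 0.03  # cm^2 g^-1, the fiducial gamma-ray opacity from the text

def gamma_leakage_factor(t, m_ej, v_ej, kappa_gamma=KAPPA_GAMMA):
    """Fraction (1 - exp(-A t^-2)) of the input luminosity thermalized at time t.

    Assumes homologous expansion (R = v_ej * t) and uniform-density ejecta, so
    the gamma-ray optical depth is tau = kappa * rho * R = A / t^2 with
    A = 3 kappa M / (4 pi v^2).  Units: t in s, m_ej in g, v_ej in cm/s.
    """
    A = 3.0 * kappa_gamma * m_ej / (4.0 * math.pi * v_ej ** 2)
    return 1.0 - math.exp(-A / t ** 2)

# Illustrative case: 10 Msun ejecta at 10,000 km/s, 100 days after explosion
f = gamma_leakage_factor(100 * 86400.0, 10 * 1.989e33, 1.0e9)
```

At this epoch a non-negligible fraction of the gamma–ray energy already escapes, which is what steepens the post-peak decline.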
Second, the observed tight $tr_{\rm 1}$–$td_{\rm 1}$ correlation in SLSN LCs is reproduced by both CSM and MAG models. CSM models generally predict faster–evolving LCs at late times than MAG models, consistent with the observations. This is mainly due to the continuous power input in the MAG model, which sustains a flatter LC at late times, while in the CSM model the energy input is terminated abruptly, leading to a rapid decline after peak luminosity (C12). An example of a SLSN with a very flat late–time LC is SN2015bn [@2018ApJ...866L..24N], indicating that this may be a good candidate for the MAG model. The LC symmetry parameter distributions (Figure \[Fig:s1s2s3fit\]) reveal a more distinct dichotomy between CSM and MAG models. MAG models fail to produce fully symmetric LCs and are clustered in a confined region of the 3D ($s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$) parameter space, while CSM models show more scatter.
Finally, we estimate the fraction of CSM and MAG model SLSN LCs that have a concave–up shape during the rise to peak luminosity or, in other words, a positive second derivative for $t<t_{\rm max}$. An example of an observed SLSN with a concave–up LC during the rise is SN 2017egm [@2017ApJ...851L..14W]. Not a single MAG LC is found to be concave–up during the rise. On the contrary, $\sim$ 20% of CSM–I, $\sim$ 60% of CSM–II and $\sim$ 50% of CSM–I$\kappa$/CSM–II$\kappa$ models have a concave–up rise to peak luminosity. The implication is that the shape of the rising part of SLSN LCs may also be tied to the nature of the power input mechanism and, specifically, the functional form of the input luminosity. Continuous, monotonically declining power inputs like $^{56}$Ni decay and magnetar spin–down energy correspond to concave–down SLSN LCs, while truncated CSM shock luminosity input depends on the details of the SN ejecta and circumstellar material density structure and can yield either concave–up or concave–down LCs during the early, rising phase. This further reinforces the need to obtain high–cadence photometric coverage of these events in future transient surveys.
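The concave–up test amounts to checking the sign of the discrete second derivative along the rising branch of the LC. A minimal sketch, assuming uniformly sampled, noise-free model LCs (observed LCs would require smoothing first):

```python
def is_concave_up_rise(lum):
    """Check whether a light curve is concave-up (positive second derivative)
    throughout its rise to peak luminosity.

    lum: sequence of luminosities at uniform time steps (a simplifying
    assumption for model LCs; real data would need interpolation/smoothing).
    """
    i_peak = max(range(len(lum)), key=lum.__getitem__)
    rise = lum[: i_peak + 1]
    # Discrete second differences on the rising branch
    d2 = [rise[i + 1] - 2 * rise[i] + rise[i - 1] for i in range(1, len(rise) - 1)]
    return len(d2) > 0 and all(v > 0 for v in d2)

# An exponential rise (concave-up) versus a parabolic rise (concave-down):
t = list(range(20))
exp_rise = [2.0 ** x for x in t]              # concave-up rise
par_rise = [-(x - 19) ** 2 + 400 for x in t]  # concave-down rise
```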
[lccccccccc]{} CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 2 & 0.62 & 0.66 & 5.95/18.45/75.6 & 59.28/0.68/40.05 & -\
& & & & & & 33.33/25.00 & 66.67/75.00 & -\
CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 3 & 0.46 & 0.58 & 61.11/0.00/38.89 & 27.70/12.16/60.14 & 0.00/19.05/80.95\
& & & & & & 57.14/50.00 & 28.57/50.00 & 14.29/0.00\
CSM–I/CSM–II & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 2 & 0.77 & 0.63 & 99.59/0.41 & 48.44/51.56 & -\
& & & & & & 57.14/75.00 & 42.86/25.00 & -\
CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 2 & 0.68 & 0.65 & 44.61/11.03/44.36 & 11.19/0.00/88.81 & -\
& & & & & & 66.67/75.00 & 33.33/25.00 & -\
CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 3 & 0.49 & 0.56 & 38.89/1.85/59.26 & 42.90/13.53/43.56 & 1.30/0.0/98.70\
& & & & & & 28.57/50.0 & 57.14/50.00 & 14.29/0.00\
CSM–I$\kappa$/CSM–II$\kappa$ & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 2 & 0.66 & 0.57 & 77.18/22.82 & 88.76/11.24 & -\
& & & & & & 47.62/50.00 & 52.38/50.00 & -\
CSM–I/CSM–II/MAG & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 2 & $<$0.01 & 0.43 & 34.55/4.07/61.38 & 86.44/11.86/1.69 & -\
& & & & & & 23.81/25.00 & 76.19/75.00 & -\
CSM–I/CSM–II/MAG & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 3 & $<$0.01 & 0.32 & 26.19/4.76/69.05 & 82.35/17.65/0.00 & 71.34/2.44/26.22\
& & & & & & 28.57/25.00 & 71.43/75.00 & 0.00/0.00\
CSM–I/CSM–II & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 2 & $<$0.01 & 0.33 & 82.31/17.69 & 93.75/6.25 & -\
& & & & & & 80.95/75.00 & 19.05/25.00 & -\
CSM–I$\kappa$/CSM–II$\kappa$/MAG & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 2 & $<$0.01 & 0.60 & 31.12/5.81/63.07 & 73.33/26.67/0.00 & -\
& & & & & & 42.86/25.00 & 57.14/75.00 & -\
CSM–I$\kappa$/CSM–II$\kappa$/MAG & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 3 & $<$0.01 & 0.33 & 42.31/7.69/50.00 & 24.67/5.26/70.07 & 75.00/25.00/0.00\
& & & & & & 47.62/25.0 & 52.38/75.00 & 0.00/0.00\
CSM–I$\kappa$/CSM–II$\kappa$ & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 2 & $<$0.01 & 0.50 & 84.18/15.82 & 73.77/26.23 & -\
& & & & & & 38.10/25.00 & 61.90/75.00 & -\
CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 2 & 0.71 & 0.66 & 2.44/18.90/78.66 & 60.09/0.67/39.24 & -\
& & & & & & 38.10/25.00 & 61.90/75.00 & -\
CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 3 & 0.54 & 0.56 & 0.00/16.47/83.53 & 61.97/0.00/38.03 & 26.17/13.42/60.41\
& & & & & & 19.05/0.00 &52.38/50.00 & 28.57/50.00\
CSM–I/CSM–II & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 2 & 0.84 & 0.63 & 46.77/53.23 & 99.59/0.41 & -\
& & & & & & 42.86/50.00 & 57.14/50.00 & -\
CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 2 & 0.77 & 0.64 & 44.81/11.14/44.05 & 11.56/0.00/88.44 & -\
& & & & & & 61.90/75.00 & 38.10/25.00 & -\
CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 3 & 0.57 & 0.54 & 38.18/2.42/59.39 & 43.88/13.61/42.52 & 2.41/0.00/97.59\
& & & & & & 33.33/50.0 & 47.62/50.00 & 19.05/0.00\
CSM–I$\kappa$/CSM–II$\kappa$ & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 2 & 0.76 & 0.55 & 88.51/11.49 & 77.48/22.52 & -\
& & & & & & 61.90/75.00 & 38.10/25.00 & -\
CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 2 & 0.74 & 0.65 & 60.00/0.67/39.33 & 3.03/18.79/78.18 & -\
& & & & & & 61.90/75.00 & 38.10/25.00 & -\
CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 3 & 0.57 & 0.55 & 62.11/0.26/37.63 & 0.00/15.48/84.52 & 24.66/13.70/61.64\
& & & & & & 52.38/50.00 & 19.05/0.00 & 28.57/50.00\
CSM–I/CSM–II & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 2 & 0.86 & 0.62 & 46.77/53.23 & 99.59/0.41 & -\
& & & & & & 42.86/50.00 & 57.14/50.00 & -\
CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 2 & 0.80 & 0.64 & 45.11/11.03/43.86 & 9.79/0.00/90.21 & -\
& & & & & & 61.90/75.00 & 38.10/25.00 & -\
CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 3 & 0.60 & 0.52 & 37.65/2.35/60.00 & 2.38/0.00/97.62 & 44.44/13.89/41.67\
& & & & & & 28.57/50.00 & 23.81/0.00 & 47.62/50.00\
CSM–I$\kappa$/CSM–II$\kappa$ & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 2 & 0.82 & 0.54 & 77.18/22.82 & 88.76/11.24 & -\
& & & & & & 33.33/25.00 & 66.67/75.00 & -\
![Same as in Figure \[Fig:cluster2D\] but for the 3D ($s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$) CSM–I/CSM–II/MAG dataset. The computed clusters associate with the underlying model categories better than in the 2D case (see Section \[cluster\]).[]{data-label="Fig:cluster3D"}](scatt_3D_csmmag_k2.png){width="9cm"}
$k$–Means Clustering Analysis {#cluster}
=============================
$k$–means clustering is a powerful machine learning algorithm used to categorize data via an iterative method [@cluster1; @cluster2]. The standard version of the algorithm finds the locations and boundaries of “clusters” of data by repeatedly minimizing the Euclidean distances of the data from the cluster centroids. The user can either input the number of clusters, $k$, based on some assumption about the nature of the data, or use a density–based (“DBSCAN”) approach [@Ester] to determine the optimal number of clusters. While $k$–means assumes clusters separated by straight–line boundaries, there exist clustering algorithms that relax that criterion. Since the scope of this work is to quantitatively characterize the LC shape properties of CSM and MAG models and to determine whether they occupy distinct areas of the parameter space, we employ $k$–means clustering analysis. More specifically, we use the [*Python scikit–learn*]{} ([sklearn]{}) package.
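A minimal example of the [sklearn]{} usage on synthetic stand-in data; the two groups, their means and scatters below are illustrative, not our actual model samples:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for a (tr1, td1) dataset: two hypothetical groups of
# models with different characteristic timescales (values are illustrative).
csm_like = rng.normal(loc=[30.0, 70.0], scale=[8.0, 15.0], size=(300, 2))
mag_like = rng.normal(loc=[12.0, 25.0], scale=[4.0, 6.0], size=(300, 2))
X = np.vstack([csm_like, mag_like])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_              # cluster assignment of each model LC
centroids = km.cluster_centers_  # cluster centers in (tr1, td1) space
```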
$k$–means clustering analysis is often used in astronomical applications aiming to classify astronomical objects in transient search projects. Recently, it was utilized to classify the properties of SLSNe based on both LC and spectroscopic features, showcasing the importance it holds for the future of the field. @2018arXiv180800510N presented a $k$–means clustering analysis of SLSN nebular spectra properties. @2018ApJ...854..175I illustrated how the method can be used to identify SLSN–I and probe their observed diversity, identifying two distinct groups, “fast” and “slow” SLSN–I, depending on the evolution of the LC and the implied spectroscopic velocities and SN ejecta velocity gradients.
In this work, we use $k$–means clustering to investigate whether the SLSN LC shape properties implied by different power input models (MAG, CSM–I and CSM–II) concentrate in distinct clusters. This may allow us to associate observed SLSNe with proposed power input mechanisms based only on their LC properties, and thus provide a framework for SLSN characterization in future, big–data transient searches like [*LSST*]{}. To do so, we consider different combinations of $k$ values and LC parameter space dimensionality ($N_{\rm D}$). Given our prior knowledge that we are using LC shape parameter data from two categories of models (CSM, MAG), we focus on two cases: $k =$ 2 (CSM models of both I and II type versus MAG) and $k =$ 3 (distinct CSM–I, CSM–II and MAG models). We also consider different values of $N_{\rm D}$: 2D datasets focusing on the primary LC timescales ($tr_{\rm 1}$, $td_{\rm 1}$), 3D datasets focusing on the LC symmetry parameters ($s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$), 4D datasets focusing on the primary and secondary LC timescales ($tr_{\rm 1}$, $td_{\rm 1}$, $tr_{\rm 2}$, $td_{\rm 2}$), and 6D datasets focusing on the primary, secondary and tertiary LC timescales ($tr_{\rm 1}$, $td_{\rm 1}$, $tr_{\rm 2}$, $td_{\rm 2}$, $tr_{\rm 3}$, $td_{\rm 3}$), thus covering all the LC shape parameters defined in this work (given the six timescales, the symmetry parameters are fully determined). Although we opted to perform clustering analysis only for $k =$ 2, 3, based on prior knowledge of the number of models used in the datasets, we also estimated the optimal number of clusters in all cases using the “elbow” method [@elbow]. This method plots the normalized squared error of clustering ($E_{\rm N}$, defined in the next paragraph) as a function of $k$ and selects the value of $k$ corresponding to the sharpest gradient. This test confirmed that the optimal number of clusters for all datasets is $k =$ 2.
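The elbow test can be sketched with [sklearn]{}'s `inertia_` attribute (the sum of squared distances of samples to their closest centroid) standing in for the normalized error $E_{\rm N}$; the data below are synthetic and well separated, so the sharpest drop falls at $k =$ 2:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two well-separated synthetic groups, so the elbow should fall at k = 2.
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(8.0, 1.0, size=(200, 2))])

# Within-cluster squared error (sklearn's inertia_) as a function of k
inertia = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
           for k in range(1, 7)}
# The sharpest drop in the error curve marks the elbow
drops = {k: inertia[k - 1] - inertia[k] for k in range(2, 7)}
k_elbow = max(drops, key=drops.get)
```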
While for the 2D and 3D clustering we can provide visual representations of the clusters, this is impossible for the 4D and 6D cases. For this reason, and in order to quantify the quality and accuracy of our clustering results, we use silhouette analysis [@silhouette]. Silhouette analysis yields a mean silhouette score, $\bar{S}$, and silhouette diagrams that visualize the sizes of the individual clusters and the $S$ score distribution of the individual data within each cluster. Negative values of $S$ correspond to falsely classified data, while values closer to unity indicate stronger cluster association. Silhouette diagrams with clusters of comparable width and with $S$ values above the mean are indicative of accurate clustering. An example silhouette diagram for the $k =$ 2, 3 and $N_{\rm D} =$ 4 case we study in this work is shown in Figure \[Fig:silhouette\]. Figures \[Fig:cluster2D\] and \[Fig:cluster3D\] show the distribution of the computed clusters in the $N_{\rm D} =$ 2 and $N_{\rm D} =$ 3 cases for $k =$ 2, with the SLSN–I/SLSN–II observations overplotted for comparison. The cluster centroids are marked with black star symbols. Table \[T8\] presents the results of the clustering analysis for each $k$–$N_{\rm D}$ combination that we investigated, including the normalized classification error ($E_{\rm N}$; the square root of the sum of squared distances of samples to their closest cluster center, divided by the cluster size) and $\bar{S}$, as well as the computed cluster compositions (percentage of CSM–I/CSM–II and MAG models within each cluster) and observed SLSN–I/SLSN–II cluster associations.
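The mean score $\bar{S}$ and the per-sample scores that make up a silhouette diagram are available directly from [sklearn]{}. A minimal sketch on synthetic 4D data (the group means and sizes are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

rng = np.random.default_rng(2)
# Synthetic 4D stand-in for a (tr1, td1, tr2, td2) dataset with two groups
X = np.vstack([rng.normal(0.0, 1.0, size=(150, 4)),
               rng.normal(6.0, 1.0, size=(150, 4))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
S_mean = silhouette_score(X, labels)    # the mean silhouette score S-bar
S_each = silhouette_samples(X, labels)  # per-sample scores for the diagram
```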
Results
=======
$N_{\rm D} =$ 2 {#nd2}
---------------
Our clustering analysis on the primary LC timescales ($tr_{\rm 1}$, $td_{\rm 1}$) reveals a clear dichotomy between H–rich and H–poor CSM models in the CSM–I/CSM–II case, where the first cluster ($C_{\rm 0}$) is composed almost entirely ($\sim$ 100%) of CSM–I (and, respectively, CSM–I$\kappa$) models. The observed SLSN–I and SLSN–II sample is not clearly associated with either cluster in the CSM–I/CSM–II case. For all combinations of model datasets and values of $k$, we find that the $k =$ 2 choice corresponds to more accurate clustering (higher $\bar{S}$ scores), indicating that $k =$ 2 may be optimal in distinguishing CSM–type models of either type from MAG models. The CSM–I/CSM–II/MAG, $k =$ 2 case has the highest $\bar{S}$ score and yields a first cluster ($C_{\rm 0}$) dominated by MAG models ($\sim$ 76% of the cluster data) and a second cluster ($C_{\rm 1}$) dominated by CSM–I/CSM–II models ($\sim$ 60% of the cluster data). Nearly 75% of observed SLSN–I/SLSN–II are associated with $C_{\rm 1}$, implying that, in practice, both CSM and MAG type models can reproduce SLSN LCs in terms of the primary LC timescales. As such, the $N_{\rm D} =$ 2 case does not represent a robust way to distinguish between SLSNe powered by the CSM or the MAG mechanism.
$N_{\rm D} =$ 3 {#nd3}
---------------
In this case we explore clustering for the three main LC symmetry parameters as defined in Section \[lcshape\]. As can be seen in Table \[T8\], the $k =$ 2 cases have, in general, better $\bar{S}$ scores than the $k =$ 3 cases. Another interesting outcome is the very low normalized mean error ($<$ 0.01) for all cases, suggesting that clustering based on the \[$s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$\] dataset yields denser clusters, more concentrated around the computed centroids.
Regardless, the most important result in this case is the strong association of the observed SLSN symmetries with $C_{\rm 1}$: $\sim$ 75–76% of SLSN–I and SLSN–II are associated with $C_{\rm 1}$ in the CSM–I/CSM–II/MAG, $k =$ 2 case. In addition, $C_{\rm 1}$ is almost entirely composed of CSM models ($\sim$ 98%). This strengthens our previous suggestion (Section \[tigerfit\]) that CSM models are superior to MAG models in reproducing the observed SLSN LC symmetry properties, including some fully symmetric LCs. The same result holds in the CSM–I$\kappa$/CSM–II$\kappa$/MAG, $k =$ 2 case, with more than half of the observed SLSN LCs associated with the cluster that is mostly composed of CSM models. This result also holds up in the $k =$ 3 cases. Overall, CSM and MAG models appear to be clearly distinguished in terms of their LC symmetry properties (Figure \[Fig:s1s2s3fit\]). [*This indicates that LC shape symmetry may be critical in identifying the power input mechanism associated with observed SLSNe, based only on photometry.*]{}
$N_{\rm D} =$ 4 {#nd4}
---------------
In this case we investigate $k$–means clustering for the primary and secondary rise and decline timescales. We elect to focus on the $k =$ 2 cases since, again, they yield higher $\bar{S}$ scores. A clear distinction is recovered between H–poor and H–rich CSM models in the CSM–I/CSM–II and CSM–I$\kappa$/CSM–II$\kappa$ cases: $\sim$ 100% of H–poor CSM models constitute the $C_{\rm 1}$ data in the CSM–I/CSM–II case and $\sim$ 89% of H–poor CSM models constitute the $C_{\rm 0}$ data in the CSM–I$\kappa$/CSM–II$\kappa$ case.
For the CSM–I/CSM–II/MAG dataset we recover a cluster that is mostly composed of CSM–type models ($C_{\rm 1}$; 60% CSM–I/CSM–II models and 40% MAG models) and a cluster that is dominated by MAG models ($C_{\rm 0}$; $\sim$ 20% CSM–I/CSM–II models and $\sim$ 80% MAG models). The majority ($\sim$ 66–75%) of SLSN–I/SLSN–II are associated with $C_{\rm 1}$, indicating a preference toward CSM models, although the correlation is not as strong as in the $N_{\rm D} =$ 3 case.
$N_{\rm D} =$ 6 {#nd6}
---------------
The last clustering analysis was performed on a six–dimensional dataset comprising the primary, secondary and tertiary rise and decline timescales. This is the most complete LC shape parameter dataset we investigate, since it encapsulates the three LC symmetry values, which are uniquely defined by their corresponding timescales. Furthermore, the use of all relevant LC shape parameters yields the highest $\bar{S}$ scores ($\sim$ 0.8 in some cases) compared to the lower–dimensionality cases. As with all other cases, $k =$ 2 clustering leads to more accurate classification; we therefore focus only on these results in our discussion.
Our results are consistent with those of the $N_{\rm D} =$ 4 case, yielding a cluster dominated by CSM–type models (60%) and a cluster dominated by MAG models ($\sim$ 80%), with the majority of SLSN–I/SLSN–II associated with the former in the CSM–I/CSM–II/MAG case. In particular, $\sim$ 66–75% of observed SLSN LCs are associated with the CSM–dominated cluster.
In summary, we find that clustering of LC shape properties generally favors the CSM power input mechanism, although the MAG mechanism cannot be ruled out. While clustering on the LC timescales supports this result, it is even more robust when clustering on the LC symmetry parameters.
Discussion {#disc}
==========
In this paper we explored how high–cadence photometric observations of SLSNe detected shortly after explosion can be used to characterize their power input mechanism. In particular, we constrained the LC shape properties of a set of observed SLSN–I and SLSN–II, focusing only on events with complete photometric coverage, and searched for possible correlations with semi–analytical model LC shapes assuming either a magnetar spin–down (MAG) or a SN ejecta–circumstellar interaction (CSM) power input [@2012ApJ...746..121C; @2013ApJ...773...76C].
We reiterated that there are a number of simplifying assumptions in these semi–analytical models, including the approximation of a centrally–located heating source and homologous expansion (problematic in cases like shock heating, where the power input can occur close to the photosphere), the assumption of constant opacity, and model parameter degeneracy [@2013ApJ...773...76C; @2013MNRAS.428.1020M; @2018arXiv181206522K]. In addition, the models predict bolometric LCs, while the observed, rest–frame SLSN LCs are pseudo–bolometric LCs computed by fitting the SED of each event based on the available observations in different filters. Despite these caveats, semi–analytical models still constitute a powerful tool for studying SLSNe, allowing us to investigate LC shape properties across the associated parameter space of each power input by computing a large number of models. Nevertheless, we have supplemented our study with datasets of numerical MAG and CSM model SLSN LCs available in the literature.
To quantitatively determine whether the main proposed SLSN power input mechanisms yield model LCs with different shape properties (rise and decline timescales and symmetry around peak luminosity), we applied $k$–means clustering analysis for different combinations of parameters and model datasets and computed cluster associations for the observed SLSN sample. We highlight the main results of our analysis below:
- [SLSNe exhibit a strong correlation between their primary rise ($tr_{\rm 1}$) and decline ($td_{\rm 1}$) timescales. Although this correlation is reproduced by both MAG and CSM power input models, the larger scatter found in the CSM models overlaps better with the SLSN–I/SLSN–II data.]{}
- [CSM models generally correspond to faster evolving LCs in agreement with observations of some SLSN–I.]{}
- [MAG models fail to produce fully symmetric LCs around peak luminosity. In particular, MAG models are never found to be symmetric around the first luminosity threshold ($s_{\rm 1, max} =$ 0.54), even in cases of high gamma–ray leakage.]{}
- [While the majority of CSM models also fail to produce fully symmetric LC shapes, there is a small fraction of them that do. This is consistent with the $\sim$ 24% of SLSN–I LCs in our sample that are measured to be fully symmetric.]{}
- [Symmetric SLSN LCs favor a truncated power input source that leads to faster LC decline rates past peak luminosity. The CSM model naturally provides such a framework since forward and reverse shock power inputs are terminated. An alternative truncated input could be energy release by fallback accretion.]{}
- [MAG models fail to produce LCs with positive second derivative during the early rise to peak luminosity (concave–up). CSM models can produce both concave–up and concave–down LCs.]{}
- [$k$–means clustering analysis suggests that most observed SLSN LCs are associated with CSM power input, yet the MAG model cannot be ruled out. Multiple formation channels are therefore possible for SLSNe of both spectroscopic types.]{}
- [The most distinct clustering between MAG and CSM data is found in the 3D LC symmetry parameter space ($s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$). In this case, the majority ($>$ 75%) of SLSNe are strongly associated with the CSM–dominated cluster.]{}
- [LC symmetry properties, together with the shape of the LC at early times, may be key in distinguishing between different power input mechanisms in SLSNe.]{}
Our results illustrate the importance of early detection and high–cadence multi–band photometric follow–up in determining the nature of SLSNe. As transient search surveys like [*LSST*]{}, [*ZTF*]{} and [*Pan–STARRS*]{} usher in the new era of big data transient astronomy, a larger number of well–constrained SLSN LCs will become available, providing the opportunity to use photometry to characterize their power input mechanisms. This is of critical importance in the study of luminous and uncharacteristic transients in general, since photometry will be more readily available than spectroscopy in most cases.
We have shown that machine learning approaches like $k$–means clustering can be instrumental in helping us characterize SLSNe based on their LC properties, namely rise and decline timescales and LC symmetry. This is made possible by comparing against the LC shape properties of different power input mechanisms using semi–analytic or numerical models. As such, it is of great importance to enhance our numerical modeling efforts for all proposed power input mechanisms and to survey a large fraction of the model parameter space. In addition to aiding with SLSN and luminous transient characterization and classification, this will provide us with constraints on the physical domains that enable these extraordinary stellar explosions.
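As an illustrative sketch of the clustering procedure described above, the snippet below runs $k$–means on synthetic (rise, decline, symmetry) triples; the two mock populations, their parameter values, and the "observed" point are invented placeholders rather than the actual model grids or SLSN measurements used in this work.

```python
# Sketch of the k-means cluster-association step on synthetic LC shape
# parameters (t_rise, t_decline, symmetry); all numbers are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two mock populations mimicking "MAG-like" (slower, asymmetric) and
# "CSM-like" (faster, closer to symmetric) light-curve shapes.
mag_like = rng.normal(loc=[40.0, 80.0, 0.45], scale=[5, 10, 0.05], size=(200, 3))
csm_like = rng.normal(loc=[25.0, 35.0, 0.60], scale=[5, 8, 0.05], size=(200, 3))
X = np.vstack([mag_like, csm_like])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# An observed event is associated with whichever cluster its measured
# shape parameters fall into (hypothetical measurement below).
observed = np.array([[27.0, 38.0, 0.58]])
print(km.predict(observed))
```

For real data one would typically standardize the features first; this sketch skips that step for brevity.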
We would like to thank Edward L. Robinson and J. Craig Wheeler for useful discussions and comments. We would also like to thank our anonymous referee for suggestions and comments that improved the quality and presentation of our paper. EC would like to thank the Louisiana State University College of Science and the Department of Physics & Astronomy for their support.
[matplotlib]{} [@Hunter2007], [numpy]{} [@oliphant], [SciPy]{} [@scipy], [Scikit–learn]{} [@scikit-learn], [SuperBol]{} [@2018RNAAS...2d.230N].
[^1]: https://github.com/mnicholl/superbol
---
abstract: 'We study electron transport in two-dimensional materials with parabolic and linear (graphene) dispersions of the carriers in the presence of surface acoustic waves and an external magnetic field using semiclassical Boltzmann equations approach. We observe an oscillatory behavior of both the longitudinal and Hall electric currents as functions of the surface acoustic wave frequency at a fixed magnetic field and as functions of the inverse magnetic field at a fixed frequency of the acoustic wave. We explain the former by the phenomenon of geometric resonances, while we relate the latter to the Weiss-like oscillations in the presence of the dynamic superlattice created by the acoustic wave. Thus we demonstrate the dual nature of the acoustomagnetoelectric effect in two-dimensional electron gas.'
author:
- 'I. G. Savenko'
- 'A. V. Kalameitsev'
- 'L. G. Mourokh'
- 'V. M. Kovalev'
title: |
Acoustomagnetoelectric effect in two-dimensional materials:\
Geometric resonances and Weiss oscillations
---
Introduction
============
Two-dimensional (2D) electronic systems have attracted great interest from researchers over the past several decades. Initially, the two-dimensional electron gas (2DEG) was realized in the inversion layer at the interface of two semiconductors with different bandgaps [@AFS]. Subsequently, other structures based on graphene [@graphene; @2Dgraph] and metal dichalcogenides [@2DMet] were created. One of the primary motivations to design a system containing a 2DEG is that it represents an ideal platform for the studies of magnetotransport, which led to the observations of the quantum Hall [@QHall] and fractional quantum Hall [@FHall; @FHallTheor] effects.
Other prominent phenomena are related to magneto-oscillations of various types. Some of them are connected to quantum effects at relatively high magnetic fields, when the Landau quantization causes the Shubnikov–de Haas effect and associated oscillations [@BeenVH]. Quantum interference between trajectories gives rise to Aharonov-Bohm oscillations in high-mobility GaAs/AlGaAs heterostructures [@AB]. On the other hand, semiclassical effects, which can be observed at smaller fields or higher temperatures, are Weiss [@Weiss] and Brown-Zak (BZ) oscillations [@Brown; @Zak]. The former arise due to the commensurability between the cyclotron orbit and the spatial period in the structure, while the latter are related to the commensurability between the magnetic flux through the unit cell area and the magnetic flux quantum. Subsequent Landau quantization of the BZ minibands leads to the fractal Hofstadter Butterfly (HB) spectrum [@HB]. Since the area of the crystal unit cell is small, it is necessary to apply extremely high fields to detect the associated phenomenology. However, in bilayer graphene or in monolayer graphene placed on top of hexagonal boron nitride, additional moiré patterns appear, which makes it possible to observe both HB [@HB1; @HB2; @HB3] and BZ [@BZ] oscillations.
There also exist other types of oscillations in 2D systems. One of them is *the geometric resonances* (GRs). Originally, they revealed themselves in the electromagnetic power absorption spectra of plasmas in gases and solids in the presence of a uniform magnetic field [@RefGinzburg; @RefCohen]. The GRs appear as a multi-peak structure at frequencies $\omega=l\,\omega_c$, where $l$ is an integer, in addition to the conventional cyclotron (or magnetoplasmon) peak at the cyclotron frequency $\omega_c=eB/m$ (or $l=1$), with $e$ and $m$ being the electron charge and mass and $B$ the strength of the external magnetic field. The GRs in 2D systems have been studied theoretically [@RefAndo; @RefChaplik1984] and reported experimentally [@RefMohr] in samples made of various materials, such as Si and AlGaAs alloys.
In this paper, we examine magnetotransport phenomena in a 2DEG in the presence of surface acoustic waves (SAWs). These waves are usually produced by interdigital transducers (IDTs) – metallic gates patterned on top of piezoelectric materials. The spacing of the gates, or pitch, determines the wavelength of the SAW [@IDT]. When a radio-frequency (rf) signal is applied to the IDTs, there emerges a SAW with such a wavelength that its product with the rf frequency equals the sound velocity of the material. The corresponding piezoelectric field modulates both the electron density and the velocity of the charge carriers. Accordingly, the electric current density, which is the product of these two parameters, acquires a constant component, called the acoustoelectric current. It can also be explained as a result of the SAW drag of the charge carriers in the direction of the SAW wave vector [@Parmenter]. The information obtained by measurements of the SAW-induced effects is complementary to conventional transport experiments, facilitating a frequent use of SAWs in the studies of low-dimensional electronic structures [@SAW], including graphene monolayers [@graphene1; @graphene2; @OurPRLAEE], topological insulators [@top1], and other thin films [@top2]. Besides, SAW-related methods can also be applied to exciton transport [@exciton1; @exciton2; @exciton3].
The response to an external magnetic field of electrons exposed to SAWs was also examined in Refs. [@RefWixforth; @RefWillet; @Kreft1; @Kreft2], although these studies were focused on the quantum regime with established Landau levels. The region of smaller fields was considered in Refs. [@Levinson; @Eckl], but the manifestations of Weiss oscillations were only predicted for the first-order effects, such as the SAW absorption and the velocity shifts. The longitudinal component of the acoustoelectric current was discussed in Ref. [@Fal'ko]. Here, we extend this analysis to the Hall component and also examine the peculiarities appearing in the case of the linear dispersion of graphene.
The acoustoelectric current is a second-order effect with respect to the SAW-induced electric field. Consequently, it is related to a third-order conductivity tensor [@Glazov; @RefBasov]. This tensor couples components of the drag current to the components of the SAW piezoelectric field as $j_\alpha=\chi_{\alpha\beta\gamma}E_\beta E_\gamma$, where $\alpha,~\beta,~\gamma=x,~y,~z$, similar to the photovoltaic effect [@OurRecPRB]. As the SAW frequency is much smaller than the frequencies of the optical fields reported in Refs. [@RefAndo; @RefChaplik1984; @RefMohr], GRs can be expected at much smaller magnetic fields, at which a semiclassical approach based on the Boltzmann equations is appropriate for our studies.
We calculate both the longitudinal and Hall current densities as functions of the SAW frequency and the magnetic field for two possible cases of (i) the parabolic dispersion (for the 2DEG of an interface inversion layer or of a transition metal dichalcogenide) and (ii) the linear dispersion of graphene, and we obtain an oscillatory behavior of these dependencies. We analyze these oscillations and argue that in the case of the SAW drag, GRs and Weiss oscillations represent the same phenomenon, although originally GRs are related to optical fields with no [*spatial*]{} periodicity while Weiss oscillations are usually connected with a [*static*]{} embedded superlattice. SAWs thus provide a dynamical superlattice, merging the GRs and Weiss oscillations phenomena and making both interpretations possible.
Theoretical framework
=====================
We start with the Boltzmann equation for the electron distribution function $f$, when the system is subject to both the piezoelectric field of the SAW and the external uniform magnetic field perpendicular to the 2D layer. In the case of the parabolic electron dispersion, the Boltzmann equation has the form $$\begin{aligned}
\label{EqBolzmann}
\left[\frac{\partial}{\partial t}+\textbf{v}\frac{\partial}{\partial \textbf{r}}+e\Bigl(\textbf{E}(\textbf{r},t)+\textbf{E}^{i}(\textbf{r},t)\Bigr)\frac{\partial}{\partial \textbf{p}}\right.\\\nonumber
\left.+e[\textbf{v}\times \textbf{B}]\frac{\partial}{\partial \textbf{p}}\right]f=-\frac{f-\langle f\rangle}{\tau},\end{aligned}$$ where $\mathbf{v}=\mathbf{p}/m$ is the velocity of a particle (thus the energy spectrum is given by $\varepsilon_\textbf{p}=\textbf{p}^2/2m$), $\mathbf{r}$ is the coordinate, and $\tau$ is an effective electron scattering time. SAWs produce the in-plane component of a piezoelectric field $\textbf{E}(\textbf{r},t)$ directed along the SAW wave vector $\mathbf{k}$, $\textbf{E}(\textbf{r},t)||\textbf{k}$. $\textbf{E}^{i}(\textbf{r},t)$ is the field induced by the spatial modulation of the 2D electron density in the SAW field, which can be found from the solution of Maxwell's equations. $\langle f\rangle$ is a quasi-equilibrium electron distribution function in the SAW reference frame. This function depends on time and coordinates via the chemical potential $\mu(\textbf{r},t)$, which determines the electron density $n(\textbf{r},t)$ in the slowly varying SAW field.
To find the acoustoelectric current, we expand the electron density and the distribution functions up to the second-order with respect to the total electric field $\tilde{\textbf{E}}(\textbf{r},t)=\textbf{E}(\textbf{r},t)+\textbf{E}^i(\textbf{r},t)$. In particular, $f(\textbf{r},t)=f_0+f_1(\textbf{r},t)+f_2(\textbf{r},t)+o(f_3)$, where $f_0$ is the equilibrium electron distribution function. The first-order correction to $f_0$ is $f_1(\textbf{r},t)=\left[f_1\exp(i\textbf{k}\cdot\mathbf{r}-i\omega t)+f_1^*\exp(-i\textbf{k}\cdot\mathbf{r}+i\omega t)\right]/2$, where $\omega=s|\mathbf{k}|=sk$, with $s$ being the sound velocity.
The time-independent acoustoelectric current can be determined from the stationary second-order correction to the electron distribution function $f_2$ with respect to the SAW field $\textbf{E}(\textbf{r},t)$, as $$\begin{aligned}
\label{EqCurGeneral}
\mathbf{j}=e\int \frac{d\textbf{p}}{(2\pi\hbar)^2}\textbf{v}f_2.\end{aligned}$$ Furthermore, we consider the 2DEG to be highly degenerate; thus all the parameters are taken at the Fermi energy.
![(Color online) Electric current densities as functions of the SAW frequency for the parabolic dispersion case. (a) Longitudinal drag ($x$-component) and (b) Hall drag ($y$-component). Different colors correspond to different values of the applied magnetic field $B$, specified in panel (a).[]{data-label="Fig2"}](Fig1a.pdf "fig:"){width="49.00000%"} ![(Color online) Electric current densities as functions of the SAW frequency for the parabolic dispersion case. (a) Longitudinal drag ($x$-component) and (b) Hall drag ($y$-component). Different colors correspond to different values of the applied magnetic field $B$, specified in panel (a).[]{data-label="Fig2"}](Fig1b.pdf "fig:"){width="49.00000%"}
The $x$-axis is chosen along the direction of the SAW propagation. After the calculations detailed in Appendix \[AppendixA\] and Appendix \[AppendixB\], Sec. a, we obtain the longitudinal and Hall acoustoelectric currents in the parabolic electron dispersion case, as $$\begin{aligned}
\label{EqAux4}
&&\left(
\begin{array}{c}
j_x \\
j_y \\
\end{array}
\right)=\frac{1}{env_F}\left|\frac{\sigma_0E_0}{g(k,\omega)}\right|^2\frac{1}{\beta_F^2(1+\omega_c^2\tau^2)}\\
\nonumber
&&~~~~~~~\times \textmd{Re}\,\sum_l\frac{J_l(\beta_F)}{1-i(\omega-l\,\omega_c)\tau}\left[l+\frac{ka_0}{\omega_c\tau}\frac{\sigma_{xx}}{\varepsilon_0(s-R_x)}\right]\\
\nonumber
&&~~~~~~~\times\left(
\begin{array}{c}
\gamma(l+1)J_{l+1}(\beta_F)+\gamma^*(l-1)J_{l-1}(\beta_F) \\
i\gamma(l+1)J_{l+1}(\beta_F)-i\gamma^*(l-1)J_{l-1}(\beta_F) \\
\end{array}
\right),\end{aligned}$$ where $\sigma_0=e^2n\tau/m$ is a static Drude conductivity, $E_0$ is the amplitude of the (external) piezoelectric field, and $J_l(\beta_F)$ are the ordinary Bessel functions with $\beta_F=kv_F/\omega_c$. We have also introduced two auxiliary parameters, $\gamma=1+i\omega_c\tau$ and $a_0=2\pi\hbar^2\varepsilon_0/me^2$. The $xx$-component of the conductivity tensor $\sigma_{xx}$ and $x$-component of the generalized diffusion coefficient $R_x$ are given by
$$\begin{aligned}
\label{EqCondxx2}
\sigma_{xx}=\frac{2\sigma_0}{\beta_F^2}\sum_l\frac{l^2J^2_l(\beta_F)}{1-i(\omega-l\,\omega_c)\tau}\end{aligned}$$
and $$\begin{aligned}
\label{EqCondxx3}
R_x=\frac{\omega_c}{k}\sum_l\frac{l\,J^2_l(\beta_F)}{1-i(\omega-l\,\omega_c)\tau},\end{aligned}$$ respectively, where $$\begin{aligned}
\label{EqDielFun}
g(k,\omega)=1+i\frac{1}{\epsilon_0(\epsilon_d+1)} \frac{\sigma_{xx}}{(s-R_x)}\end{aligned}$$ is the dielectric function of the 2DEG, $\epsilon_0$ is the dielectric permittivity of free space, and $\epsilon_d$ is the dielectric constant of the substrate. This function describes the screening of the SAW piezoelectric field by the mobile electrons of the 2D system.
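As a numerical sketch (not part of the derivation), the $l$–sum defining $\sigma_{xx}$ above can be evaluated with standard Bessel routines; units are scaled so that $\sigma_0=\tau=1$, and the truncation $|l|\le l_{\max}$ is our implementation choice. The check below verifies that for $\beta_F\to 0$ the sum collapses to the familiar uniform-field magneto-Drude conductivity.

```python
# Evaluate sigma_xx = (2*sigma0/beta^2) * sum_l l^2 J_l(beta)^2
#                     / (1 - i*(omega - l*omega_c)*tau)
# in scaled units (sigma0 = tau = 1); the l-sum truncation is ours.
import numpy as np
from scipy.special import jv

def sigma_xx(beta, omega, omega_c, tau=1.0, sigma0=1.0, lmax=60):
    l = np.arange(-lmax, lmax + 1)
    terms = l**2 * jv(l, beta)**2 / (1.0 - 1j * (omega - l * omega_c) * tau)
    return 2.0 * sigma0 / beta**2 * np.sum(terms)

# Long-wavelength check: as beta = k*r_c -> 0, the sum must reduce to the
# standard magneto-Drude conductivity of a spatially uniform field.
omega, omega_c = 0.7, 0.5          # in units of 1/tau
drude = (1.0 - 1j * omega) / ((1.0 - 1j * omega)**2 + omega_c**2)
assert np.isclose(sigma_xx(1e-3, omega, omega_c), drude, rtol=1e-4)

# At finite beta, |sigma_xx(omega)| develops maxima near omega = l*omega_c:
# the geometric resonances discussed in the text.
print(abs(sigma_xx(3.0, omega, omega_c)))
```

The same truncated-sum approach applies verbatim to $R_x$ by replacing $l^2 J_l^2$ with $l\,J_l^2$ and the prefactor accordingly.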
![(Color online) Electric current densities as functions of the SAW frequency for the linear dispersion case. (a) Longitudinal drag ($x$-component) and (b) Hall drag ($y$-component). Different colors correspond to different values of the applied magnetic field $B$, specified in Fig. \[Fig2\](a).[]{data-label="Fig3"}](Fig2a.pdf "fig:"){width="49.00000%"} ![(Color online) Electric current densities as functions of the SAW frequency for the linear dispersion case. (a) Longitudinal drag ($x$-component) and (b) Hall drag ($y$-component). Different colors correspond to different values of the applied magnetic field $B$, specified in Fig. \[Fig2\](a).[]{data-label="Fig3"}](Fig2b.pdf "fig:"){width="48.00000%"}
In the case of the linear electron spectrum, $\varepsilon_\textbf{p}=v_0p$, the Boltzmann equation remains almost the same as Eq. with a number of changes. First, the velocity $\mathbf{v}$ is replaced by $v_0\mathbf{p}/p$. Second, even for short-range impurities, the scattering times of the first and second harmonics of the electron distribution function become energy-dependent, as $\tau_{1}(p)\equiv\tau_1(\varepsilon_\textbf{p})=\tau \varepsilon_F/\varepsilon_\textbf{p}$ for the first harmonic, and $\tau_2(p)=\tau_1(p)/2$ for the second harmonic [@RefNalitov]. Third, the effective cyclotron frequency in the semiclassical limit is given by $\omega_c(p)=eBv_0/p=eBv_0^2/\varepsilon_p$ [@RefWitowski; @RefOrlita].
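These graphene-specific quantities are easily tabulated; the sketch below uses the sample parameters given later in the text ($n=5\cdot10^{12}$ cm$^{-2}$, $v_0=10^8$ cm/s, $\mu_e=10^4$ cm$^2$/V$\cdot$s, $B=0.1$ T), while the density relation $k_F=\sqrt{\pi n}$ (fourfold spin/valley degeneracy) is our added assumption.

```python
# Sketch: graphene-specific quantities entering the linear-dispersion case,
# evaluated in SI units with the sample parameters quoted in the text.
import numpy as np

e, hbar = 1.602e-19, 1.0546e-34
n   = 5e12 * 1e4                  # carrier density, m^-2
v0  = 1e6                         # Dirac velocity, m/s
mue = 1e4 * 1e-4                  # mobility, m^2/(V s)
B   = 0.1                         # magnetic field, T

kF   = np.sqrt(np.pi * n)         # Fermi wave number (our assumption: g = 4)
pF   = hbar * kF
epsF = v0 * pF                    # Fermi energy for the linear dispersion
tau1 = mue * pF / (e * v0)        # first-harmonic scattering time
tau2 = tau1 / 2.0                 # second harmonic, as in the text
wc   = e * B * v0**2 / epsF       # energy-dependent cyclotron frequency

print(f"eps_F = {epsF / e:.3f} eV, omega_c = {wc:.3e} rad/s, "
      f"omega_c*tau_2 = {wc * tau2:.3f}")
```

A convenient by-product of these definitions is the exact identity $\omega_c(p_F)\tau_1(p_F)=\mu_e B$, which makes a handy sanity check.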
Performing the calculations (see Appendix \[AppendixB\], Sec. b), we obtain the longitudinal and Hall acoustoelectric current densities in the linear electron dispersion case, as $$\begin{aligned}
\label{MainAEGraph}
&&\left(
\begin{array}{c}
j_x \\
j_y \\
\end{array}
\right)=\frac{1}{2env_0}
\left|\frac{\sigma_gE_0}{g(k,\omega)}\right|^2
\left(\frac{1/\beta_{p_F}}{1+\omega_c^2(p_F)\tau^2_2(p_F)}\right)^2
\\
\nonumber
&&~\times
\textmd{Re}\,\sum_l\frac{J_l(\beta_{p_F})}{1-i[\omega-l\,\omega_c(p_F)]\tau_1(p_F)}\\
\nonumber
&&~\times
\left[l+\frac{ka_g}{\omega_c(p_F)\tau_1(p_F)}\frac{\sigma_{xx}}{\varepsilon_0(s-R_x)}\right]\\
\nonumber
&&~\times
\left(
\begin{array}{c}
-i\bar{\gamma}^2(l+1)J_{l+1}(\beta_{p_F})+i\bar{\gamma}^{*2}(l-1)J_{l-1}(\beta_{p_F}) \\
\bar{\gamma}^2(l+1)J_{l+1}(\beta_{p_F})+\bar{\gamma}^{*2}(l-1)J_{l-1}(\beta_{p_F}) \\
\end{array}
\right),\end{aligned}$$ where $\sigma_g=e^2nv_0\tau_1(p_F)/p_F$ is a static Drude conductivity in graphene and all the momentum-dependent quantities are taken at $p=p_F$. In particular, $\bar{\gamma}=1+i\omega_c(p_F)\tau_2(p_F)$ and $a_g=2\pi\hbar^2\varepsilon_0v_0/e^2p_F$. In this case, the $xx$-component of the conductivity tensor and $x$-component of the generalized diffusion coefficient have the forms $$\begin{aligned}
\label{EqCondxxGraphene}
\sigma_{xx}=\frac{2\sigma_g}{\beta_{p_F}^2}\sum_l\frac{l^2J^2_l(\beta_{p_F})}{1-i[\omega-l\,\omega_c(p_F)]\tau_1(p_F)}\end{aligned}$$ and $$\begin{aligned}
\label{EqRxGraphene}
R_x=\frac{\omega_c(p_F)}{k}\sum_l\frac{l\,J^2_l(\beta_{p_F})}{1-i[\omega-l\,\omega_c(p_F)]\tau_1(p_F)},\end{aligned}$$ respectively. We immediately see several similarities and differences between these expressions and their parabolic-dispersion counterparts, which we discuss below.
Results and Discussion
======================
First of all, we want to stress that the argument $\beta_F$ of the Bessel functions in Eqs. - and Eqs. - is of special interest. On the one hand, it can be expressed in terms of a ratio of frequencies, as $\beta_F=\omega v_F/\omega_c s$ (in the parabolic case), resembling the GRs. On the other hand, $\beta_F$ represents a ratio of length scales, as $\beta_F = k r_c = 2\pi r_c/\lambda$, where $r_c$ is the cyclotron radius, which is very similar to Weiss oscillations.
To evaluate the electric current densities given by Eqs. and , we use the following set of parameters: $E_0=10$ kV/m; $n=5\cdot10^{12}$ cm$^{-2}$, which is an experimentally achievable value [@RefOrlita]; $m=0.44~m_0$, where $m_0$ is a free electron mass, and we choose MoS$_2$ as a material with the parabolic spectrum; and $\tau=10^{-10}$ s, which corresponds to moderately clean samples. The parameters of the piezoelectric substrate are $\epsilon_d=50$ and $s=3.5\cdot 10^3$ m/s, taken for LiNbO$_3$. For graphene, $v_0=10^8$ cm/s and $\tau_1=\mu_ep_F/ev_0$, where $\mu_e=10^4$ cm$^2$/V$\cdot$s is the electron mobility [@RefMobilityGraphene1; @RefMobilityGraphene2].
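The dual reading of $\beta_F$ noted above can be checked directly with this parameter set; the sketch below is illustrative, with $k_F=\sqrt{2\pi n}$ for a spin-degenerate parabolic band and the SAW frequency $\omega=2\pi\times 1$ GHz chosen by us.

```python
# Sketch: beta_F computed both as a frequency ratio (GR reading) and as a
# length ratio (Weiss reading), with the MoS2/LiNbO3 parameters of the text.
import numpy as np

e, hbar, m0 = 1.602e-19, 1.0546e-34, 9.109e-31
n   = 5e12 * 1e4        # m^-2
m   = 0.44 * m0         # MoS2 effective mass
s   = 3.5e3             # SAW velocity, m/s (LiNbO3)
tau = 1e-10             # scattering time, s
B   = 0.1               # T
omega = 2 * np.pi * 1e9 # SAW angular frequency, rad/s (our choice)

kF = np.sqrt(2 * np.pi * n)     # spin-degenerate parabolic band (assumption)
vF = hbar * kF / m
wc = e * B / m                  # cyclotron frequency
k  = omega / s                  # SAW wave number
rc = vF / wc                    # cyclotron radius

beta_freq  = omega * vF / (wc * s)   # "geometric resonance" reading
beta_space = k * rc                  # "Weiss oscillation" reading
assert np.isclose(beta_freq, beta_space)

# Semiclassical regime: Landau-level spacing far below the Fermi energy.
EF = hbar**2 * kF**2 / (2 * m)
assert hbar * wc < 1e-2 * EF
print(f"beta_F = {beta_freq:.2f}, omega_c*tau = {wc * tau:.1f}")
```

Both readings are algebraically identical ($\omega v_F/\omega_c s = (\omega/s)(v_F/\omega_c)$); the assertions merely confirm the bookkeeping and the semiclassical condition $\hbar\omega_c\ll E_F$ for these numbers.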
Figures \[Fig2\] and \[Fig3\] show (a) longitudinal and (b) Hall components of the drag current density as functions of the SAW frequency $\omega$ for the cases of the parabolic and linear dispersions of mobile carriers, respectively, at various values of the external magnetic field. It is evident from these figures that both components exhibit oscillations, with each maximum approximately corresponding to the geometric resonance $\omega=l\,\omega_c$. As expected, for relatively small SAW frequencies and the cyclotron frequency increasing with $B$, the GRs are pronounced at magnetic fields smaller than 1 T. At higher fields, the functions are monotonic with no GRs-related oscillations.
The dependencies of the current density components on the inverse magnetic field are demonstrated in Figs. \[Fig4\] and \[Fig5\] for the parabolic and linear dispersion cases, respectively. One can see almost perfect oscillations superimposed onto a monotonic decay toward zero field. They are more pronounced for the parabolic dispersion of electrons. This result can be understood as Weiss oscillations in the presence of the spatially periodic structure of the SAW.
Another prominent feature, which we observe in the plots, is the change of the sign of the Hall current density in both the parabolic and linear dispersion cases, and of the longitudinal current density in the graphene case. The Hall current vanishes at zero field and monotonically increases with the increase of $B$. In the presence of SAW-induced oscillations of relatively high magnitude, the current density at small fields can reach negative values at the minima. The longitudinal component of the acoustoelectric current is non-zero even without a magnetic field. For the parabolic electron dispersion, the magnitude of the oscillations is not sufficiently large to reach negative values of the current density, while for graphene this can occur since the oscillations are more pronounced.
It should be noted that a similar effect of the sign change was also observed in the photon drag in graphene [@Ganichev1], where it was attributed to the energy dependence of the electron scattering time. We believe that the same phenomenology leads to the change of the sign of the acoustoelectric current. We also want to emphasize that the predicted oscillating behavior of the acoustoelectric current occurs in the range of fields satisfying $\hbar\omega_c\ll E_F$, where $E_F$ is the Fermi energy, validating the usage of the semiclassical approach.
![(Color online) Components of electric current density as functions of inverse magnetic field in the case of parabolic dispersion for the frequencies specified in panel (a). []{data-label="Fig4"}](Fig3a.pdf "fig:"){width="49.00000%"} ![(Color online) Components of electric current density as functions of inverse magnetic field in the case of parabolic dispersion for the frequencies specified in panel (a). []{data-label="Fig4"}](Fig3b.pdf "fig:"){width="49.00000%"}
![(Color online) Components of electric current density as functions of inverse magnetic field in the case of linear dispersion for the frequencies specified in Fig. \[Fig4\](a).[]{data-label="Fig5"}](Fig4a.pdf "fig:"){width="49.00000%"} ![(Color online) Components of electric current density as functions of inverse magnetic field in the case of linear dispersion for the frequencies specified in Fig. \[Fig4\](a).[]{data-label="Fig5"}](Fig4b.pdf "fig:"){width="49.00000%"}
Conclusions
===========
To summarize, we have examined the acoustoelectric current in a 2DEG in the presence of an external magnetic field in two physical systems. First, we have considered a 2DEG in which the electron energy is proportional to its momentum squared (parabolic dispersion case). In particular, such a situation occurs at the interface of two semiconductors with different band gaps and in transition metal dichalcogenides. Second, we have studied the 2DEG in graphene, where the energy is proportional to the first power of momentum (linear dispersion case).
The piezoelectric field created by the SAW modulates both the electron density and electron velocity, resulting in a permanent electric current as a second-order response of the system. Using the semiclassical Boltzmann equations approach, we have calculated and studied both the longitudinal and Hall current densities. For a fixed magnetic field, both the components of the acoustoelectric current exhibit oscillations as functions of the SAW frequency. We have shown that the Hall component changes its sign in both cases of parabolic and linear dispersions, while the change of sign of the longitudinal component occurs in graphene only. For a fixed SAW frequency, the acoustoelectric current oscillates as a function of the inverse magnetic field.
Mathematically, the oscillations originate from the presence of the (ordinary) Bessel functions in the equations. The argument of the Bessel functions can be represented as a ratio of the SAW and cyclotron frequencies or as a ratio of the cyclotron radius and the SAW wavelength. The former is conventionally used to describe optical geometric resonances, while the latter appears in Weiss oscillations of magnetoresistance in the presence of an embedded static superlattice. In the case of SAWs, both interpretations of this phenomenology become possible, since these two effects merge.

We thank M. Malishava for useful discussions. IGS acknowledges the support by the Institute for Basic Science in Korea (Project No. IBS-R024-D1). AVK and VMK were supported by the Russian Foundation for Basic Research (Project No. 19-42-540011). LGM acknowledges the partial support by AFOSR, Award No. FA9550-16-1-0279.
The first-order correction to the electron distribution function {#AppendixA}
================================================================
The first-order corrections to the equilibrium electron distribution function and the electron density, $f_1(\textbf{r},t)$ and $n_1(\textbf{r},t)$, satisfy the Boltzmann equation \[derived from Eq. \], $$\begin{gathered}
\label{EqBolzmannFirstOrder}
\left(\frac{1}{\tau}+i\textbf{k}\cdot\mathbf{v}-i\omega+e\left[\mathbf{v}\times\mathbf{B}\right]\cdot\frac{\partial }{\partial \textbf{p}}\right) f_1=\\\nonumber
=-e\Bigl(\textbf{E}+\textbf{E}^i\Bigr)\frac{\partial f_0}{\partial \textbf{p}}
\label{eq3.1main}
+\frac{n_1}{\tau}\frac{\partial f_0}{\partial n}.\end{gathered}$$ To find this equation, we used the expansions $$\begin{gathered}
\label{expansion}
n(\textbf{r},t)=n+n_1(\textbf{r},t)+n_2(\textbf{r},t)+o(n_3),\\\nonumber
f(\textbf{r},t)=f_0+f_1(\textbf{r},t)+f_2(\textbf{r},t)+o(f_3),\\\nonumber
\langle f(\textbf{r},t)\rangle=f_0+[n_1(\textbf{r},t)+n_2(\textbf{r},t)+...]\frac{\partial f_0}{\partial n}+\\\nonumber
+\frac{[n_1(\textbf{r},t)+n_2(\textbf{r},t)+...]^2}{2}\frac{\partial^2 f_0}{\partial n^2}.\end{gathered}$$ Following the approach described in [@RefLLX], we switch to the polar system of coordinates, in which Eq. reads $$\begin{gathered}
\nonumber
\left(\frac{1}{\tau}-i\omega+ikv\cos\phi-\omega_c\frac{\partial }{\partial \phi}\right) f_1(p,\phi)
\\
\label{eq4.1}
=-e\tilde{E}_0v\cos\phi\frac{\partial f_0}{\partial \varepsilon_p}+\frac{n_1}{\tau}\frac{\partial f_0}{\partial n},\end{gathered}$$ where we have accounted for the fact that $\varepsilon_\mathbf{p}=\varepsilon_p$, where $p=|\mathbf{p}|$, and $\partial_\mathbf{p}f_0=(\partial_\mathbf{p}\varepsilon_p)(\partial_{\varepsilon_p}f_0)=\mathbf{v}\partial_{\varepsilon_p}f_0$ with $\partial_B A=\partial A/\partial B$. We have also chosen the direction of $\mathbf{E}_0$ along the x-axis. Then $\tilde{\mathbf{E}}_0=\mathbf{E}_0+\mathbf{E}^i_0$ is also directed along the x-axis (since $\mathbf{E}_0$ and $\mathbf{k}$ are collinear). Then $\tilde{\mathbf{E}}_0\cdot\mathbf{v}=\tilde{E}_0v\cos\phi$ and $k_x=k=\omega/s$.
Eq. (\[eq4.1\]) can be rewritten in the form $$\begin{aligned}
\label{eq4.2}
\frac{\partial f_1}{\partial\phi}+i(\alpha-\beta\cos\phi)f_1=Q(\phi),\end{aligned}$$ where $$\begin{aligned}
\label{eq4.3}
\alpha=\frac{\omega+i/\tau}{\omega_c},~~~~~\beta=\frac{kv}{\omega_c}=\frac{\omega v}{s\omega_c},\\\nonumber
Q(\phi)=\left(\frac{e\tilde{E}_0v}{\omega_c}\cos\phi+\frac{n_1}{\omega_c\tau}\frac{\partial \mu}{\partial n}\right)\frac{\partial f_0}{\partial \varepsilon_p},\end{aligned}$$ all of which are evidently functions of frequency. In Eq. (\[eq4.3\]), we used the relations $\partial_n f_0=(\partial_\mu f_0)(\partial_n\mu)$ and $\partial_\mu f_0=-\partial_{\varepsilon_p} f_0$, the latter of which holds for the Fermi distribution function.
From Eqs. (\[eq4.2\])-(\[eq4.3\]) we find $$\begin{gathered}
\label{APPEqf1Bessel}
f_1(p,\phi)=-e^{i\beta\sin\phi}\int\limits_{0}^{\infty}d\psi e^{-i\beta\sin(\phi+\psi)+i\alpha\psi}Q(\phi+\psi).
\end{gathered}$$ Using the expansion of the exponents over the cylindrical harmonics, $$\begin{gathered}
\label{APP2}
e^{i\beta\sin\varphi}=\sum_lJ_l(\beta)e^{il\varphi},\end{gathered}$$ we find $$\begin{aligned}
\label{APP3}
&&\int\limits_0^\infty
d\psi
e^{-i\beta\sin(\phi+\psi)+i\alpha\psi}
\\
\nonumber
&&=\sum_l
J_l(\beta)e^{-il\phi}\int\limits_0^\infty e^{i(\alpha-l)\psi}d\psi
=
\sum_l
\frac{J_l(\beta)e^{-il\phi}}{i(l-\alpha)}\end{aligned}$$ and $$\begin{aligned}
\label{APP4}
&&\int\limits_0^\infty
d\psi
e^{-i\beta\sin(\phi+\psi)+i\alpha\psi}
\cos(\phi+\psi)
\\\nonumber
&&=
\frac{i}{\beta}\frac{\partial}{\partial\phi}
\int\limits_0^\infty
d\psi
e^{-i\beta\sin(\phi+\psi)+i\alpha\psi}
=
\frac{1}{i\beta}\sum_l
\frac{lJ_l(\beta)e^{-il\phi}}{l-\alpha}.\end{aligned}$$ Then Eq. transforms into $$\begin{aligned}
\label{APPEqf1Bessel2}
f_1(p,\phi)&=&
\frac{e^{i\beta\sin\phi}}{i\omega_c}
\left(
-\frac{\partial f_0}{\partial\varepsilon_p}
\right)
\\
\nonumber
&&\times
\sum_l
\Bigl[
\frac{e\tilde{E}_0v}{\beta}l
+\frac{n_1}{\tau}\frac{\partial\mu}{\partial n}
\Bigr]
\frac{J_l(\beta)}{l-\alpha}e^{-il\phi}.\end{aligned}$$
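The $\psi$–integral evaluated above via the cylindrical-harmonic expansion admits a direct numerical check; the values of $\alpha$, $\beta$, and $\phi$ below are arbitrary (only $\mathrm{Im}\,\alpha>0$, which guarantees convergence, matters), and the $l$–truncation is ours.

```python
# Check: int_0^inf dpsi e^{-i*beta*sin(phi+psi) + i*alpha*psi}
#        = sum_l J_l(beta) e^{-i*l*phi} / (i*(l - alpha)),  Im(alpha) > 0.
import numpy as np
from scipy.special import jv

def trapezoid(y, x):
    # portable trapezoidal rule (avoids the np.trapz/np.trapezoid rename)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

alpha, beta, phi = 0.7 + 0.6j, 1.5, 1.1

# Left-hand side: direct psi-integration (integrand decays as e^{-0.6 psi}).
psi = np.linspace(0.0, 60.0, 600001)
lhs = trapezoid(np.exp(-1j * beta * np.sin(phi + psi) + 1j * alpha * psi), psi)

# Right-hand side: truncated Bessel series.
l = np.arange(-40, 41)
rhs = np.sum(jv(l, beta) * np.exp(-1j * l * phi) / (1j * (l - alpha)))

assert abs(lhs - rhs) < 1e-4
print(abs(lhs - rhs))
```

The companion integral with the $\cos(\phi+\psi)$ factor can be checked the same way against the $l\,J_l$ series.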
The conductivity tensor and the diffusion vector can be calculated using the standard definition of the first-order correction to the current density, $$\begin{gathered}
\label{APPCurrent}
j^{(1)}_\alpha=e\int \frac{ d\mathbf{p}}{(2\pi\hbar)^2}
v_\alpha
f_1(p,\phi)=\sigma_{\alpha\beta}\tilde{E}_\beta+en_1R_\alpha,\end{gathered}$$ where $\mathbf{v}(\phi)=v(\cos\phi,\sin\phi)$, and $$\begin{gathered}
\label{APPConductivity}
\sigma_{xx}=\frac{e^2}{\omega_c}\int\frac{d\mathbf{p}}{(2\pi\hbar)^2}
v^2\cos\phi~
e^{i\beta\sin\phi}\left(-\frac{\partial f_0}{\partial\varepsilon_{p}}\right)\\\nonumber
\times
\int\limits_0^\infty d\psi e^{-i\beta\sin(\phi+\psi)+i\alpha\psi}
\cos(\phi+\psi)\end{gathered}$$ and $$\begin{gathered}
\nonumber
R_x=\frac{1}{\omega_c\tau}\frac{\partial\mu}{\partial n}\int\frac{d\mathbf{p}}{(2\pi\hbar)^2}
v\cos\phi~e^{i\beta\sin\phi}\left(-\frac{\partial f_0}{\partial\varepsilon_{p}}\right)\times\\
\label{APPEqSigDiff12}
\times
\int\limits_0^\infty d\psi e^{-i\beta\sin(\phi+\psi)+i\alpha\psi}\end{gathered}$$ are the first ($xx$) matrix element of the conductivity tensor and the $x-$component of the diffusion vector [@Chaplik; @Kittel], respectively. Taking integrals in and in , we find the conductivity and the diffusion coefficient of a degenerate electron gas at zero temperature, Eqs. and in the main text.
The second-order response and the AME current {#AppendixB}
=============================================
### Parabolic dispersion case {#ApBssa}
Since we chose the SAW EM field to be directed along the $x$ axis, the AME current is given by the formula $$\begin{aligned}
\label{EqCur2}
j_\alpha&=&
-\frac{e^2}{2\omega_c}\int\frac{d\mathbf{p}}{(2\pi\hbar)^2}
v_\alpha(\phi)
\int\limits_0^\infty d\psi e^{-\frac{\psi}{\omega_c\tau}}
\\
\nonumber
&&\times
\mathrm{Re}
\left\{\tilde{E}_0^*v\cos(\phi+\psi)\frac{\partial f_1(p,\phi+\psi)}{\partial\varepsilon_{p}}\right\}.\end{aligned}$$ Expressing the $\textbf{p}$-integrals via the integrals over the energy and angle, we perform partial integrations to find $$\begin{aligned}
\label{EqCur2}
\left(
\begin{array}{cc}
j_x \\
j_y
\end{array}
\right)
&=&
\textmd{Re}\,\frac{e^2\tilde{E}_0^*}{\omega_c(2\pi\hbar)^2}\int\limits_0^\infty d\varepsilon_\textbf{p}
\int\limits_0^\infty d\psi e^{-\frac{\psi}{\omega_c\tau}}
\\\nonumber
&&\times\int\limits_0^{2\pi} d\phi
\left(
\begin{array}{cc}
\cos\phi \\
\sin\phi
\end{array}
\right)\cos(\phi+\psi)
f_1(p,\phi+\psi).\end{aligned}$$ Substituting here the first-order electron distribution function , we come up with the $\phi$ and $\psi$-angle integrals, $$\begin{gathered}
\nonumber
\label{EqAux2}
\int\limits_0^{2\pi}d\phi
\cos(\phi+\psi)
\left(
\begin{array}{cc}
\cos\phi \\
\sin\phi
\end{array}
\right)
e^{i\beta_F\sin(\phi+\psi)}e^{-il(\phi+\psi)}
\\
\nonumber
=
\frac{\pi}{\beta_F}
\left(
\begin{array}{cc}
(l+1)J_{l+1}(\beta_F){e^{i\psi}}+(l-1)J_{l-1}(\beta_F){e^{-i\psi}} \\
i(l+1)J_{l+1}(\beta_F){e^{i\psi}}-i(l-1)J_{l-1}(\beta_F){e^{-i\psi}}
\end{array}
\right),\\\nonumber
\int\limits_0^\infty d\psi
e^{-\frac{\psi}{\omega_c\tau}{\pm i\psi}}
=
\omega_c\tau
\frac{1\pm i\omega_c\tau }{1+(\omega_c\tau)^2}.\end{gathered}$$ The integral over energy can be easily taken for a degenerate electron gas, where $-\partial_{\varepsilon_p}f_0=\delta(\varepsilon_p-\mu)$. Summing up, we find Eq. from the main text.
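Both identities above are elementary enough to check numerically. The following sketch (Python with NumPy/SciPy assumed; the parameter values are arbitrary) compares the $x$-row of the $\phi$-integral and the $\psi$-integral against their closed forms:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def phi_integral(l, beta, psi):
    # LHS (x-row): int_0^{2pi} dphi cos(phi+psi) cos(phi) e^{i beta sin(phi+psi)} e^{-i l (phi+psi)}
    f = lambda t: (np.cos(t + psi) * np.cos(t)
                   * np.exp(1j * beta * np.sin(t + psi) - 1j * l * (t + psi)))
    re, _ = quad(lambda t: f(t).real, 0.0, 2.0 * np.pi)
    im, _ = quad(lambda t: f(t).imag, 0.0, 2.0 * np.pi)
    return re + 1j * im

def phi_closed_form(l, beta, psi):
    # RHS: (pi/beta) [ (l+1) J_{l+1}(beta) e^{i psi} + (l-1) J_{l-1}(beta) e^{-i psi} ]
    return (np.pi / beta) * ((l + 1) * jv(l + 1, beta) * np.exp(1j * psi)
                             + (l - 1) * jv(l - 1, beta) * np.exp(-1j * psi))

def psi_integral(x, sign):
    # int_0^inf dpsi e^{-psi/x +- i psi}, with x = omega_c * tau
    f = lambda t: np.exp(-t / x + sign * 1j * t)
    re, _ = quad(lambda t: f(t).real, 0.0, np.inf)
    im, _ = quad(lambda t: f(t).imag, 0.0, np.inf)
    return re + 1j * im

l, beta, psi, x = 2, 1.7, 0.4, 3.0
assert abs(phi_integral(l, beta, psi) - phi_closed_form(l, beta, psi)) < 1e-8
# closed form: x (1 +- i x) / (1 + x^2)
for s in (+1, -1):
    assert abs(psi_integral(x, s) - x * (1 + s * 1j * x) / (1 + x**2)) < 1e-7
```

The $\phi$-identity follows from the Jacobi–Anger expansion together with the recurrence $J_{l-1}(\beta)+J_{l+1}(\beta)=\tfrac{2l}{\beta}J_l(\beta)$; the check above confirms it for one arbitrary parameter choice.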
### Linear dispersion case {#ApBssb}
Following similar steps as for the parabolic dispersion case, integrating by parts via energy, and taking into account that now the cyclotron frequency and the electron relaxation time depend on energy, we find $$\begin{aligned}
\label{EqCur2Graph}
\left(
\begin{array}{cc}
j_x \\
j_y
\end{array}
\right)
&=&\textmd{Re}\,\frac{e^2\tilde{E}_0^*}{(2\pi\hbar)^2}
\\\nonumber
&\times&\int\limits_0^\infty \frac{d\varepsilon_\textbf{p}}{\omega_c(p)}
\int\limits_0^\infty d\psi e^{-\frac{\psi}{\omega_c(p)\tau_2(p)}}\left(1-\frac{\psi}{\omega_c(p)\tau_2(p)}\right)
\\\nonumber
&\times&\int\limits_0^{2\pi} d\phi
\left(
\begin{array}{cc}
\cos\phi \\
\sin\phi
\end{array}
\right)\cos(\phi+\psi)
f_1(p,\phi+\psi),\end{aligned}$$ where $$\begin{aligned}
\label{APPEqf1BesselGraph}
f_1(p,\phi)&=&
\frac{e^{i\beta_p\sin\phi}}{i\omega_c(p)}
\left(
-\frac{\partial f_0}{\partial\varepsilon_p}
\right)
\\\nonumber
&&\times
\sum_l
\Bigl[
\frac{e\tilde{E}_0v_0}{\beta}l
+\frac{n_1}{\tau_1(p)}\frac{\partial\mu}{\partial n}
\Bigr]
\frac{J_l(\beta_p)}{l-\alpha_p}e^{-il\phi}.\end{aligned}$$ The integration over $\phi$ is similar to the parabolic dispersion case, thus we find $$\begin{gathered}
\nonumber
\label{EqAux2graph}
\int\limits_0^{2\pi}d\phi
\cos(\phi+\psi)
\left(
\begin{array}{cc}
\cos\phi \\
\sin\phi
\end{array}
\right)
e^{i\beta_p\sin(\phi+\psi)}e^{-il(\phi+\psi)}
\\\nonumber
=
\frac{\pi}{\beta_p}
\left(
\begin{array}{cc}
(l+1)J_{l+1}(\beta_p){e^{i\psi}}+(l-1)J_{l-1}(\beta_p){e^{-i\psi}} \\
i(l+1)J_{l+1}(\beta_p){e^{i\psi}}-i(l-1)J_{l-1}(\beta_p){e^{-i\psi}}
\end{array}
\right),\end{gathered}$$ whereas for $\psi$-integrals, we use $$\begin{gathered}
\nonumber
\int\limits_0^\infty d\psi
e^{-\frac{\psi}{\omega_c(p)\tau_2(p)}{\pm i\psi}}\left(1-\frac{\psi}{\omega_c(p)\tau_2(p)}\right)
\\\nonumber
=
\frac{\mp i[\omega_c(p)\tau_2(p)]^2}{[1\mp i\omega_c(p)\tau_2(p)]^2}.\end{gathered}$$ The remaining integral over energy is much simpler in the case of the degenerate electron gas due to the relation $-\partial_{\varepsilon_p}f_0=\delta(\varepsilon_p-\mu)$, using which we find Eq. in the main text.
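This $\psi$-integral can be cross-checked term by term with the elementary formula $\int_0^\infty \psi^n e^{-b\psi}\,d\psi = n!/b^{n+1}$, here with $b = 1/[\omega_c(p)\tau_2(p)] \mp i$. A minimal numerical sketch (SciPy assumed, arbitrary parameter values):

```python
import numpy as np
from scipy.integrate import quad

def psi_integral_linear(x, sign):
    # int_0^inf dpsi e^{-psi/x +- i psi} (1 - psi/x), with x = omega_c(p) tau_2(p)
    f = lambda t: np.exp(-t / x + sign * 1j * t) * (1.0 - t / x)
    re, _ = quad(lambda t: f(t).real, 0.0, np.inf)
    im, _ = quad(lambda t: f(t).imag, 0.0, np.inf)
    return re + 1j * im

def closed_form(x, sign):
    # term-by-term: int psi^n e^{-b psi} dpsi = n!/b^{n+1}, with b = 1/x -+ i
    b = 1.0 / x - sign * 1j
    return 1.0 / b - (1.0 / x) / b**2

for x in (0.5, 1.0, 3.0):
    for s in (+1, -1):
        assert abs(psi_integral_linear(x, s) - closed_form(x, s)) < 1e-6
```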
[100]{}
T. Ando, A. B. Fowler, and F. Stern, Electronic properties of two-dimensional systems, Rev. Mod. Phys. **54**, 437 (1982).
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Electric field effect in atomically thin carbon films, Science **306**, 666 (2004).
K. S. Novoselov, D. Jiang, F. Schedin, T. J. Booth, V. V. Khotkevich, S. V. Morozov, and A. K. Geim, Two-dimensional atomic crystals, PNAS **102**, 10451 (2005).
Q. H. Wang, K. Kalantar-Zadeh, A. Kis, J. N. Coleman, and M. S. Strano, Electronics and optoelectronics of two-dimensional transition metal dichalcogenides, Nature Nanotech. **7**, 699 (2012).
K. v. Klitzing, G. Dorda, and M. Pepper, New Method for High-Accuracy Determination of the Fine-Structure Constant Based on Quantized Hall Resistance, Phys. Rev. Lett. **45**, 494 (1980).
D. C. Tsui, H. L. Stormer, and A. C. Gossard, Two-Dimensional Magnetotransport in the Extreme Quantum Limit, Phys. Rev. Lett. **48**, 1559 (1982).
R. B. Laughlin, Anomalous Quantum Hall Effect: An Incompressible Quantum Fluid with Fractionally Charged Excitations, Phys. Rev. Lett. **50**, 1395 (1983).
C. W. J. Beenakker and H. van Houten, Quantum transport in semiconductor nanostructures, Solid State Phys. **44**, 1 (1991).
G. Timp, A. M. Chang, J. E. Cunningham, T. Y. Chang, P. Mankiewich, R. Behringer, and R. E. Howard, Observation of the Aharonov-Bohm Effect for $\omega_c \tau > 1$, Phys. Rev. Lett. **58**, 2814 (1987).
R. R. Gerhardts, D. Weiss, and K. V. Klitzing, Novel Magnetoresistance Oscillations in a Periodically Modulated Two-dimensional Electron Gas, Phys. Rev. Lett. **62**, 1173 (1989).
E. Brown, Bloch electrons in a uniform magnetic field, Phys. Rev. **133**, A1038 (1964).
J. Zak, Magnetic translation group, Phys. Rev. **134**, A1602 (1964).
D. R. Hofstadter, Energy levels and wave functions of Bloch electrons in rational and irrational magnetic fields, Phys. Rev. B **14**, 2239 (1976).
L. A. Ponomarenko, R. V. Gorbachev, G. L. Yu, D. D. Elias, R. Jalil, A. A. Patel, A. Mishchenko, A. S. Mayorov, C. R. Woods, J. R. Wallbank, M. Mucha-Kruczynski, B.A. Piot, M. Potemski, I. V. Grigorieva, K. S. Novoselov, F. Guinea, V. Falko, and A. K. Geim, Cloning of Dirac fermions in graphene superlattices, Nature (London) **497**, 594 (2013).
C. R. Dean, L. Wang, P. Maher, C. Forsythe, F. Ghahari, Y. Gao, J. Katoch, M. Ishigami, P. Moon, M. Koshino, T. Taniguchi, K. Watanabe, K. L. Shepard, J. Hone, and P. Kim, Hofstadter’s butterfly and the fractal quantum Hall effect in moiré superlattices, Nature (London) **497**, 598 (2013).
B. Hunt, J. D. Sanchez-Yamagishi, A. F. Young, M. Yankowitz, B. J. LeRoy, K. Watanabe, T. Taniguchi, P. Moon, M. Koshino, P. Jarillo-Herrero, and R. C. Ashoori, Massive Dirac Fermions and Hofstadter Butterfly in a van der Waals Heterostructure, Science **340**, 427 (2013).
R. Krishna Kumar, X. Chen, G. H. Auton, A. Mishchenko, D. A. Bandurin, S. V. Morozov, Y. Cao, E. Khestanova, M. Ben Shalom, A. V. Kretinin, K. S. Novoselov, L. Eaves, I. V. Grigorieva, L. A. Ponomarenko, V. I. Fal’ko, and A. K. Geim, High-temperature quantum oscillations caused by recurring Bloch states in graphene superlattices, Science **357**, 181 (2017).
V. L. Ginzburg, Usp. Fiz. Nauk. **69**, 537 (1959).
M. H. Cohen, M. J. Harrison, and W. A. Harrison, Phys. Rev. **117**, 937 (1960).
A. V. Chaplik and D. Heitmann, J. Phys. C: Solid State Phys. **18**, 3357 (1985).
T. Ando, Phys. Rev. Lett. **36**, 1383 (1976).
E. G. Mohr and D. Heitmann, J. Phys. C: Solid State Phys. **15**, L753 (1982).
C. Campbell, [*Surface Acoustic Wave Devices for Mobile and Wireless Communications*]{} (Academic Press Inc, San Diego, CA, USA, 1998).
R. H. Parmenter, The Acousto-Electric effect, Phys. Rev. **89**, 990 (1953).
C. C. W. Ruppel and T. A. Fjeldy, Advances in surface acoustic wave technology, systems and applications (World Scientific Publishing Co. Pte. Ltd., Singapore, 2001).
S. H. Zhang and W. Xu, Absorption of surface acoustic waves by graphene, AIP Advances **1**, 022146 (2011).
V. Miseikis, J. E. Cunningham, K. Saeed, R. O’Rorke, and A. G. Davies, Acoustically induced current flow in graphene, Appl. Phys. Lett. **100**, 133105 (2012).
A. V. Kalameitsev, V. M. Kovalev, and I. G. Savenko, Valley Acoustoelectric Effect, Phys. Rev. Lett. **122**, 256801 (2019).
V. Parente, A. Tagliacozzo, F. von Oppen, and F. Guinea, Electron-phonon interaction on the surface of a three-dimensional topological insulator, Phys. Rev. B **88**, 075432 (2013).
L. L. Li and W. Xu, Absorption of surface acoustic waves by topological insulator thin films, Appl. Phys. Lett. **105**, 063503 (2014).
E. G. Batyev, V. M. Kovalev, A. V. Chaplik, Response of a Bose-Einstein condensate of dipole excitons to static and dynamic perturbations, JETP Lett. **99**(9), 540 (2014).
V. M. Kovalev and A. V. Chaplik, Effect of exciton dragging by a surface acoustic wave, JETP Lett. **101**(3), 177 (2015).
V. M. Kovalev and A. V. Chaplik, Acousto-exciton interaction in a gas of 2D indirect dipolar excitons in the presence of disorder, JETP **122**(3), 499 (2016).
A. Wixforth, J. Scriba, M. Wassermeier, J. P. Kotthaus, G. Weimann, and W. Schlapp, Surface acoustic waves on GaAs/Al$_x$Ga$_{1-x}$As heterostructures, Phys. Rev. B **40**, 7874 (1989).
R. L. Willett, M. A. Paalanen, R.R. Ruel, K.W. West, L.N. Pfeiffer, and D.J. Bishop, Anomalous Sound Propagation at $\nu=1/2$ in a 2D Electron Gas: Observation of a Spontaneously Broken Translational Symmetry?, Phys. Rev. Lett. **65**, 112 (1990).
D. J. Kreft and R. H. Blick, [*Surface Acoustic Waves and Nano-Electromechanical Systems, Acoustic Waves—From Microdevices to Helioseismology*]{}, edited by M.G. Beghi (InTech, 2011).
D. J. Kreft, L. G. Mourokh, H. Shin, M. Bichler, W. Wegscheider, and R. H. Blick, Giant acoustoelectric current in suspended quantum point contacts, Phys. Rev. B **94**, 235305 (2016).
Y. Levinson, O. Entin-Wohlmann, A. D. Mirlin, and P. Wolfle, Weiss oscillations in surface-acoustic-wave propagation, Phys. Rev. B **58**, 7113 (1998).
C. Eckl, Yu. A. Kosevich, and A. P. Mayer, Surface acoustic waves and magnetotransport in an embedded modulated two-dimensional electron gas, Phys. Rev. B **61**, 16708 (2000).
J. P. Robinson and V. I. Fal’ko, Commensurability oscillations in the surface-acoustic-wave-induced acoustoelectric effect in a two-dimensional electron gas, Phys. Rev. B **71**, 241301(R) (2005).
M. M. Glazov and S. D. Ganichev, High frequency electric field induced nonlinear effects in graphene, Phys. Rep. **535**, 101 (2014).
Zh. Sun, D. N. Basov, and M. M. Fogler, Third-order optical conductivity of an electron fluid, Phys. Rev. B **97**, 075432 (2018).
V. M. Kovalev and I. G. Savenko, Photogalvanic currents in dynamically gapped transition metal dichalcogenide monolayers, Phys. Rev. B **99**, 075405 (2019).
A. V. Nalitov, L. E. Golub, and E. L. Ivchenko, Ratchet effects in two-dimensional systems with a lateral periodic potential, Phys. Rev. B **86**, 115301 (2012).
A. M. Witowski, M. Orlita, R. Stȩpniewski, A. Wysmołek, J. M. Baranowski, W. Strupiński, C. Faugeras, G. Martinez, and M. Potemski, Quasiclassical cyclotron resonance of Dirac fermions in highly doped graphene, Phys. Rev. B **82**, 165305 (2010).
M. Orlita, I. Crassee, C. Faugeras, A. B. Kuzmenko, F. Fromm, M. Ostler, Th. Seyller, G. Martinez, M. Polini, and M. Potemski, Classical to quantum crossover of the cyclotron resonance in graphene: A study of the strength of intraband absorption, New J. Phys. **14**, 095008 (2012).
J.-H. Chen, C. Jang, S. Xiao, M. Ishigami, and M. S. Fuhrer, Intrinsic and extrinsic performance limits of graphene devices on SiO$_2$, Nature Nanotech. **3**, 206 (2008).
A. Akturka and N. Goldsman, Electron transport and full-band electron-phonon interactions in graphene, J. Appl. Phys. **103**, 053702 (2008).
J. Karch, P. Olbrich, M. Schmalzbauer, C. Brinsteiner, U. Wurstbauer, M. M. Glazov, S. A. Tarasenko, E. L. Ivchenko, D. Weiss, J. Eroms, and S. D. Ganichev, Photon helicity driven electric currents in graphene, Proceedings of the 35th International Conference on Infrared, Millimeter, and Terahertz Waves (2010).
E. M. Lifshitz and L. P. Pitaevskii, *Landau and Lifshitz Course of Theoretical Physics. Volume X. Physical Kinetics.* (Translated from Nauka, Moscow, 1979). We used pages 220-222.
M. V. Krasheninnikov and A. V. Chaplik, Plasma-acoustic waves on the surface of a piezoelectric crystal, JETP **48**, 960 (1978).
C. Kittel, *Quantum Theory of Solids* (Wiley, New York, 2004).
---
abstract: 'We give a short overview of the proof of Shelah’s eventual categoricity conjecture in universal classes with amalgamation [@ap-universal-v9].'
address: 'Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA'
author:
- Sebastien Vasey
bibliography:
- 'uc-categ-overview.bib'
date: |
\
AMS 2010 Subject Classification: Primary 03C48. Secondary: 03C45, 03C52, 03C55, 03C75, 03E55.
title: 'The lazy model theoretician’s guide to [S]{}helah’s eventual categoricity conjecture in universal classes'
---
Introduction
============
We sketch a proof of:
\[main-thm\] Let ${K}$ be a universal class with amalgamation. If ${K}$ is categorical in[^1] *some* $\lambda > H_2$, then ${K}$ is categorical in *all* $\lambda' \ge H_2$.
The reader should see the introduction of [@ap-universal-v9] for motivation and history. Note that (as stated there) the amalgamation hypothesis can be removed assuming categoricity in cardinals of arbitrarily high cofinality. However this relies on hard arguments of Shelah [@shelahaecbook Chapter IV], so we do not discuss it. There are plans for a sequel where the amalgamation hypothesis will be removed under categoricity in a single cardinal of arbitrary cofinality (earlier versions actually claimed it but the argument contained a mistake).
Note that this is not a self-contained argument; we simply attempt to outline the proof and quote extensively from elsewhere. For another exposition, see the upcoming [@bv-survey].
We attempt to use as few prerequisites as possible and make what we use explicit. We do not discuss generalizations to tame AECs with primes [@categ-primes-v3], although we end up using part of the proof there.
We assume familiarity with a basic text on AECs such as [@baldwinbook09] or the upcoming [@grossbergbook]. We also assume the reader is familiar with the definition of a good ${\mathcal{F}}$-frame (see [@shelahaecbook Chapter II] for the original definition of a good $\lambda$-frame and [@ss-tame-toappear-v3 Definition 2.21] for good ${\mathcal{F}}$-frames), and the definition of superstability (implicit in [@shvi635], but we use the definition in [@indep-aec-v5 Definition 10.1]). All the good frames we will use are *type-full*, i.e. their basic types are the nonalgebraic types, and we will omit the “type-full”.
This note was written while working on a Ph.D. thesis under the direction of Rami Grossberg at Carnegie Mellon University and I would like to thank Professor Grossberg for his guidance and assistance in my research in general and in this work specifically. I thank John Baldwin for early feedback on this note.
The proof
=========
The argument depends on [@sh394], on the construction of a good frame and related results in [@ss-tame-toappear-v3], on Boney’s theorem on extending good frames using tameness [@ext-frame-jml] (the subsequent paper [@tame-frames-revisited-v4] is not needed here), and on the Grossberg-VanDieren categoricity transfer [@tamenesstwo]. The argument also depends on some results about unidimensionality in III.2 of [@shelahaecbook] (these results have short full proofs, and have appeared in other forms elsewhere, most notably in [@tamenesstwo; @tamenessthree]).
There is a dependency on the Shelah-Villaveces theorem ([@shvi635 Theorem 2.2.1]), which can be removed in case one is willing to assume that ${\text{cf} (\lambda)} > {\text{LS}}({K})$. This is reasonable if one is willing to assume that $K$ is categorical in unboundedly many cardinals: then by amalgamation, the categoricity spectrum will contain a club, hence cardinals of arbitrarily high cofinality.
We proceed in several steps.
1. Without loss of generality, ${K}$ has joint embedding and no maximal models.
\[Why? Let us define a relation $\sim$ on ${K}$ by $M \sim N$ if and only if $M$ and $N$ embed into a common extension. Using amalgamation, one can see that $\sim$ is an equivalence relation. Now the equivalence classes ${\langle {K}_i : i \in I \rangle}$ of $\sim$ form disjoint AECs with amalgamation and joint embedding, and by the categoricity assumption (recalling that the Hanf number for existence is bounded by $H_1$) there is a unique $i \in I$ such that ${K}_i$ has arbitrarily large models. Moreover $({K}_i)_{\ge H_1} = {K}_{\ge H_1}$ so it is enough to work inside ${K}_i$.\]
2. ${K}$ is ${\text{LS}}({K})$-superstable.
\[Why? By [@shvi635 Theorem 2.2.1], or really the variation using amalgamation stated explicitly in [@gv-superstability-v2 Theorem 6.3]. Alternatively, if one is willing to assume that ${\text{cf} (\lambda)} > {\text{LS}}({K})$, one can directly apply [@sh394 Lemma 6.3].\]
3. ${K}$ is $(<\aleph_0)$-tame.
\[Why? See [@ap-universal-v9 Section 3][^2] (this does not use the categoricity hypothesis).\]
4. ${K}$ is stable in $\lambda$.
\[Why? By [@ss-tame-toappear-v3 Theorem 5.6], ${\text{LS}}({K})$-superstability and ${\text{LS}}({K})$-tameness imply stability everywhere above ${\text{LS}}({K})$.\]
5. \[sat-step\] The model of size $\lambda$ is saturated.
\[Why? Use stability to build a $\mu^+$-saturated model of size $\lambda$ for each $\mu < \lambda$. Now apply categoricity.\]
6. ${K}$ is categorical in $H_2$.
\[Why? By the proof of [@sh394 II.1.6], or see [@baldwinbook09 14.8].\]
7. ${K}$ has a good $H_2$-frame.
\[Why? By [@ss-tame-toappear-v3 Theorem 7.3] which tells us how to construct a good frame at a categoricity cardinal assuming tameness and superstability below it.\]
8. For $M \in {K}_{H_2}$, $p \in {\text{gS}}(M)$, let ${K}_{\neg^\ast p}$ be defined as in [@ap-universal-v9 Definition 5.7]: roughly, it is the class of $N$ so that $p$ has a unique extension to ${\text{gS}}(N)$ (so in particular $p$ is omitted in $N$), but we add constant symbols for $M$ to the language to make it closed under isomorphisms. Then ${K}_{\neg^\ast p}$ is a universal class.
\[Why? That it is closed under substructure is clear. That it is closed under unions of chains is because universal classes are $(<\aleph_0)$-tame, so if a type has two distinct extensions over the union of a chain, it must have two distinct extensions over an element of the chain. Here is an alternate, more general, argument: ${K}_{H_2}$ is $\aleph_0$-local (by the existence of the good frame), so using tameness it is not hard to see that ${K}_{\ge H_2}$ is $\aleph_0$-local. Now proceed as before.\]
9. If $K$ is not categorical in $H_2^+$, then there exists $M \in {K}_{H_2}$ and $p \in {\text{gS}}(M)$ so that ${K}_{\neg^\ast p}$ has a good $H_2$-frame.
\[Why? See [@categ-primes-v3 Theorem 2.15][^3]: it shows that if $K_{H_2}$ is weakly unidimensional (a property that Shelah introduces in III.2 of [@shelahaecbook] and shows is equivalent to categoricity in $H_2^+$), then the good $H_2$-frame that $K$ has, restricted to ${K}_{\neg^\ast p}$ (for a suitable $p$) is a good $H_2$-frame. The definition of weak unidimensionality is essentially the negation of the fact that there exists two types $p \perp q$ (for a notion of orthogonality defined using prime models).\]
10. If $K$ is not categorical in $H_2^+$, $K_{\neg^\ast p}$ above has arbitrarily large models.
\[Why? By Theorem \[step-3\] below (recalling that ${K}_{\neg^\ast p}$ is a universal class), ${K}_{\neg^\ast p}$ has a good $(\ge H_2)$-frame. Part of the definition of such a frame requires existence of a model in every cardinal $\mu \ge H_2$.\]
11. If $K$ is not categorical in $H_2^+$, the model of size $\lambda$ is not saturated. This contradicts (\[sat-step\]) above, therefore $K$ is categorical in $H_2^+$.
\[Why? Take $M \in {K}_{\neg^\ast p}$ of size $\lambda$ (exists by the previous step). Then $M$ omits $p$ and the domain of $p$ has size $H_2 < \lambda$.\]
12. $K$ is categorical in all $\lambda' \ge H_2$.
\[Why? We know that $K$ is categorical in $H_2$ and $H_2^+$, so apply the upward transfer of Grossberg and VanDieren [@tamenesstwo Theorem 0.1].\]
To complete the proof, we need the following:
\[step-3\] Let $K$ be a universal class. Let $\lambda \ge {\text{LS}}({K})$. If ${K}$ has a good $\lambda$-frame, then ${K}$ has a good $(\ge \lambda)$-frame.
1. $K$ is $\lambda$-tame for types of length two.
\[Why? See [@ap-universal-v9 Section 3].\]
2. $K$ has weak amalgamation: if[^4] ${\text{gtp}}(a_1 / M; N_1) = {\text{gtp}}(a_2 / M; N_2)$, there exists $N_1' \lea N_1$ containing $a_1$ and $M$ and $N \gea N_1'$, $f: N_2 \xrightarrow[M]{} N$ so that $f (a_2) = a_1$.
\[Why? By the isomorphism characterization of Galois types in AECs which admit intersections, see [@non-locality Lemma 2.6] or [@ap-universal-v9 Proposition 2.17]. More explicitly, set $N_1' := {\text{cl}}^{N_1} (a_1 M)$, where ${\text{cl}}^{N_1}$ denotes closure under the functions of $N_1$. Then chase the definition of equality of Galois types.\]
3. $K$ has amalgamation.
\[Why? By [@ap-universal-v9 Theorem 4.15].\]
4. $K$ has a good $(\ge \lambda)$-frame.
\[Why? By Boney’s upward frame transfer [@ext-frame-jml] which tells us that amalgamation, $\lambda$-tameness for types of length two, and a good $\lambda$-frame imply that the frame can be extended to a good $(\ge \lambda)$-frame.\]
[^1]: Here and below, we write ${h (\theta)} := \beth_{(2^{\theta})^+}$. We see universal classes as AECs so that for $K$ a universal class, ${\text{LS}}({K}) = |L ({K})| + \aleph_0$. For ${K}$ a fixed AEC, we write $H_1 := {h ({\text{LS}}({K}))}$ and $H_2 := {h (H_1)}$.
[^2]: The main idea there is due to Will Boney, see [@tameness-groups].
[^3]: The original argument in [@ap-universal-v9] is harder, as it requires building a global independence relation.
[^4]: Since we do not assume amalgamation, Galois types are defined using the transitive closure of atomic equivalence, see e.g. [@shelahaecbook Definition II.1.9].
---
abstract: 'We investigate stress-energy tensors constructed from the delta function on a worldline. We concentrate on the quadrupole, which has up to two partial or covariant derivatives of the delta function. Unlike the dipole, we show that the quadrupole has 20 free components which are not determined by the properties of the stress-energy tensor. These need to be derived from an underlying model, and we give an example modelling a divergence-free dust. We show that the components corresponding to the partial-derivative representation of the quadrupole have a gauge-like freedom. We give the change of coordinates formula, which involves a second derivative and two integrals. We also show how to define the quadrupole without reference to a coordinate system or a metric. For the representation using covariant derivatives, we show how to split a quadrupole into a pure monopole, pure dipole and pure quadrupole in a coordinate-free way.'
author:
- 'Jonathan Gratus$^{1,2,3,*}$, Paolo Pinto$^{1,2,4}$, Spyridon Talaganis$^{1,5}$'
bibliography:
- 'bibliography.bib'
title: 'The Distributional Stress-Energy Quadrupole'
---
$^1$ Physics department, Lancaster University, Lancaster LA1 4YB,\
$^2$ The Cockcroft Institute Daresbury Laboratory, Daresbury, Warrington WA4 4AD UK.\
$^3$ `j.gratus@lancaster.ac.uk`\
$^4$ `p.pinto@lancaster.ac.uk`\
$^5$ `s.talaganis@lancaster.ac.uk`\
$^*$ Corresponding author.
Introduction {#ch_Intro}
============
With the recent confirmed observations of gravitational waves, it is natural to look at possible sources of gravity, in particular stress-energy tensors with which we can model compact systems, i.e. systems which are small with respect to the distance to an observer. Gravitational wave astronomy will give rise to major developments in gravitational physics and astrophysics. The LIGO and VIRGO detectors have observed relativistic gravitational two-body systems. The existing network of gravitational wave interferometers is expanding both on Earth (for instance, via KAGRA and LIGO-India) and in space. Compact binary systems are important sources of gravitational waves. Two-body systems such as pairs of black holes or neutron stars can emit vast amounts of energy in the form of gravitational waves as their orbits decay and the bodies coalesce.
In this article we model the compact source, using a distribution, in which all the mass is concentrated in one point in space and hence a worldline in spacetime, but has an extended structure encoded as a multipole expansion. The zeroth order is the monopole, followed by the dipole and then the quadrupole. Here we consider in detail this quadrupole order. It is well known [@sathyaprakash2009physics] that gravitational radiation will be dominated by the quadrupole moment.
[Figure \[fig\_intro\_grav\_quad\_free\]: spacetime diagram, with $x$ on the horizontal axis and $t$ on the vertical axis, of a quadrupole source whose spatial extent alternately expands and contracts along its worldline.]
When considering sources of gravitational waves, there are multiple approaches. For simple orbiting masses, where relativistic effects can be ignored, one can find analytic solutions. By contrast, the final stages of coalescing black holes require detailed numerical simulations. Once the stress-energy tensor is constructed, one can evaluate the corresponding perturbation of the metric and hence the predicted gravitational wave. Our approach is different. In this article we examine the dynamics of quadrupole sources. This has the major advantage that the dynamics are encoded as ODEs for the components, as opposed to the coupled nonlinear PDEs which one is required to solve to model a general relativistic source. The only constraints we put on the source are that it obeys the rules of a stress-energy tensor, namely symmetry of its indices and the divergenceless condition. For the monopole and the dipole it is well known that these conditions constrain the dynamics so much that they prescribe the ODEs: the geodesic equation for the monopole and the Mathisson-Papapetrou-Tulczyjew-Dixon equations for the dipole. One may therefore ask if these two conditions also constrain the quadrupole sufficiently to prescribe the ODEs for the components. In this article we show that, whereas 40 of the components are prescribed by ODEs, a further 20 are arbitrary. For example, a quadrupole can expand and contract as depicted in figure \[fig\_intro\_grav\_quad\_free\]. Thus by itself this approach cannot completely prescribe the dynamics of a quadrupole, and one must add additional ODEs, or algebraic equations, which one can consider to be constitutive relations for the quadrupole. These should arise from an underlying model of the source: coalescing black holes will have different constitutive relations from a rotating “rigid” body held together by non-gravitational forces, e.g. electromagnetic and quantum forces. Once the constitutive relations are decided on, the ODEs can be solved and compared to experiment.
Approximating a distribution of matter with an object at a single point is a well established method in many branches of physics. Such approximations are valid if the size of the system is small compared to other distances involved. For example, when considering coalescing black holes as a source of gravitational waves, the distance between the black holes is orders of magnitude smaller than their distance to Earth. However, there may be other objects in nature for which a multipole expansion is a good model. For example, it is known that atomic nuclei and molecules have higher order moments. Although these objects are fundamentally quantum in nature, they may be modelled by classical point particles with multipole structure. Knowing the dynamics of multipoles may also shed light on the problem of radiation reaction, in the case of radiation reaction to the dipole or quadrupole dynamics.
There are many important articles which consider multipole expansions. These date back to at least the 1950s, when Tulczyjew [@tulczyjew1959motion] considered a multipole expansion to derive the Mathisson-Papapetrou-Tulczyjew-Dixon equations for the dipole. Then in the 1960s and 1970s Dixon [@dixon1964covariant; @DixonII; @DixonIII] and Ellis [@Ellis:1975rp] considered both charge and mass distributions using two different general formalisms, which we compare here, denoting them the Dixon and Ellis representations.
Recently Steinhoff and Puetzfeld [@steinhoff2010multipolar; @steinhoff2012influence; @Steinhoff:2014kwa] calculated the dynamic equations for the components of the quadrupole. In addition they considered the monopole-dipole and monopole-dipole-quadrupole systems. In all cases the worldline of the multipole affects the dynamics of the components. The authors also considered whether and how the dynamics of the worldline is affected by the higher order moments. They conclude that one would need supplementary conditions in order to determine the worldline dynamics. We note that these supplementary conditions are distinct from the constitutive relations described here for the quadrupole. In this article, excluding the section on the monopole, the worldline is arbitrary but prescribed. Thus at the dipole order no supplementary conditions are required. However, as stated, 20 constitutive relations are required at the quadrupole order.
Let $\Mman$ be spacetime with metric $g_{\iMa\iMb}$ and the Levi-Civita[^1] connection $\nabla_{\iMa}$ with Christoffel symbol $\Gamma^{\iMa}_{\iMb\iMc}$. Here Greek indices run $\iMa,\iMb=0,1,2,3$ and Latin indices $\iSa,\iSb=1,2,3$. Let $C:\Interval\to\Mman$ where $\Interval\subset\Real$ be the worldline of the source[^2] with components $C^{\iMa}(\sigma)$. At this point we do not assume that $\sigma$ is proper time. Here we consider stress-energy tensors $T^{\iMa\iMb}$ which are nonzero only on the worldline $C^{\iMa}(\sigma)$, where they have Dirac–$\delta$ like properties. Such stress-energy tensors are called *distributional*.
Since general relativity is a nonlinear theory, one cannot simply apply the theory of distributions to it. It is not meaningful to write Einstein’s equations $$\begin{aligned}
R_{\iMa\iMb}-\tfrac12 g_{\iMa\iMb} R=8\pi\,T_{\iMa \iMb}
\label{Intro_Ein_eqn}\end{aligned}$$ where the right hand side is a distribution. This contrasts with electromagnetism, which since it is a linear theory, one often uses distributional sources. For example an arbitrary moving point charge which gives rise to the Liénard-Wiechard fields.
There are various interpretations of (\[Intro\_Ein\_eqn\]) which one can try when the right hand side is distributional. One approach is to extend the theory of distributions to include products, the most successful being the Colombeau algebra [@Steinbauer_2006].
Another approach is to consider $T_{\iMa\iMb}$ as a source of linearised gravity. Perturbatively expanding the gravitational metric, $g_{\iMa \iMb}$, about a background $\bar{g}_{\iMa \iMb}$, $g_{\iMa \iMb}
=
\bar{g}_{\iMa \iMb}+\epsilon\,h_{\iMa \iMb}^{(1)}+\cdots
$ where $\epsilon\ll1$ is the perturbation parameter, and plugging the expansion into the Einstein equation (\[Intro\_Ein\_eqn\]) one has $$\begin{aligned}
G_{\iMa \iMb}
=
\bar{G}_{\iMa \iMb}
+
\epsilon G^{(1)}_{\iMa \iMb}
+
\ldots
\qquadand
T_{\iMa \iMb}
=
\bar{T}_{\iMa \iMb}
+
\epsilon T^{(1)}_{\iMa \iMb}
+
\ldots
\label{Intro_G_T_expan}\end{aligned}$$ Hence the background metric $\bar{g}_{\iMa \iMb}$ satisfies $\bar{G}_{\iMa \iMb}=8\pi\,\bar{T}_{\iMa \iMb}$. The linearised equations are then given by $$\begin{aligned}
G^{(1)}_{\iMa \iMb} = 8\pi\,T^{(1)}_{\iMa \iMb}
\label{Intro_lin_Ein}\end{aligned}$$ Setting $\mathcal{H}_{\iMa \iMb}^{(1)}=h_{\iMa
\iMb}^{(1)}-\frac{1}{2}\bar{g}_{\iMa \iMb}h^{(1)}$ and using the Lorenz gauge ($\bar{\nabla}^{\iMa}\mathcal{H}_{\iMa \iMb}^{(1)}=0$), (\[Intro\_lin\_Ein\]) becomes $$\begin{aligned}
\bar{\Box} \mathcal{H}_{\iMa \iMb}^{(1)}=-16\pi T_{\iMa \iMb}^{(1)}.
\label{Intro_lin_Ein_res2}\end{aligned}$$ where $\bar{\Box}=\bar{g}^{\iMa\iMb}\bar{\nabla}_{\iMa}\bar{\nabla}_{\iMb}$ is the covariant d’Alembertian operator, constructed purely out of the background spacetime metric $\bar{g}_{\iMa \iMb}$. In the case where the background $\bar{g}_{\iMa \iMb}$ is the Minkowski metric, $\bar{\Box}=\partial_\iMa\partial^\iMa$ and we can give $\mathcal{H}^{(1)}_{\iMa \iMb}$ in terms of an integral over the retarded Green's function. $$\begin{aligned}
\mathcal{H}_{\iMa \iMb}^{(1)}(t,\vec{x})
=
4 \int
\frac{T_{\iMa \iMb}^{(1)}
(t-\lvert \vec{x}-\vec{x}^{'}
\rvert,\vec{x}^{'})}{\lvert \vec{x}-\vec{x}^{'} \rvert}
d^{3} \vec{x}^{'}
\label{Intro_lin_Ein_ret_Green}\end{aligned}$$ One should be careful as there is clearly a contradiction between the statement that the perturbation to the background stress-energy tensor is small, and the statement that it is distributional, and therefore infinite.
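As a sanity check on (\[Intro\_lin\_Ein\_ret\_Green\]), note that for a static source the retarded integral reduces to a Newtonian-type potential. The following numerical sketch (our own illustration, not part of the analysis above) models $T^{(1)}_{00}$ as a narrow Gaussian ball of total mass $M$, with all numerical values assumed, and checks that well outside the ball the integral approaches $4M/r$:

```python
import numpy as np

# Model T^(1)_00 as a narrow Gaussian ball of total "mass" M (assumed profile)
M, s = 2.0, 0.05            # total mass and Gaussian width
n, L = 64, 0.5              # grid resolution and half-width of the source box
ax = np.linspace(-L, L, n)
dx = ax[1] - ax[0]
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
rho = np.exp(-(X**2 + Y**2 + Z**2) / (2 * s**2))
rho *= M / (rho.sum() * dx**3)          # normalise so the integral of rho is M

# Discretised static retarded integral: 4 * sum of rho(x') / |x - x'| d^3x'
x_obs = np.array([2.0, 0.0, 0.0])       # observation point well outside the source
dist = np.sqrt((X - x_obs[0])**2 + (Y - x_obs[1])**2 + (Z - x_obs[2])**2)
H00 = 4 * np.sum(rho / dist) * dx**3
print(H00, 4 * M / np.linalg.norm(x_obs))   # the two values agree closely
```

Because the assumed profile is spherically symmetric, the dipole and quadrupole corrections cancel, so the agreement is limited only by the grid resolution.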
In this article we are concerned *only* with the structure of the distributional stress-energy, which we write as $T^{\iMa\iMb}$, and avoid questions of how it should be applied. Since $T^{\iMa\iMb}$ is a stress-energy tensor it has the symmetry $$\begin{aligned}
T^{\iMa\iMb}=T^{\iMb\iMa}
\label{Intro_Tab_Sym}\end{aligned}$$ and is divergenceless, also known as covariantly conserved $$\begin{aligned}
\nabla_{\iMa} T^{\iMa\iMb}=0
\label{Intro_Tab_Div_zero}\end{aligned}$$ Observe that, because $T^{\iMa\iMb}$ is a tensor density, (\[Intro\_Tab\_Div\_zero\]) becomes $$\begin{aligned}
0 &= \nabla_{\iMa} T^{\iMa\iMb}
= \partial_\iMa T^{\iMa\iMb} + \Gamma^{\iMb}_{\iMa\iMc} T^{\iMa\iMc}
\label{Intro_Tab_Div_zero_density}\end{aligned}$$ where $\Gamma^{\iMb}_{\iMa\iMc}$ are the Christoffel symbols.
There are several ways of representing a multipole. However we consider multipoles to be distributions which are integrated with a symmetric test tensor $\TtwoTen_{\iMa\iMb}=\TtwoTen_{\iMb\iMa}$, so that $$\begin{aligned}
\int_\Mman T^{\iMa\iMb}\,\TtwoTen_{\iMa\iMb}\, d^4x
\qquad
\text{is a real number}
\label{Intro_distribution}\end{aligned}$$ These can all be written as an integral over the worldline with a number of derivatives of the Dirac δ-function. I.e. a multipole of order $k$ is $$\begin{aligned}
T^{\iMa\iMb}
=
\sum_{r=0}^k \int_\Interval \zeta^{\iMa\iMb\ldots}(\sigma)
\
{\cal D}^{(r)}_{\ldots} \ \deltaFour\big(x-C(\sigma)\big)
\
d\sigma
\label{Intro_general_k_pole}\end{aligned}$$ where there are $r$ additional indices on $\zeta^{\iMa\iMb\ldots}$ and ${\cal D}^{(r)}_{\ldots}$. The subscript dots on ${\cal D}^{(r)}_{\ldots}$ contract with the superscript dots on $\zeta^{\iMa\iMb\ldots}$. Here ${\cal D}^{(r)}_{\ldots}$ represents $r$ derivatives of the δ-function. The familiar cases are the monopole when $k=0$, the dipole when $k=1$ and the quadrupole when $k=2$. As can be seen from (\[Intro\_general\_k\_pole\]) the general dipole contains the monopole term and the general quadrupole contains both the monopole and dipole terms. In general, it is not possible to extract the monopole and dipole terms from the quadrupole without additional structure, such as a preferred vector field or a coordinate system. For the monopole, (\[Intro\_Tab\_Sym\]) and (\[Intro\_Tab\_Div\_zero\]) lead to the geodesic equation. By contrast, for the dipole and quadrupole there is no need to assume the worldline $C$ is a geodesic. Therefore, unless otherwise stated, we present all the results for an arbitrary but prescribed worldline.
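To make the action of ${\cal D}^{(r)}_{\ldots}$ concrete, the following one-dimensional sympy sketch (our own illustration) shows that a derivative of the Dirac δ, integrated against a test function, picks out a signed derivative of the test function at the worldline point; this is the mechanism behind (\[Intro\_general\_k\_pole\]):

```python
import sympy as sp

# 1D analogue: integrating derivatives of DiracDelta against the test
# function phi(x) = x**3 picks out signed derivatives of phi at x = c.
x, c = sp.symbols('x c', real=True)
phi = x**3

k1 = sp.integrate(sp.DiracDelta(x - c, 1) * phi, (x, -sp.oo, sp.oo))
k2 = sp.integrate(sp.DiracDelta(x - c, 2) * phi, (x, -sp.oo, sp.oo))

print(sp.simplify(k1))   # -3*c**2  =  -phi'(c)
print(sp.simplify(k2))   #  6*c     =  +phi''(c)
```

In general $\int \delta^{(r)}(x-c)\,\varphi(x)\,dx = (-1)^r \varphi^{(r)}(c)$, which is why the sign $(-1)^k$ appears when a multipole acts on a test tensor.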
There are two main representations of multipoles. One uses partial derivatives, which we call the Ellis representation. The other uses the covariant derivative and will be called the Dixon representation. Both have their advantages and disadvantages and these are outlined in section \[ch\_EllisDixon\] below. The Ellis formulation is greatly simplified when using a coordinate system $(\sigma,z^1,z^2,z^3)$ which is adapted to the worldline, i.e. where $$\begin{aligned}
C^0(\sigma)=\sigma
\qquadand
C^\iSa(\sigma)=0
\label{Intro_adapt_coords}\end{aligned}$$ for $\iSa=1,2,3$. In this coordinate system the integral in (\[Intro\_general\_k\_pole\]) can be removed. Observe that (\[Intro\_adapt\_coords\]) implies $\Cdot^0=1$ and $\Cdot^\iSa=0$.
The monopole and dipole have been extensively studied in the literature [@Han:2016cdh; @Kopeikin:2018zro; @Blanchet:2006sc]. In this article we concentrate mainly on the quadrupole. This is particularly interesting: not only is it the natural source of gravitational waves, but it has several unusual properties not seen in the case of the monopole or dipole. These include:
- The quadrupole contains free components.
- In the Ellis representation, the components $\zeta^{\iMa\iMb\iMc\iMd}$ do not transform as tensors but instead involve second derivatives and double integrals.
- There is no concept of mass. Instead one can only talk about the energy of a quadrupole, and only really in the case where there is a timelike Killing symmetry.
The $\zeta^{\iMa\iMb\ldots}=\zeta^{\iMa\iMb\ldots}(\sigma)$ are called the components of $T^{\iMa\iMb}$ and are functions only of the position on the worldline $C$. Clearly from (\[Intro\_Tab\_Sym\]) they have the symmetry $$\begin{aligned}
\zeta^{\iMa\iMb\ldots} = \zeta^{\iMb\iMa\ldots}
\label{Intro_zeta_sym}\end{aligned}$$ Depending on the representation, we may also choose to impose additional symmetries for uniqueness. We then apply the divergenceless condition (\[Intro\_Tab\_Div\_zero\]) to establish further conditions on the $\zeta^{\iMa\iMb\ldots}$. We can place the components $\zeta^{\iMa\iMb\ldots}$ into three categories:
- Some components are algebraically related to other components and can therefore be removed.
- Some components are determined by a first order ODE. These are a result of the differential equation (\[Intro\_Tab\_Div\_zero\]). In order to specify these components it is only necessary to specify their initial value at some point along the worldline.
- This leaves the components we call free. These are not constrained by (\[Intro\_Tab\_Sym\]) and (\[Intro\_Tab\_Div\_zero\]) and are allowed to take on any value. These free components can however influence the ODE components.
In order to completely specify the dynamics of a quadrupole, these free components need to be replaced by constitutive equations. The choice of constitutive equations depends on a choice of model for the material: for example, the quadrupole may model an elastic material, a fluid with or without pressure, or something else. In section \[ch\_Dust\] we consider the dust stress-energy tensor and use it to suggest corresponding constitutive equations.
In table \[tab\_number\_components\] the number of ODE and free components is given. This is compared to the electromagnetic dipoles and quadrupoles.
This has similarities to other gauge freedoms in that it arises from integrating a physically observable tensor, although the components $\zeta^{\iMa\iMb\ldots}$ are not themselves tensors.
  ----------------- ----------------- ------ --------------- ------
                     Electromagnetic          Gravitational
                     ODE               free   ODE             free
  Monopole           1                 0      1               0
  Semi-dipole        1                 3      7               0
  Full dipole        1                 6      10              0
  Semi-quadrupole    1                 12     22              6
  Full quadrupole    1                 20     40              20
  ----------------- ----------------- ------ --------------- ------
  : List of the number of components which are determined by an ODE and the number which are free, for monopoles, dipoles and quadrupoles. The electromagnetic sources refer to a current $\Jcurr^{\iMa}$ which is conserved and a source for Maxwell’s equations. The gravitational sources refer to a stress-energy tensor $T^{\iMa\iMb}$ which is a source for the (linearised) Einstein equations. Each order includes all the lower orders. That is, the 10 components in the full stress-energy dipole include the 1 monopole component, while the $(40+20)$ components in the full quadrupole include both the dipole and monopole components. The definition of the semi-dipole and semi-quadrupole is given in section \[ch\_Semi\_Q\].[]{data-label="tab_number_components"}
[Figure: (a) a free component varying arbitrarily along the worldline; (b) the worldlines of the charges $e^+$ and $e^-$ separating and coalescing.]
For the electromagnetic dipole there is one ODE component, which is simply the total charge and satisfies ${dq/d\sigma}=0$, and there are six free components corresponding to the three electric and three magnetic components. These can be anything without breaking charge conservation, as seen in figure \[fig\_intro\_electric\_dipole\]. For the stress-energy tensor, the free components can correspond to the internal matter separating and coalescing, as in figure \[fig\_intro\_grav\_quad\_free\]. In the electromagnetic current case, having free components was not so concerning, as one would expect these components to be fixed by the internal dynamics of the charges. However the stress-energy tensor is supposed to contain all the information about the matter, and one would like there not to be any free components. One therefore needs additional constitutive relations which encode the matter one is modelling. In this article we give an example of constitutive relations which correspond to non-divergent dust.
Given a regular stress-energy tensor $T^{\iMa\iMb}$ and a Killing vector field $K^{\iMa}$ we can find a conserved current $T^{\iMa\iMb}
K_\iMb$ such that $\nabla_{\iMa} (T^{\iMa\iMb} K_\iMb)=0$. The same is true for the distributional stress-energy tensor. Here $K_{\iMa}$ gives rise to a conserved vector field $Q^{\iMa}(\sigma)$ along the worldline $C$. If $K^{\iMa}$ is a timelike Killing vector field it is natural to interpret $Q^{\iMa}(\sigma)$ as the conserved energy of the multipole. The relationship between the energy and mass is however subtle. In the monopole and dipole cases there is a natural definition of the mass; the same is not true in the quadrupole case. Even when a mass can be defined, it is not conserved in general.
### Outline of article {#outline-of-article .unnumbered}
As stated above there are two established methods of representing the stress-energy distribution: one using partial derivatives in (\[Intro\_general\_k\_pole\]), which we call the Ellis representation, and the other using covariant derivatives, which we call the Dixon representation. The pros and cons of these two approaches are discussed in section \[ch\_EllisDixon\] and summarised in table \[tab\_Ellis\_Dixon\]. In section \[ch\_MD\] we summarise the key results for the monopole and dipole stress-energy tensors. We highlight the Ellis and Dixon representations of the dipole.
In section \[ch\_QP\] we examine the quadrupole in detail. In this section we use the Ellis approach. We give the freedom in the components and the complicated change of coordinates, which involves second derivatives and integrals over the worldline. We use the adapted coordinates (\[Intro\_adapt\_coords\]) and give the differential equations arising from the symmetry (\[Intro\_Tab\_Sym\]) and divergencelessness (\[Intro\_Tab\_Div\_zero\]) of $T^{\iMa\iMb}$. We can then identify which components are algebraic, which satisfy ODEs and which are free. In subsection \[ch\_Q\_free\] we give an example of the free components in Minkowski spacetime, as depicted in figure \[fig\_intro\_grav\_quad\_free\]. As stated above, if there is a Killing vector field, there exists a corresponding conserved quantity. These are given in section \[ch\_Q\_Cons\]. This includes a new interpretation of the conserved quantities corresponding to the three Lorentz boosts.
In section \[ch\_Dust\] we use the limit of the dust stress-energy tensor as it is squeezed onto the worldline to construct a choice of constitutive relations to replace the free components with ODEs.
Although we have defined everything in terms of a coordinate system, it is useful to define the multipoles in a coordinate free manner. The advantage of such an approach is that complicated coordinate transformations are avoided. It is interesting to observe that, using deRham currents, multipoles can be defined without any additional structure on a manifold. I.e. it is not necessary to prescribe either a metric or a connection to define a general multipole. This is particularly useful if we wish to extend the notion of general multipole tensor distributions to manifolds, such as the tangent bundle, which do not possess either a metric or a connection. However a connection is of course needed to define the covariantly conserved property (\[Intro\_Tab\_Div\_zero\]). In section \[ch\_CoFree\] we detail this approach. Having defined a multipole in a coordinate free manner, one can extract the components in the Ellis approach with respect to a coordinate system. This is explicitly given in the case of an adapted coordinate system in section \[ch\_CoFree\_Kill\]. By contrast to the Ellis approach, the Dixon approach contains more information about a multipole, namely how it splits into a monopole term, a dipole term, a quadrupole term and so on. This split, called here the Dixon split, is actually coordinate independent and the details are given in section \[ch\_CoFree\_Dixon\].
As noted in [@gratus2018correct], without a metric, connection or coordinate system, it is still possible to define a pure electric dipole. In this article we call such a dipole a semi-dipole. We observe that the semi-dipole stress-energy consists of the displacement vector but not the spin. In section \[ch\_Semi\_Q\] we define the semi-dipole and semi-quadrupole stress-energy tensors.
We conclude in section \[ch\_Conclusion\]. Finally, in the appendix, we prove all the results in the body of the article.
### Notation regarding derivatives {#notation-regarding-derivatives .unnumbered}
Given a coordinate system $(x^0,\ldots,x^3)$, Greek indices run $\iMa,\iMb=0,\ldots,3$. We write the partial derivatives as $$\begin{aligned}
\partx_{\iMa} = \pfrac{}{x^{\iMa}}
\label{Intro_def_partial}\end{aligned}$$ In the case of the adapted coordinates $(\sigma,z^1,z^2,z^3)$ obeying (\[Intro\_adapt\_coords\]) we use both Greek indices $\iMa,\iMb=0,\ldots,3$ and Latin indices $\iSa,\iSb=1,2,3$. In this case we have $$\begin{aligned}
\partz_{0} = \pfrac{}{\sigma}
\qquadand
\partz_{\iSa} = \pfrac{}{z^\iSa}
\label{Intro_def_partial_adap}\end{aligned}$$ Thus, even if not stated explicitly, writing $\partz_{\iSa}$ implies we are referring to an adapted coordinate system.
Note that in both the adapted and non-adapted cases we use an overdot to represent differentiation with respect to $\sigma$. In the non-adapted coordinates this is only used for quantities, such as $C^\iMa(\sigma)$ and $\Cdot^\iMa(\sigma)$, which are only defined on the worldline. In the adapted coordinate case it is synonymous with $\partz_0$.
When we have two non adapted coordinate systems $(x^0,\ldots,x^3)$ and $(\xhat^\Ohat,\ldots,\xhat^{\hat{3}})$ we use the hat on the index to indicate the hatted coordinate system. Thus $$\begin{aligned}
\partx_{\iMahat} = \pfrac{}{\xhat^{\iMahat}}
\label{Intro_def_partial_hat}\end{aligned}$$ Likewise for the adapted coordinate system $(\hat{\sigma},\zhat^{\hat{1}},\zhat^{\hat{2}},\zhat^{\hat{3}})$ we have $$\begin{aligned}
\partz_{\Ohat} = \pfrac{}{\hat\sigma}
\qquadand
\partz_{\iSahat} = \pfrac{}{\zhat^\iSahat}
\label{Intro_def_partial_adap_hat}\end{aligned}$$
Dixon’s versus Ellis’s approaches to multipoles {#ch_EllisDixon}
===============================================
[|p[0.45]{}|p[0.45]{}|]{} Ellis & Dixon\
Can be defined using coordinates. & Can be defined using coordinates.\
Components are unique for adapted coordinates. & Components are unique.\
For general coordinate transformation the components require higher derivatives and integrals. & Components transform as a tensor.\
Do not require any additional structure. These can be defined without referring to a metric or additional vector field. & Requires the connection and the Dixon vector $\DixVec_{\iMa}(\sigma)$ for the definition.\
Contains all multipoles up to specific order. & Contains all multipoles up to specific order.\
It is not possible to extract a multipole of a specific order without additional structure, for example an adapted coordinate system. & Easy to extract a multipole of any order.\
Can be easily defined in a coordinate free way using the DeRham push forward. & The Dixon split can be defined in a coordinate free way, but this definition is complicated and requires the DeRham push forward plus a non-intuitive additional axiom. This axiom is given in section \[ch\_CoFree\_Dixon\] and encodes the orthogonality condition.\
The dipole can be written in the Ellis representation, which is consistent with the Mathisson-Papapetrou-Tulczyjew-Dixon equations. & The dipole can be written in the Dixon representation, which is consistent with the Mathisson-Papapetrou-Tulczyjew-Dixon equations.\
There is no concept of the mass of the multipole. & The mass is given by the monopole term.\
There is no orthogonality condition. & There is a complicated formula for the components with respect to different $\DixVec^{\iMa}(\sigma)$. This will mix in multipoles of different orders.\
The relationship between these moments and Fourier transforms is less clear than the Dixon representation. & There is a clear relationship between these moments and Fourier transform.\
One can construct a regular tensor field whose moments, up to $k$, are the components of the distribution. The best method is using squeezed tensors that employ an adapted coordinate system. & One can construct a tensor field whose moments, up to $k$, are the components of the distribution. This is by considering the fields on the transverse hyperspace constructed from the geodesic map of vectors orthogonal to $\DixVec^{\iMa}(\sigma)$.\
In principle it should be possible to reconstruct an original distribution using the Fourier transform, but this has not been investigated. This requires certain assumptions about the analyticity of the Fourier transform. & If all the moments are known one can reconstruct an original distribution. This also requires certain assumptions about the analyticity of the Fourier transform.\
There is a formula for extracting the components using test tensors, in adapted coordinates. & In principle the components can be extracted using test tensors.\
The Ellis approach
------------------
As stated in the introduction there are two standard approaches to writing down distributional multipoles.
One method [@Ellis:1975rp] uses partial derivatives of the Dirac δ-function. Although Ellis principally defines it for the electric current $\Jcurr^{\iMa}$, it is easy to extend this to the stress-energy tensor. So a multipole of order $k$ is given by $$\begin{aligned}
T^{\iMa\iMb} = \cRed{\frac{1}{k!}}
\int_\Interval \zetaMultiEllis^{\iMa \iMb \iMc_1\ldots\iMc_k}(\sigma)
\ \partx_{\iMc_1} \cdots \partx_{\iMc_k}
\deltaFour\big(x-C(\sigma)\big)\,d\sigma
\label{Intro_Tab_Ellis_Multi}\end{aligned}$$ where $\zetaMultiEllis^{\iMa \iMb \iMc_1\ldots\iMc_k}(\sigma)$ are smooth functions of $\sigma$ and $\partx_{\iMc_j}$ is given by (\[Intro\_def\_partial\]). Thus when acting on the test tensor $\TtwoTen_{\iMa\iMb}$ $$\begin{aligned}
\int_\Mman T^{\iMa\iMb} \ \TtwoTen_{\iMa\iMb} \ d^4x
&=
(-1)^{k} \cRed{\frac{1}{k!}}\int_\Interval
\zetaMultiEllis^{\iMa \iMb \iMc_1\ldots\iMc_k}(\sigma)\,
\big(\partx_{\iMc_1} \cdots \partx_{\iMc_k}
\TtwoTen_{\iMa\iMb} \big)\big|_{C(\sigma)}\,d\sigma
\label{Intro_Tab_Ellis_Multi_action}\end{aligned}$$ In this article we will refer to this representation of a multipole as the Ellis representation.
The symmetry of $T^{\iMa\iMb}$ leads to $$\begin{aligned}
\zetaMultiEllis^{\iMa \iMb \iMc_1\ldots\iMc_k}
=
\zetaMultiEllis^{\iMb \iMa \iMc_1\ldots\iMc_k}
\label{EllisDixon_ab_symm}\end{aligned}$$ In addition, since the partial derivatives commute, it is natural to demand that the components of $\zetaMultiEllis$ are symmetric. Thus we set $$\begin{aligned}
\zetaMultiEllis^{\iMa \iMb \iMc_1\ldots\iMc_k}
=
\zetaMultiEllis^{\iMa \iMb (\iMc_1\ldots\iMc_k)}
\label{Intro_Zeta_Ellis_sym}\end{aligned}$$ where the round brackets mean the scaled sum over all permutations of the indices, $$\begin{aligned}
\zetaMultiEllis^{\iMa \iMb (\iMc_1\ldots\iMc_k)}
=
\frac{1}{k!}
\sumdoubleind{\text{All permutations}}{{i_1\ldots i_k}}
\zetaMultiEllis^{\iMa \iMb \iMc_{i_1}\ldots\iMc_{i_k}}
\label{Intro_def_round_brakets}\end{aligned}$$ One problem with the Ellis representation is that the $\zetaMultiEllis^{\iMa \iMb \iMc_1\ldots\iMc_k}$ are not unique. Examples of the freedom that these $\zetaMultiEllis^{\iMa \iMb \iMc_1\ldots\iMc_k}$ have are given in (\[MD\_diploe\_Zeta\_Freedom\]) and (\[QP\_Zeta\_Freedom\]). This contrasts with the case when one chooses an adapted coordinate system below.
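The round-bracket symmetrisation (\[Intro\_def\_round\_brakets\]) is simply an average over index permutations. A small numerical sketch (our own, with hypothetical names) for a rank-$k$ array standing in for the $\iMc_1\ldots\iMc_k$ indices:

```python
import itertools
import numpy as np

def symmetrise(zeta):
    """Average an array over all permutations of its indices (round brackets)."""
    perms = list(itertools.permutations(range(zeta.ndim)))
    return sum(np.transpose(zeta, p) for p in perms) / len(perms)

rng = np.random.default_rng(0)
zeta = rng.standard_normal((4, 4, 4))   # rank-3 stand-in for the c-indices
sym = symmetrise(zeta)

# The result is invariant under any index swap, and symmetrising is idempotent.
print(np.allclose(sym, np.transpose(sym, (1, 0, 2))))   # True
print(np.allclose(symmetrise(sym), sym))                # True
```

In the text only the derivative indices $\iMc_1\ldots\iMc_k$ are symmetrised; the first two indices $\iMa\iMb$ are handled separately by (\[EllisDixon\_ab\_symm\]).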
Adapted coordinates {#ch_Ellis_adap}
-------------------
In general, expressions for multipoles in the Ellis representation are complicated. They simplify greatly if one chooses an adapted coordinate system as given by (\[Intro\_adapt\_coords\]). In this coordinate system the integral over $\Interval$ is no longer necessary and we replace (\[Intro\_Tab\_Ellis\_Multi\]) with $$\begin{aligned}
T^{\iMa\iMb} =
\sum_{r=0}^k
\cRed{\frac{1}{r!}}
\gamma^{\iMa\iMb \iSa_1 \ldots\iSa_r 0\ldots 0}(\sigma)
\ \partz_{\iSa_1} \cdots \partz_{\iSa_r}\,
\deltaThree(\Vz)
\label{Ellis_adap_Ellis_Multi}\end{aligned}$$ where $\Vz=(z^1,z^2,z^3)$. The component $\gamma^{\iMa\iMb \iSa_1 \ldots\iSa_r 0\ldots 0}$ has $(k-r)$ zero indices, so that $\gamma^{\iMa\iMb \iSa_1 \ldots\iSa_r 0\ldots 0}$ has $2+k$ indices. Observe we only differentiate $\deltaThree(\Vz)$ in the $z^\iSa$ direction. Thus when acting on a test tensor
$$\begin{aligned}
\int_\Mman T^{\iMa\iMb}\,\TtwoTen_{\iMa\iMb}\,d^4 x
&=
\sum_{r=0}^k
{\frac{(-1)^r}{r!}}
\int_\Interval d\sigma \,
\gamma^{\iMa\iMb \iSa_1 \ldots\iSa_r 0\ldots 0}(\sigma)
\,
(\partz_{\iSa_1} \cdots \partz_{\iSa_r}\,\TtwoTen_{\iMa\iMb})\big|_{\Vz=0}
\end{aligned}
\label{Ellis_adap_Ellis_Multi_action}$$
We still impose the symmetry conditions (\[EllisDixon\_ab\_symm\]) and (\[Intro\_Zeta\_Ellis\_sym\]) on the $\gamma$’s so that $$\begin{aligned}
\gamma^{\iMa \iMb \iMc_1\ldots\iMc_k}
=
\gamma^{\iMb \iMa \iMc_1\ldots\iMc_k}
=
\gamma^{\iMa\iMb (\iMc_1\ldots\iMc_k)}
\label{Ellis_adap_symms}\end{aligned}$$ The relationship between the $\gamma^{\iMa\iMb \iSa_1\ldots\iSa_r 0\ldots 0}$ and $\zetaMultiEllis^{\iMa\iMb \iMc_1\ldots\iMc_k}$ is given by comparing (\[Intro\_Tab\_Ellis\_Multi\_action\]) and (\[Ellis\_adap\_Ellis\_Multi\_action\]) for an adapted coordinate system $$\begin{aligned}
\gamma^{\iMa\iMb \iSa_1\ldots\iSa_r 0\ldots 0}
=
\frac{1}{(k-r)!}
\partial_0^{k-r}
\zetaMultiEllis^{\iMa\iMb \iSa_1\ldots\iSa_r0\ldots 0}
\label{Ellis_adap_gam_zeta}\end{aligned}$$
In an adapted coordinate system, the $\gamma^{\iMa\iMb
\iSa_1\ldots\iSa_r 0\ldots 0}$ are uniquely determined by the distribution. The freedom of the $\zetaMultiEllis^{\iMa \iMb \iSa_1\ldots\iSa_r 0\ldots0}$ in this case arises from the arbitrary constants when integrating (\[Ellis\_adap\_gam\_zeta\]) with respect to $\sigma$.
With respect to this coordinate system, one can partition the multipoles into a monopole, a pure dipole, a pure quadrupole and so on. However this is a coordinate dependent splitting and these terms will mix when changing the coordinate system. The coordinate transformation for quadrupoles is given in (\[QP\_gamma\_chage\_coords\_munu\])-(\[QP\_gamma\_chage\_coords\_00\]). Although they involve up to $k$ derivatives of the coordinate transformation, they do not require any integrals.
Squeezed tensors {#ch_Ellis_Squz}
----------------
In an adapted coordinate system, one can construct a one parameter family of regular stress-energy tensors $\TReg^{\iMa\iMb}_\varepsilon$ from a given stress-energy tensor $\TReg^{\iMa\iMb}$ such that, in the weak limit, $\TReg^{\iMa\iMb}_\varepsilon\to T^{\iMa\iMb}$ as $\varepsilon\to0$, to order $k$. Since we are using adapted coordinates, we write $(\sigma,\Vz)=(\sigma,z^1,z^2,z^3)$. We set $$\begin{aligned}
\TReg^{\iMa\iMb}_\varepsilon(\sigma,\Vz)
&=
\frac{1}{\varepsilon^3}
\ \TReg^{\iMa\iMb}\Big(\sigma,\frac{\Vz}{\varepsilon}\Big)
\label{Ellis_Squz_def_T_eps}\end{aligned}$$ We assume that $\TReg^{\iMa\iMb}$ has compact support in the transverse planes. I.e. for each $\sigma$, there is a function $R(\sigma)$ such that $$\begin{aligned}
\TReg^{\iMa\iMb}(\sigma,\Vz)
=
0
\qquadtext{for}
g_{\iSa\iSb} \,z^\iSa\,z^\iSb > R(\sigma)
\label{Ellis_Squz_compact_supp}\end{aligned}$$ This guarantees that all the moments are finite.
This leads to $$\begin{aligned}
\TReg^{\iMa\iMb}_\varepsilon(\sigma,\Vz)
=
\gamma^{\iMa\iMb0\ldots0} \ \deltaThree(\Vz)
+
\varepsilon\,\gamma^{\iMa\iMb\iSa0\ldots0}\,\partz_\iSa\deltaThree(\Vz)
+
\tfrac{\varepsilon^2}{2}\,\gamma^{\iMa\iMb\iSa\iSb0\ldots0}
\,\partz_\iSa\partz_\iSb\deltaThree(\Vz)
+\cdots
\end{aligned}
\label{Ellis_Squz_Expansion}$$ where $$\begin{aligned}
\gamma^{\iMa\iMb0\ldots0} (\sigma)
&=
\int_{\Real^3} d^3\Vz\
\TReg^{\iMa\iMb}\big(\sigma,\Vz\big)
,\qquad
\gamma^{\iMa\iMb\iSa0\ldots0} (\sigma)
=
-\int_{\Real^3} d^3\Vz\
z^\iSa\,\TReg^{\iMa\iMb}\big(\sigma,\Vz\big)
,
\\
\gamma^{\iMa\iMb\iSa\iSb0\ldots0} (\sigma)
&=
\int_{\Real^3} d^3\Vz\
z^\iSa\,z^\iSb\,\TReg^{\iMa\iMb}\big(\sigma,\Vz\big)
\qquad\text{etc.}
\end{aligned}
\label{Ellis_Squz_moments}$$ Thus there is an intimate relationship between the components of a distribution and the moments of a regular stress-energy tensor. Here the zeroth order gives the monopole, the first order the dipole and so on. This split is with respect to the chosen adapted coordinate system and these will mix under a coordinate transformation.
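The moment expansion can be checked numerically. The following one-dimensional sketch (our own illustration, with the $1/r!$ weights of (\[Ellis\_adap\_Ellis\_Multi\])) squeezes an assumed Gaussian profile and compares its action on a test function with the first few moment terms:

```python
import numpy as np

# 1D analogue of squeezing: T_eps(z) = (1/eps) T(z/eps), T a Gaussian (assumed).
def T(u):
    return np.exp(-u**2)

eps = 0.01
z = np.linspace(-5.0, 5.0, 2_000_001)
dz = z[1] - z[0]
phi = np.cos(z)                 # test function: phi(0)=1, phi'(0)=0, phi''(0)=-1

lhs = np.sum(T(z / eps) / eps * phi) * dz       # exact action of T_eps on phi

# Moments of the unsqueezed profile, mirroring the gamma's of the text.
u = np.linspace(-10.0, 10.0, 200_001)
du = u[1] - u[0]
g0 = np.sum(T(u)) * du                          # gamma^0 =  integral of T
g1 = -np.sum(u * T(u)) * du                     # gamma^1 = -integral of u T (zero here)
g2 = np.sum(u**2 * T(u)) * du                   # gamma^2 =  integral of u^2 T
rhs = g0 * 1.0 - eps * g1 * 0.0 + (eps**2 / 2) * g2 * (-1.0)
print(abs(lhs - rhs))           # agreement to O(eps^4)
```

Here the Gaussian is symmetric so the dipole moment vanishes; the quadrupole term already captures the action to fourth order in $\varepsilon$.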
The Dixon approach
------------------
The alternative approach, largely developed by Dixon [@DixonII], uses the covariant derivative and a choice of a vector field $\DixVec_{\iMa}(\sigma)$ along the worldline $C^{\iMa}$. This we will call the Dixon vector. This vector is required to be not orthogonal to the worldline $C^{\iMa}$, i.e. $$\begin{aligned}
\DixVec_{\iMa} \, \Cdot^{\iMa} \ne 0
\label{Intro_Tab_Dixon_N_NonOrth}\end{aligned}$$ As long as the worldline $C$ is timelike, a natural choice of the Dixon vector is $\Cdot$, i.e. $\DixVec_{\iMa}
=
g_{\iMa\iMb}\, \Cdot^{\iMb}
$ but this is not the only choice. Having chosen $\DixVec_{\iMa}$, the Dixon representation of a multipole is defined by its action on the test tensor $\TtwoTen_{\iMa\iMb}$ as $$\begin{aligned}
\int_\Mman T^{\iMa\iMb} \ \TtwoTen_{\iMa\iMb} \ d^4x
&=
\sum_{r=0}^k (-1)^{r} \cRed{\frac{1}{r!}} \int_\Interval
\zetaMultiDixon^{\iMa\iMb \iMc_1\ldots\iMc_r}(\sigma)\,
\big(\nabla_{\iMc_1} \cdots \nabla_{\iMc_r} \TtwoTen_{\iMa\iMb} \big)\big|_{C(\sigma)}
\ d\sigma
\label{Intro_Tab_Dixon_Multi_action}\end{aligned}$$ where we demand that the components $\zetaMultiDixon^{\iMa \iMb \iMc_1\ldots\iMc_k}$ are orthogonal to the vector $\DixVec^{\iMa}$ $$\begin{aligned}
\DixVec_{\iMc_j}\
\zetaMultiDixon^{\iMa \iMb \iMc_1\ldots\iMc_k}
= 0
\label{Intro_Tab_Dixon_orthog}\end{aligned}$$ for $j=1,\ldots,k$. The covariant derivatives do not commute; however, their commutator gives rise to curvature terms with a lower number of derivatives. Therefore we may again assume that the $\zetaMultiDixon^{\iMa \iMb \iMc_1\ldots\iMc_k}$ are symmetric in the relevant indices. $$\begin{aligned}
\zetaMultiDixon^{\iMa \iMb \iMc_1\ldots\iMc_k}
=
\zetaMultiDixon^{\iMa\iMb (\iMc_1\ldots\iMc_k)}
\label{Intro_Zeta_Dixon_sym}\end{aligned}$$ Dixon [@DixonII, see equations (4.18), (7.4), (7.5)] writes the distribution for the electric current $\Jcurr^{\iMa}$ in terms of the covariant derivatives of a distribution. We can extend this to the stress-energy tensor $T^{\iMa\iMb}$ via $$\begin{aligned}
T^{\iMa\iMb} = \sum_{r=0}^k \cRed{\frac{1}{r!}} \nabla_{\iMc_1} \cdots \nabla_{\iMc_r}
\int_\Interval \zetaMultiDixon^{\iMa\iMb \iMc_1\ldots\iMc_r}(\sigma)
\,
\deltaFour\big(x-C(\sigma)\big)\,d\sigma
\label{Intro_Tab_Dixon_Multi}\end{aligned}$$ Since $T^{\iMa\iMb}$ is a tensor density, this enables us to throw the covariant derivative over onto the test tensor. This follows since, if $v^\iMa$ is a vector density (of the correct weight), then $\nabla_\iMa\,v^\iMa=\partial_\iMa\,v^\iMa$.
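The orthogonality condition (\[Intro\_Tab\_Dixon\_orthog\]) can always be imposed by projecting the derivative indices. A small numerical sketch (our own, in Minkowski spacetime with an assumed tangent vector, not Dixon's construction itself):

```python
import numpy as np

# Project the last index of zeta^{ab c} with P^c_d = delta^c_d - Cdot^c N_d/(N_e Cdot^e),
# so the projected components satisfy the Dixon condition N_c zeta^{ab c} = 0.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])         # Minkowski metric
Cdot = np.array([1.0, 0.2, 0.0, 0.0])        # assumed worldline tangent
N = eta @ Cdot                               # natural Dixon covector N_a = g_ab Cdot^b
P = np.eye(4) - np.outer(Cdot, N) / (N @ Cdot)   # P[c, d] = P^c_d

rng = np.random.default_rng(1)
zeta = rng.standard_normal((4, 4, 4))        # unprojected components zeta^{ab d}
zeta_perp = np.einsum('abd,cd->abc', zeta, P)

print(np.allclose(np.einsum('abc,c->ab', zeta_perp, N), 0.0))   # True
```

Since $P^{\iMc}_{\ \iMd}\,\DixVec_{\iMc}=0$ by construction, the projected components are automatically orthogonal to the Dixon vector, whatever the unprojected values were.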
From (\[Intro\_Tab\_Dixon\_Multi\]) we can use the Dixon vector to perform the Dixon split, in order to take an arbitrary $k$th order multipole and split it into a monopole part, a dipole part and so on. Thus we set $$\begin{aligned}
T^{\iMa\iMb} = \sum_{r=0}^k T^{\iMa\iMb}_{(r)}
\qquadtext{where}
T^{\iMa\iMb}_{(r)} = \cRed{\frac{1}{r!}}\nabla_{\iMc_1} \cdots \nabla_{\iMc_r}
\int_\Interval \zetaMultiDixon^{\iMa\iMb \iMc_1\ldots\iMc_r}(\sigma)
\,
\deltaFour\big(x-C(\sigma)\big)\,d\sigma
\label{Intro_Tab_Dixon_Split}\end{aligned}$$ In section \[ch\_CoFree\_Dixon\] we present a coordinate free approach to performing this split.
Both the Ellis and Dixon approaches have advantages and disadvantages and these are listed in table \[tab\_Ellis\_Dixon\].
$$\begin{array}{|l|l|}
\hline
\text{speed of light} & [1]
\\\hline
dx^{\iMa} & [L]
\\\hline
g_{\iMa\iMb} & [1]
\\\hline
\Cdot & [L^{-1}]
\\\hline
\Cdot^{\iMa} & [1]
\\\hline
\partx_{\iMa} & [L^{-1}]
\\\hline
\deltaFour\big(x-C(\sigma)\big) & [L^{-4}]
\\\hline
\text{mass }m & [M]
\\\hline
\end{array}
\qquad\qquad
\begin{array}{|l|l|}
\hline
T^{\iMa\iMb} & [M\,L^{-3}]
\\\hline
\text{test tensor } \TtwoTen_{\iMa\iMb} & [L^{-1}]
\\\hline
\text{dipole displacement }X^{\iMa} & [M L]
\\\hline
\text{dipole 3--momentum }P^{\iMa} & [M]
\\\hline
\text{dipole spin }S^{\iMa\iMb} & [M L]
\\\hline
\zetaMultiEllis^{\iMa\iMb \iMc_{i_1}\ldots \iMc_{i_k}} & [M\,L^{k}]
\\\hline
\zetaMultiDixon^{\iMa\iMb \iMc_{i_1} \ldots \iMc_{i_k}} & [M\,L^{k}]
\\\hline
\gamma^{\iMa\iMb \iSa_{i_1}\ldots\iSa_{i_k}0\ldots 0} & [M\,L^{k}]
\\\hline
\end{array}
\label{Intro_convension_units}$$
Summary of the monopole and dipole stress-energy tensors {#ch_MD}
=========================================================
The monopole
------------
From (\[Intro\_Tab\_Ellis\_Multi\]) with $k=0$ we have the gravitational monopole $$\begin{aligned}
T^{\iMa\iMb}=\int_\Interval \zeta^{\iMa\iMb}\, \deltaFour\big(x-C(\sigma)\big)\,d\sigma
\label{MD_monpole}\end{aligned}$$ The requirements (\[Intro\_Tab\_Sym\]) and (\[Intro\_Tab\_Div\_zero\]) for a stress-energy tensor imply that $C$ satisfies the pre-geodesic equation $$\begin{aligned}
\Cdot^\iMb\nabla_\iMb \Cdot^\iMa = \kappa_\pregeo(\sigma)\,\Cdot^\iMa
\label{MD_pre_geodesic}\end{aligned}$$ and $$\begin{aligned}
T^{\iMa\iMb}
=
\int_\Interval m_\pregeo(\sigma)\,
\Cdot^{\iMa}\,\Cdot^\iMb\,\delta\big(x-C(\sigma)\big)
\ d\sigma
\label{MD_pre_geodesic_T}\end{aligned}$$ where $$\begin{aligned}
\dot{m}_\pregeo + \kappa_\pregeo\,m_\pregeo=0
\label{MD_pre_geodesic_m_eqn}\end{aligned}$$ Here the overdot refers to differentiation with respect to $\sigma$. If $\sigma$ is proper time, so that $$\begin{aligned}
g_{\iMa\iMb}\,\Cdot^\iMa\,\Cdot^\iMb=-1
\label{MD_propertime}\end{aligned}$$ then $\kappa_\pregeo=0$ and (\[MD\_pre\_geodesic\]) gives the geodesic equation $$\begin{aligned}
\Dfrac{\Cdot^\iMa}{\sigma} = 0
\label{MD_geodesic}\end{aligned}$$ where $\Dfrac{}{\sigma}$ represents the covariant derivative along the worldline, i.e. $$\begin{aligned}
\Dfrac{X^{\iMa}}{\sigma}
=
\dot{X}^{\iMa} + \Gamma^{\iMa}_{\iMb\iMc}\, X^\iMb\,\Cdot^\iMc
\label{MD_def_Dfrac}\end{aligned}$$ In this case we replace $m_\pregeo$ with $m$ in (\[MD\_pre\_geodesic\_T\]). If $m>0$ then we can associate it with the mass of the source. Thus (\[MD\_pre\_geodesic\_T\]) becomes
$$\begin{aligned}
T^{\iMa\iMb}
=
m \int_\Interval
\Cdot^{\iMa}\,\Cdot^\iMb\,\delta\big(x-C(\sigma)\big)
\,d\sigma
\label{MD_geodesic_T}\end{aligned}$$
Thus there remains just one ODE for the remaining component, namely $\dot{m}=0$. There are no additional free components; see table \[tab\_number\_components\]. However, as stated in the introduction, we do not impose the geodesic equation for the subsequent dipole and quadrupole terms.
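The scaling law for $m_\pregeo$ can be checked symbolically. The following sketch (our illustration in Python's sympy, not part of the original text) solves (\[MD\_pre\_geodesic\_m\_eqn\]) and recovers $m_\pregeo(\sigma)=m_0\exp\big(-\int\kappa_\pregeo\,d\sigma\big)$, so that under a proper-time parameterisation, where $\kappa_\pregeo=0$, the mass is constant.

```python
import sympy as sp

sigma = sp.symbols('sigma')
kappa = sp.Function('kappa')   # pre-geodesic scaling kappa(sigma)
m = sp.Function('m')           # mass-like coefficient m(sigma)

# the ODE  m' + kappa*m = 0  coming from the divergence-free condition
ode = sp.Eq(m(sigma).diff(sigma) + kappa(sigma)*m(sigma), 0)
sol = sp.dsolve(ode, m(sigma))

# the solution has the form m(sigma) = C1*exp(-Integral(kappa(sigma), sigma)),
# so kappa = 0 (proper time) gives a constant mass
print(sol)
```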
The dipole
----------
Setting $k=1$ in (\[Intro\_Tab\_Ellis\_Multi\]) gives the dipole $$\begin{aligned}
T^{\iMa\iMb}
=
\int_\Interval \zetaDP^{\iMa\iMb\iMc}\, \partx_\iMc
\delta\big(x-C(\sigma)\big)
\,d\sigma
\label{MD_diploe_Tab}\end{aligned}$$ where the symmetry condition (\[Intro\_Tab\_Sym\]) implies $\zetaDP^{\iMa\iMb\iMc}=\zetaDP^{\iMb\iMa\iMc}$. We observe that, whereas the components $\zetaDP^{\iMa\iMb\iMc}$ uniquely specify $T^{\iMa\iMb}$, the converse is not true. That is, given $T^{\iMa\iMb}$, there is a gauge freedom in $\zetaDP^{\iMa\iMb\iMc}$, given by $$\begin{aligned}
\zetaDP^{\iMa\iMb\iMc} \to \zetaDP^{\iMa\iMb\iMc} + M^{\iMa\iMb}\Cdot^\iMc
\label{MD_diploe_Zeta_Freedom}\end{aligned}$$ where $M^{\iMa\iMb}=M^{\iMb\iMa}$ are any set of constants.
In addition the $\zetaDP^{\iMa\iMb\iMc}$ are not tensorial quantities but have a coordinate transformation which involves derivatives of the Jacobian matrix and an integral. Given two coordinate systems $(x^0,\ldots,x^3)$ and $(\xhat^0,\ldots,\xhat^3)$ then $$\begin{aligned}
\zetahat^{\iMahat\iMbhat\iMchat}
=
J^{\iMahat}_{\iMa} J^{\iMbhat}_\iMb J^{\iMchat}_\iMc \zeta^{\iMa\iMb\iMc}
-
\Cdothat{}^\iMchat
\int^\sigma \partial_\iMc(J^{\iMahat}_{\iMa} J^\iMbhat_\iMb)\,
\zeta^{\iMa\iMb\iMc}\,d\sigma'
\label{MD_diploe_Change_coords}\end{aligned}$$ where $$\begin{aligned}
J^{\iMahat}_{\iMa} = \pfrac{\xhat^{\iMahat}}{x^{\iMa}}
\label{QP_def_J}\end{aligned}$$ Here the freedom to choose the arbitrary constant of integration in (\[MD\_diploe\_Change\_coords\]) is equivalent to the freedom (\[MD\_diploe\_Zeta\_Freedom\]). In adapted coordinates (\[Intro\_adapt\_coords\]) then (\[Ellis\_adap\_Ellis\_Multi\]) and (\[Ellis\_adap\_Ellis\_Multi\_action\]) become $$\begin{aligned}
T^{\iMa\iMb} =
\gamma^{\iMa\iMb0}
\deltaThree(\Vz)
+
\gamma^{\iMa\iMb \iSa}
\ \partz_{\iSa}
\deltaThree(\Vz)
\quadtext{where}
\gamma^{\iMa\iMb 0}
=
\dot\zeta^{\iMa\iMb0}
\quadand
\gamma^{\iMa\iMb \iSa}
=
\zeta^{\iMa\iMb\iSa}
\label{MD_adap_Ellis}\end{aligned}$$ Fortunately for the dipole the requirements (\[Intro\_Tab\_Sym\]) and (\[Intro\_Tab\_Div\_zero\]) restrict the components $\zetaDP^{\iMa\iMb\iMc}$ so much that $T^{\iMa\iMb}$ can be written solely in terms of tensor quantities $$\begin{aligned}
T^{\iMa\iMb}
&=
\int_\Interval
\hat{P}^{\lround\iMa}\,\Cdot^{\iMb\rround}
\,\delta\big(x-C(\sigma)\big)
d\sigma
+
\nabla_\iMc \int_\Interval
\hat{S}^{\iMc\lround\iMa}\,\Cdot^{\iMb\rround}
\,\delta\big(x-C(\sigma)\big)
d\sigma
\end{aligned}
\label{MD_diploe_Dixon_Tab_NoGeo}$$ where $\hat{P}^{\iMa}$ and $\hat{S}^{\iMa\iMb}$, with $\hat{S}^{\iMa\iMb}+\hat{S}^{\iMb\iMa}=0$, satisfy the Mathisson-Papapetrou-Tulczyjew-Dixon equations $$\begin{aligned}
\Dfrac{\hat{S}^{\iMa\iMb}}{\sigma}
&=
\hat{P}^\iMb\Cdot^{\iMa}
-
\hat{P}^{\iMa}\Cdot^\iMb
\qquadand
\Dfrac{\hat{P}^{\iMa}}{\sigma}
=
\tfrac12 R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\, \hat{S}^{\iMd\iMc}
\label{MD_diploe_DVa_NoGeo}\end{aligned}$$ To interpret (\[MD\_diploe\_Dixon\_Tab\_NoGeo\]) as a Dixon representation of a dipole requires that we find a vector $\DixVec^\iMc$ such that $\DixVec_\iMc\,\hat{S}^{\iMc\lround\iMa}\,\Cdot^{\iMb\rround}=0$.
Clearly we can replace the covariant derivatives with partial derivatives and Christoffel symbols to give the representation of the dipole $$\begin{aligned}
T^{\iMa\iMb}
&=
\int_\Interval
\Big(
\hat{P}^{\lround\iMa}\,\Cdot^{\iMb\rround}
+
\hat{S}^{\iMc\lround\iMb}\,\Gamma^{\iMa\rround}{}_{\iMc\iMd} \,\Cdot^\iMd
\Big)\,\delta\big(x-C(\sigma)\big)
d\sigma
+
\int_\Interval
\hat{S}^{\iMc\lround\iMa}\,\Cdot^{\iMb\rround}
\,\partial_\iMc \,\delta\big(x-C(\sigma)\big)
d\sigma
\end{aligned}
\label{MD_diploe_Ellis_Tab_NoGeo}$$ However this is not the Ellis representation which is given by (\[MD\_diploe\_Tab\]) where $$\begin{aligned}
\zetaDP^{\iMa\iMb\iMc}
=
\hat{S}^{\iMc\lround\iMa}\,\Cdot^{\iMb\rround}
+
\Cdot^{\iMc}
\int^\sigma
\Big(
\hat{P}^{\lround\iMa}\,\Cdot^{\iMb\rround}
+
\hat{S}^{\iMc\lround\iMb}\,\Gamma^{\iMa\rround}{}_{\iMc\iMd} \,\Cdot^\iMd
\Big)\,
d\sigma'
\label{MD_diploe_Ellis_zeta}\end{aligned}$$ Thus in the adapted coordinates (\[MD\_adap\_Ellis\]) we have $$\begin{aligned}
\gamma^{\iMa\iMb 0}
=
\hat{P}^{\lround\iMa}\,\delta_0^{\iMb\rround}
+
\hat{S}^{\iMc\lround\iMb}\,\Gamma^{\iMa\rround}{}_{\iMc0}
+
\partial_0(\hat{S}^{0\lround\iMa}\,\delta_0^{\iMb\rround})
\qquadand
\gamma^{\iMa\iMb \iSa}
=
\hat{S}^{\iSa\lround\iMa}\,\delta_0^{\iMb\rround}
\label{MD_gammas}\end{aligned}$$ Recall that $\Kill^\iMa$ is a Killing vector if $$\begin{aligned}
\nabla_{\iMa}\Kill_\iMb + \nabla_\iMb\Kill_{\iMa} = 0
\label{Q_Cons_symm_Kill}\end{aligned}$$ Then $\Conserv_\Kill$ is a conserved quantity, where $$\begin{aligned}
\Conserv_\Kill
&=
\gamma^{\iMa00} \,\Kill_{\iMa} -
\gamma^{\iMa0\iSa} \, \partz_\iSa \,\Kill_{\iMa}
\label{MD_Cons_Q}\end{aligned}$$ From (\[MD\_gammas\]) we have $$\begin{aligned}
\Conserv_\Kill
=
\hat{P}^\iMa\,K_\iMa
+
\tfrac12\hat{S}^{\iMa\iMb}\,\nabla_{\iMb}\ K_\iMa
\label{MD_Q_K}\end{aligned}$$
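As a concrete check of (\[MD\_Q\_K\]), consider flat spacetime, where (\[MD\_diploe\_DVa\_NoGeo\]) gives a constant $\hat{P}^{\iMa}$ and $\dot{\hat{S}}{}^{\iMa\iMb}=\hat{P}^\iMb\Cdot^{\iMa}-\hat{P}^{\iMa}\Cdot^\iMb$. The sympy sketch below (our illustration; the worldline and component names such as `S0_12` are ours) verifies that $\Conserv_\Kill$ for a rotational Killing vector is constant along an arbitrary worldline.

```python
import sympy as sp

s = sp.symbols('sigma')
# worldline C(sigma) in flat spacetime (arbitrary smooth functions)
C = [sp.Function(f'C{i}')(s) for i in range(4)]

# momentum P^a is constant in flat spacetime (the curvature term vanishes)
P = sp.symbols('P0:4')

# spin S^{ab}: antisymmetric, evolving by dS^{ab}/ds = P^b Cdot^a - P^a Cdot^b;
# integrating from constant initial data S0_ab gives S0 + P^b C^a - P^a C^b
S0 = {(a, b): sp.Symbol(f'S0_{a}{b}') for a in range(4) for b in range(a + 1, 4)}
def S(a, b):
    if a == b:
        return sp.Integer(0)
    sgn, (a, b) = (1, (a, b)) if a < b else (-1, (b, a))
    return sgn*(S0[(a, b)] + P[b]*C[a] - P[a]*C[b])

# rotation Killing covector K = (0, x^2, -x^1, 0); dK[b,a] = partial_b K_a
K = [0, C[2], -C[1], 0]      # evaluated on the worldline
dK = sp.zeros(4, 4)          # constant matrix of derivatives
dK[2, 1] = 1
dK[1, 2] = -1

# conserved quantity  Q_K = P^a K_a + (1/2) S^{ab} nabla_b K_a
Q = sum(P[a]*K[a] for a in range(4)) \
    + sp.Rational(1, 2)*sum(S(a, b)*dK[b, a] for a in range(4) for b in range(4))

print(sp.simplify(sp.diff(Q, s)))   # 0: Q_K is conserved along any worldline
```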
The situation is simplified in the case when $C$ is a geodesic. In this case we can use the Dixon representation with $\DixVec^{\iMa}=\Cdot^{\iMa}$. $$\begin{aligned}
T^{\iMa\iMb}
&=
\int_\Interval
\Big(
m \Cdot^{\iMa}\,\Cdot^\iMb
+
P^{\lround\iMa}\,\Cdot^{\iMb\rround}
\Big)\,\delta\big(x-C(\sigma)\big)
d\sigma
+
\nabla_\iMc \int_\Interval
\Big(
X^\iMc \Cdot^{\iMa}\,\Cdot^\iMb
+
S^{\iMc\lround\iMa}\,\Cdot^{\iMb\rround}
\Big)\,\delta\big(x-C(\sigma)\big)
d\sigma
\end{aligned}
\label{MD_diploe_Dixon_Tab}$$ where $$\begin{aligned}
\hat{S}^{\iMa\iMb} = S^{\iMa\iMb} - X^{\iMa}\Cdot^\iMb + X^\iMb\Cdot^{\iMa}
\qquadand
\hat{P}^{\iMa}=P^{\iMa}+m\Cdot^\iMa
\label{MD_geodesic_Deqns}\end{aligned}$$ These quantities have an intuitive meaning. See Table \[tab\_List\_units\] for the units associated with each component.

-   The rest mass $m$.
-   A displacement vector $X^{\iMa}$ with $X_{\iMa}\,\Cdot^{\iMa}=0$.
-   The rate of change of the displacement vector $P^{\iMa}$ with $P_{\iMa}\,\Cdot^{\iMa}=0$.
-   A spin tensor $S^{\iMa\iMb}$ with $S^{\iMa\iMb}+S^{\iMb\iMa}=0$ and $\Cdot_{\iMa}\,S^{\iMa\iMb}=0$.

These satisfy $$\begin{aligned}
\dot{m}=0
\,,\qquad
\Dfrac{X^{\iMa}}{\sigma}
=
P^{\iMa}
\,,\qquad
\Dfrac{P^{\iMa}}{\sigma}
=
\tfrac12 R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\, S^{\iMd\iMc}
+R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\, \Cdot^\iMc\,X^\iMd
\,,\qquad
\Dfrac{S^{\iMa\iMb}}{\sigma}
&=
0
\label{MD_diploe_DSab}\end{aligned}$$
Counting the number of components we see there are 10 ODEs, which completely specify the dynamics of the components of the dipole. Thus there are no additional free components. The 10 components can be loosely counted as follows: one is the rest mass; three form the displacement vector, which specifies the “centre of mass” relative to the position of the dipole; another three represent the velocity of the centre of mass; and the final three are referred to as the spin. These statements make more sense if we assume that the spacetime has Killing symmetries.
As we see below, the same situation does not occur for the quadrupoles. The conditions (\[Intro\_Tab\_Sym\]) and (\[Intro\_Tab\_Div\_zero\]) do not completely determine the dynamics of all the components, it is not possible to write all the components in terms of tensors, and there is no concept of mass.
A particular case of the dipole is when $S^{\iMa\iMb}=0$, which is compatible with its dynamic equation (\[MD\_diploe\_DSab\]). We call this case a semi-dipole. The notion of semi-dipoles and semi-quadrupoles is purely geometric and is addressed in section \[ch\_Semi\_Q\].
The quadrupole stress-energy tensor. {#ch_QP}
====================================
Setting $k=2$ in (\[Intro\_Tab\_Ellis\_Multi\]) gives the formula for a quadrupole. $$\begin{aligned}
T^{\iMa\iMb}
=
\frac12\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}(\sigma)
\,\partx_\iMc\partx_\iMd
\delta\big(x-C(\sigma)\big)\,d\sigma
\label{QP_Tab}\end{aligned}$$ so that the action on the test tensor $\TtwoTen_{\iMa\iMb}$ is given by $$\begin{aligned}
\int_{\Real^4} T^{\iMa\iMb}\,\TtwoTen_{\iMa\iMb}\,d^4x
=
\frac12\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}(\sigma) \,
\big(\partx_\iMc\partx_\iMd \TtwoTen_{\iMa\iMb}\big)\big|_{C(\sigma)}\ d\sigma
\label{QP_Tab_action}\end{aligned}$$ From (\[Intro\_Tab\_Sym\]) we impose $$\begin{aligned}
\zeta^{\iMa\iMb\iMc\iMd} = \zeta^{\iMb\iMa\iMc\iMd}
\label{QP_zeta_sym_ab}\end{aligned}$$ and due to the commutation of partial derivatives we also set $$\begin{aligned}
\zeta^{\iMa\iMb\iMc\iMd} = \zeta^{\iMa\iMb\iMd\iMc}
\label{QP_zeta_sym_cd}\end{aligned}$$ Like the $\zetaDP^{\iMa\iMb\iMc}$, the $\zeta^{\iMa\iMb\iMc\iMd}$ are not uniquely specified by the $T^{\iMa\iMb}$, with the freedom $$\begin{aligned}
\zeta^{\iMa\iMb\iMc\iMd} \to
\zeta^{\iMa\iMb\iMc\iMd} +
M^{\iMb\iMa}\, \Cdot^{\lround\iMc}\, C^{\iMd\rround}
+
\hat{M}^{\iMa\iMb\lround\iMc}\,\Cdot^{\iMd\rround}
\label{QP_Zeta_Freedom}\end{aligned}$$ where $M^{\iMb\iMa}$ and $\hat{M}^{\iMa\iMb\iMc}$ are arbitrary constants.
As in [@gratus2018correct], under change of coordinates $(x^0,\ldots,x^3)$ to $(\xhat^{\hat 0},\ldots,\xhat^{\hat 3})$ we have a complicated transformation involving derivatives and integrals $$\begin{aligned}
\zetahat^{\iMahat\iMbhat\iMchat\iMdhat}
&=
\zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\,
J^{\iMchat}_\iMc\, J^{\iMdhat}_\iMd
-\tfrac12
\Cdothat^\iMchat\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
\\&\qquad
-\tfrac12
\Cdothat^\iMdhat\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMchat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMchat}_\iMd\Big) \,d\sigma'
\\&\quad
+\tfrac12
\Cdothat^{\iMdhat}\int^{\sigma}
\Cdothat^{\iMchat}\int^{\sigma'} \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
+\tfrac12
\Cdothat^{\iMchat}\int^{\sigma}
\Cdothat^{\iMdhat}\int^{\sigma'} \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
\end{aligned}
\label{QP_Zeta_Change_Coords}$$ where $J^\iMahat_{\iMa}$ is given by $$\begin{aligned}
J^\iMahat_{\iMa}
=
\pfrac{\xhat^\iMahat}{x^\iMa}
\label{QP_def_Jaa}\end{aligned}$$ and $$\begin{aligned}
{\Jaabb}=J^{\iMahat}_{\iMa}\,J^{\iMbhat}_\iMb\,
\label{QP_def_Jaabb}\end{aligned}$$ It is not necessary to give the lower limits of the integrals, as these are incorporated in the freedom (\[QP\_Zeta\_Freedom\]).
As stated in the introduction the quadrupole is greatly simplified if we choose adapted coordinates given in (\[Intro\_adapt\_coords\]), so that $\Cdot^{\iMa}=\delta^{\iMa}_0$. Equation (\[QP\_Tab\]) can now be written in terms of components $\gamma^{\iMa\iMb\iMc\iMd}$ $$\begin{aligned}
T^{\iMa\iMb}(\sigma,\Vz)
&=
\gamma^{\iMa\iMb00}(\sigma) \,\deltaThree(\Vz)
+
\gamma^{\iMa\iMb0\iSa}(\sigma)\, \partz_\iSa \deltaThree(\Vz)
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}(\sigma)\,
\partz_\iSa\partz_\iSb
\deltaThree(\Vz)
\label{QP_Tab_gamma}\end{aligned}$$ so that (\[Ellis\_adap\_Ellis\_Multi\_action\]) becomes $$\begin{aligned}
\int_\Mman T^{\iMa\iMb}\,\TtwoTen_{\iMa\iMb}\,d^4 x
&=
\int_\Interval \Big(\gamma^{\iMa\iMb00}\,\TtwoTen_{\iMa\iMb}
-
\gamma^{\iMa\iMb0\iSa}\,(\partz_\iSa
\TtwoTen_{\iMa\iMb})
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
(\partz_\iSa \partz_\iSb\,\TtwoTen_{\iMa\iMb})
\Big)\,d\sigma
\label{QP_Tab_gamma_action}\end{aligned}$$ Here again we impose $$\begin{aligned}
\gamma^{\iMa\iMb\iMc\iMd} = \gamma^{\iMb\iMa\iMc\iMd}
\qquadand
\gamma^{\iMa\iMb\iMc\iMd} = \gamma^{\iMa\iMb\iMd\iMc}
\label{QP_gamma_sym}\end{aligned}$$ In adapted coordinates, the components $\gamma^{\iMa\iMb\iMc\iMd}$ are uniquely determined from $T^{\iMa\iMb}$, so there is no freedom analogous to (\[QP\_Zeta\_Freedom\]). In this coordinate system we can still express $T^{\iMa\iMb}$ in terms of (\[QP\_Tab\]), and the relationship between $\gamma^{\iMa\iMb\iMc\iMd}$ and $\zeta^{\iMa\iMb\iMc\iMd}$ is given by $$\begin{aligned}
\gamma^{\iMa\iMb00}=\tfrac12 \ddot \zeta^{\iMa\iMb00}, \quad
\gamma^{\iMa\iMb \iSa 0}=\dot \zeta^{\iMa\iMb \iSa 0} \quadand
\gamma^{\iMa\iMb \iSa \iSb}= \zeta^{\iMa\iMb \iSa \iSb}
\label{QP_gamma_zeta}\end{aligned}$$ which is consistent with (\[QP\_Zeta\_Freedom\]). This follows from (\[Ellis\_adap\_gam\_zeta\]).
It is now much easier to express the differential and algebraic equations on the components arising from the divergenceless conditions (\[Intro\_Tab\_Div\_zero\]). $$\begin{aligned}
\dot\gamma^{\iMa000}
&=
- \Gamma^{\iMa}{}_{\iMb\iMc}\, \gamma^{\iMc\iMb00}
+(\partz_{\iSa}\Gamma^0{}_{\iMb\iMc})\, \gamma^{\iMc \iMb 0 \iSa}
-\tfrac12\big(\partz_{\iSb}\partz_{\iSa}\Gamma^0_{\iMb\iMc}\big)
\gamma^{\iMc \iMb \iSa \iSb}
\label{QP_DTeqn_a000}
\\
\dot \gamma^{\iMa00\iSa}
&=
-\gamma^{\iMa\iSa00}
- \Gamma^{\iMa}_{\iMb\iMc}\, \gamma^{\iMc \iMb 0 \iSa}
+ (\partz_{\iSb}\Gamma^{\iMa}_{\iMb\iMc})\, \gamma^{\iMc \iMb \iSb \iSa}
\label{QP_DTeqn_a00m}
\\
\dot \gamma^{\iMa 0 \iSa\iSb}
&=
- 2\gamma^{\iMa (\iSb\iSa) 0}
- \Gamma^{\iMa}_{\iMb\iMc}\, \gamma^{\iMc\iMb\iSa\iSb}
\label{QP_DTeqn_a0mn}\end{aligned}$$ together with the algebraic equation $$\begin{aligned}
\gamma ^{\iMa (\iSa \iSb \iSc)}=0
\label{QP_DTeqn_alg}\end{aligned}$$
We can now count the number of components of the quadrupole. From (\[QP\_DTeqn\_a000\])-(\[QP\_DTeqn\_a0mn\]) we have 40 first order ODEs. However not all the components are determined by these ODEs. From (\[QP\_gamma\_sym\]) we start with 100 components. The algebraic equation (\[QP\_DTeqn\_alg\]) gives 40 independent equations so that there are 60 independent components. Thus 40 are determined by ODEs and the remaining 20 are free components. As stated in the introduction these free components need to be replaced by constitutive equations. However the choice of constitutive equations depends on a choice of a model for the material. An example of such constitutive equations is given in section \[ch\_Dust\] below.
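The counting above can be reproduced mechanically. The short Python script below (our illustration; the counting is exactly that of the text) enumerates the independent symmetric index pairs and the constraint equations.

```python
from itertools import combinations_with_replacement as cwr

# gamma^{ab cd}: (a,b) and (c,d) are each symmetric pairs of spacetime indices
pairs = list(cwr(range(4), 2))                # 10 symmetric pairs
total = len(pairs)**2                         # 100 components

# algebraic constraints gamma^{a(mu nu rho)} = 0: a is free,
# (mu nu rho) is a symmetrised triple of spatial indices 1..3
algebraic = 4*len(list(cwr(range(1, 4), 3)))  # 4 * 10 = 40 equations

# ODEs: gamma^{a000} (4), gamma^{a00 mu} (4*3), gamma^{a0 mu nu} (4*6)
odes = 4 + 4*3 + 4*len(list(cwr(range(1, 4), 2)))

independent = total - algebraic               # 60 independent components
free = independent - odes                     # 20 free components
print(total, algebraic, odes, independent, free)   # 100 40 40 60 20
```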
Under change of adapted coordinate $(\sigma,z^1,z^2,z^3)$ to $(\hat\sigma,\zhat^1,\zhat^2,\zhat^3)$ we have $$\begin{aligned}
\gammahat^{\iMahat\iMbhat\iSahat\iSbhat}
&=
{\Jaabb}\, J^\iSahat_\iSa\, J^\iSbhat_\iSb\, \gamma^{\iMa\iMb\iSa \iSb}
\label{QP_gamma_chage_coords_munu}
\\
\gammahat^{\iMahat\iMbhat\iSahat\zerohat}
&=
{\Jaabb}\, J^\iSahat_\iSa\, \gamma^{\iMa\iMb\iSa0}
+
\big(
{\Jaabb}\,J^\zerohat_\iSb \, J^\iSahat_\iSa\,
\gamma^{\iMa\iMb\iSa\iSb}\big)\dot{}
-
\tfrac12
\big(
\JJ^\iSahat_{\iSa\iSb}\,{\Jaabb}
+
J^\iSahat_\iSa\, \partx_\iSb {\Jaabb}
+
J^\iSahat_\iSb\, \partx_\iSa {\Jaabb}
\big)
\gamma^{\iMa\iMb\iSa\iSb}
\label{ChangeCoords_gamma_abmu0}
\\
\gammahat^{\iMahat\iMbhat\Ohat\Ohat}
&=
\Jaabb\,\gamma^{\iMa \iMb 00}
+
\Jaabb\,J^\Ohat_{\iSc} \,\dot\gamma^{\iMa\iMb\iSa0}
+
\big(
(\Jaabb\ \JJ^\Ohat_{\iSc})\dot{}
-
\partx_{\iSc} {\Jaabb}
\big)
\gamma^{\iMa \iMb \iSc 0}
\nonumber
\\&\quad
+
\tfrac12\big((\Jaabb\ J^\Ohat_{\iSd} \, J^\Ohat_{\iSc})\,
\gamma^{\iMa \iMb \iSc \iSd}\big)\ddot{}
-
\big(\big(
\tfrac12
\JJ^\Ohat_{\iSc\iSd}\,{\Jaabb}+
J^\Ohat_{\iSd}\, \partx_{\iSc} {\Jaabb}
\big)
\gamma^{\iMa \iMb \iSc \iSd}\big)\dot{}
+ (\tfrac12\partx_{\iSc}\partx_{\iSd}{\Jaabb})\, \gamma^{\iMa \iMb \iSc \iSd}
\label{QP_gamma_chage_coords_00}\end{aligned}$$ where $$\begin{aligned}
\JJ^{\iMahat}_{\iMb\iMc}
=
\partial_{\iMb} J^{\iMahat}_{\iMc}
=
\pqfrac{\xhat^{\iMahat}}{x^\iMb}{x^\iMc}
\label{QP_def_JJ}\end{aligned}$$ Although this may be considered more complicated than (\[QP\_Zeta\_Change\_Coords\]) it does not involve any integrals. We have assumed that $\sigma$ and $\hat\sigma$ parameterise the same points on the worldline $C$. Thus on the worldline $J^\iMahat_0=\delta^\iMahat_0$. However this does not imply $\JJ^{\iMahat}_{\iMb 0}=0$.
The static semi-quadrupole and the free components {#ch_Q_free}
--------------------------------------------------
To get an intuition about the free components, consider the dynamic equations (\[QP\_DTeqn\_a000\])-(\[QP\_DTeqn\_alg\]) on a flat Minkowski background with Cartesian coordinates $(t=z^0,z^1,z^2,z^3)=(t,\Vz)$ and with the worldline at $\Vz=\Vzero$. Thus we can set $t=\sigma$ so that $C^0(t)=t$ and $C^\iSa(t)=0$. The dynamic equations (\[QP\_DTeqn\_a000\])-(\[QP\_DTeqn\_alg\]) become $$\begin{aligned}
\dot\gamma^{\iMa000} &= 0
\label{Q_free_gdot_a000}
\\
\dot\gamma^{\iMa0\iSa0} &= -\gamma^{\iMa\iSa00}
\label{Q_free_gdot_a0mu0}
\\
\dot\gamma^{\iMa0\iSb\iSa} &= -2\gamma^{\iMa(\iSa\iSb)0}
\label{Q_free_gdot_a0munu}
\\
\gamma^{\iMa(\iSa\iSb\rho)} &=0
\label{Q_free_gdot_amunurho}\end{aligned}$$ As a further simplification, consider only the semi-quadrupole. This is when $$\begin{aligned}
\gamma^{\iMa\iSa\iSb\rho}=0
\label{Q_free_semi}\end{aligned}$$ According to table \[tab\_number\_components\] there should be 22 ODE components and 6 free components. This arises since (\[Q\_free\_semi\]) implies $\gamma^{\iSa0\iSb\rho}=0$ which kills all but 6 of the ODEs in (\[Q\_free\_gdot\_a0munu\]). Also see section \[ch\_Semi\_Q\] for full details.
The general solution is given by $$\begin{gathered}
\gamma^{0000} = m,\quad
\gamma^{\iSa000} = P^\iSa,\quad
\gamma^{00\iSa0} = X^\iSa-t\, P^\iSa ,\quad
\\
\gamma^{00\iSb\iSa} = \kappa^{\iSb\iSa}(t),\quad
\gamma^{\iSb0\iSa0} = S^{\iSb\iSa}-\tfrac12\dot{\kappa}^{\iSb\iSa}(t) ,\quad
\gamma^{\iSb\iSa00} = \tfrac12\ddot{\kappa}^{\iSb\iSa}(t),\quad
\gamma^{\rho\iSb\iSa0}=0
\end{gathered}
\label{Q_free_semi_soln}$$ where the 10 constants are $m,P^\iSa,X^\iSa,S^{\iSa\iSb}$ with $S^{\iSa\iSb}+S^{\iSb\iSa}=0$, and the six free components are $\kappa^{\iSb\iSa}(t)$ with $\kappa^{\iSb\iSa}(t)=\kappa^{\iSa\iSb}(t)$. Here we interpret $m$ as the total mass, $P^\iSa$ as the momentum and $S^{\iSb\iSa}$ as the spin. The six free components $\kappa^{\iSa\iSb}(t)$ are the moments of inertia. Since there are 22 ODEs there should be 22 constants of integration. As well as the 10 already given, the remaining 12 are the six initial conditions $\kappa^{\iSa\iSb}(0)$ and the six $\dot\kappa^{\iSa\iSb}(0)$.
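The claim that (\[Q\_free\_semi\_soln\]) solves (\[Q\_free\_gdot\_a000\])-(\[Q\_free\_gdot\_a0munu\]) can be verified by direct substitution. The sympy sketch below (our illustration; the component encoding and names are ours) imposes the index symmetries of $\gamma^{\iMa\iMb\iMc\iMd}$ and checks all three families of ODEs.

```python
import sympy as sp

t = sp.symbols('t')
m = sp.Symbol('m')
P = {mu: sp.Symbol(f'P{mu}') for mu in (1, 2, 3)}
X = {mu: sp.Symbol(f'X{mu}') for mu in (1, 2, 3)}
Sc = {(a, b): sp.Symbol(f'S{a}{b}') for a in (1, 2, 3) for b in (1, 2, 3) if a < b}

def S(a, b):
    """Constant antisymmetric spin S^{ab}."""
    if a == b:
        return sp.Integer(0)
    return Sc[(a, b)] if a < b else -Sc[(b, a)]

def kap(a, b):
    """Symmetric free components kappa^{ab}(t)."""
    a, b = min(a, b), max(a, b)
    return sp.Function(f'kappa{a}{b}')(t)

def gamma(a, b, c, d):
    """Static semi-quadrupole solution, using symmetry in (a,b) and in (c,d)."""
    (a, b), (c, d) = sorted((a, b)), sorted((c, d))
    if (a, b) == (0, 0):
        if (c, d) == (0, 0):
            return m                       # gamma^{0000} = m
        if c == 0:
            return X[d] - t*P[d]           # gamma^{00 mu 0} = X - t P
        return kap(c, d)                   # gamma^{00 mu nu} = kappa(t)
    if a == 0:                             # first pair (0, spatial)
        if (c, d) == (0, 0):
            return P[b]                    # gamma^{mu 000} = P
        if c == 0:
            return S(b, d) - kap(b, d).diff(t)/2
        return sp.Integer(0)               # semi-quadrupole condition
    if (c, d) == (0, 0):
        return kap(a, b).diff(t, 2)/2      # gamma^{mu nu 00}
    return sp.Integer(0)                   # gamma^{rho nu mu 0} = 0

spatial = (1, 2, 3)
ok = all(sp.diff(gamma(a, 0, 0, 0), t) == 0 for a in range(4))
ok &= all(sp.expand(sp.diff(gamma(a, 0, mu, 0), t) + gamma(a, mu, 0, 0)) == 0
          for a in range(4) for mu in spatial)
ok &= all(sp.expand(sp.diff(gamma(a, 0, nu, mu), t)
                    + gamma(a, mu, nu, 0) + gamma(a, nu, mu, 0)) == 0
          for a in range(4) for mu in spatial for nu in spatial)
print(ok)   # True
```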
Consider the components of $T^{\iMa\iMb}$ as arising from squeezing a regular stress-energy tensor $\TReg^{\iMa\iMb}(t,\Vz)$ as in section \[ch\_Ellis\_Squz\]. Thus $$\begin{aligned}
\gamma^{\iMa\iMb00} = \int_{\Real^3} \TReg^{\iMa\iMb}(t,\Vz) d^3\Vz,\qquad
\gamma^{\iMa\iMb\iSa0} = \int_{\Real^3} \TReg^{\iMa\iMb}(t,\Vz)\,z^\iSa\ d^3\Vz,\qquad
\gamma^{\iMa\iMb\iSa\iSb} = \int_{\Real^3} \TReg^{\iMa\iMb}(t,\Vz)\,z^\iSa\,z^\iSb\ d^3\Vz,\qquad
\label{Q_free_Squz_componets}\end{aligned}$$ Comparing (\[Q\_free\_semi\_soln\]) and (\[Q\_free\_Squz\_componets\]) we see $$\begin{gathered}
m = \int_{\Real^3} \TReg^{00}(t,\Vz) d^3\Vz,\qquad
P^\iSa = \int_{\Real^3} \TReg^{\iSa0}(t,\Vz) d^3\Vz,\qquad
X^\iSa = t\,P^\iSa + \int_{\Real^3} \TReg^{00}(t,\Vz) z^\iSa\, d^3\Vz,
\\
S^{\iSb\iSa} = \int_{\Real^3}
z^{\lsquare\iSa}\,\TReg^{\iSb\rsquare 0}(t,\Vz) d^3\Vz,\qquad
\kappa^{\iSa\iSb} =
\int_{\Real^3}
z^{\iSa}\,z^{\iSb}\,\TReg^{00}(t,\Vz) d^3\Vz,\qquad
\label{Q_free_Squz_mPXkS}
\end{gathered}$$ For example let $P^\iSa=0$ and $S^{\iSa\iSb}=0$ then $$\begin{aligned}
m = \int_{\Real^3} \TReg^{00}(t,\Vz) \, d^3\Vz,\qquad
\kappa^{\iSa\iSb}(t) = \int_{\Real^3} z^\iSa\,z^\iSb\,\TReg^{00}(t,\Vz) \,d^3\Vz
\label{Q_free_no_p}\end{aligned}$$ Since $\kappa^{\iSa\iSb}(t)$ are free components we can choose any $\TReg^{\iMa\iMb}(t,\Vz)$ we like, so long as its total integral is $m$ and it is sufficiently symmetric that $P^\iSa=0$ and $S^{\iSa\iSb}=0$ hold, for example if $\TReg^{\iMa\iMb}(t,\Vz)$ is symmetric about the three directions $z^\iSa$. This explains why we can choose to have a distribution of matter which separates and then coalesces as in figure \[fig\_intro\_grav\_quad\_free\].
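To make (\[Q\_free\_no\_p\]) concrete, one can take $\TReg^{00}$ to be a spherically symmetric Gaussian of width $s$ (our choice of profile, purely illustrative). Its moments give $P^\iSa=0$, vanishing off-diagonal moments, and $\kappa^{\iSa\iSb}=m\,s^2\,\delta^{\iSa\iSb}$:

```python
import sympy as sp

z1, z2, z3, s, m = sp.symbols('z1 z2 z3 s m', positive=True)

# spherically symmetric Gaussian profile for T^{00}, normalised to total mass m
rho = m*sp.exp(-(z1**2 + z2**2 + z3**2)/(2*s**2))/(2*sp.pi*s**2)**sp.Rational(3, 2)

def moment(f):
    """Moment: integral of f * T^{00} over R^3."""
    return sp.integrate(f*rho, (z1, -sp.oo, sp.oo),
                        (z2, -sp.oo, sp.oo), (z3, -sp.oo, sp.oo))

print(sp.simplify(moment(1)))        # total mass: m
print(moment(z1))                    # first moment: P^1 = 0 by symmetry
print(moment(z1*z2))                 # off-diagonal kappa^{12} = 0
print(sp.simplify(moment(z1**2)))    # kappa^{11} = m*s**2
```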
Conserved quantities {#ch_Q_Cons}
--------------------
Recall that a Killing vector (\[Q\_Cons\_symm\_Kill\]) leads to a conserved quantity in the dipole case. The same is true for the quadrupole. In an adapted coordinate system $(\sigma,z^1,z^2,z^3)$ the conserved quantity $\Conserv_\Kill$ is given by $$\begin{aligned}
\Conserv_\Kill
&=
\gamma^{\iMa000} \Kill_{\iMa} -
\gamma^{\iMa0\iSa0} \partz_\iSa \Kill_{\iMa} +
\tfrac12
\gamma^{\iMa0\iSa\iSb} \partz_\iSa\partz_\iSb \Kill_{\iMa}
\label{Q_Cons_Q}\end{aligned}$$
It is worth exploring the conserved quantities on the static semi-quadrupole given by (\[Q\_free\_semi\_soln\]). In Minkowski spacetime there are 10 Killing vectors.
Mass or Energy: for $\Kill_0=1$, $\Kill_\iSa=0$ we have $\Conserv_\Kill=m$.
Momentum: for $\Kill_0=0$, and for some $\iSa$, $\Kill_\iSa=1$ and $\Kill_\iSb=0$ for $\iSb\ne\iSa$ then $\Conserv_\Kill=p_\iSa$.
Angular momentum and spin: let $\Kill_0=0$, $\Kill_1=z^2$, $\Kill_2=-z^1$ and $\Kill_3=0$. We have $$\begin{aligned}
\Conserv_\Kill
&=
\gamma^{1000} \Kill_1 +
\gamma^{2000} \Kill_2 +
\gamma^{2010} \partz_1 \Kill_2 +
\gamma^{1020} \partz_2 \Kill_1
\\&=
p^1\,z^2 - p^2\,z^1
+
\big(S^{12}-\dot{\kappa}^{12}(t)\big)
-
\big(S^{21}-\dot{\kappa}^{21}(t)\big)
=
S^{12}\end{aligned}$$
Boost: Let $\Kill_0=z^1$, $\Kill_1=t+t_0$, $\Kill_2=0$ and $\Kill_3=0$ for some fixed $t_0$. Then $$\begin{aligned}
\Conserv_\Kill
&=
\gamma^{0000} \Kill_0 +
\gamma^{1000} \Kill_1 +
\gamma^{0010} \partz_1 \Kill_0
=
m\,z^1
+
P^1\,(t+t_0)
+
(X^1-t\,P^1)
=
X^1+t_0\,P^1\end{aligned}$$
Thus the 10 Killing symmetries of Minkowski spacetime correspond directly to the 10 constants of the solution to the static semi-quadrupole. This also gives a new interpretation to the three somewhat obscure conserved quantities corresponding to the three boosts: namely, for the boost about the point $\Vz=\Vzero$ and $t=t_0$, $\Conserv_\Kill$ is the displacement vector at the time $t_0$.
Non-divergent dust model of a quadrupole and the corresponding constitutive relations. {#ch_Dust}
======================================================================================
The familiar dust model is given in terms of a scalar density $\varrho$ and a vector field $U^{\iMa}$ with $g_{\iMa\iMb}\,U^{\iMa}\,U^\iMb=-1$. The stress-energy tensor density is given by $$\begin{aligned}
\TReg^{\iMa\iMb} = \varrho\, U^{\iMa}\,U^\iMb\,\mu
\label{Dust_SE_tensor}\end{aligned}$$ where $\mu=\sqrt{-\det(g_{\iMa\iMb})}$. Then the divergenceless condition implies that the integral curves of $U^{\iMa}$ are geodesics $$\begin{aligned}
U^{\iMa}\,\nabla_{\iMa}\,U^\iMb = 0
\label{Dust_geodesic}\end{aligned}$$ and the flow $\varrho$ is conserved $$\begin{aligned}
U^{\iMa} (\partx_{\iMa} \varrho) &= 0
\label{Dust_conserved}\end{aligned}$$ Furthermore let us assume that the dust is non divergent, so that it preserves the measure, i.e. $$\begin{aligned}
U^{\iMa}\,\partx_{\iMa} \mu
=0
\label{Dust_non_div}\end{aligned}$$
In order to create a squeezed tensor $\TReg_\varepsilon^{\iMa\iMb}$ from $\TReg^{\iMa\iMb}$ we need to choose a coordinate system. It is natural to choose the coordinates adapted to $U^{\iMa}$ so that $U^{\iMa}=\delta_0^{\iMa}$. This gives $\dot\varrho=0$ so that we can write $\varrho=\varrho(\Vz)$. Likewise we have $\mu=\mu(\Vz)$. Hence $$\begin{aligned}
\TReg^{\iMa\iMb}(\sigma,\Vz)
&=
\varrho(\Vz)\, \delta^{\iMa}_0\,\delta^\iMb_0\, \mu(\Vz)
\label{Dust_Tab_adap}\end{aligned}$$ We require that $\varrho(\Vz)=0$ for large $\Vz$. From (\[Ellis\_Squz\_moments\]) we see $$\begin{aligned}
\gamma^{\iMa\iMb00} (\sigma)
&=
\delta_0^{\iMa}\,\delta_0^\iMb
\int_{\Real^3} d^3\Vz\
\varrho(\Vz)\,\mu(\Vz)
,
\\
\gamma^{\iMa\iMb\iSa0} (\sigma)
&=
-\delta_0^{\iMa}\,\delta_0^\iMb \int_{\Real^3} d^3\Vz\
z^\iSa\,\varrho(\Vz)\,\mu(\Vz)
,\quad
\\
\gamma^{\iMa\iMb\iSa\iSb} (\sigma)
&=
\delta_0^{\iMa}\,\delta_0^\iMb \int_{\Real^3} d^3\Vz\
z^\iSa\,z^\iSb\,\varrho(\Vz)\,\mu(\Vz)
\end{aligned}
\label{Dust_Squz_moments}$$ Since both $\varrho$ and $\mu$ are independent of $\sigma$ we have the dynamic equations $$\begin{aligned}
\dot\gamma^{\iMa\iMb00} = 0,\quad
\dot\gamma^{\iMa\iMb\iSa0} = 0\quadand
\dot\gamma^{\iMa\iMb\iSa\iSb} = 0
\label{Dust_gamma_dot}\end{aligned}$$ These are consistent with the dynamic equations (\[QP\_DTeqn\_a000\])-(\[QP\_DTeqn\_a0mn\]) since in the adapted coordinate system the geodesic equation becomes $\Gamma^{\iMa}_{00}=0$.
Equation (\[Dust\_gamma\_dot\]) completely defines the dynamics. However, our goal is to use (\[Dust\_gamma\_dot\]) to inspire the constitutive relations in the case when we are not modelling a non-divergent dust, so that (\[QP\_DTeqn\_a000\])-(\[QP\_DTeqn\_a0mn\]) hold. One option is to require that some of the free components are in fact constants. This is challenging because we need to be consistent with (\[QP\_DTeqn\_a000\])-(\[QP\_DTeqn\_a0mn\]).
As a simple example, consider the static semi-quadrupole given by (\[Q\_free\_semi\_soln\]). The non-divergent dust constitutive relations would make $\kappa^{\iSa\iSb}(t)$ a constant. It would also make $P^\iSa=0$. This replaces (\[Q\_free\_semi\_soln\]) with $$\begin{gathered}
\gamma^{0000} = m,\quad
\gamma^{\iSa000} = 0,\quad
\gamma^{00\iSa0} = X^\iSa ,\quad
\\
\gamma^{00\iSb\iSa} = \kappa^{\iSb\iSa},\quad
\gamma^{\iSb0\iSa0} = S^{\iSb\iSa},\quad
\gamma^{\iSb\iSa00} = 0,\quad
\gamma^{\iSc\iSb\iSa0}=0
\end{gathered}
\label{Dust_Q_free_semi_soln}$$
The coordinate free and metric free approach to quadrupoles. {#ch_CoFree}
============================================================
In [@gratus2018correct] the authors present a coordinate free definition of submanifold distributions, also known as deRham currents, in terms of the deRham push forward and the actions of the standard operations.
Since we are using coordinate free notation we write a vector field as $V\in\Gamma T\Mman$. Here $T\Mman$ is the tangent bundle of spacetime and $\Gamma T\Mman$ refers to sections of the tangent bundle. For a vector at a point $p\in\Mman$ we write $V\in T_p\Mman$. Vector fields and vectors at a point are differential operators and we write the action of a vector on a scalar field as $V\VAct{f}$. The bundle of $p$–forms is written $\Lambda^p\Mman$ so a $p$–form field is written $\alpha\in\Gamma\Lambda^p\Mman$.
Given a coordinate system $(x^0,\ldots,x^3)$ then we write $V=V^{\iMa}\partx_{\iMa}$. Here $\partx_{\iMa}$ are basis vectors and $V^{\iMa}$ are indexed scalar fields. For 1–forms $\alpha\in\Gamma\Lambda^1\Mman$ we can write $\alpha=\alpha_{\iMa}\,dx^{\iMa}$ where again $\alpha_{\iMa}$ are indexed scalar fields.
The two types of $\nabla$ {#ch_Nablas}
-------------------------
In the literature on general relativity and differential geometry, there are two conventions used when referring to the covariant derivative. One is typically used with index tensor notation, the other with coordinate free notation. Usually one simply chooses one convention and presents all the results in it. We have done this up to now, using index notation. However in this section we wish to present a coordinate free definition of all the objects. As a result it is necessary to use both definitions of the covariant derivative, sometimes in the same expression. So to avoid confusion, from now on we introduce two different symbols.
The covariant derivative which we have used up to this point and which “knows” about the index of an object we write $\nablaInd_{\iMa}$. Acting on the indexed scalar fields $V^{\iMa}$ then $$\begin{aligned}
\nablaInd_{\iMa} V^\iMb = \partial_{\iMa}(V^\iMb) + V^\iMc\,\Gamma^\iMb_{\iMa\iMc}
\label{Nablas_def_NablaInd}\end{aligned}$$ I.e. the Christoffel symbols are tied to the indices. By contrast the coordinate free covariant derivative is written $\nablaDG_V$ where $V\in\Gamma T\Mman$. In this case the Christoffel symbol satisfies $$\begin{aligned}
\Gamma^{\iMa}_{\iMb\iMc}\, \partial_{\iMa} = \nablaDG_{\partial_\iMb} \partial_\iMc
\label{Nablas_def_NablaDG_Chris}\end{aligned}$$ This covariant derivative knows about the tensor structure, but not the indices. Thus $$\begin{aligned}
\nablaDG_U V^{\iMa} = U\VAct{V^{\iMa}}
\label{Nablas_NablaDG_scalar}\end{aligned}$$ The two covariant derivatives are related via the following $$\begin{aligned}
\nablaDG_U (V)
&=
U^\iMb (\nablaInd_\iMb V^{\iMa})\partial_{\iMa}
\end{aligned}
\label{Nablas_relate_defs}$$ since $$\begin{aligned}
\nablaDG_U (V)
&=
\nablaDG_U (V^{\iMa}\,\partial_{\iMa})
=
U\VAct{V^{\iMa}}\partial_{\iMa} + U^\iMb\,V^{\iMa}
\nablaDG_{\partial_\iMb}\partial_{\iMa}
=
U\VAct{V^{\iMa}}\partial_{\iMa} + U^\iMb\,V^{\iMa}
\Gamma^\iMc_{\iMb\iMa}\partial_\iMc
\\&=
U^\iMb\big(\partial_\iMb\VAct{V^{\iMa}}
+ V^\iMc\Gamma^{\iMa}_{\iMb\iMc}\big)\partial_{\iMa}
=
U^\iMb (\nablaInd_\iMb V^{\iMa})\partial_{\iMa}\end{aligned}$$
In the coordinate definition of the Dixon quadrupole, setting $k=2$ in (\[Intro\_Tab\_Dixon\_Multi\]), we see there is an operator $\nablaInd_{\iMa}\nablaInd_\iMb$. This is tensorial with respect to the indices $\iMa$ and $\iMb$. To give a coordinate free definition, we define, for any tensor $S$, $$\begin{aligned}
\nablaDG^2_{U,V} S
&=
\nablaDG_U\,\nablaDG_V S
-
\nablaDG_{\nablaDG_U V} S
\label{Nablas_nable^2}\end{aligned}$$ This definition can be extended to arbitrary order. This is clearly tensorial in $U$, but is also tensorial (also known as f-linear) with respect to $V$. Thus $$\begin{aligned}
\nablaDG^2_{(fU),V} S
=
\nablaDG^2_{U,(fV)} S
=
f\,\nablaDG^2_{U,V} S
\label{Nablas_nable^2_lin}\end{aligned}$$
The relationship between $\nablaDG^2_{U,V}$ and $\nablaInd_{\iMa}\nablaInd_\iMb$ is given by $$\begin{aligned}
\nablaDG^2_{U,V} W
=
U^\iMb\,V^\iMc\,
\Big(\nablaInd_\iMb\,\nablaInd_\iMc W^{\iMa}\Big) \partial_{\iMa}
\label{Nablas_connenting}\end{aligned}$$ for any vector $W^{\iMa}$.
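The f-linearity (\[Nablas\_nable\^2\_lin\]) can be verified concretely. The sympy sketch below (our illustration, using the flat 2D Euclidean metric in polar coordinates and arbitrary sample fields; all names are ours) implements $\nablaInd$ and $\nablaDG^2_{U,V}$ directly from their definitions and checks that $\nablaDG^2_{U,(fV)}W = f\,\nablaDG^2_{U,V}W$.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = (r, th)

# Christoffel symbols of the flat 2D Euclidean metric in polar coordinates
Gamma = [[[0]*2 for _ in range(2)] for _ in range(2)]   # Gamma[a][b][c]
Gamma[0][1][1] = -r          # Gamma^r_{theta theta}
Gamma[1][0][1] = 1/r         # Gamma^theta_{r theta}
Gamma[1][1][0] = 1/r         # Gamma^theta_{theta r}

def nabla(V):
    """nabla(V)[b][a] = nablaInd_b V^a for a vector field V = (V^r, V^theta)."""
    return [[sp.diff(V[a], x[b]) + sum(Gamma[a][b][c]*V[c] for c in range(2))
             for a in range(2)] for b in range(2)]

def nabla2(U, V, W):
    """nablaDG^2_{U,V} W = nablaDG_U nablaDG_V W - nablaDG_{nablaDG_U V} W."""
    nW = nabla(W)
    VW = [sum(V[b]*nW[b][a] for b in range(2)) for a in range(2)]   # nablaDG_V W
    t1 = [sum(U[b]*nabla(VW)[b][a] for b in range(2)) for a in range(2)]
    UV = [sum(U[b]*nabla(V)[b][a] for b in range(2)) for a in range(2)]
    t2 = [sum(UV[b]*nW[b][a] for b in range(2)) for a in range(2)]
    return [sp.simplify(t1[a] - t2[a]) for a in range(2)]

# arbitrary sample fields and scaling function
U = (sp.sin(th), r)
V = (r**2, sp.cos(th))
W = (r*th, 1/r)
f = r*sp.exp(th)

lhs = nabla2(U, tuple(f*v for v in V), W)
rhs = [sp.simplify(f*c) for c in nabla2(U, V, W)]
print([sp.simplify(a - b) for a, b in zip(lhs, rhs)])   # [0, 0]
```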
Defining distributional forms {#ch_CoFree_Forms}
-----------------------------
Following Schwartz, we define a distributional $p$–form by how it acts on a test $(4-p)$–form $\TarbTen\in\Gamma\Lambda^{4-p}M$, i.e. a $(4-p)$–form with compact support [@gratus2018correct]. Given a smooth $p$–form $\alpha\in\Gamma\Lambda^pM$, we construct a regular distribution $\alpha^D$ via $$\begin{aligned}
\alpha^D[\TarbTen]=\int_M \TarbTen\wedge \alpha
\label{Defs_def_alpha_D}\end{aligned}$$ The definition of the wedge product, Lie derivatives, internal contraction and exterior derivatives on distributions are defined to be consistent with (\[Defs\_def\_alpha\_D\]). Thus for a distribution $\Psi$ we set $$\begin{aligned}
\begin{gathered}
(\Psi_1+\Psi_2)[\TarbTen]
=
\Psi_1[\TarbTen]+\Psi_2[\TarbTen]
\,,\quad
(\beta\wedge\Psi)[\TarbTen]=\Psi[\TarbTen\wedge\beta]
\,,\quad
(d\Psi)[\TarbTen]=(-1)^{(3-p)}\Psi[d\TarbTen]
\,,
\\
(i_v\Psi)[\TarbTen]=(-1)^{(3-p)}\Psi[i_v\TarbTen]
\quadand
(L_v\Psi)[\TarbTen]=-\Psi[L_v\TarbTen]
\label{Defs_opps_on_Psi}
\end{gathered}\end{aligned}$$ for $v\in\Gamma T\Mman$. Let $C:\Interval\to\Mman$ be a closed embedding. The push forward with respect to $C$ of a $p$–form $\alpha\in\Gamma\Lambda^p\Interval$ is given by the distribution $$\begin{aligned}
\big(C_\PF(\alpha)\big)[\TarbTen] = \int_\Interval C^\star(\TarbTen)\wedge\alpha
\label{CoFree_def_C_pf}\end{aligned}$$ where $\TarbTen$ is a test form of degree $0$ or $1$. This has degree $\deg\big(C_\PF(\alpha)\big)=3+\deg(\alpha)$. A general form distribution is then given by acting (\[Defs\_opps\_on\_Psi\]) on $C_\PF(\alpha)$.
The order of a distribution $\Psi$ is defined as follows. If $$\begin{aligned}
\Psi[\lambda^{k+1} \TarbTen]=0
\quadtext{for all}
&\lambda\in\Gamma\Lambda^0M\text{ and }
\TarbTen\in\Gamma_0\Lambda^1M
\quadtext{such that}
C^\star(\lambda)=0
\end{aligned}
\label{Defs_order}$$ then we say that the order of $\Psi$ is at most $k$. Since we impose that $\lambda$ vanishes on the image of $C$, this implies that we need to differentiate the argument $\lambda^{k+1} \TarbTen$ at least $k+1$ times for $\Psi[\lambda^{k+1} \TarbTen]\ne0$. We say dipoles have order at most one and quadrupoles have order at most two. Therefore the terms in a dipole have at most one derivative, and those in a quadrupole at most two. This is consistent with the fact that the set of quadrupoles include all dipoles.
The deRham push forward is compatible with the exterior derivative $$\begin{aligned}
d\,C_\PF(\alpha)=C_\PF(d\alpha)
\label{CoFree_d_C_PF}\end{aligned}$$ and the internal contraction for tangential fields $$\begin{aligned}
i_w\,C_\PF(\alpha)
=
C_\PF(i_v\,\alpha)
\qquadtext{where}
w\in\Gamma T\Mman,\
v\in\Gamma T\Interval,\
C_\star(v|_\sigma) = w|_{C(\sigma)}
\quadtext{for all} \sigma\in\Interval
\label{CoFree_iv_C_PF}\end{aligned}$$ These enable one to manipulate distributions, for example by finding the change of coordinates, without having to act on the test tensors.
The stress-energy 3–forms {#ch_CoFree_SE}
-------------------------
In this section we exploit the fact that the stress-energy forms are 3–forms and have a similar structure to the electromagnetic current 3–form.
The stress-energy form $\tau$ is a map which takes a 1–form $\alpha\in\Gamma\Lambda^1\Mman$ and gives a deRham current 3–form $\tau_\alpha$ over the worldline $C$. $$\begin{aligned}
\alpha \mapsto \tau_\alpha
\label{CoFree_tau_map}\end{aligned}$$ The map (\[CoFree\_tau\_map\]) is not tensorial but does satisfy $$\begin{aligned}
\tau_{(\alpha+\beta)} = \tau_\alpha + \tau_\beta
\qquadand
\tau_{(f\alpha)}[\ToneTen] = \tau_{\alpha}[f\ToneTen]
\label{CoFree_tau_lin}\end{aligned}$$ for any test 1–form $\ToneTen$.
Observe that the stress-energy 3–forms take a 1–form $\alpha$ to give a 3–form. This is contrary to the usual definition where we take a vector $v$ to give the 3–form $\tau_v$. The advantage of (\[CoFree\_tau\_map\]) is that we do not need a metric to define the stress-energy 3–forms or the symmetry and divergenceless conditions (\[CoFree\_SE\_tau\_symm\]) and (\[CoFree\_SE\_Dtau=0\]) below. This is useful if we wish to consider connections which are not metric compatible.
Using $\tau_\alpha$ we define a tensor valued distribution $\tau$ which takes a tensor of type (0,2) as an argument. This is defined as $$\begin{aligned}
\tau[\ToneTen\otimes\alpha]
=
\tau_\alpha[\ToneTen]
\label{CoFree_SE_def_tau}\end{aligned}$$ The stress-energy tensor is symmetric (\[Intro\_Tab\_Sym\]) and divergenceless (\[Intro\_Tab\_Div\_zero\]). The symmetry condition is given by $$\begin{aligned}
\tau[\beta\otimes\alpha]=\tau[\alpha\otimes\beta]
\label{CoFree_SE_tau_symm}\end{aligned}$$ and the divergenceless condition is given by $$\begin{aligned}
D\tau=0
\label{CoFree_SE_Dtau=0}\end{aligned}$$ where $$\begin{aligned}
(D\tau)[\ToneTen] = -\tau[D\ToneTen]
\label{CoFree_SE_def_Dtau}\end{aligned}$$ and $$\begin{aligned}
(D\ToneTen)(U,V) = (\nablaDG_V\ToneTen)(U)
\label{CoFree_SE_def_Dphi}\end{aligned}$$
Using a coordinate system, we can convert the map (\[CoFree\_tau\_map\]) into indexed 3–forms via $$\begin{aligned}
\tau^{\iMa} = \tau_{dx^{\iMa}}
\label{CoFree_def_indexed_tau}\end{aligned}$$ The relationship between the stress-energy forms and the tensor density $T^{\iMa\iMb}$ is given by $$\begin{aligned}
\int_\Interval T^{\iMa\iMb}\,\TtwoTen_{\iMa\iMb}\, d^4x
&=
\tau^{\iMa}[\TtwoTen_{\iMa\iMb}\,dx^\iMb]
\label{CoFree_def_Tab}\end{aligned}$$ Using this coordinate system, (\[CoFree\_SE\_tau\_symm\]) becomes $$\begin{aligned}
dx^{\iMa}\wedge \tau^\iMb = dx^\iMb\wedge \tau^{\iMa}
\label{CoFree_tau^a_sym}\end{aligned}$$ and (\[CoFree\_SE\_Dtau=0\]) becomes $$\begin{aligned}
d\tau^{\iMa} + \Gamma^{\iMa}_{\iMb\iMc}\,dx^\iMc \wedge \tau^\iMb
=
0
\label{CoFree_def_D_coords}\end{aligned}$$
Killing forms and conservation {#ch_CoFree_Kill}
------------------------------
Killing forms (\[Q\_Cons\_symm\_Kill\]) can be written in a coordinate free way. The 1–form $\alpha\in\Gamma\Lambda^1\Mman$ is Killing if $$\begin{aligned}
(\nablaDG_V\alpha)(V)=0
\label{CoFree_Kill_def_Kill}\end{aligned}$$ for all vectors $V\in\Gamma T\Mman$. From (\[CoFree\_def\_D\_coords\]) and (\[CoFree\_tau\^a\_sym\]) we have $$\begin{aligned}
d\tau_\alpha
&=
d(\alpha_\iMa\,\tau^\iMa)
=
d \alpha_\iMa\wedge \tau^\iMa
+ \alpha_\iMa\wedge d\tau^\iMa
=
(\partial_\iMc \alpha_\iMa)\,dx^\iMc\wedge \tau^\iMa
- \Gamma^{\iMa}_{\iMb\iMc} \alpha_\iMa \,dx^\iMc\wedge \tau^\iMb
=
\nablaInd_\iMc\alpha_\iMb\,dx^\iMc\wedge \tau^\iMb
\\&=
\tfrac12
(\nablaInd_\iMc\alpha_\iMb+\nablaInd_\iMb\alpha_\iMc)\,dx^\iMc\wedge \tau^\iMb\end{aligned}$$ Hence if $\alpha\in\Gamma\Lambda^1\Mman$ is a Killing 1–form then from (\[Q\_Cons\_symm\_Kill\]) $d\tau_\alpha=0$. This gives an alternative method of proving (\[Q\_Cons\_Q\]).
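As a quick symbolic sanity check of the Killing condition used here (a hypothetical flat-space example, where the covariant derivative reduces to partial derivatives), the rotational 1–form $\alpha=-y\,dx+x\,dy$ on Euclidean $\Real^2$ has vanishing symmetrised derivative, which is the content of (\[CoFree\_Kill\_def\_Kill\]):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
alpha = [-y, x]   # rotational Killing 1-form on flat R^2

# In Cartesian coordinates on flat space the covariant derivative is the
# partial derivative, so the condition (nabla_V alpha)(V) = 0 for all V
# is equivalent to the symmetrised derivative vanishing.
for b in range(2):
    for c in range(2):
        sym = sp.diff(alpha[b], coords[c]) + sp.diff(alpha[c], coords[b])
        assert sp.simplify(sym) == 0

# A non-Killing 1-form, e.g. alpha = x dx, fails the same test:
bad = [x, sp.Integer(0)]
assert sp.diff(bad[0], x) + sp.diff(bad[0], x) != 0
```

This is only a coordinate illustration of (\[CoFree\_Kill\_def\_Kill\]); the argument in the text is coordinate free and holds for any torsion free connection.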
Defining and extraction of components {#ch_CoFree_Coords}
-------------------------------------
Using (\[QP\_Tab\]) and (\[CoFree\_def\_Tab\]) we deduce in an arbitrary coordinate system $$\begin{aligned}
\tau^{\iMa} = \tfrac12\, i_\iMb\, L_\iMc\, L_\iMd
C_\PF(\zeta^{\iMa\iMb\iMc\iMd} d\sigma)
\label{CoFree_tau_a_arb_coord}\end{aligned}$$ where $i_\iMb=i_{\partial_\iMb}$ and $L_\iMc=L_{\partial_\iMc}$. In an adapted coordinate system (\[Intro\_adapt\_coords\]) then (\[QP\_Tab\_gamma\]) implies $$\begin{aligned}
\tau^{\iMa}
&=
i_\iMb \, C_\PF(\gamma^{\iMa\iMb00}\,d\sigma)
+
i_\iMb \, L_\iSa\, C_\PF(\gamma^{\iMa\iMb0\iSa}\,d\sigma)
+
\tfrac12 i_\iMb \, L_\iSa\,L_\iSb\,
C_\PF(\gamma^{\iMa\iMb\iSa\iSb}\,d\sigma)
\label{CoFree_tau_a_adp_coord}\end{aligned}$$ As stated the advantage of using an adapted coordinate system is that the $\gamma^{\iMa\iMb\iMc\iMd}$ are unique. We can extract the values of the $\gamma^{\iMa\iMb\iMc\iMd}$ by acting on test forms. $$\begin{aligned}
\gamma^{\iMa\iMb00}(\sigma)
&=
\lim_{\epsilon\to 0} \tau^{\iMa}[dx^\iMb\,\bumpf_{\epsilon,\sigma}]
\,,\quad
\gamma^{\iMa\iMb0\iSa}(\sigma)
=
\lim_{\epsilon\to 0} \tau^{\iMa}[z^\iSa\,dx^\iMb\,\bumpf_{\epsilon,\sigma}]
\,,\quad
\gamma^{\iMa\iMb\iSa\iSb}(\sigma)
=
\lim_{\epsilon\to 0} \tau^{\iMa}[z^\iSa\,z^\iSb\,dx^\iMb\,\bumpf_{\epsilon,\sigma}]
\label{CoFree_extract_coords}\end{aligned}$$ where $$\begin{aligned}
\bumpf_{\epsilon,\sigma}(\sigma',z)
=
\epsilon^{-1}\,
\bump((\sigma-\sigma')/\epsilon)
\,
\bump\big((z^1)^2+(z^2)^2+(z^3)^2\big)\end{aligned}$$ and $\bump:\Real\to\Real$ is a bump function, i.e.\ a test function which is flat about $0$.
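A bump function of this kind can be constructed explicitly. The following sketch (an illustration only, not the specific $\bump$ assumed above) is identically $1$ on $|t|\le\tfrac12$, so it is flat about $0$, and identically $0$ for $|t|\ge1$:

```python
import math

def g(t: float) -> float:
    # smooth function vanishing to all orders at t = 0, zero for t <= 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def step(t: float) -> float:
    # smooth transition: identically 0 for t <= 0, identically 1 for t >= 1
    return g(t) / (g(t) + g(1.0 - t))

def bump(t: float) -> float:
    """Smooth bump: identically 1 on |t| <= 1/2 ('flat about 0'),
    identically 0 on |t| >= 1, smoothly interpolating in between."""
    return step(2.0 * (1.0 - abs(t)))
```

By construction the plateau guarantees that the limits in (\[CoFree\_extract\_coords\]) pick out the pointwise values of the $\gamma^{\iMa\iMb\iMc\iMd}$ at $\sigma$, since $\bumpf_{\epsilon,\sigma}$ concentrates on the worldline as $\epsilon\to0$.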
Semi-dipoles and semi-quadrupoles {#ch_Semi_Q}
---------------------------------
Having defined the quadrupoles in a coordinate free manner, one can identify properties which can be defined without reference to a coordinate system. In [@gratus2018correct] we defined the semi-dipole and semi-quadrupole electromagnetic 3–form. The semi-dipole corresponded to the purely electric quadrupole. One can likewise define the semi-dipole and semi-quadrupole stress-energy distributions. In this case we say that $\tau_\alpha$ is a semi-multipole of order at most $\ell$ if $$\begin{aligned}
\tau_\alpha[\lambda^{\ell} d\mu]=0
\quadtext{for all}&
\lambda,\mu\in\Gamma\Lambda^0M\quadtext{such that}
C^\star(\lambda)=C^\star(\mu)=0
\end{aligned}
\label{Defs_Elec_order}$$ We observe that the semi-dipole ($\ell=1$) corresponds to the case when the spin tensor is $S^{\iSb\iSa}=0$. The semi-quadrupole ($\ell=2$) does not have a natural interpretation, but is used as a quadrupole with fewer components.
When we apply this to the quadrupole (\[CoFree\_tau\_a\_adp\_coord\]) we see that the semi-quadrupole is given by $$\begin{aligned}
\tau^{\iMa}
&=
i_\iMb \, C_\PF(\gamma^{\iMa\iMb00}\,d\sigma)
+
i_\iMb \, L_\iSa\, C_\PF(\gamma^{\iMa\iMb0\iSa}\,d\sigma)
+
\tfrac12 L_\iSa\,L_\iSb\,
C_\PF(\gamma^{\iMa0\iSa\iSb})
\label{Semi_Q_tau_a_adp_coord}\end{aligned}$$ This gives 22 ODE components and 6 free components as indicated in table \[tab\_number\_components\]. We presented the general solution for the static semi-quadrupole in section \[ch\_Q\_free\].
The coordinate free definition of the Dixon split only using $\DixVec$ and the connection {#ch_CoFree_Dixon}
-----------------------------------------------------------------------------------------
We have defined the stress-energy distribution without reference to a coordinate system. When writing this in terms of coordinates (\[CoFree\_tau\_a\_arb\_coord\]) and (\[Ellis\_Squz\_moments\]) we see that this corresponds directly to the Ellis representation of the multipoles. Here we show how to perform the Dixon split (\[Intro\_Tab\_Dixon\_Split\]), which separates the multipoles into different orders with respect to a 1–form $\DixVec$ along the curve. We demonstrate this by separating the quadrupole into a pure Dixon quadrupole term, a pure Dixon dipole term and a monopole term; the pattern for higher orders is clear. The Dixon split (\[Intro\_Tab\_Dixon\_Split\]) requires defining $\tau_{(0)}$, $\tau_{(1)}$ and $\tau_{(2)}$ such that an arbitrary quadrupole decomposes as $$\begin{aligned}
\tau =
\tau_{(0)} +
\tau_{(1)} +
\tau_{(2)}
\label{CoFree_Dixon_split}\end{aligned}$$ Using (\[CoFree\_def\_Tab\]) we convert these into $T^{\iMa\iMb}_{(r)}$, so that $T^{\iMa\iMb}=T^{\iMa\iMb}_{(0)}+T^{\iMa\iMb}_{(1)}+T^{\iMa\iMb}_{(2)}$, where $$\begin{aligned}
\tau_{(0)} [\TtwoTen]
&=
\int_\Mman T^{\iMa\iMb}_{(0)} \ \TtwoTen_{\iMa\iMb} \ d^4x
=
\int_\Interval
\zetaMultiDixon^{\iMa\iMb}(\sigma)\,
\TtwoTen_{\iMa\iMb}(\sigma)\ d\sigma
\,,
\label{CoFree_Dixon_Tab_Mono}
\\
\tau_{(1)} [\TtwoTen]
&=
\int_\Mman T^{\iMa\iMb}_{(1)} \ \TtwoTen_{\iMa\iMb} \ d^4x
=
\int_\Interval
\zetaMultiDixon^{\iMa\iMb\iMc}(\sigma)\,
(\nablaInd_\iMc \TtwoTen_{\iMa\iMb})|_{C(\sigma)}\ d\sigma
\,,
\label{CoFree_Dixon_Tab_Dip}
\\
\tau_{(2)} [\TtwoTen]
&=
\int_\Mman T^{\iMa\iMb}_{(2)} \ \TtwoTen_{\iMa\iMb} \ d^4x
=
\int_\Interval
\zetaMultiDixon^{\iMa\iMb\iMc\iMd}(\sigma)\,
(\nablaInd_\iMc\nablaInd_\iMd \TtwoTen_{\iMa\iMb})|_{C(\sigma)}\ d\sigma
\label{CoFree_Dixon_Tab_Quad}\end{aligned}$$ The Dixon split is with respect to a 1–form, as opposed to a vector along $C$. This is in order to avoid requiring the metric. The one requirement is that the 1–form $\DixVec$ combined with the vector $\Cdot$ is nowhere zero. I.e. $$\begin{aligned}
\DixVec(\Cdot)\ne0
\label{CoFree_Dixon_not_orthog}\end{aligned}$$ In order to perform the Dixon split, it is necessary to define a radial vector field. We say that $R\in\Gamma TM$ is radial to second order with respect to $C$ and $\DixVec\in\Gamma\Lambda^1 M$ if for all $p=C(\sigma)$ $$\begin{aligned}
R|_{p} &= 0
,\qquad
(\nablaDG_V R)|_p = V|_p
\qquadand
\big(\nablaDG^2_{U,V} R\big)\big|_p = 0
\label{CoFree_Dixon_def_Rad}\end{aligned}$$ for all vectors $U,V\in T\Mman$ such that $N(V)=N(U)=0$. In appendix \[ch\_Apendx\_DixonSplit\] we express the components of $R$ with respect to a coordinate system, which is adapted both for $C$ and $\DixVec$.
Using this radial vector, the Dixon split (\[CoFree\_Dixon\_split\]) is given by $$\begin{aligned}
\tau_{(0)} [\TtwoTen]
&=
\tau[\TtwoTen-\nabla_R\TtwoTen + \tfrac12\nabla^2_{R,R} \TtwoTen]
\label{CoFree_Dixon_mono}
\\
\tau_{(1)} [\TtwoTen]
&=
\tau[\nabla_R\TtwoTen-\nabla^2_{R,R} \TtwoTen]
\label{CoFree_Dixon_dip}
\\
\tau_{(2)} [\TtwoTen]
&=
\tau[\tfrac12\nabla^2_{R,R} \TtwoTen]
\label{CoFree_Dixon_quad}\end{aligned}$$ where $\TtwoTen$ is an type (0,2) test tensor. The advantage of this definition is that one can now show how the Dixon components mix when one changes $\DixVec$.
Discussion and outlook. {#ch_Conclusion}
=======================
We have derived a number of key results about the distributional quadrupole stress-energy tensor, in particular the existence of the free components, which require additional constitutive relations to prescribe. An example of such a constitutive relation was given. We have also given the coordinate transformation of the quadrupole components, the conserved quantities in the presence of a Killing vector, a definition of semi-quadrupoles and a coordinate free definition of the Dixon split.
The understanding of the quadrupole stress-energy tensor distribution is important for the study of gravitational wave sources, as well as being interesting in its own right. Many features arise at the quadrupole level which were not present at the dipole level, in particular the non-tensorial nature of the components and the existence of free components. These free components imply that it is not possible to know everything about a quadrupole simply from the initial conditions. There is clearly much research that needs to be done to find appropriate constitutive relations to replace the free components with ODEs or algebraic relations. One would expect different constitutive relations for a gravitationally bound object such as two orbiting point masses, a non-gravitationally bound object such as a rotating asteroid, and an object where both gravitational and non-gravitational forces are important, such as a star. In section \[ch\_Dust\] we present only a very simple constitutive relation corresponding to a dust model. As presented this is only valid for a semi-quadrupole in Minkowski spacetime. With increasing sensitivity of gravitational wave astronomy one can hope to test the different constitutive relations using experimental data.
Although the observation of the need for constitutive relations for the quadrupole on a prescribed worldline is new, there are other cases where the need for constitutive relations has been observed. For example, in [@steinhoff2010multipolar] they are needed to determine how dipoles or quadrupoles affect the worldline. There are other situations where one can expect constitutive relations will be needed. In future work we intend to look at the dynamics of charged multipoles in an electromagnetic field. One would expect in this case that constitutive relations are also needed, especially since a dipole has nine components, but the electromagnetic current, which provides the force and torque, has only six components. These constitutive relations describe the differences between the charge distribution and the mass distribution in the dipole. The situation has an additional challenge in that the electromagnetic field blows up on the worldline. This poses another question that has been tackled by many authors: how does a dipole respond to its own electromagnetic field [@Gralla:2009md; @gratus2015self; @ferris2011origin]?
Having definitions which are coordinate free can be very useful. They make it clear which objects are coordinate dependent and which are truly geometric. Ironically, one principal use is to make it easier to derive the correct coordinate transformation. While the Ellis representation of multipoles is easy to define in a coordinate free manner, here we have also derived a coordinate free definition of the Dixon split (\[CoFree\_Dixon\_mono\])-(\[CoFree\_Dixon\_quad\]).
Although spacetime is endowed with both a metric and a connection, there is much research into which objects can be defined without such structures. In some cases this is a philosophical question, asking whether the electromagnetic field is more fundamental than the gravitational field [@hehl2016kottler]. In other cases it is useful for asking how an object depends on a metric or a connection. This is necessary when doing variations with respect to the metric. It is important therefore that a general multipole does not require any additional structure, beyond that defining a general manifold, for its definition. This means that one can define multipoles on other manifolds such as the tangent bundle or jet bundles. Such an approach may also give an insight into prescribing constitutive relations, say for a plasma. Of course a connection is required to demand that the stress-energy distribution is covariantly conserved, but there is no requirement to demand that such a connection is Levi-Civita. The coordinate free presentation in section \[ch\_CoFree\] does not require a metric, so one may choose either a metric compatible or a non metric compatible connection. We have demanded that the connection is torsion free. On the whole this is to simplify the equations, so that we do not have to write down all the torsion components and their derivatives; one can reproduce the results with these extra terms included.
Acknowledgement {#acknowledgement .unnumbered}
===============
JG is grateful for the support provided by STFC (the Cockcroft Institute ST/P002056/1) and EPSRC (the Alpha-X project EP/N028694/1). ST and PP would like to thank the Faculty of Science and Technology, Lancaster University for their support. JG would like to thank the anonymous referee of [@gratus2018correct] for suggesting applying the technique to gravity.
Appendix {#appendix .unnumbered}
========
Details of the proofs
=====================
Proofs from introductory sections
---------------------------------
[Proof of and ]{} \[pf\_Tab\_gamma\_eval\] $$\begin{aligned}
\int_\Mman T^{\iMa\iMb}\,\TtwoTen_{\iMa\iMb}\,d^4 x
&=
\int_\Interval d\sigma \int_{\textup{space}} d^3\Vz
\Big(
\sum_{r=0}^k
{\frac{1}{r!}}
\gamma^{\iMa\iMb \iSa_1 \ldots\iSa_r 0\ldots 0}
\ \partz_{\iSa_1} \cdots \partz_{\iSa_r}\,
\deltaThree(\Vz)
\Big)
\TtwoTen_{\iMa\iMb}
\\&=
\sum_{r=0}^k
{\frac{(-1)^r}{r!}}
\int_\Interval d\sigma \int_{\textup{space}} d^3\Vz \
\gamma^{\iMa\iMb \iSa_1 \ldots\iSa_r 0\ldots 0}
\,
(\partz_{\iSa_1} \cdots \partz_{\iSa_r}\,\TtwoTen_{\iMa\iMb})
\,\deltaThree(\Vz)
\\&=
\sum_{r=0}^k
{\frac{(-1)^r}{r!}}
\int_\Interval d\sigma \,
\gamma^{\iMa\iMb \iSa_1 \ldots\iSa_r 0\ldots 0}
\,
(\partz_{\iSa_1} \cdots \partz_{\iSa_r}\,\TtwoTen_{\iMa\iMb})\end{aligned}$$
[Proof of ]{} \[pf\_zeta\_gamma\] $$\begin{aligned}
\int_\Mman T^{\iMa\iMb} \ \TtwoTen_{\iMa\iMb} \ d^4x
&=
(-1)^{k} \cRed{\frac{1}{k!}}\int_\Interval
\zetaMultiEllis^{\iMa \iMb \iMc_1\ldots\iMc_k}\,
\big(\partx_{\iMc_1} \cdots \partx_{\iMc_k}
\TtwoTen_{\iMa\iMb} \big)
\\&=
\sum_{r=0}^k
(-1)^{k} {\frac{1}{k!}}\ \frac{k!}{r!(k-r)!}\int_\Interval
\zetaMultiEllis^{\iMa \iMb \iSa_1\ldots\iSa_r0\ldots0}\,
\big(\partx_{\iSa_1} \cdots \partx_{\iSa_r} \partial_0^{k-r}
\TtwoTen_{\iMa\iMb} \big)
\\&=
\sum_{r=0}^k
(-1)^{r} \frac{1}{r!(k-r)!}\int_\Interval
(\partial_0^{k-r}\zetaMultiEllis^{\iMa \iMb \iSa_1\ldots\iSa_r,0\ldots0})\,
\big(\partx_{\iSa_1} \cdots \partx_{\iSa_r}
\TtwoTen_{\iMa\iMb} \big)\end{aligned}$$ Hence comparing with gives .
[Proof of and ]{} \[pf\_MomentsOfT\] This follows from setting $w^\iSa=z^\iSa/\varepsilon$ and Taylor expanding about $\varepsilon=0$: $$\begin{aligned}
\int_{\Real^4}\TReg^{\iMa\iMb}_\varepsilon(&\sigma,\Vz) \,
\TtwoTen_{\iMa\iMb}(\sigma,\Vz)\,
d\sigma\,d^3z
\\&=
\int_{\Real} d\sigma \int_{\Real^3} d^3z\
\TReg^{\iMa\iMb}_\varepsilon(\sigma,\Vz) \,\TtwoTen_{\iMa\iMb}(\sigma,\Vz)\,
\\&=
\int_{\Real} d\sigma \int_{\Real^3} d^3z\
\frac{1}{\varepsilon^3}
\TReg^{\iMa\iMb}\Big(\sigma,\frac{\Vz}{\varepsilon}\Big) \,\TtwoTen_{\iMa\iMb}(\sigma,\Vz)
\\&=
\int_{\Real} d\sigma \int_{\Real^3} d^3\Vw\
\TReg^{\iMa\iMb}\big(\sigma,\Vw\big)
\,\TtwoTen_{\iMa\iMb}(\sigma,\varepsilon \Vw)
\\&=
\int_{\Real} d\sigma \int_{\Real^3} d^3\Vw\
\TReg^{\iMa\iMb}\big(\sigma,\Vw\big)
\,\TtwoTen_{\iMa\iMb}(\sigma,\Vzero)
+
\varepsilon\int_{\Real} d\sigma \int_{\Real^3} d^3\Vw\
\TReg^{\iMa\iMb}\big(\sigma,\Vw\big)
\,w^\iSa\,\big(\partz_{\iSa}\TtwoTen_{\iMa\iMb}\big)(\sigma,\Vzero)
\\&\qquad+
\tfrac12\varepsilon^2\int_{\Real} d\sigma \int_{\Real^3} d^3\Vw\
\TReg^{\iMa\iMb}\big(\sigma,\Vw\big)
\,w^\iSa\,w^\iSb\,
\big(\partz_{\iSa}\,\partz_{\iSb}\TtwoTen_{\iMa\iMb}\big)(\sigma,\Vzero)
+\cdots
\\&=
\int_{\Real} d\sigma
\gamma^{\iMa\iMb0\ldots0}
\,\TtwoTen_{\iMa\iMb}|_{C(\sigma)}
-
\varepsilon\int_{\Real}\gamma^{\iMa\iMb\iSa0\ldots0}
d\sigma\ \big(\partz_{\iSa}\TtwoTen_{\iMa\iMb}\big)\big|_{C(\sigma)}
+
\tfrac12\varepsilon^2\int_{\Real}\gamma^{\iMa\iMb\iSa\iSb0\ldots0}
d\sigma\ \big(\partz_{\iSa}\partz_{\iSb}\TtwoTen_{\iMa\iMb}\big)\big|_{C(\sigma)}
+\cdots\end{aligned}$$
Proofs about the dipole
-----------------------
[Proof of ]{} \[pf\_Dipole\_zeta\_freedom\] Substituting into we have $$\begin{aligned}
T^{\iMa\iMb}
\to
T^{\iMa\iMb}
+
\int_\Interval M^{\iMa\iMb}\Cdot^\iMc \partx_\iMc \delta(x-C(\tau))\,d\sigma
&=
T^{\iMa\iMb}
+
\int_\Interval M^{\iMa\iMb} \JGchange{\tfrac{d}{d\sigma}} \delta(x-C(\tau))\,d\sigma
\\&=
T^{\iMa\iMb}
+
\int_\Interval \JGchange{\tfrac{d}{d\sigma}} \big(M^{\iMa\iMb} \delta(x-C(\tau))\big)\,d\sigma
=
T^{\iMa\iMb}\end{aligned}$$ Thus is a gauge freedom. To show it is the maximum freedom consider working in adapted coordinates. It is clear that the freedom is precisely equivalent to the freedom to choose $\zetaDP^{\iMa\iMb0}$ given $\gamma^{\iMa\iMb0}$. For details of why this is the maximum gauge freedom see proofs \[pf\_ChangCoordZeta\] and \[pf\_ChangCoordZeta\_Gauge\].
[Relationship between and ]{} \[pf\_MattPap\] In this proof we refer to the two equations in as (\[MD\_diploe\_DVa\_NoGeo\].1) and (\[MD\_diploe\_DVa\_NoGeo\].2) and likewise for (\[MD\_diploe\_DSab\].1) to (\[MD\_diploe\_DSab\].4). From and (\[MD\_geodesic\]) we have $$\begin{aligned}
\Dfrac{\hat{S}^{\iMa\iMb}}{\sigma}
&-
\hat{P}^\iMb\Cdot^{\iMa}
+
\hat{P}^{\iMa}\Cdot^\iMb
\\&=
\Dfrac{S^{\iMa\iMb}}{\sigma}
- \Dfrac{X^{\iMa}}{\sigma}\Cdot^\iMb
- X^{\iMa}\Dfrac{\Cdot^\iMb}{\sigma}
+ \Dfrac{X^\iMb}{\sigma}\Cdot^{\iMa}
+ X^\iMb\Dfrac{\Cdot^{\iMa}}{\sigma}
-
({P}^\iMb+m\Cdot^\iMb)\Cdot^{\iMa}
+
({P}^{\iMa}+m\Cdot^{\iMa})\Cdot^\iMb
\\&=
\Dfrac{S^{\iMa\iMb}}{\sigma}
- \Big(\Dfrac{X^{\iMa}}{\sigma} - P^{\iMa}\Big)\Cdot^\iMb
+ \Big(\Dfrac{X^\iMb}{\sigma} - P^{\iMb}\Big)\Cdot^{\iMa}\end{aligned}$$ Hence (\[MD\_diploe\_DSab\].2) and (\[MD\_diploe\_DSab\].4) imply (\[MD\_diploe\_DVa\_NoGeo\].1). By contrast from (\[MD\_diploe\_DVa\_NoGeo\].1) we can project out (\[MD\_diploe\_DSab\].2) and (\[MD\_diploe\_DSab\].4) using $\Cdot_\iMa$.
Likewise from we have $$\begin{aligned}
\Dfrac{\hat{P}^{\iMa}}{\sigma}
-
\tfrac12 R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\, \hat{S}^{\iMd\iMc}
&=
\Dfrac{{P}^{\iMa}}{\sigma} + \Dfrac{m}{\sigma}\Cdot^\iMa +
m\Dfrac{\Cdot^\iMa}{\sigma}
-
\tfrac12 R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\,
\big({S}^{\iMd\iMc} - X^{\iMd}\Cdot^\iMc + X^\iMc\Cdot^{\iMd}\big)
\\&=
\Dfrac{{P}^{\iMa}}{\sigma} + \dot{m}\Cdot^\iMa
-
\tfrac12 R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\,{S}^{\iMd\iMc}
-
R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\,
X^{\iMd}\Cdot^\iMc\end{aligned}$$ Thus (\[MD\_diploe\_DSab\].1) and (\[MD\_diploe\_DSab\].3) imply (\[MD\_diploe\_DVa\_NoGeo\].2). By contrast from (\[MD\_diploe\_DVa\_NoGeo\].2) we can project out (\[MD\_diploe\_DSab\].1) and (\[MD\_diploe\_DSab\].3) using $\Cdot_\iMa$.
[Proof of ]{} \[pf\_QK\_def\] From we have $$\begin{aligned}
\partial_0 \hat{S}^{\iMa\iMb}
&=
\Dfrac{\hat{S}^{\iMa\iMb}}{\sigma}
-
\Gamma^\iMa_{0\iMc}\, \hat{S}^{\iMc\iMb}
-
\Gamma^\iMb_{0\iMc}\, \hat{S}^{\iMa\iMc}
=
\hat{P}^\iMb\Cdot^{\iMa} - \hat{P}^{\iMa}\Cdot^\iMb
-
\Gamma^\iMa_{0\iMc}\, \hat{S}^{\iMc\iMb}
-
\Gamma^\iMb_{0\iMc}\, \hat{S}^{\iMa\iMc}
\\&=
\hat{P}^\iMb\delta^{\iMa}_0
-
\hat{P}^{\iMa}\delta^\iMb_0
-
\Gamma^\iMa_{0\iMc}\, \hat{S}^{\iMc\iMb}
-
\Gamma^\iMb_{0\iMc}\, \hat{S}^{\iMa\iMc}\end{aligned}$$ so $$\begin{aligned}
\partial_0 \hat{S}^{0\iMa}
&=
\hat{P}^\iMa
-
\hat{P}^{0}\delta^\iMa_0
-
\Gamma^0_{0\iMc}\, \hat{S}^{\iMc\iMa}
-
\Gamma^\iMa_{0\iMc}\, \hat{S}^{0\iMc}\end{aligned}$$ From we have $$\begin{aligned}
\gamma^{\iMa00}
&=
\hat{P}^{\lround\iMa}\,\Cdot^{0\rround}
+
\hat{S}^{\iMc\lround 0}\,\Gamma^{\iMa\rround}{}_{\iMc\iMd} \,\Cdot^\iMd
+
\partial_0(\hat{S}^{0\lround\iMa}\,\Cdot^{0\rround})
\\&=
\tfrac12
\big(
\hat{P}^{\iMa}
+
\hat{P}^0 \delta^{\iMa}_0
+
\hat{S}^{\iMc 0}\,\Gamma^{\iMa}{}_{\iMc0}
+
\hat{S}^{\iMc\iMa}\,\Gamma^{0}{}_{\iMc0}
+
\partial_0(\hat{S}^{0\iMa})
\big)
\\&=
\tfrac12
\big(
\hat{P}^{\iMa}
+
\hat{P}^0 \delta^{\iMa}_0
+
\hat{S}^{\iMc 0}\,\Gamma^{\iMa}{}_{\iMc0}
+
\hat{S}^{\iMc\iMa}\,\Gamma^{0}{}_{\iMc0}
+
\hat{P}^\iMa
-
\hat{P}^{0}\delta^\iMa_0
-
\Gamma^0_{0\iMc}\, \hat{S}^{\iMc\iMa}
-
\Gamma^\iMa_{0\iMc}\, \hat{S}^{0\iMc}
\big)
\\&=
\hat{P}^{\iMa}
+
\hat{S}^{\iMc 0}\,\Gamma^{\iMa}{}_{\iMc0} \end{aligned}$$ and $$\begin{aligned}
\gamma^{\iMa0 \iSa}
=
\tfrac12
\hat{S}^{\iSa\iMa}
+
\tfrac12
\hat{S}^{\iSa0}\delta^\iMa_0\end{aligned}$$ From we have $$\begin{aligned}
0
&=
\nabla_{\iSa}K_0 + \nabla_{0}K_\iSa
=
\partial_{\iSa}K_0 + \partial_{0}K_\iSa - 2\Gamma^\iMa_{\iSa 0} K_\iMa\end{aligned}$$ Hence from we have $$\begin{aligned}
\Conserv_\Kill
&=
\gamma^{\iMa00} \,\Kill_{\iMa} -
\gamma^{\iMa0\iSa} \, \partz_\iSa \,\Kill_{\iMa}
=
\big(\hat{P}^{\iMa}
+
\hat{S}^{\iMc 0}\,\Gamma^{\iMa}{}_{\iMc0}
\big)K_\iMa
-
\tfrac12
\big(\hat{S}^{\iSa\iMa}
+
\hat{S}^{\iSa0}\delta^\iMa_0
\big)\, \partz_\iSa \,\Kill_{\iMa}
\\&=
\hat{P}^{\iMa}K_\iMa
+
\hat{S}^{\iMc 0}\,\Gamma^{\iMa}{}_{\iMc0}
K_\iMa
-
\tfrac12
\hat{S}^{\iSa\iMa}
\,\partz_\iSa\Kill_{\iMa}
-
\tfrac12
\hat{S}^{\iSa0}
\,\partz_\iSa\Kill_{0}
\\&=
\hat{P}^{\iMa}K_\iMa
+
\hat{S}^{\iMc 0}\,\Gamma^{\iMa}{}_{\iMc0}
K_\iMa
-
\tfrac12
\hat{S}^{\iSa\iMa}
\,\partz_\iSa\Kill_{\iMa}
+
\tfrac12
\hat{S}^{\iSa0}
\,\partz_0\Kill_{\iSa}
-
\hat{S}^{\iSa0}
\Gamma^\iMa_{\iSa 0} K_\iMa
\\&=
\hat{P}^{\iMa}K_\iMa
+
\tfrac12
\hat{S}^{\iMa\iSa}
\,\partz_\iSa\Kill_{\iMa}
+
\tfrac12
\hat{S}^{\iMa0}
\,\partz_0\Kill_{\iMa}
=
\hat{P}^{\iMa}K_\iMa
+
\tfrac12
\hat{S}^{\iMa\iMb}
\,\partz_\iMb\Kill_{\iMa}
=
\hat{P}^{\iMa}K_\iMa
+
\tfrac12
\hat{S}^{\iMa\iMb}
\,\nabla_\iMb\Kill_{\iMa}\end{aligned}$$
[Proof that $\Conserv_\Kill$ in is conserved.]{} \[pf\_QK\_consved\] Since $\Kill_{\iMa}$ is Killing we have $$\begin{aligned}
\nabla_\iMa\nabla_\iMb\Kill_\iMc
=
R^\iMd{}_{\iMa\iMb\iMc} \Kill_\iMd\end{aligned}$$ From and (\[MD\_diploe\_DVa\_NoGeo\]) we have $$\begin{aligned}
\dot\Conserv_\Kill
&=
\Dfrac{\Conserv_\Kill}{\sigma}
=
\Dfrac{\hat{P}^\iMa}{\sigma}\,K_\iMa
+
\hat{P}^\iMa\,\Cdot^\iMb\nabla_\iMb K_\iMa
+
\tfrac12\Dfrac{\hat{S}^{\iMa\iMb}}{\sigma}\,\nabla_{\iMb}\ K_\iMa
+
\tfrac12\hat{S}^{\iMa\iMb}\,\Cdot^\iMc\nabla_{\iMc}\nabla_{\iMb}\ K_\iMa
\\&=
\tfrac12 R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\, \hat{S}^{\iMd\iMc}
\,K_\iMa
+
\hat{P}^\iMa\,\Cdot^\iMb\nabla_\iMb K_\iMa
+
\tfrac12
\Big(\hat{P}^\iMb\Cdot^{\iMa}
-
\hat{P}^{\iMa}\Cdot^\iMb\Big)
\,\nabla_{\iMb}K_\iMa
+
\tfrac12\hat{S}^{\iMa\iMb}\,\Cdot^\iMc\nabla_{\iMc}\nabla_{\iMb}K_\iMa
\\&=
\tfrac12 R^{\iMa}{}_{\iMb\iMc\iMd}\,\Cdot^\iMb\, \hat{S}^{\iMd\iMc}
\,K_\iMa
+
\tfrac12\hat{S}^{\iMa\iMb}\,\Cdot^\iMc\,R^\iMd{}_{\iMc\iMb\iMa} \Kill_\iMd
=0\end{aligned}$$
Proofs about the quadrupole
---------------------------
[Proof of ]{} \[pf\_Quad\_zeta\_freedom\] Similarly to the proof of , we have $$\begin{aligned}
\int_\Interval M^{\iMa\iMb}\,\Cdot^{\lround\iMc}\, C^{\iMd\rround}
&\partial_\iMc\partial_\iMd\delta\big(x-C(\sigma)\big)\,d\sigma
=
\int_\Interval M^{\iMa\iMb}\,C^{\iMd}\Cdot^{\iMc}\,
\partial_\iMc\partial_\iMd\delta\big(x-C(\sigma)\big)\,d\sigma
\\&=
M^{\iMa\iMb} \int_\Interval C^{\iMd} \dfrac{}{\sigma}
\Big(\partial_\iMd\delta\big(x-C(\sigma)\big)\Big)\,d\sigma
\\&=
M^{\iMa\iMb} \int_\Interval \dfrac{}{\sigma} \Big( C^{\iMd}
\partial_\iMd\delta\big(x-C(\sigma)\big)\Big)\,d\sigma
-
M^{\iMa\iMb}\int_\Interval \Cdot^{\iMd}
\partial_\iMd\delta\big(x-C(\sigma)\big)\,d\sigma
\\&=
-
M^{\iMa\iMb}\int_\Interval \dfrac{}{\sigma}\delta\big(x-C(\sigma)\big)\,d\sigma
=
0\end{aligned}$$ and $$\begin{aligned}
\int_\Interval \hat{M}^{\iMa\iMb\lround\iMd}
\Cdot^{\iMc\rround}\,
\partial_\iMc\partial_\iMd\delta\big(x-C(\sigma)\big)\,d\sigma
&=
\int_\Interval \hat{M}^{\iMa\iMb\iMd}
\Cdot^{\iMc}\,
\partial_\iMc\partial_\iMd\delta\big(x-C(\sigma)\big)\,d\sigma
\\&=
\hat{M}^{\iMa\iMb\iMd}
\int_\Interval
\dfrac{}{\sigma}\Big(\partial_\iMd\delta\big(x-C(\sigma)\big)\Big)
\,d\sigma
=
0\end{aligned}$$
To see why this incorporates all the gauge freedom we use the adapted coordinate system. Assume $T^{\iMa\iMb}$ is given. From (\[CoFree\_extract\_coords\]) we know that the components $\gamma^{\iMa\iMb\iMc\iMd}$ are unique, i.e. have no gauge freedom. Integrating (\[QP\_gamma\_zeta\]) we have $$\begin{aligned}
\zeta^{\iMa\iMb\iMc\iMd} \to
\zeta^{\iMa\iMb\iMc\iMd} +
\sigma M^{\iMb\iMa}\, \delta^\iMc_0\,\delta^\iMd_0
+
\hat{M}^{\iMa\iMb\lround\iMd} \delta^{\iMc\rround}_0\end{aligned}$$ which is (\[QP\_Zeta\_Freedom\]) in adapted coordinates. Hence (\[QP\_Zeta\_Freedom\]) incorporates all the gauge freedom in adapted coordinates. Now for a general coordinate system we use (\[QP\_Zeta\_Change\_Coords\]). We see in proof \[pf\_ChangCoordZeta\_Gauge\] below that (\[QP\_Zeta\_Change\_Coords\]) is consistent with the gauge freedom. Thus there is no additional gauge freedom in a general coordinate system.
[Proof of ]{} \[pf\_ChangCoordZeta\] Using we have $$\begin{aligned}
\int_\Interval \zetahat^{\iMahat\iMbhat\iMchat\iMdhat} \,
&\big(\partx_\iMchat\,\partx_\iMdhat \,\TtwoTenhat_{\iMahat\iMbhat}\big)
\big|_{C(\sigma)}\ d\sigma
=
\int_{\Real^4} \THat^{\iMahat\iMbhat}\,\TtwoTenhat_{\iMahat\iMbhat}\,d^4\xhat
=
\int_{\Real^4} T^{\iMa\iMb}\,\TtwoTen_{\iMa\iMb}\,d^4x
=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd} \,
\big(\partx_\iMc\,\partx_\iMd \,\TtwoTen_{\iMa\iMb}\big) d\sigma
\\&=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,
\big({\Jaabb}\, \TtwoTenhat_{\iMahat\iMbhat}\big)
\ d\sigma
\\&=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
\Big(
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\, \TtwoTenhat_{\iMahat\iMbhat}
+
2\, \partx_\iMc \big({\Jaabb}\big)\, \partx_\iMd\,\TtwoTenhat_{\iMahat\iMbhat}
+
{\Jaabb}\ \partx_\iMc\,\partx_\iMd\, \TtwoTenhat_{\iMahat\iMbhat}
\Big)
\ d\sigma\end{aligned}$$ Take each of the terms in turn. For the third term we have $$\begin{aligned}
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
&
{\Jaabb}\ \partx_\iMc\,\partx_\iMd\, \TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\ \partx_\iMc\,(J^\iMdhat_\iMd\,\partx_\iMdhat\, \TtwoTenhat_{\iMahat\iMbhat})
\ d\sigma
\\&=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\
\Big((\partx_\iMc\,J^\iMdhat_\iMd)\,\partx_\iMdhat\, \TtwoTenhat_{\iMahat\iMbhat} +
J^\iMdhat_\iMd\,\partx_\iMc\,\partx_\iMdhat\, \TtwoTenhat_{\iMahat\iMbhat} \Big)
\ d\sigma
\\&=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\
(\partx_\iMc\,J^\iMdhat_\iMd)\,\partx_\iMdhat\, \TtwoTenhat_{\iMahat\iMbhat} \,d\sigma
+
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\
J^\iMchat_\iMc\, J^\iMdhat_\iMd\,\partx_\iMchat\,\partx_\iMdhat\,
\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
\\&=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd} \,
{\Jaabb} (\partx_\iMc\,J^\iMdhat_\iMd) \,
\bigg(\int^\sigma \Cdothat^\iMchat\,\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}\, d\sigma'\bigg)
\ d\sigma
+
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\
J^\iMchat_\iMc\, J^\iMdhat_\iMd\,\partx_\iMchat\,\partx_\iMdhat\,
\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
\\&=
-\int_\Interval
\bigg(\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb} (\partx_\iMc\,J^\iMdhat_\iMd) \,d\sigma'\bigg)
\Cdothat^\iMchat\,\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}\,
\ d\sigma
+
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\
J^\iMchat_\iMc\, J^\iMdhat_\iMd\,\partx_\iMchat\,\partx_\iMdhat\,
\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma\end{aligned}$$ For the second term we have $$\begin{aligned}
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc \big({\Jaabb}\big)\, \partx_\iMd\,\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
&=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc \big({\Jaabb}\big)\, J^\iMdhat_\iMd \,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
\\&=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc \big({\Jaabb}\big)\, J^\iMdhat_\iMd \,
\bigg(\int^\sigma \Cdothat^\iMchat\,\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}\, d\sigma'\bigg)
\ d\sigma
\\&=
-\int_\Interval
\bigg(\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
\partx_\iMc \big({\Jaabb}\big)\, J^\iMdhat_\iMd \,d\sigma'\bigg)
\Cdothat^\iMchat\,\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}\,
\ d\sigma\end{aligned}$$ For the first term we have $$\begin{aligned}
\int_\Interval
\zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,
\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
&=
\int_\Interval
\zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,
\bigg(\int^\sigma \Cdothat^\iMchat\,\partx_\iMchat\,\TtwoTenhat_{\iMahat\iMbhat}\, d\sigma'\bigg)
\ d\sigma
\\&=
-\int_\Interval
\bigg(\int^\sigma \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma'\bigg)\,
\Cdothat^\iMchat\,\partx_\iMchat\,\TtwoTenhat_{\iMahat\iMbhat}\,
\ d\sigma
\\&=
-\int_\Interval
\bigg(\int^\sigma \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\bigg)\,
\Cdothat^\iMchat\,
\Big(\int^{\sigma} \partx_\iMchat\,\Cdothat^\iMdhat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}\,d\sigma'\Big)
\ d\sigma
\\&=
\int_\Interval
\bigg(\int^{\sigma} \Big(\int^{\sigma'} \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\Big)\,
\Cdothat^\iMchat\,d\sigma'\bigg)
\Cdothat^\iMdhat\,\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
\\&=
\int_\Interval
\bigg(\Cdothat^\iMdhat\,\int^{\sigma}
\Big(\Cdothat^\iMchat\,\int^{\sigma'} \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\Big)\,
d\sigma'\bigg)
\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma\end{aligned}$$ Thus adding these terms together we have $$\begin{aligned}
\int_\Interval \zetahat^{\iMahat\iMbhat\iMchat\iMdhat} \,
&
\big(\partx_\iMchat\,\partx_\iMdhat \,\TtwoTenhat_{\iMahat\iMbhat}\big)
\ d\sigma
=
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
\Big(
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\, \TtwoTenhat_{\iMahat\iMbhat}
+
2\, \partx_\iMc \big({\Jaabb}\big)\, \partx_\iMd\,\TtwoTenhat_{\iMahat\iMbhat}
+
{\Jaabb}\ \partx_\iMc\,\partx_\iMd\, \TtwoTenhat_{\iMahat\iMbhat}
\Big)
\ d\sigma
\\&=
-\int_\Interval
\bigg(\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb} (\partx_\iMc\,J^\iMdhat_\iMd) \,d\sigma'\bigg)
\Cdothat^\iMchat\,\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}\,
\ d\sigma
+
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\
J^\iMchat_\iMc\, J^\iMdhat_\iMd\,\partx_\iMchat\,\partx_\iMdhat\,
\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
\\&\quad
-
2\int_\Interval
\bigg(\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
\partx_\iMc \big({\Jaabb}\big)\, J^\iMdhat_\iMd \,d\sigma'\bigg)
\Cdothat^\iMchat\,\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}\,
\ d\sigma
\\&\quad
+
\int_\Interval
\bigg(\Cdothat^\iMdhat\,\int^{\sigma}
\Big(\Cdothat^\iMchat\,\int^{\sigma'} \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\Big)\,
d\sigma'\bigg)
\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
\\&=
\int_\Interval \bigg(
\zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\
J^\iMchat_\iMc\, J^\iMdhat_\iMd
-
\Cdothat^\iMchat
\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb} (\partx_\iMc\,J^\iMdhat_\iMd) \,d\sigma'
-
2\Cdothat^\iMchat\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
\partx_\iMc \big({\Jaabb}\big)\, J^\iMdhat_\iMd \,d\sigma'
\\&\qquad\qquad
+
\Cdothat^\iMdhat\,\int^{\sigma}
\Big(\Cdothat^\iMchat\,\int^{\sigma'} \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\Big)\,
d\sigma'\bigg)
\partx_\iMchat\,\partx_\iMdhat\,\TtwoTenhat_{\iMahat\iMbhat}
\ d\sigma
$$ Hence follows by symmetrising in $\iMchat$ and $\iMdhat$.
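Each sign change in the manipulations above comes from the same elementary step: integration by parts of an iterated integral, with the boundary term vanishing because the test tensors have compact support. Schematically, if $F$ is the derivative of a function vanishing at both endpoints, then $\int_\Interval F\,\big(\int^\sigma G\,d\sigma'\big)\,d\sigma = -\int_\Interval \big(\int^\sigma F\,d\sigma'\big)\,G\,d\sigma$. A minimal SymPy check of this identity on $[0,1]$ (the concrete functions below are illustrative choices only, not taken from the text):

```python
import sympy as sp

s, t = sp.symbols('sigma t')

# Primitive F of the outer factor f; F vanishes at sigma = 0 and 1,
# so the boundary term in the integration by parts drops, mirroring
# the compactly supported test tensors used in the proofs above.
F = s**2 * (1 - s)**2
f = sp.diff(F, s)
g = s + 1                                        # arbitrary smooth factor

G = sp.integrate(g.subs(s, t), (t, 0, s))        # G(sigma) = int_0^sigma g
F_cum = sp.integrate(f.subs(s, t), (t, 0, s))    # int_0^sigma f (equals F)

lhs = sp.integrate(f * G, (s, 0, 1))
rhs = -sp.integrate(F_cum * g, (s, 0, 1))
assert sp.simplify(lhs - rhs) == 0
```

The same cancellation of boundary terms justifies each occurrence of the $\int^\sigma$ sign flip in the derivations above.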
[Proof that the change of coordinates is consistent with the gauge freedom ]{} \[pf\_ChangCoordZeta\_Gauge\]
First observe that the lower limits in (\[QP\_Zeta\_Change\_Coords\]) correspond to the gauge freedom (\[QP\_Zeta\_Freedom\]) for $\zetahat^{\iMahat\iMbhat\iMchat\iMdhat}$.
It is necessary to establish that the gauge freedom (\[QP\_Zeta\_Freedom\]) for $\zeta^{\iMa\iMb\iMc\iMd}$, when substituted into (\[QP\_Zeta\_Change\_Coords\]), does not affect the value of $\zetahat^{\iMahat\iMbhat\iMchat\iMdhat}$. This is achieved by setting $\zeta^{\iMa\iMb\iMc\iMd}=M^{\iMb\iMa}\, \Cdot^{\lround\iMc}\,
C^{\iMd\rround} + \hat{M}^{\iMa\iMb\lround\iMc}\,\Cdot^{\iMd\rround}$, i.e. $\zeta^{\iMa\iMb\iMc\iMd}$ is equivalent to zero, and checking that $\zetahat^{\iMahat\iMbhat\iMchat\iMdhat}=0$. As they are independent, we can consider the two terms $M^{\iMb\iMa}\,
\Cdot^{\lround\iMc}\, C^{\iMd\rround}$ and $\hat{M}^{\iMa\iMb\lround\iMc}\,\Cdot^{\iMd\rround}$ separately.
For the case $\zeta^{\iMa\iMb\iMc\iMd}=M^{\iMb\iMa}\,
\Cdot^{\lround\iMc}\,C^{\iMd\rround}$ we have for the fifth term on the right hand side of (\[QP\_Zeta\_Change\_Coords\]) $$\begin{aligned}
\int^{\sigma}
&
\Cdothat^{\iMdhat}\int^{\sigma'} \Cdot^{\lround\iMc} C^{\iMd\rround} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
=
\int^{\sigma}
\Cdothat^{\iMdhat}\int^{\sigma'} \Cdot^{\iMc} C^\iMd \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
\\&=
\int^{\sigma}
\Cdothat^{\iMdhat}\int^{\sigma'} C^\iMd \,
\dfrac{d}{d\sigma''} \Big(\partx_\iMd
\,{\Jaabb}\Big)
\,d\sigma''\,
d\sigma'
\\&=
\int^{\sigma}
\Cdothat^{\iMdhat}\int^{\sigma'}
\dfrac{d}{d\sigma''} \Big(
C^\iMd
\partx_\iMd
\,{\Jaabb}\Big)
\,d\sigma''\,
d\sigma'
-
\int^{\sigma}
\Cdothat^{\iMdhat}\int^{\sigma'}
\Cdot^\iMd
\partx_\iMd
\,{\Jaabb}
\,d\sigma''\,
d\sigma'
\\&=
\int^{\sigma}
\Cdothat^{\iMdhat}
C^\iMd
\partx_\iMd
\,{\Jaabb}\
d\sigma'
-
\int^{\sigma}
\Cdothat^{\iMdhat}\int^{\sigma'}
\dfrac{d}{d\sigma''}
\,{\Jaabb}
\,d\sigma''\,
d\sigma'
\\&=
\int^{\sigma}
\Cdothat^{\iMdhat}
C^\iMd
\partx_\iMd
\,{\Jaabb}\
d\sigma'
-
\int^{\sigma}
\Cdothat^{\iMdhat}
\,{\Jaabb}
\,d\sigma'
\\&=
\int^{\sigma}
\Cdothat^{\iMdhat}
C^\iMd
\partx_\iMd
\,{\Jaabb}\
d\sigma'
-
\int^{\sigma}
\Cdot^\iMd J^{\iMdhat}_\iMd
\,{\Jaabb}
\,d\sigma'
\\&=
\int^{\sigma}
\Cdot^\iMd J^{\iMdhat}_\iMd
C^\iMc
\partx_\iMc
\,{\Jaabb}\
d\sigma'
-
C^\iMd J^{\iMdhat}_\iMd
\,{\Jaabb}
+
\int^{\sigma}
C^\iMd \dfrac{d}{d\sigma'}
(J^{\iMdhat}_\iMd
\,{\Jaabb})
\,d\sigma'\end{aligned}$$ Since $$\begin{aligned}
\int^\sigma \Cdot^{\iMd} C^\iMc \,
{\Jaabb} \partx_\iMc\,J^{\iMdhat}_\iMd\,d\sigma'
&=
\int^\sigma \Cdot^{\iMd} C^\iMc \,
{\Jaabb} \partx_\iMd\,J^{\iMdhat}_\iMc\,d\sigma'
=
\int^\sigma \Cdot^{\iMc} C^\iMd \,
{\Jaabb} \partx_\iMc\,J^{\iMdhat}_\iMd\,d\sigma'
$$ we have for the second term on the right hand side of (\[QP\_Zeta\_Change\_Coords\]) $$\begin{aligned}
&\int^\sigma \Cdot^{\lround\iMc} C^{\iMd\rround} \,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
\\&=
\tfrac12\int^\sigma \Cdot^{\iMc} C^{\iMd} \,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
+
\tfrac12\int^\sigma \Cdot^{\iMd} C^{\iMc} \,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
\\&=
\tfrac12\int^\sigma \Cdot^{\iMc} C^{\iMd} \,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
+
\tfrac12\int^\sigma \Cdot^{\iMd} C^{\iMc} \,
{\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)\,d\sigma'
\\&\qquad\qquad +
\int^\sigma \Cdot^{\iMd} C^{\iMc} \,
\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd \,d\sigma'
\\&=
\int^\sigma \Cdot^{\iMc} C^{\iMd} \,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
+
\int^\sigma \Cdot^{\iMd} C^{\iMc} \,
\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd \,d\sigma'
\\&=
\int^\sigma C^{\iMd} \Cdot^{\iMc} \,
\partx_\iMc\,\big({\Jaabb} J^{\iMdhat}_\iMd\big)\,d\sigma'
+
\int^\sigma \Cdot^{\iMd} C^{\iMc} \,
\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd \,d\sigma'
\\&=
\int^\sigma C^{\iMd}
\dfrac{d}{d\sigma'}\big({\Jaabb} J^{\iMdhat}_\iMd\big)\,d\sigma'
+
\int^\sigma \Cdot^{\iMd} C^{\iMc} \,
\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd \,d\sigma'
$$ Thus taking the difference between these two terms gives $$\begin{aligned}
\int^{\sigma}
&
\Cdothat^{\iMdhat}\int^{\sigma'} \Cdot^{\lround\iMc} C^{\iMd\rround} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
-
\int^\sigma \Cdot^{\lround\iMc} C^{\iMd\rround} \,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
\\&=
\bigg(
\int^{\sigma}
\Cdot^\iMd J^{\iMdhat}_\iMd
C^\iMc
\partx_\iMc
\,{\Jaabb}\
d\sigma'
-
C^\iMd J^{\iMdhat}_\iMd
\,{\Jaabb}
+
\int^{\sigma}
C^\iMd \dfrac{d}{d\sigma'}
(J^{\iMdhat}_\iMd
\,{\Jaabb})
\,d\sigma'
\bigg)
\\&\qquad\qquad
-
\bigg(
\int^\sigma C^{\iMd}
\dfrac{d}{d\sigma'}\big({\Jaabb} J^{\iMdhat}_\iMd\big)\,d\sigma'
+
\int^\sigma \Cdot^{\iMd} C^{\iMc} \,
\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd \,d\sigma'
\bigg)
=
-
C^\iMd J^{\iMdhat}_\iMd
\,{\Jaabb}\end{aligned}$$ Hence, summing half the first term with the second and fifth terms of (\[QP\_Zeta\_Change\_Coords\]), we have $$\begin{aligned}
\tfrac12
\big(M^{\iMb\iMa}\, \Cdot^{\lround\iMc}\, C^{\iMd\rround}\big)
{\Jaabb}\,
J^{\iMchat}_\iMc\, J^{\iMdhat}_\iMd
&+
\Cdothat^{\iMdhat}\int^{\sigma'}
\big(M^{\iMb\iMa}\, \Cdot^{\lround\iMc}\, C^{\iMd\rround}\big)\,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
\\&
-
\int^\sigma \big(M^{\iMb\iMa}\, \Cdot^{\lround\iMc}\, C^{\iMd\rround}\big)\,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
=0\end{aligned}$$ Likewise for the sum of half the first term with the third and fourth terms of (\[QP\_Zeta\_Change\_Coords\]). Hence setting $\zeta^{\iMa\iMb\iMc\iMd}=M^{\iMb\iMa}\Cdot^{\lround\iMc}\,
C^{\iMd\rround}$ we have $\zetahat^{\iMahat\iMbhat\iMchat\iMdhat}=0$.
Repeating for $\zeta^{\iMa\iMb\iMc\iMd}=\hat{M}^{\iMa\iMb\lround\iMc}\,\Cdot^{\iMd\rround}$, we have for the fifth term in (\[QP\_Zeta\_Change\_Coords\]) $$\begin{aligned}
\int^{\sigma}
\Cdothat^{\iMdhat}\int^{\sigma'}
\big(\hat{M}^{\iMa\iMb\lround\iMc}\,\Cdot^{\iMd\rround}\big) \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
&=
\hat{M}^{\iMa\iMb\iMc}\,\int^{\sigma}
\Cdothat^{\iMdhat}\int^{\sigma'}
\dfrac{d}{d\sigma''}\big(\partx_\iMd {\Jaabb}\big)\,d\sigma''\,
d\sigma'
\\&=
\hat{M}^{\iMa\iMb\iMc}\,\int^{\sigma}
\Cdothat^{\iMdhat}\big(\partx_\iMd {\Jaabb}\big)\,
d\sigma'\end{aligned}$$ while for the second term in (\[QP\_Zeta\_Change\_Coords\]) $$\begin{aligned}
\int^\sigma & \big(\hat{M}^{\iMa\iMb\lround\iMc}\,\Cdot^{\iMd\rround}\big)\,
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
\\&=
\tfrac12
\hat{M}^{\iMa\iMb\iMc}\,\int^\sigma
\Cdot^{\iMd}
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
+
\tfrac12
\hat{M}^{\iMa\iMb\iMd}\,\int^\sigma\Cdot^{\iMc}
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
\\&=
\hat{M}^{\iMa\iMb\iMc}\,\int^\sigma
\Cdot^{\iMd}\,J^{\iMdhat}_\iMd\,(\partx_\iMc\,{\Jaabb}) \,d\sigma'
+
\hat{M}^{\iMa\iMb\iMd}\,\int^\sigma\Cdot^{\iMc}
\Big({\Jaabb} (\partx_\iMc\,J^{\iMdhat}_\iMd)+
\partx_\iMc\,({\Jaabb})\,J^{\iMdhat}_\iMd\Big) \,d\sigma'
\\&=
\hat{M}^{\iMa\iMb\iMc}\,\int^\sigma
\Cdothat^{\iMdhat}(\partx_\iMc\,{\Jaabb}) \,d\sigma'
+
\hat{M}^{\iMa\iMb\iMd}\,\int^\sigma
\dfrac{d}{d\sigma'}\Big({\Jaabb} \,J^{\iMdhat}_\iMd\Big)\,d\sigma'
\\&=
\hat{M}^{\iMa\iMb\iMc}\,\int^\sigma
\Cdothat^{\iMdhat}(\partx_\iMc\,{\Jaabb}) \,d\sigma'
+
\hat{M}^{\iMa\iMb\iMd}\,{\Jaabb} \,J^{\iMdhat}_\iMd\end{aligned}$$ Hence when $\zeta^{\iMa\iMb\iMc\iMd}=\hat{M}^{\iMa\iMb\lround\iMc}\,\Cdot^{\iMd\rround}$ then $\zetahat^{\iMahat\iMbhat\iMchat\iMdhat}=0$.
[Proof of -]{} \[pf\_Quad\_dynm\_eqn\] From we have for any test vector $\ToneTen_{\iMb}$ $$\begin{aligned}
0
&=
\int_\Mman (\nabla_\iMa T^{\iMa\iMb} )\, \ToneTen_{\iMb}\,d^4 x
=
\int_\Mman
\big(\partial_\iMa T^{\iMa\iMb} +
\Gamma^\iMb_{\iMa\iMc} T^{\iMa\iMc} \big)\,
\ToneTen_{\iMb}\,d^4 x
=
\int_\Mman
T^{\iMa\iMb} \big(\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}-\partial_\iMa \ToneTen_{\iMb} \big)\,
d^4 x
\\&=
\int_\Mman
\Big(\gamma^{\iMa\iMb00} \,\deltaThree(\Vz)
+
\gamma^{\iMa\iMb0\iSa}\, \partz_\iSa \deltaThree(\Vz)
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}\,
\partz_\iSa\partz_\iSb
\deltaThree(\Vz)
\Big)
\big(\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}-\partial_\iMa \ToneTen_{\iMb} \big)\,
d^4 x
\\&=
\int_\Interval d\sigma\Big(
\gamma^{\iMa\iMb00}\,
\big(\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}-\partial_\iMa \ToneTen_{\iMb} \big)
-
\gamma^{\iMa\iMb0\iSa}\,\partz_\iSa
\big(\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}-\partial_\iMa \ToneTen_{\iMb} \big)
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
\partz_\iSa \partz_\iSb
\big(\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}-\partial_\iMa \ToneTen_{\iMb} \big)
\Big)
\\&=
\int_\Interval d\sigma\Big(
\gamma^{\iMa\iMb00}\,\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}
-
\gamma^{\iSa\iMb00}\,\partial_\iSa \ToneTen_{\iMb}
+
\dot\gamma^{0\iMb00}\,\ToneTen_{\iMb}
\\&\qquad\qquad
-
\gamma^{\iMa\iMb0\iSa}\,\partz_\iSa
\big(\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}\big)
+
\gamma^{\iSb\iMb0\iSa}\,\partz_\iSa
\partial_\iSb \ToneTen_{\iMb}
-
\dot\gamma^{0\iMb0\iSa}\,\partz_\iSa
\ToneTen_{\iMb}
\\&\qquad\qquad
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
\partz_\iSa \partz_\iSb
\big(\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}\big)
-
\tfrac12
\gamma^{\iSc\iMb\iSa\iSb}
\partz_\iSa \partz_\iSb
\partial_\iSc \ToneTen_{\iMb}
+
\tfrac12
\dot\gamma^{0\iMb\iSa\iSb}
\partz_\iSa \partz_\iSb
\ToneTen_{\iMb}
\Big)
\\&=
\int_\Interval d\sigma\Big(
\gamma^{\iMa\iMb00}\,\Gamma^\iMc_{\iMa\iMb} \,\ToneTen_{\iMc}
-
\gamma^{\iSa\iMb00}\,\partial_\iSa \ToneTen_{\iMb}
+
\dot\gamma^{0\iMc00}\,\ToneTen_{\iMc}
\\&\qquad\qquad
-
\gamma^{\iMa\iMb0\iSa}\,(\partial_\iSa\Gamma^\iMc_{\iMa\iMb}) \,\ToneTen_{\iMc}
-
\gamma^{\iMa\iMb0\iSa}\,
\Gamma^\iMc_{\iMa\iMb} \,\partial_\iSa\ToneTen_{\iMc}
+
\gamma^{\iSb\iMb0\iSa}\,\partial_\iSa
\partial_\iSb \ToneTen_{\iMb}
-
\dot\gamma^{0\iMb0\iSa}\,\partial_\iSa
\ToneTen_{\iMb}
\\&\qquad\qquad
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
\big(\partial_\iSa \partial_\iSb
\Gamma^\iMc_{\iMa\iMb}\big)\ToneTen_{\iMc}
+
\gamma^{\iMa\iMb\iSa\iSb}
\big(\partial_\iSa
\Gamma^\iMc_{\iMa\iMb} \big)
\,\big(\partial_\iSb\ToneTen_{\iMc}\big)
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
\Gamma^\iMc_{\iMa\iMb} \partial_\iSa \partial_\iSb
\ToneTen_{\iMc}
\\&\qquad\qquad\qquad
-
\tfrac12
\gamma^{\iSc\iMb\iSa\iSb}
\partial_\iSa \partial_\iSb
\partial_\iSc \ToneTen_{\iMb}
+
\tfrac12
\dot\gamma^{0\iMb\iSa\iSb}
\partial_\iSa \partial_\iSb
\ToneTen_{\iMb}
\Big)
\\&=
\int_\Interval d\sigma\bigg(
\ToneTen_{\iMc}
\Big(\gamma^{\iMa\iMb00}\,\Gamma^\iMc_{\iMa\iMb}
+
\dot\gamma^{0\iMc00}
-
\gamma^{\iMa\iMb0\iSa}\,(\partial_\iSa\Gamma^\iMc_{\iMa\iMb})
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
\big(\partial_\iSa \partial_\iSb
\Gamma^\iMc_{\iMa\iMb}\big)
\Big)
\\&\qquad\qquad
-\partial_\iSa \ToneTen_{\iMc} \Big(
\gamma^{\iSa\iMc00}\,
+
\gamma^{\iMa\iMb0\iSa}\,\Gamma^\iMc_{\iMa\iMb}
+
\dot\gamma^{0\iMc0\iSa}
-
\gamma^{\iMa\iMb\iSb\iSa}
\big(\partial_\iSb
\Gamma^\iMc_{\iMa\iMb} \big)
\Big)
\\&\qquad\qquad
+\partial_\iSa\partial_\iSb \ToneTen_{\iMc}
\Big(\gamma^{\iSb\iMc0\iSa}
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
\Gamma^\iMc_{\iMa\iMb}
+
\tfrac12
\dot\gamma^{0\iMc\iSa\iSb}
\Big)
-
\tfrac12
\gamma^{\iSc\iMb\iSa\iSb}
\partial_\iSa \partial_\iSb
\partial_\iSc \ToneTen_{\iMb}
\bigg)
$$ The terms with $\ToneTen_{\iMc}$, $\partial_\iSa\ToneTen_{\iMc}$, $\partial_\iSa\partial_\iSb\ToneTen_{\iMc}$ and $\partial_\iSa\partial_\iSb\partial_\iSc\ToneTen_{\iMb}$ are independent. In section \[ch\_CoFree\_Coords\] we give values of $\ToneTen_{\iMa}$ which demonstrate this. From this we get -. Note we must take the symmetric part with respect to $\iSb,\iSa$.
[Proof of -]{} \[pf\_gamma\_change\_coords\] This follows from substituting into .
We set $(x^0,\ldots,x^3)=(\sigma,z^1,z^2,z^3)$ and $(\xhat^\Ohat,\ldots,\xhat^{\hat{3}})=
(\hat\sigma,\zhat^{\hat{1}},\zhat^{\hat{2}},\zhat^{\hat{3}})$ into and use the fact that $\Cdothat^{\iMahat}=\delta^\iMahat_0$. Hence follows directly.
For we have from
$$\begin{aligned}
\zetahat^{\iMahat\iMbhat\iSchat\Ohat}
&=
\zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\,
J^{\iSchat}_\iMc\, J^{\Ohat}_\iMd
-\tfrac12
\Cdothat^\iSchat\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
\Big({\Jaabb} (\partx_\iMc\,J^{\Ohat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\Ohat}_\iMd\Big) \,d\sigma'
\\&\qquad
-\tfrac12
\Cdothat^\Ohat\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
\Big({\Jaabb} (\partx_\iMc\,J^{\iSchat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iSchat}_\iMd\Big) \,d\sigma'
\\&\quad
+\tfrac12
\Cdothat^{\Ohat}\int^{\sigma}
\Cdothat^{\iSchat}\int^{\sigma'} \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
+\tfrac12
\Cdothat^{\iSchat}\int^{\sigma}
\Cdothat^{\Ohat}\int^{\sigma'} \zeta^{\iMa\iMb\iMc\iMd} \,
\partx_\iMc\,\partx_\iMd \,\big({\Jaabb}\big)\,d\sigma''\,
d\sigma'
\\&=
\zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\,
J^{\iSchat}_\iMc\, J^{\Ohat}_\iMd
-\tfrac12
\int^\sigma \zeta^{\iMa\iMb\iMc\iMd}\,
\Big({\Jaabb} (\partx_\iMc\,J^{\iSchat}_\iMd)+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iSchat}_\iMd\Big) \,d\sigma'\end{aligned}$$
Thus from $$\begin{aligned}
\gammahat^{\iMahat\iMbhat\iSchat\Ohat}
&=
\dot\zetahat^{\iMahat\iMbhat\iSchat\Ohat}
=
(\zeta^{\iMa\iMb\iMc\iMd}\,
{\Jaabb}\,
J^{\iSchat}_\iMc\, J^{\Ohat}_\iMd)\dot{}
-\tfrac12
\zeta^{\iMa\iMb\iMc\iMd}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{\iMc\iMd}+
2\,\partx_\iMc\,({\Jaabb})\,J^{\iSchat}_\iMd\big)
\\&=
(\zeta^{\iMa\iMb00}\,{\Jaabb}\,J^{\iSchat}_0\,
J^{\Ohat}_0)\dot{}
+
(\zeta^{\iMa\iMb\iSc0}\,{\Jaabb}\,J^{\iSchat}_\iSc\,
J^{\Ohat}_0)\dot{}
+
(\zeta^{\iMa\iMb0\iSc}\,{\Jaabb}\,J^{\iSchat}_0\,
J^{\Ohat}_\iSc)\dot{}
+
(\zeta^{\iMa\iMb\iSc\iSd}\,{\Jaabb}\,J^{\iSchat}_\iSc\,
J^{\Ohat}_\iSd)\dot{}
\\&\quad
-
\tfrac12
\zeta^{\iMa\iMb00}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{00}+
2\,\partx_0\,({\Jaabb})\,J^{\iSchat}_0\big)
-
\tfrac12
\zeta^{\iMa\iMb\iSc0}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{\iSc0}+
2\,\partx_\iSc\,({\Jaabb})\,J^{\iSchat}_0\big)
\\&\quad
-
\tfrac12
\zeta^{\iMa\iMb\iSc0}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{\iSc0}+
2\,\partx_0\,({\Jaabb})\,J^{\iSchat}_\iSc\big)
-
\tfrac12
\zeta^{\iMa\iMb\iSc\iSd}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{\iSc\iSd}+
2\,\partx_\iSc\,({\Jaabb})\,J^{\iSchat}_\iSd\big)
\\&=
(\zeta^{\iMa\iMb\iSc0}\,{\Jaabb}\,J^{\iSchat}_\iSc)\dot{}
+
(\zeta^{\iMa\iMb\iSc\iSd}\,{\Jaabb}\,J^{\iSchat}_\iSc\,
J^{\Ohat}_\iSd)\dot{}
\\&\quad
-
\zeta^{\iMa\iMb\iSc0}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{\iSc0}+
{\Jdotaabb}\,J^{\iSchat}_\iSc\big)
-
\tfrac12
\zeta^{\iMa\iMb\iSc\iSd}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{\iSc\iSd}+
2\,\partx_\iSc\,({\Jaabb})\,J^{\iSchat}_\iSd\big)
\\&=
\dot\zeta^{\iMa\iMb\iSc0}\,{\Jaabb}\,J^{\iSchat}_\iSc
+
(\zeta^{\iMa\iMb\iSc\iSd}\,{\Jaabb}\,J^{\iSchat}_\iSc\,
J^{\Ohat}_\iSd)\dot{}
-
\tfrac12
\zeta^{\iMa\iMb\iSc\iSd}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{\iSc\iSd}+
2\,\partx_\iSc\,({\Jaabb})\,J^{\iSchat}_\iSd\big)
\\&=
\gamma^{\iMa\iMb\iSc0}\,{\Jaabb}\,J^{\iSchat}_\iSc
+
(\gamma^{\iMa\iMb\iSc\iSd}\,{\Jaabb}\,J^{\iSchat}_\iSc\,
J^{\Ohat}_\iSd)\dot{}
-
\tfrac12
\gamma^{\iMa\iMb\iSc\iSd}\,
\big({\Jaabb} \,\JJ^{\iSchat}_{\iSc\iSd}+
2\,\partx_\iSc\,({\Jaabb})\,J^{\iSchat}_\iSd\big)
$$ In order to show we have from $$\begin{aligned}
\zetahat^{\iMa \iMb \Ohat\Ohat}
& =
(\Jaabb J^\Ohat_{\iMc} \, J^\Ohat_{\iMd})\, \zeta^{\iMa \iMb \iMc \iMd}
-
\int^{\sigma}
\Big(
(\partx_{\iMd} J^\Ohat_{\iMc})\,{\Jaabb}+
J^\Ohat_{\iMc}\, \partx_{\iMd} {\Jaabb}+
J^{\Ohat}_{\iMd}\, \partx_{\iMc} {\Jaabb}
\Big) \zeta^{\iMa \iMb \iMc \iMd}\,d\sigma'
\\&\qquad
+ \int^\sigma
\hspace{-.5em}
d\sigma' \,
\int^{\sigma'} \partx_{\iMc \iMd} \Jaabb \, \zeta^{\iMa \iMb \iMc \iMd}\, d\sigma''
$$ where $\partx_{\iMc \iMd}=\partx_{\iMc}\partx_{\iMd}$. Hence $$\begin{aligned}
\gammahat^{\hat{\iMa}\hat{\iMb}\Ohat\Ohat}
&=
\tfrac{1}{2}\hat{\ddot{\zeta}}^{\hat{\iMa} \hat{\iMb}\Ohat\Ohat}
\\&=
\tfrac{1}{2}\Big(
\big((\Jaabb\ J^\Ohat_{\iMd} \, J^\Ohat_{\iMc})\, \zeta^{\iMa \iMb \iMc \iMd}\big)\ddot{}
-
\big((
(\partx_{\iMd} J^\Ohat_{\iMc})\,{\Jaabb}+
J^\Ohat_{\iMc}\, \partx_{\iMd} {\Jaabb}+
J^\Ohat_{\iMd}\, \partx_{\iMc} {\Jaabb}
)
\zeta^{\iMa \iMb \iMc \iMd}\big)\dot{}
+ \partx_{\iMc \iMd}{\Jaabb}\, \zeta^{\iMa \iMb \iMc \iMd}\Big)
\end{aligned}
\label{ChangeCoords_zeta_changeNNmm}$$ It is important to establish that all the $\zeta^{\iMa \iMb \iMc \iMd}$ on the right hand side of can be replaced by the corresponding $\gamma^{\iMa\iMb\iMc\iMd}$ without using integrals. However, since from we have $\gamma^{\iMa\iMb00}=\tfrac12\ddot\zeta^{\iMa \iMb00}$ and $\gamma^{\iMa\iMb\iSa0}=\dot\zeta^{\iMa \iMb\iSa0}$, we need to expand to confirm that no terms $\zeta^{\iMa \iMb00}$, $\dot\zeta^{\iMa \iMb00}$ or $\zeta^{\iMa\iMb\iSa0}$ remain on the right hand side.
$$\begin{aligned}
\gammahat^{\hat{\iMa}\hat{\iMb}\Ohat\Ohat}
&=
\tfrac12
\Big((\Jaabb\ J^\Ohat_{\iMd} \, J^\Ohat_{\iMc})\, \zeta^{\iMa \iMb \iMc \iMd}\Big)\ddot{}
-
\Big(\big(
\tfrac12\JJ^\Ohat_{\iMc\iMd}\,{\Jaabb}+
J^\Ohat_{\iMd}\, \partx_{\iMc} {\Jaabb}
\big)
\zeta^{\iMa \iMb \iMc \iMd}\Big)\dot{}
+ (\tfrac12\partx_{\iMc \iMd}{\Jaabb}\, )\zeta^{\iMa \iMb \iMc \iMd}
\\&=
\tfrac12\Big((\Jaabb\ J^\Ohat_{0} \, J^\Ohat_{0})\,
\zeta^{\iMa \iMb 00}\Big)\ddot{}
+
\Big((\Jaabb\ J^\Ohat_{\iSc} \, J^\Ohat_{0})\, \zeta^{\iMa \iMb \iSc 0}\Big)\ddot{}
+
\tfrac12\Big((\Jaabb\ J^\Ohat_{\iSd} \, J^\Ohat_{\iSc})\,
\zeta^{\iMa \iMb \iSc \iSd}\Big)\ddot{}
\\&\quad
-
\Big(\big(
\tfrac12\JJ^\Ohat_{00}\,{\Jaabb}+
J^\Ohat_{0}\, \partx_{0} {\Jaabb}
\big)
\zeta^{\iMa \iMb 00}\Big)\dot{}
-
\Big(\big(
\JJ^\Ohat_{0\iSc}\,{\Jaabb}+
J^\Ohat_{\iSc}\, \partx_{0} {\Jaabb} +
J^\Ohat_{0}\, \partx_{\iSc} {\Jaabb}
\big)
\zeta^{\iMa \iMb \iSc 0}\Big)\dot{}
\\&\quad
-
\Big(\big(
\tfrac12 \JJ^\Ohat_{\iSc\iSd}\,{\Jaabb}+
J^\Ohat_{\iSc}\, \partx_{\iSd} {\Jaabb}
\big)
\zeta^{\iMa \iMb \iSc \iSd}\Big)\dot{}
\\&\quad
+ (\tfrac12\partx_{00}{\Jaabb})\, \zeta^{\iMa \iMb 00}
+ (\partx_{0 \iSc}{\Jaabb})\, \zeta^{\iMa \iMb \iSc 0}
+ (\tfrac12\partx_{\iSc \iSd}{\Jaabb})\, \zeta^{\iMa \iMb \iSc \iSd}
\\&=
\tfrac12(\Jaabb\,
\zeta^{\iMa \iMb 00})\ddot{}
+
\Big((\Jaabb\ J^\Ohat_{\iSc}) \, \zeta^{\iMa \iMb \iSc 0}\Big)\ddot{}
+
\tfrac12\Big((\Jaabb\ J^\Ohat_{\iSd} \, J^\Ohat_{\iSc})\,
\zeta^{\iMa \iMb \iSc \iSd}\Big)\ddot{}
-
\Big(
{\Jdotaabb}
\zeta^{\iMa \iMb 00}\Big)\dot{}
\\&\quad
-
\Big(\big(
\JJ^\Ohat_{0\iSc}\,{\Jaabb}+
J^\Ohat_{\iSc}\, {\Jdotaabb}+
\partx_{\iSc} {\Jaabb}
\big)
\zeta^{\iMa \iMb \iSc 0}\Big)\dot{}
-
\Big(\big(
\tfrac12
\JJ^\Ohat_{\iSc\iSd}\,{\Jaabb}+
J^\Ohat_{\iSd}\, \partx_{\iSc} {\Jaabb}
\big)
\zeta^{\iMa \iMb \iSc \iSd}\Big)\dot{}
\\&\quad
+ \tfrac12\,{\Jddotaabb}\, \zeta^{\iMa \iMb 00}
+ (\partx_{\iSc}{\Jdotaabb})\, \zeta^{\iMa \iMb \iSc 0}
+ (\tfrac12\partx_{\iSc \iSd}{\Jaabb})\, \zeta^{\iMa \iMb \iSc \iSd}
\\&=
\tfrac12 \Jddotaabb\,\zeta^{\iMa \iMb 00}
+
\Jdotaabb\,\dot\zeta^{\iMa \iMb 00}
+
\tfrac12
\Jaabb\,\ddot\zeta^{\iMa \iMb 00}
\\&\quad
+
\Jddotaabb\ J^\Ohat_{\iSc} \, \zeta^{\iMa \iMb \iSc 0}
+
\Jaabb\ \JJ^\Ohat_{\iSc00} \, \zeta^{\iMa \iMb \iSc 0}
+
\Jaabb\ J^\Ohat_{\iSc} \, \ddot\zeta^{\iMa \iMb \iSc 0}
+
2\Jdotaabb\ \JJ^\Ohat_{\iSc0} \, \zeta^{\iMa \iMb \iSc 0}
+
2\Jdotaabb\ J^\Ohat_{\iSc} \, \dot\zeta^{\iMa \iMb \iSc 0}
+
2\Jaabb\ \JJ^\Ohat_{\iSc0} \, \dot\zeta^{\iMa \iMb \iSc 0}
\\&\quad
+
\tfrac12\Big((\Jaabb\ J^\Ohat_{\iSd} \, J^\Ohat_{\iSc})\,
\zeta^{\iMa \iMb \iSc \iSd}\Big)\ddot{}
-
{\Jddotaabb}
\zeta^{\iMa \iMb 00}
-
{\Jdotaabb}
\dot\zeta^{\iMa \iMb 00}
\\&\quad
-
\big(
\JJ^\Ohat_{00\iSc}\,{\Jaabb}+
2\JJ^\Ohat_{0\iSc}\,{\Jdotaabb}+
J^\Ohat_{\iSc}\, {\Jddotaabb}+
\partx_{0\iSc} {\Jaabb}
\big)
\zeta^{\iMa \iMb \iSc 0}
-
\big(
\JJ^\Ohat_{0\iSc}\,{\Jaabb}+
J^\Ohat_{\iSc}\, {\Jdotaabb}+
\partx_{\iSc} {\Jaabb}
\big)
\dot\zeta^{\iMa \iMb \iSc 0}
\\&\quad
-
\Big(\big(
\tfrac12
\JJ^\Ohat_{\iSc\iSd}\,{\Jaabb}+
J^\Ohat_{\iSd}\, \partx_{\iSc} {\Jaabb}
\big)
\zeta^{\iMa \iMb \iSc \iSd}\Big)\dot{}
\\&\quad
+ \tfrac12\,{\Jddotaabb}\, \zeta^{\iMa \iMb 00}
+ (\partx_{\iSc}{\Jdotaabb})\, \zeta^{\iMa \iMb \iSc 0}
+ (\tfrac12\partx_{\iSc \iSd}{\Jaabb})\, \zeta^{\iMa \iMb \iSc \iSd}
\\&=
\tfrac12
\Jaabb\,\ddot\zeta^{\iMa \iMb 00}
+
\Jaabb\ J^\Ohat_{\iSc} \, \ddot\zeta^{\iMa \iMb \iSc 0}
+
2\Jdotaabb\ J^\Ohat_{\iSc} \, \dot\zeta^{\iMa \iMb \iSc 0}
+
2\Jaabb\ \JJ^\Ohat_{\iSc0} \, \dot\zeta^{\iMa \iMb \iSc 0}
+
\tfrac12\Big((\Jaabb\ J^\Ohat_{\iSd} \, J^\Ohat_{\iSc})\,
\zeta^{\iMa \iMb \iSc \iSd}\Big)\ddot{}
\\&\quad
-
\big(
\JJ^\Ohat_{0\iSc}\,{\Jaabb}+
J^\Ohat_{\iSc}\, {\Jdotaabb}+
\partx_{\iSc} {\Jaabb}
\big)
\dot\zeta^{\iMa \iMb \iSc 0}
-
\Big(\big(
\tfrac12
\JJ^\Ohat_{\iSc\iSd}\,{\Jaabb}+
J^\Ohat_{\iSd}\, \partx_{\iSc} {\Jaabb}
\big)
\zeta^{\iMa \iMb \iSc \iSd}\Big)\dot{}
+ (\tfrac12\partx_{\iSc \iSd}{\Jaabb})\, \zeta^{\iMa \iMb \iSc \iSd}
\\&=
\Jaabb\,\gamma^{\iMa \iMb 00}
+
\Jaabb\,J^\Ohat_{\iSc} \,\dot\gamma^{\iMa\iMb\iSc0}
+
\big(
(\Jaabb\ J^\Ohat_{\iSc})\dot{}
-
\partx_{\iSc} {\Jaabb}
\big)
\gamma^{\iMa \iMb \iSc 0}
\\&\quad
+
\tfrac12\big((\Jaabb\ J^\Ohat_{\iSd} \, J^\Ohat_{\iSc})\,
\gamma^{\iMa \iMb \iSc \iSd}\big)\ddot{}
-
\big(\big(
\tfrac12
\JJ^\Ohat_{\iSc\iSd}\,{\Jaabb}+
J^\Ohat_{\iSd}\, \partx_{\iSc} {\Jaabb}
\big)
\gamma^{\iMa \iMb \iSc \iSd}\big)\dot{}
+ (\tfrac12\partx_{\iSc \iSd}{\Jaabb})\, \gamma^{\iMa \iMb \iSc \iSd}
$$
[Proof of ]{} \[pf\_SemiQuad\_Count\] For the semi-quadrupole, is automatically satisfied. Equations - become $$\begin{gathered}
\dot\gamma^{0000} = 0,\quad
\dot\gamma^{\iSa000} = 0,\quad
\dot\gamma^{00\iSa0} = -\gamma^{0\iSa00},\quad
\dot\gamma^{0(\iSb\iSa)0} = -\gamma^{\iSb\iSa00},\quad
\dot\gamma^{0[\iSb\iSa]0} = 0,
\\
\dot\gamma^{00\iSb\iSa} = -\gamma^{0(\iSa\iSb)0},\quad
0=\dot\gamma^{0\iSc\iSb\iSa} = -\gamma^{\iSc(\iSa\iSb)0}\end{gathered}$$ It may appear that we have not stated anything about $(\gamma^{\iSc\iSa\iSb0}-\gamma^{\iSc\iSb\iSa0})$. However due to the symmetry of $\gamma^{\iSc\iSa\iSb0}$ we have $$\begin{aligned}
\gamma^{\iSc\iSa\iSb0}-\gamma^{\iSc\iSb\iSa0}
=
\gamma^{\iSa\iSc\iSb0}-\gamma^{\iSb\iSc\iSa0}
=
-\gamma^{\iSa\iSb\iSc0}+\gamma^{\iSb\iSa\iSc0}
=
0\end{aligned}$$ Thus from the last equation above we have $\gamma^{\iSc\iSb\iSa0}=0$. Setting $\gamma^{00\iSb\iSa}=\kappa^{\iSa\iSb}(\sigma)$ we have $\gamma^{0(\iSb\iSa)0}=-\dot{\kappa}^{\iSa\iSb}$ and $\gamma^{\iSb\iSa00}=\ddot{\kappa}^{\iSa\iSb}$. The remaining constants in are then set.
Since there are 22 ODEs, one may expect 22 constants instead of 10. However, the remaining 12 arise from the initial values of ${\kappa}^{\iSa\iSb}$ and $\dot{\kappa}^{\iSa\iSb}$.
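The component bookkeeping can be checked mechanically: a symmetric $3\times3$ $\kappa^{\iSa\iSb}$ has 6 independent components, so its initial value together with that of $\dot\kappa^{\iSa\iSb}$ supplies the extra 12 constants. A small Python sketch of this arithmetic (a restatement of the counts above, not an independent derivation):

```python
from itertools import combinations_with_replacement

# independent components of a symmetric 3x3 array kappa^{ab}:
# unordered index pairs (a, b) with a <= b
sym3 = len(list(combinations_with_replacement(range(3), 2)))
assert sym3 == 6

# kappa and kappa-dot initial values, together with the 10 expected
# constants, account for all 22 ODE integration constants
assert 2 * sym3 + 10 == 22
```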
[Proof of and ]{} \[pf\_QK\] Let $\TzeroTen$ be a test function. Thus $$\begin{aligned}
\int_\Mman \nabla_\iMa (T^{\iMa\iMb}\,\Kill_\iMb)\, \TzeroTen\, d^4x
=
\int_\Mman \big(\nabla_\iMa T^{\iMa\iMb}\,\Kill_\iMb +
T^{\iMa\iMb}\,\nabla_\iMa \Kill_\iMb\big)\, \TzeroTen\, d^4x
=
0\end{aligned}$$ from , (\[Intro\_Tab\_Div\_zero\]) and . Since $T^{\iMa\iMb}$ is a tensor density then so is $T^{\iMa\iMb}\Kill_\iMb$. Hence $$\begin{aligned}
0
&=
\int_\Mman \nabla_\iMa (T^{\iMa\iMb}\,\Kill_\iMb)\, \TzeroTen d^4x
=
\int_\Mman T^{\iMa\iMb}\,\Kill_\iMb\, \nabla_\iMa \TzeroTen d^4x
=
\int_\Mman T^{\iMa\iMb}\,\Kill_\iMb\, \partial_\iMa \TzeroTen d^4x
\\&
=
\int_\Interval \Big(\gamma^{\iMa\iMb00}\,\Kill_\iMb\partial_\iMa\TzeroTen
-
\gamma^{\iMa\iMb0\iSa}\,\partz_\iSa
(\Kill_\iMb\partial_\iMa\TzeroTen)
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
\partz_\iSa \partz_\iSb\,(\Kill_\iMb\partial_\iMa\TzeroTen)
\Big)\,d\sigma
\\&=
\int_\Interval \Big(\partial_\iMa\TzeroTen
\big(\gamma^{\iMa\iMb00}\,\Kill_\iMb
-
\gamma^{\iMa\iMb0\iSa}\,\partz_\iSa \Kill_\iMb
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb} \partz_\iSa \partz_\iSb\,\Kill_\iMb\big)
\Big)\,d\sigma
+\text{higher derivatives of $\TzeroTen$.}
\\&=
\int_\Interval \Big(\partial_0\TzeroTen
\big(\gamma^{0\iMb00}\,\Kill_\iMb
-
\gamma^{0\iMb0\iSa}\,\partz_\iSa \Kill_\iMb
+
\tfrac12
\gamma^{0\iMb\iSa\iSb} \partz_\iSa \partz_\iSb\,\Kill_\iMb\big)
\Big)\,d\sigma
\\&\qquad\qquad
+
\int_\Interval \Big(\partial_\iSc\TzeroTen\
\big(\gamma^{\iSc\iMb00}\,\Kill_\iMb
-
\gamma^{\iSc\iMb0\iSa}\,\partz_\iSa \Kill_\iMb
+
\tfrac12
\gamma^{\iSc\iMb\iSa\iSb} \partz_\iSa \partz_\iSb\,\Kill_\iMb\big)
\Big)\,d\sigma
+\text{higher derivatives of $\TzeroTen$.}
\\&=
-
\int_\Interval \Big(\TzeroTen\
\partial_0\big(\gamma^{0\iMb00}\,\Kill_\iMb
-
\gamma^{0\iMb0\iSa}\,\partz_\iSa \Kill_\iMb
+
\tfrac12
\gamma^{0\iMb\iSa\iSb} \partz_\iSa \partz_\iSb\,\Kill_\iMb\big)
\Big)\,d\sigma
+\text{higher derivatives of $\TzeroTen$.}
\\&=
-
\int_\Interval \big(\TzeroTen\
\dot\Conserv_K
\big)\,d\sigma
+\text{higher derivatives of $\TzeroTen$.}\end{aligned}$$ Thus since we can extract the different derivatives of $\TzeroTen$ we have $\dot\Conserv_K=0$.
Clearly for dipoles we have $\gamma^{0\iMb\iSa\iSb}=0$, and hence follows.
Proofs for section \[ch\_CoFree\]
---------------------------------
[Proof of ]{} \[pf\_nabla\^2\_flin\] $$\begin{aligned}
\nablaDG^2_{U,fV} S
&=
\nablaDG_U\,\nablaDG_{(fV)} S
-
\nablaDG_{\nablaDG_U (fV)} S
=
\nablaDG_U\,(f\nablaDG_{V} S)
-
\nablaDG_{(f\nablaDG_U V + U\VAct{f} V)} S
\\&=
f\,\nablaDG_U\,\nablaDG_{V} S + U\VAct{f}\,\nablaDG_V S
-
f\nablaDG_{\nablaDG_U V} S - U\VAct{f}\,\nablaDG_V S
= f \nablaDG^2_{U,V} S\end{aligned}$$
[Proof of ]{} \[pf\_nabla\^2\_nabla\] $$\begin{aligned}
\nablaDG^2_{U,V} W
&=
\nablaDG_U\,\nablaDG_V W
-
\nablaDG_{\nablaDG_U V} W
=
U^\iMb \nablaInd_\iMb\,(\nablaDG_V W)^{\iMa} \partial_{\iMa}
-
(\nablaDG_U V)^\iMc (\nablaInd_\iMc W^{\iMa}) \partial_{\iMa}
\\&=
U^\iMb \nablaInd_\iMb\,(V^\iMc \nablaInd_\iMc W^{\iMa})\partial_{\iMa}
-
U^\iMb (\nablaInd_\iMb V^\iMc) (\nablaInd_\iMc W^{\iMa}) \partial_{\iMa}
\\&=
U^\iMb
\Big(\nablaInd_\iMb\,(V^\iMc \nablaInd_\iMc W^{\iMa})
-
(\nablaInd_\iMb V^\iMc) (\nablaInd_\iMc W^{\iMa})\Big) \partial_{\iMa}
\\&=
U^\iMb\,V^\iMc\,
\Big(\nablaInd_\iMb\,\nablaInd_\iMc W^{\iMa}\Big) \partial_{\iMa}\end{aligned}$$
[Proof of ]{} \[pf\_D\_coords\] Let $\ToneTen$ be a test 1–form then $$\begin{aligned}
(D\ToneTen)(U,V)
&=
(\nablaDG_V\ToneTen)(U)
=
U^\iMb (\nablaDG_V\ToneTen)_\iMb
=
U^\iMb\, V^{\iMa}\,\nablaInd_{\iMa} \ToneTen_{\iMb}
=
(\nablaInd_\iMb \ToneTen_{\iMa})\
(dx^\iMb\otimes dx^{\iMa}) (U,V)\end{aligned}$$ hence $$\begin{aligned}
D\ToneTen=
(\nablaInd_\iMb \ToneTen_{\iMa})\
(dx^\iMb\otimes dx^{\iMa}) \end{aligned}$$ Thus $$\begin{aligned}
D\tau[\ToneTen]
&=
-\tau[D\ToneTen]
=
-\tau[(\nablaInd_\iMb \ToneTen_{\iMa})\
(dx^\iMb\otimes dx^{\iMa})]
=
-\tau^{\iMa}[(\nablaInd_\iMb \ToneTen_{\iMa})\
dx^\iMb]
=
-\tau^{\iMa}[(\partial_\iMb \ToneTen_{\iMa}-\Gamma^\iMc_{\iMb\iMa} \ToneTen_{\iMc})\,dx^\iMb]
\\&=
-\tau^{\iMa}[\partial_\iMb \ToneTen_{\iMa}\,dx^\iMb-\Gamma^\iMc_{\iMb\iMa} \ToneTen_{\iMc}\,dx^\iMb]
=
-\tau^{\iMa}[\partial_\iMb \ToneTen_{\iMa}\,dx^\iMb]
+
\tau^{\iMa}[\Gamma^\iMc_{\iMb\iMa} \ToneTen_{\iMc}\,dx^\iMb]
=
-\tau^{\iMa}[d\ToneTen_{\iMa}]
+
\Gamma^\iMc_{\iMb\iMa} \,dx^\iMb\wedge\tau^{\iMa}[\ToneTen_{\iMc}]
\\&=
d\tau^{\iMa}[\ToneTen_{\iMa}]
+
\Gamma^\iMc_{\iMb\iMa} \,dx^\iMb\wedge\tau^{\iMa}[\ToneTen_{\iMc}]
=
\big(d\tau^\iMc
+
\Gamma^\iMc_{\iMb\iMa} \,dx^\iMb\wedge\tau^{\iMa}\big)[\ToneTen_{\iMc}]\end{aligned}$$
[Proof of ]{} \[pf\_tau\_Tab\_zetas\] From and (\[QP\_Tab\_action\]) we have $$\begin{aligned}
\tau^{\iMa}[\TtwoTen_{\iMa\iMb}\,dx^\iMb]
&=
\tfrac12\, i_\iMe\, L_\iMc\, L_\iMd
C_\PF(\zeta^{\iMa\iMe\iMc\iMd} d\sigma)
[\TtwoTen_{\iMa\iMb}\,dx^\iMb]
=
\tfrac12\, L_\iMc\, L_\iMd
C_\PF( \zeta^{\iMa\iMe\iMc\iMd} d\sigma)
[i_\iMe\,\TtwoTen_{\iMa\iMb}\,dx^\iMb]
\\&=
\tfrac12\, \delta^{\iMb}_{\iMe} L_\iMc\, L_\iMd
C_\PF(\zeta^{\iMa\iMe\iMc\iMd} d\sigma)
[\TtwoTen_{\iMa\iMb}]
=
\tfrac12\, L_\iMc\, L_\iMd
C_\PF(\zeta^{\iMa\iMb\iMc\iMd} d\sigma)
[\TtwoTen_{\iMa\iMb}]
\\&=
\tfrac12\,
C_\PF(\zeta^{\iMa\iMb\iMc\iMd} d\sigma)
[\partx_\iMc\, \partx_\iMd\TtwoTen_{\iMa\iMb}]
=
\tfrac12\,
\int_\Interval \zeta^{\iMa\iMb\iMc\iMd}\,
(\partx_\iMc\, \partx_\iMd\TtwoTen_{\iMa\iMb})\, d\sigma
=
\int_\Interval T^{\iMa\iMb}\,\TtwoTen_{\iMa\iMb}\, d^4x \end{aligned}$$ from .
[Proof of ]{} \[pf\_tau\_Tab\_gammas\] $$\begin{aligned}
\tau^{\iMa}[\TtwoTen_{\iMa\iMb}\,dx^\iMb]
&=
i_\iMe \, C_\PF(\gamma^{\iMa\iMe00}\,d\sigma)
[\TtwoTen_{\iMa\iMb}\,dx^\iMb]
+
i_\iMe \, L_\iSa\, C_\PF(\gamma^{\iMa\iMe0\iSa}\,d\sigma)
[\TtwoTen_{\iMa\iMb}\,dx^\iMb]
\\&\qquad\qquad
+
\tfrac12 i_\iMe \, L_\iSa\,L_\iSb\,
C_\PF(\gamma^{\iMa\iMe\iSa\iSb}\,d\sigma)
[\TtwoTen_{\iMa\iMb}\,dx^\iMb]
\\&=
C_\PF(\gamma^{\iMa\iMb00}\,d\sigma)
[\TtwoTen_{\iMa\iMb}]
+
L_\iSa\, C_\PF(\gamma^{\iMa\iMb0\iSa}\,d\sigma)
[\TtwoTen_{\iMa\iMb}]
+
\tfrac12 L_\iSa\,L_\iSb\,
C_\PF(\gamma^{\iMa\iMb\iSa\iSb}\,d\sigma)
[\TtwoTen_{\iMa\iMb}]
\\&=
C_\PF(\gamma^{\iMa\iMb00}\,d\sigma)
[\TtwoTen_{\iMa\iMb}]
-
C_\PF(\gamma^{\iMa\iMb0\iSa}\,d\sigma)
[\partz_\iSa \TtwoTen_{\iMa\iMb}]
+
\tfrac12
C_\PF(\gamma^{\iMa\iMb\iSa\iSb}\,d\sigma)
[\partz_\iSa \partz_\iSb\,\TtwoTen_{\iMa\iMb}]
\\&=
\int_\Interval \gamma^{\iMa\iMb00}\,\TtwoTen_{\iMa\iMb}\,d\sigma
-
\int_\Interval \gamma^{\iMa\iMb0\iSa}\,(\partz_\iSa
\TtwoTen_{\iMa\iMb}) d\sigma
+
\tfrac12
\int_\Interval \gamma^{\iMa\iMb\iSa\iSb}
(\partz_\iSa \partz_\iSb\,\TtwoTen_{\iMa\iMb})\,d\sigma
\\&=
\int_\Interval \Big(\gamma^{\iMa\iMb00}\,\TtwoTen_{\iMa\iMb}
-
\gamma^{\iMa\iMb0\iSa}\,(\partz_\iSa
\TtwoTen_{\iMa\iMb})
+
\tfrac12
\gamma^{\iMa\iMb\iSa\iSb}
(\partz_\iSa \partz_\iSb\,\TtwoTen_{\iMa\iMb})
\Big)\,d\sigma\end{aligned}$$
[Proof of and Semi-quadrupole counting.]{} A simple application using $\lambda=\lambda_1+\lambda_2$ and $\lambda=\lambda_1-\lambda_2$ implies that, for $\ell=2$, we can replace with $\tau_\alpha[\lambda_1\lambda_2\, d\iSa]=0$, where $C^\star(\lambda_1)=C^\star(\lambda_2)=C^\star(\iSa)=0$.
In an adapted coordinate system $(\sigma,z^1,z^2,z^3)$, applying this to the test form $z^\iSa\,z^\iSb\,d z^\iSc$ in we see that it leads to the equation $$\begin{aligned}
\gamma^{\iMa\iSc\iSa\iSb}=0
\label{pf_Semi_Q_cond}\end{aligned}$$ and hence .
We can now count the number and type of components. The dynamic equation and (\[QP\_DTeqn\_a00m\]) remain unchanged but becomes $$\begin{aligned}
\gamma^{\iSc \iSb 0 \iSa}
=
- \Gamma^\iSc{}_{00}\, \gamma^{0 0 \iSa \iSb}
\qquadand
\dot \gamma^{0 0 \iSa\iSb}
&=
- 2\gamma^{0 (\iSb 0) \iSa}
- \Gamma^0{}_{00}\, \gamma^{0 0 \iSa \iSb}\end{aligned}$$ since the symmetry condition implies $\gamma^{\iSc0\iSa\iSb}=\gamma^{0\iSc\iSa\iSb}=0$. Thus we have 4+12+6=22 ODEs.
Starting with the 100 components given after applying we have $9\times 6=54$ constraints coming from plus $18$ constraints coming from the first equation above. This leaves $100-54-18=28$ components. Of these, 22 are given by the ODEs and 6 are free.
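The bookkeeping above is plain arithmetic; as a minimal sanity check (all numbers are taken directly from the text):

```python
# Component counting for the semi-quadrupole case, as stated in the text.
total = 100          # components after applying the symmetry conditions
constraints = 9 * 6  # 54 constraints from the semi-quadrupole condition
extra = 18           # constraints from the first equation above
remaining = total - constraints - extra

odes = 4 + 12 + 6    # 22 ODEs, as counted in the text
free = 6             # free components

# The remaining components split into ODE-determined and free ones.
assert remaining == 28
assert remaining == odes + free
```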
Lemmas and proofs associated with the Dixon split {#ch_Apendx_DixonSplit}
-------------------------------------------------
In this section we work in a coordinate system $(\sigma,z^1,z^2,z^3)$, which is adapted both for $C$ and $\DixVec$, so that $\DixVec=\DixVec_0 d\sigma$ with $\DixVec_0\ne 0$. We see that if $N(V)=0$ then we can replace $V^{\iMa}$ with $V^\iSa$. Likewise we can replace $\xi^{\iMa\iMb\iMc\iMd}$ with $\xi^{\iMa\iMb\iSa\iSb}$ since $\xi^{\iMa\iMb0\iSa}=\xi^{\iMa\iMb\iSa0}=0$.
In this coordinate system a radial vector $R$ has the properties $$\begin{aligned}
\begin{gathered}
R^{\iMa}|_p=0
,\quad
\partial_{\iMa} R^0|_p = 0
,\quad
\partial_{\iMa} R^\iSa|_p = \delta_{\iMa}^\iSa
,\quad
\partial_0 \partial_{\iMa} R^\iMb|_p = 0
,\\
\partial_\iSb\partial_\iSc R^0|_p = -2\Gamma^0_{\iSb\iSc}
\quadand
\partial_\iSb\partial_\iSc R^\iSa|_p = -\Gamma^\iSa_{\iSb\iSc}
\end{gathered}
\label{CoFree_Dixon_Radial_diffs_R^mu}\end{aligned}$$ for any $p=C(\sigma)$. This can be expressed as $$\begin{aligned}
R^0 = -
z^\iSb z^\iSc\,\Gamma^0_{\iSb\iSc}
+
O(\Vz^3)
\qquadand
R^\iSa = z^\iSa - \tfrac12 z^\iSb z^\iSc\,\Gamma^\iSa_{\iSb\iSc} +
O(\Vz^3)
\label{CoFree_Dixon_Radial_expan_alt}\end{aligned}$$ or alternatively as $$\begin{aligned}
R= z^\iSa\,\partz_\iSa
-
z^\iSb z^\iSc\,\Gamma^0_{\iSb\iSc}\,\partial_0
-
\tfrac12 z^\iSb z^\iSc\,\Gamma^\iSa_{\iSb\iSc}\,\partial_\iSa
+ O(\Vz^3)
\label{CoFree_Dixon_Radial_expan}\end{aligned}$$ where $O(\Vz^3)$ is any function (or vector) of $(\sigma,z^1,z^2,z^3)$ which is at least cubic in its $z^\iSa$ arguments.
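As a consistency check (assuming, as elsewhere in this section, that the connection coefficients are evaluated on $C$ and are symmetric in their lower indices), differentiating the expansions above and evaluating at $p$, i.e. at $z^\iSa=0$, reproduces the stated derivative properties:

```latex
% Differentiating R^a = z^a - (1/2) z^b z^c Gamma^a_{bc} + O(|z|^3)
% and R^0 = - z^b z^c Gamma^0_{bc} + O(|z|^3), then setting z = 0:
\partial_d R^a \big|_p = \delta^a_d ,
\qquad
\partial_d \partial_e R^a \big|_p
  = -\tfrac12\left(\Gamma^a_{de} + \Gamma^a_{ed}\right)
  = -\Gamma^a_{de} ,
\qquad
\partial_d \partial_e R^0 \big|_p = -2\,\Gamma^0_{de}
```

Here plain indices $a,d,e$ stand in for the macro indices used in the surrounding equations.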
[Proof of ]{} In the adapted coordinate system, assume first that $R^{\iMa}$ satisfies and that $U,V$ satisfy $N(U)=N(V)=0$, so $U^0=V^0=0$.
Clearly from either (\[CoFree\_Dixon\_def\_Rad\].1) or (\[CoFree\_Dixon\_Radial\_diffs\_R\^mu\].1) we have $R|_p=0$. Here (\[CoFree\_Dixon\_def\_Rad\].1) refers to the first equation in . $$\begin{aligned}
\big(\nablaDG_V R - V\big)^{\iMa}\big|_p
=
\big(V^\iMb\partial_\iMb(R^{\iMa}) + V^\iMb R^\iMc \Gamma^{\iMa}_{\iMb\iMc} - V^{\iMa}\big)\big|_p
=
\big(V^\iSa (\partial_\iSa(R^{\iMa}) - \delta^{\iMa}_\iSa)\big)\big|_p\end{aligned}$$ Thus (\[CoFree\_Dixon\_def\_Rad\].2) is equivalent to (\[CoFree\_Dixon\_Radial\_diffs\_R\^mu\].2) and (\[CoFree\_Dixon\_Radial\_diffs\_R\^mu\].3). From (\[CoFree\_Dixon\_Radial\_diffs\_R\^mu\].2) and (\[CoFree\_Dixon\_Radial\_diffs\_R\^mu\].3) we have (\[CoFree\_Dixon\_Radial\_diffs\_R\^mu\].4).
From (\[CoFree\_Dixon\_def\_Rad\].2) we have, (implicitly evaluating at $p$), $$\begin{aligned}
\nablaInd_\iSb(\nablaInd_\iSc R^\iSa)
&=
\partial_\iSb(\nablaInd_\iSc R^\iSa)
+
(\nablaInd_\iSc R^\iSd)
\Gamma^\iSa_{\iSb\iSd}
-
(\nablaInd_\iSd R^\iSa)
\Gamma^\iSd_{\iSb\iSc}
\\&=
\partial_\iSb\partial_\iSc R^\iSa
+
\partial_\iSb(R^\iSe\,\Gamma^\iSa_{\iSc\iSe})
+
(\partial_\iSc R^\iSd)
\Gamma^\iSa_{\iSb\iSd}
+
R^\iSe \Gamma^\iSd_{\iSc\iSe}
\Gamma^\iSa_{\iSb\iSd}
-
(\partial_\iSd R^\iSa)
\Gamma^\iSd_{\iSb\iSc}
-
R^\iSe \Gamma^\iSa_{\iSd\iSe}
\Gamma^\iSd_{\iSb\iSc}
\\&=
\partial_\iSb\partial_\iSc R^\iSa
+
\delta^\iSe_\iSb\,\Gamma^\iSa_{\iSc\iSe}
+
\delta^\iSd_\iSc
\Gamma^\iSa_{\iSb\iSd}
-
\delta^\iSa_\iSd
\Gamma^\iSd_{\iSb\iSc}
=
\partial_\iSb\partial_\iSc R^\iSa
+
\Gamma^\iSa_{\iSc\iSb}
+
\Gamma^\iSa_{\iSb\iSc}
-
\Gamma^\iSa_{\iSb\iSc}
\\&=
\partial_\iSb\partial_\iSc R^\iSa
+
\Gamma^\iSa_{\iSc\iSb}\end{aligned}$$ and $$\begin{aligned}
\nablaInd_\iSb(\nablaInd_\iSc R^0)
&=
\partial_\iSb(\nablaInd_\iSc R^0)
+
(\nablaInd_\iSc R^\iSd)
\Gamma^0_{\iSb \iSd}
-
(\nablaInd_\iSd R^0)
\Gamma^\iSd_{\iSb\iSc}
=
\partial_\iSb\partial_\iSc R^0
+
\partial_\iSb(R^\iSe\,\Gamma^0_{\iSc\iSe})
+
\Gamma^0_{\iSb \iSc}
\\&=
\partial_\iSb\partial_\iSc R^0
+
\partial_\iSb(R^\iSe)\,\Gamma^0_{\iSc\iSe}
+
\Gamma^0_{\iSb \iSc}
=
\partial_\iSb\partial_\iSc R^0
+
2\Gamma^0_{\iSb \iSc} \end{aligned}$$ Thus $$\begin{aligned}
\nablaDG^2_{U,V} R
&=
V^\iSc U^\iSb (\nablaInd_\iSb\nablaInd_\iSc R^\iSa)\,
\partial_\iSa
+
V^\iSc U^\iSb (\nablaInd_\iSb\nablaInd_\iSc R^0)\,
\partial_0
\\&=
V^\iSc U^\iSb (\partial_\iSb\partial_\iSc R^\iSa+ \Gamma^\iSa_{\iSc\iSb})
\partial_\iSa
+
V^\iSc U^\iSb (\partial_\iSb\partial_\iSc R^0+ 2\Gamma^0_{\iSb\iSc})
\partial_0\end{aligned}$$ Hence (\[CoFree\_Dixon\_def\_Rad\].3) holds if and only if (\[CoFree\_Dixon\_Radial\_diffs\_R\^mu\].5) and (\[CoFree\_Dixon\_Radial\_diffs\_R\^mu\].6) hold.
[Proof of ]{} \[pf\_DixonSplit\_quad\] In the adapted coordinate system and evaluating at $C(\sigma)$ we have $$\begin{aligned}
\xi^{\iMa\iMb}
(R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf \TtwoTen_{\iMa\iMb})
=
0\end{aligned}$$ Thus the monopole term does not contribute to $\tau_{(2)}$. Likewise $$\begin{aligned}
\xi^{\iMa\iMb\iMc\iMd} \nablaInd_\iMc
(R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf \TtwoTen_{\iMa\iMb})
=
0\end{aligned}$$ so the dipole term does not contribute to $\tau_{(2)}$. Finally we have $$\begin{aligned}
\xi^{\iMa\iMb\iMc\iMd} \nablaInd_\iMc\nablaInd_\iMd &
(R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf \TtwoTen_{\iMa\iMb})
=
\xi^{\iMa\iMb\iSa\iSb} \nablaInd_{\iSa}\nablaInd_{\iSb}
(R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf \TtwoTen_{\iMa\iMb})
=
\xi^{\iMa\iMb\iSa\iSb}
(\partial_{\iSa}\partial_{\iSb}(R^\iMe\, R^\iMf)\, \nablaInd_\iMe\nablaInd_\iMf \TtwoTen_{\iMa\iMb})
\\&=
\xi^{\iMa\iMb\iSa\iSb} (\delta_\iSa^\iMe\delta_\iSb^\iMf+\delta_\iSa^\iMf\delta_\iSb^\iMe)
(\nablaInd_\iMe\nablaInd_\iMf \TtwoTen_{\iMa\iMb})
=
2 \xi^{\iMa\iMb\iSa\iSb}
(\nablaInd_\iSa\nablaInd_\iSb \TtwoTen_{\iMa\iMb})
=
2 \xi^{\iMa\iMb\iMc\iMd}
(\nablaInd_\iMc\nablaInd_\iMd \TtwoTen_{\iMa\iMb})
\label{pf_CoFree_Dixon_xi^abcd_RR}\end{aligned}$$ Thus $\tau_{(2)}$ is given by .
[Proof of ]{} \[pf\_DixonSplit\_dip\] Since $$\begin{aligned}
\xi^{\iMa\iMb}
(R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb} - R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf
\TtwoTen_{\iMa\iMb})
=0\end{aligned}$$ the monopole term does not contribute to $\tau_{(1)}$. Also $$\begin{aligned}
\nablaInd_\iSa\nablaInd_\iSb\,(R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb})
&=
\nablaInd_\iSa\big((\nablaInd_\iSb R^\iMe)\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}\big)
+
\nablaInd_\iSa\big( R^\iMe \nablaInd_\iSb\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}\big)
\\&=
(\nablaInd_\iSa \nablaInd_\iSb R^\iMe)\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}
+
(\nablaInd_\iSb R^\iMe)\nablaInd_\iSa \nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}
+
(\nablaInd_\iSa R^\iMe) \nablaInd_\iSb\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}
+
R^\iMe \nablaInd_\iSa\nablaInd_\iSb\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}
\\&=
\delta^\iMe_\iSb \nablaInd_\iSa \nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}
+
\delta^\iMe_\iSa \nablaInd_\iSb\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}
=
\nablaInd_\iSa \nablaInd_\iSb\,\TtwoTen_{\iMa\iMb}
+
\nablaInd_\iSb\nablaInd_\iSa\,\TtwoTen_{\iMa\iMb}\end{aligned}$$ Hence $$\begin{aligned}
\xi^{\iMa\iMb\iMc\iMd}\, \nablaInd_\iMc\nablaInd_\iMd\,(R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb})
&=
\xi^{\iMa\iMb\iSa\iSb}\, \nablaInd_\iSa\nablaInd_\iSb\,(R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb})
=
\xi^{\iMa\iMb\iSa\iSb}\, \big(\nablaInd_\iSa \nablaInd_\iSb\,\TtwoTen_{\iMa\iMb}
+
\nablaInd_\iSb\nablaInd_\iSa\,\TtwoTen_{\iMa\iMb}\big)
\\&=
2 \xi^{\iMa\iMb\iSa\iSb}
(\nablaInd_\iSa\nablaInd_\iSb \TtwoTen_{\iMa\iMb})
=
2 \xi^{\iMa\iMb\iMc\iMd}
(\nablaInd_\iMc\nablaInd_\iMd \TtwoTen_{\iMa\iMb})
\label{pf_CoFree_Dixon_xi^abcd_R}\end{aligned}$$ Thus using we see $$\begin{aligned}
\xi^{\iMa\iMb\iMc\iMd}\, \nablaInd_\iMc\nablaInd_\iMd\,
(R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb} - R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf
\TtwoTen_{\iMa\iMb})
&=
0\end{aligned}$$ Thus the quadrupole term does not contribute to $\tau_{(1)}$. Finally $$\begin{aligned}
\xi^{\iMa\iMb\iMc}\, \nablaInd_\iMc
(R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb})
&=
\xi^{\iMa\iMb\iSa}\, \nablaInd_\iSa
(R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb})
=
\xi^{\iMa\iMb\iSa}\, (\nablaInd_\iSa R^\iMe)\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}
=
\xi^{\iMa\iMb\iSa}\, \delta_\iSa^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb}
\\&=
\xi^{\iMa\iMb\iSa}\, \nablaInd_\iSa\,\TtwoTen_{\iMa\iMb}
=
\xi^{\iMa\iMb\iMc}\, \nablaInd_\iMc\,\TtwoTen_{\iMa\iMb}
\label{pf_CoFree_Dixon_xi^abc_R}\end{aligned}$$ Thus $\tau_{(1)}$ is given by .
[Proof of ]{} \[pf\_DixonSplit\_mono\] From and we have $$\begin{aligned}
\xi^{\iMa\iMb\iMc\iMd}\, \nablaInd_\iMc\nablaInd_\iMd\,
(\TtwoTen_{\iMa\iMb} -
R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb} + \tfrac12R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf
\TtwoTen_{\iMa\iMb})
&=
0\end{aligned}$$ Thus the quadrupole term does not contribute to $\tau_{(0)}$. Using we have $$\begin{aligned}
\xi^{\iMa\iMb\iMc}\, \nablaInd_\iMc
(\TtwoTen_{\iMa\iMb} -
R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb} + \tfrac12R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf
\TtwoTen_{\iMa\iMb})=0\end{aligned}$$ so the dipole term does not contribute to $\tau_{(0)}$. Finally $$\begin{aligned}
\xi^{\iMa\iMb}\,
(\TtwoTen_{\iMa\iMb} -
R^\iMe\,\nablaInd_\iMe\,\TtwoTen_{\iMa\iMb} + \tfrac12R^\iMe\, R^\iMf\, \nablaInd_\iMe\nablaInd_\iMf
\TtwoTen_{\iMa\iMb})
&=
\xi^{\iMa\iMb}\, \TtwoTen_{\iMa\iMb}\end{aligned}$$ so $\tau_{(0)}$ is given by .
[^1]: It turns out that in most calculations the metric plays no role and an arbitrary linear connection can be used. See section \[ch\_CoFree\].
[^2]: Even using proper time in Minkowski space, one cannot assume that $\Interval=\Real$ since it is possible to accelerate to lightlike infinity in finite proper time.
---
abstract: 'Dust plays an essential role in the unification theory of active galactic nuclei (AGNs). This review summarizes our current understanding of the extinction and infrared emission properties of the circumnuclear dust in AGNs as well as the inferred dust composition and size distribution.'
author:
- Aigen Li
title: Dust in Active Galactic Nuclei
---
Introduction: Are All AGNs Born Equal? — The Role of Dust in the Unified Schemes of AGNs
========================================================================================
Dust is the cornerstone of the unification theory of active galactic nuclei (AGNs). This theory proposes that all AGNs are essentially “born equal”: all types of AGNs are surrounded by an optically thick dust torus and are basically the same object but viewed from different lines of sight (see e.g. Antonucci 1993; Urry & Padovani 1995). The large diversity in the observational properties of AGNs (e.g. optical emission-line widths and X-ray spectral slopes) is simply caused by the viewing-angle-dependent obscuration of the nucleus: those viewed face-on are unobscured (allowing for a direct view of their nuclei) and recognized as “type 1” AGNs, while those viewed edge-on are “type 2” AGNs with most of their central engine and broad line regions being hidden by the obscuring dust.
Apparently, key factors in understanding the structure and nature of AGNs are determining the geometry of the nuclear obscuring torus around the central engine and the obscuration (i.e. extinction, a combination of absorption and scattering) properties of the circumnuclear dust. An accurate knowledge of the dust extinction properties is also required to correct for the dust obscuration in order to recover the intrinsic optical/ultraviolet (UV) spectrum of the nucleus from the observed spectrum and to probe the physical conditions of the dust-enshrouded gas close to the nucleus.
The presence of an obscuring dust torus around the central engine was first indirectly indicated by the spectropolarimetric detection of broad permitted emission lines (characteristic of type 1 AGNs) scattered into our line of sight by free electrons located above or below the dust torus in a number of type 2 AGNs (e.g. see Heisler et al. 1997, Tran 2003). Direct evidence for the presence of a dust torus is provided by infrared (IR) observations. The circumnuclear dust absorbs the AGN illumination and reradiates the absorbed energy in the IR. The IR emission at wavelengths longward of $\lambda$$>$1$\mum$ accounts for at least 50% of the bolometric luminosity of type 2 AGNs. For type 1 AGNs, $\simali$10% of the bolometric luminosity is emitted in the IR (e.g. see Fig.13.7 of Osterbrock & Ferland 2006). A near-IR “bump” (excess emission above the $\simali$2–10$\mum$ continuum), generally attributed to hot dust with temperatures around $\simali$1200–1500$\K$ (near the sublimation temperatures of silicate and graphite grains), is seen in a few type 1 AGNs (Barvainis 1987; Rodríguez-Ardila & Mazzalay 2006). Direct imaging at near- and mid-IR wavelengths has been performed for several AGNs and provides constraints on the size and structure of the circumnuclear dust torus (e.g. see Jaffe et al. 2004, Elitzur 2006). Spectroscopically, the 10$\mum$ silicate [*absorption*]{} feature (see §3.3) and the 3.4$\mum$ aliphatic hydrocarbon [*absorption*]{} feature (see §3.2) are widely seen in heavily obscured type 2 AGNs; in contrast, the 10$\mum$ silicate [*emission*]{} feature has recently been detected in a number of type 1 AGNs (see §3.3).
Properly interpreting the observed IR continuum emission and spectroscopy, as well as the IR images of AGNs, requires a good understanding of the absorption and emission properties of the circumnuclear dust. To this end, one needs to know the composition, size, and morphology of the dust – with this knowledge, one can use Mie theory (for spherical dust) to calculate the absorption and scattering cross sections of the dust from X-ray to far-IR wavelengths, then calculate its UV/optical/near-IR obscuration as a function of wavelength, and derive the dust thermal equilibrium temperature (based on the energy balance between absorption and emission) as well as its IR emission spectrum. This will allow us to correct for dust obscuration and to constrain the circumnuclear structure through modeling the observed IR emission and images. The former is essential for interpreting the obscured UV/optical emission lines and probing the physical conditions of the central regions; the latter is critical to our understanding of the growth of the central supermassive black hole. However, little is known about the dust in the circumnuclear torus of AGNs. Even our knowledge of the best-studied dust – the Milky Way interstellar dust – is very limited. In this review, I will present a comparative study of the extinction, IR emission, and UV/IR spectroscopic properties, as well as the inferred composition, size, and morphology, of the dust in AGNs and of the dust in the interstellar medium (ISM) of the Milky Way and other galaxies.
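The energy-balance argument above can be made quantitative with a back-of-the-envelope sketch. The snippet below (an illustrative simplification: it treats grains as blackbodies, whereas real grains have wavelength-dependent emission efficiencies, so it yields only order-of-magnitude numbers) balances absorbed AGN flux against thermal re-emission, $L/(4\pi d^2)\cdot\pi a^2 = 4\pi a^2\sigma T^4$, giving $T = [L/(16\pi\sigma d^2)]^{1/4}$:

```python
import math

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN = 3.828e26           # solar luminosity [W]
PC = 3.0857e16             # parsec [m]

def grain_temperature(L_agn, d):
    """Equilibrium temperature [K] of a blackbody grain at distance d [m]
    from a central source of luminosity L_agn [W]."""
    return (L_agn / (16.0 * math.pi * SIGMA_SB * d**2)) ** 0.25

def sublimation_radius(L_agn, T_sub=1500.0):
    """Distance [m] at which the equilibrium temperature equals T_sub;
    ~1200-1500 K is the sublimation range quoted in the text."""
    return math.sqrt(L_agn / (16.0 * math.pi * SIGMA_SB * T_sub**4))

# Illustrative luminosity (not from the text): a 10^11 L_sun AGN.
# Hot dust near sublimation then sits at sub-parsec radii.
L = 1e11 * L_SUN
r_sub = sublimation_radius(L) / PC   # in parsecs
```

Note the scaling $T \propto d^{-1/2}$: dust twice as hot must sit four times closer to the nucleus, which is why the hottest ($\simali$1500 K) dust traces the inner edge of the torus.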
Extinction — A Powerful Discriminator of Dust Size
==================================================
Extinction is a combined effect of absorption and scattering. Since a grain absorbs and scatters light most effectively at wavelengths comparable to its size $\lambda$$\approx$$2\pi a$, the wavelength dependence of extinction (“extinction curve”) constrains the dust size distribution.
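The rule of thumb $\lambda\approx 2\pi a$ can be inverted to estimate which grain size a given observing wavelength probes most sensitively. A minimal sketch (the function name is illustrative, and this heuristic is no substitute for a full Mie calculation):

```python
import math

def probed_grain_size(wavelength_um):
    """Grain radius [micron] most effectively probed at a given
    wavelength [micron], using the heuristic lambda ~ 2*pi*a."""
    return wavelength_um / (2.0 * math.pi)

# Visible extinction (~0.55 um) probes ~0.1 um grains, while
# far-UV extinction (~0.1 um) probes ~0.016 um grains,
# matching the sizes quoted in the next subsection.
a_vis = probed_grain_size(0.55)
a_fuv = probed_grain_size(0.10)
```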
Interstellar Extinction: Milky Way, SMC, and LMC
------------------------------------------------
Interstellar extinction is most commonly obtained through the “pair-method” by comparing the spectra of two stars of the same spectral type, one of which is reddened and the other unreddened. Interstellar extinction curves rise from the near-IR to the near-[UV]{}, with a broad absorption feature at about $\lambda^{-1}$$\approx$4.6$\mum^{-1}$ ($\lambda$$\approx$2175$\Angstrom$), followed by a steep rise into the far-[UV]{} $\lambda^{-1}$$\approx$10$\mum^{-1}$ (see Fig.1). This wavelength dependence indicates that there must exist in the ISM a population of large grains with $a$$\simgt$$\lambda/2\pi$$\approx$$0.1\mum$ to account for the extinction at visible/near-IR wavelengths, and a population of ultrasmall grains with $a$$\simlt$$\lambda/2\pi$$\approx$
$0.016\mum$ to account for the far-[UV]{} extinction at $\lambda$=$0.1\mum$. In the wavelength range of 0.125$\le$$\lambda$$\le$3.5$\mum$, the Galactic extinction curves can be approximated by an analytical formula involving only one free parameter: [$R_V$$\equiv$$A_V/E(B-V)$]{}, the total-to-selective extinction ratio (Cardelli et al. 1989), with $R_V$$\approx$3.1 for the Galactic average (see Fig.1). The optical/[UV]{} extinction curves and $R_V$ show considerable regional variations and depend on the environment: lower-density regions have a smaller [$R_V$]{}, a stronger [2175$\Angstrom$]{} bump and a steeper far-[UV]{} rise ($\lambda^{-1}$$>$4$\mum^{-1}$), implying smaller dust in these regions; denser regions have a larger [$R_V$]{}, a weaker [2175$\Angstrom$]{} bump and a flatter far-[UV]{} rise, implying larger dust.
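The one-parameter Cardelli et al. (1989) parametrization mentioned above takes the form $A(\lambda)/A_V = a(x) + b(x)/R_V$ with $x=1/\lambda$. Below is a sketch of its optical/near-IR piece; the polynomial coefficients are the commonly quoted CCM89 values, but they should be verified against the original paper before any quantitative use:

```python
def ccm_extinction(wavelength_um, r_v=3.1):
    """A(lambda)/A_V from the CCM89 optical/near-IR polynomial fit.

    Valid only for 1.1 <= x <= 3.3 inverse microns, x = 1/lambda.
    """
    x = 1.0 / wavelength_um
    if not 1.1 <= x <= 3.3:
        raise ValueError("optical/NIR fit valid only for 1.1 <= 1/lambda <= 3.3")
    y = x - 1.82
    a = (1.0 + 0.17699*y - 0.50447*y**2 - 0.02427*y**3 + 0.72085*y**4
         + 0.01979*y**5 - 0.77530*y**6 + 0.32999*y**7)
    b = (1.41338*y + 2.28305*y**2 + 1.07233*y**3 - 5.38434*y**4
         - 0.62251*y**5 + 5.30260*y**6 - 2.09002*y**7)
    return a + b / r_v

# The fit is normalized so that A/A_V = 1 at x = 1.82 (the V band);
# extinction rises monotonically toward the blue and UV in this range.
av_ratio_B = ccm_extinction(0.44)   # B band
```

The single free parameter $R_V$ controls the curve shape: smaller $R_V$ (smaller grains) steepens the blue/UV rise, consistent with the environmental trends described above.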
In the Small Magellanic Cloud ([SMC]{}), the extinction curves of most sightlines display a nearly linear steep rise with $\lambda^{-1}$ and an extremely weak or absent 2175$\Angstrom$ bump (Lequeux et al. 1982; Prévot et al. 1984; see Fig.2), suggesting that the dust in the SMC is smaller than that in the Galactic diffuse ISM as a result of either more efficient dust destruction in the [SMC]{} due to its harsh environment of the copious star formation associated with the [SMC]{} Bar or lack of growth due to the low-metallicity of the [SMC]{}, or both. The Large Magellanic Cloud ([LMC]{}) extinction curve is characterized by a weaker 2175$\Angstrom$ bump and a stronger far-[UV]{} rise than the Galactic curve (Nandy et al. 1981; Koornneef & Code 1981), intermediate between that of the [SMC]{} and that of the Galaxy (see Fig.2). Regional variations also exist in the SMC and LMC extinction curves.
AGN Extinction — “Gray” or SMC-like Extinction?
-----------------------------------------------
Little is known about the wavelength dependence of the extinction caused by the circumnuclear dust of AGNs. In the literature, the AGN extinction curves are mainly inferred from (1) composite quasar spectra, and (2) individual reddened AGNs. The former often reveals a “gray” extinction, implying that the size distribution of the dust in the AGN circumnuclear environments is skewed towards substantially large grains. The latter often suggests a steep-rising SMC-like extinction, indicating a preponderance of small grains near the nucleus. There is also indirect information, including the dust reddening- and extinction-to-gas ratios and the IR emission modeling of AGNs (see §4).
### Composite Reddened Quasar Spectra — “Gray” Extinction?
Czerny et al. (2004) constructed a quasar extinction curve based on the blue and red composite quasar spectra of Richards et al. (2003) obtained from the Sloan Digital Sky Survey (SDSS). Six composite quasar spectra were generated by Richards et al. (2003) from 4576 SDSS quasars based on the relative g$^{\ast}$–i$^{\ast}$ color with “Composite 1” (made from 770 objects) being the bluest. Czerny et al. (2004) created a mean “quasar extinction curve” by averaging 3 extinction curves obtained through comparing the spectra of Composites 3, 4, and 5 (consisting of 770, 770, and 211 objects, respectively) with that of Composite 1, assuming that Composite 1 is essentially unaffected by dust while Composites 3, 4, and 5 are subject to dust reddening. The resulting extinction curve is nearly monotonic with wavelength, without any trace of the 2175$\Angstrom$ bump (see Fig.3).
Gaskell et al. (2004) derived extinction curves for radio-loud quasars based on the composite spectra of 72 radio quasars created by Baker & Hunstead (1995), and for radio-quiet AGNs based on the composite spectrum of 1018 radio-quiet AGNs generated by Francis et al. (1991). The extinction curve for these radio-loud quasars, grouped by Baker & Hunstead (1995) into 4 subsamples according to the 5GHz radio core-to-lobe flux ratios $R$, was determined by comparing the composite spectrum of the more-reddened lobe-dominant ($R$$<$0.1) sample with that of the less-reddened core-dominant ($R$$>$1) sample. Similarly, Gaskell et al. (2004) obtained an extinction curve for radio-quiet AGNs by comparing the composite spectrum of Francis et al. (1991) created for 1018 radio-quiet AGNs with that for the relatively unreddened core-dominant composite of Baker & Hunstead (1995). Most prominently, the derived extinction curves for both radio-loud quasars and radio-quiet quasars lack the 2175$\Angstrom$ bump and are essentially “gray”, i.e., significantly flatter in the UV than that of the Milky Way diffuse ISM, although it appears that for the latter the reddening curve is slightly steeper in the UV (see Fig.3).
However, Willott (2005) questioned the validity of the approach based on the ratios of reddened and unreddened composite quasars (Czerny et al. 2004; Gaskell et al. 2004) since composite spectra combine quasars at different redshifts, while the quasars going into a composite spectrum have a negative correlation between reddening and redshift, and quasar surveys in practice contain more highly reddened quasars at lower redshifts. He argued that since the quasars contributing to the composite in the UV have typically lower reddening than those contributing in the optical, the gray UV extinction laws derived using composite quasars (Czerny et al. 2004; Gaskell et al. 2004) might be artificial, and the actual AGN extinction curve may be SMC-like.
### Individual Reddened AGNs — SMC-like Extinction?
In contrast to the “composite quasar spectrum” method which may be biased by the fact that the highest redshift quasars (which contribute to the UV part of a composite spectrum) are less extincted (leading to shallower extinction in the UV), AGN extinction curves have also been derived for individual reddened objects.
Crenshaw et al. (2001) determined a reddening curve for the nucleus of the Seyfert 1 galaxy NGC3227 by comparing its HST/STIS UV and optical spectra with that of the unreddened Seyfert galaxy NGC4151. They found that the derived extinction curve in the UV is even steeper than that of the SMC and lacks the 2175$\Angstrom$ bump. Similar studies were performed for Ark564, a Narrow-Line Seyfert 1 galaxy (Crenshaw et al. 2002). By comparing the HST/STIS UV and optical spectra of Ark564 with that of Mrk493, an unreddened Narrow-Line Seyfert 1 galaxy, Crenshaw et al. (2002) found that the extinction curve for Ark564, with no evidence for the 2175$\Angstrom$ bump, rises to the UV more steeply than the Galactic extinction curve (but not as steeply as the SMC curve) with a longer turning-up wavelength of $\simali$4000$\Angstrom$ (compared to $\simali$2500$\Angstrom$ for the standard Galactic, LMC, and SMC curves).
In an analysis of the optical/UV color distribution of 4576 SDSS quasars, Richards et al. (2003) showed that 273 (6.0%) of the quasars in their sample appear to be redder because of SMC-like dust extinction and reddening. Hopkins et al. (2004) investigated the reddening law toward 9566 SDSS quasars, including a subset of 1886 quasars matched to 2MASS (Two Micron All Sky Survey) by exploring the shapes of their spectral energy distributions obtained from broadband photometry (at five SDSS bands $ugriz$ and three 2MASS bands $JHK$). They found that the reddening toward quasars is dominated by SMC-like dust at the quasar redshift.
More recently, Gaskell & Benker (2007) determined the extinction curves for 14 individual AGNs based on the FUSE and HST spectrophotometry of Shang et al. (2005). Unlike Crenshaw et al. (2001, 2002) who used a single unreddened AGN as a reference, Gaskell & Benker (2007) took the average of 3 AGNs which have the highest 4–8$\mum^{-1}$ fluxes relative to their optical fluxes in the sample of Shang et al. (2005). They found that the majority of the derived extinction curves in the UV are much flatter than that of the SMC, although not as flat as the “gray” curve derived by Gaskell et al. (2004) based on composite quasar spectra (see Fig.3 for the average extinction curve for the 5 AGNs with the greatest reddening in their sample).
### Reduced Reddening- and Extinction-to-Gas Ratios — Flat Extinction?
Assuming a Galactic standard extinction curve ($R_V$=3.1) and a foreground screen, Maiolino et al. (2001a) determined for 19 AGNs the amount of reddening $E(B-V)$ affecting the broad line region by comparing the observed optical/IR hydrogen broad line ratios with the intrinsic values. For these AGNs, they also determined the X-ray absorbing column densities $\NH$ from the photoelectric cutoff in their X-ray spectra. They found that for most (16 of 19) objects $E(B-V)/\NH$ is significantly lower than the Galactic standard value ($\approx$$1.7\times 10^{-22}$mag cm$^{2}$) by a factor ranging from a few to $\simali$100 (except for 3 Low Luminosity AGNs whose physics may be intrinsically different \[see Ho 1999\]). Similarly, Maiolino et al. (2001a) also found that the extinction-to-gas ratios $A_V/\NH$ of various classes of AGNs are significantly lower than the Galactic standard value ($\approx$$5.3\times 10^{-22}$mag cm$^{2}$). Maiolino et al. (2001b) ascribed the reduced $E(B-V)/\NH$ and $A_V/\NH$ ratios of AGNs (often with a solar or higher metallicity) to grain growth through coagulation in the dense circumnuclear region, which results in a dust size distribution biased in favour of large grains and therefore a flat extinction curve.
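As a cross-check, the two Galactic standard values quoted above are mutually consistent: their ratio recovers $R_V=A_V/E(B-V)\approx 3.1$. The helper below (illustrative names; the two constants are the values from the text) expresses a measured sightline as a deficit factor relative to the Galactic reddening-to-gas ratio:

```python
GAL_EBV_NH = 1.7e-22  # Galactic E(B-V)/N_H [mag cm^2], from the text
GAL_AV_NH = 5.3e-22   # Galactic A_V/N_H   [mag cm^2], from the text

def reddening_deficit(e_bv, n_h):
    """Factor by which a sightline's E(B-V)/N_H falls below the Galactic
    standard (a value >1 means dust-deficient reddening, as Maiolino
    et al. found for 16 of their 19 AGNs)."""
    return GAL_EBV_NH / (e_bv / n_h)

# Consistency of the two Galactic standards: A_V/E(B-V) = R_V ~ 3.1
r_v = GAL_AV_NH / GAL_EBV_NH
```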
However, Weingartner & Murray (2002) argued that the X-ray absorption and optical extinction may occur in distinct media (e.g. the X-ray absorption occurs in material located off the torus and/or accretion disk, while the optical extinction occurs in material located beyond the torus); therefore, the reduced $E(B-V)/\NH$ and $A_V/\NH$ ratios may not necessarily imply that the grains in AGNs are systematically larger than those in the Galactic ISM.
Dust Spectroscopy — Diagnosis of Dust Composition
=================================================
Dust spectroscopy provides the most diagnostic information on the dust composition. Our knowledge about the composition of the dust in the Galactic diffuse ISM is mainly derived from the absorption and emission spectral lines: the 2175$\Angstrom$ extinction bump (small graphitic dust), the 3.4$\mum$ absorption feature (aliphatic hydrocarbon dust), the 9.7$\mum$ and 18$\mum$ absorption features (amorphous silicate dust), and the 3.3, 6.2, 7.7, 8.6, and 11.3$\mum$ emission features (polycyclic aromatic hydrocarbon \[PAH\] molecules). The ice absorption features at 3.1 and 6.0$\mum$ (H$_2$O), 4.67$\mum$ (CO), 4.27 and 15.2$\mum$ (CO$_2$), 3.54 and 9.75$\mum$ (CH$_3$OH), 2.97$\mum$ (NH$_3$), 7.68$\mum$ (CH$_4$), 5.81$\mum$ (H$_2$CO), and 4.62$\mum$ (XCN$^{-}$) are seen in dark molecular clouds with visual extinction $A_V$$>$3mag. In this section I will present a comparative overview of the dust absorption and emission features in AGNs and the inferred dust composition.
The 2175$\Angstrom$ Extinction Bump
-----------------------------------
The 2175$\Angstrom$ extinction bump, first detected over 40 years ago (Stecher 1965), is a ubiquitous feature of the Milky Way ISM. With a stable central wavelength and variable feature strength for lines of sight in our Galaxy, the 2175$\Angstrom$ bump is relatively weaker in the LMC and absent in the SMC (see Fig.2). This bump is largely absent in AGNs (see §2), except that Gaskell & Benker (2007) recently claimed that it might be detected in Mrk304, one of the seven AGNs with the highest-quality extinction curves in their 14-AGN sample. Fig.4 compares the UV spectra of 5 slightly reddened type 1 AGNs with the template of type 1 AGNs reddened with the standard Galactic extinction. It is seen that the Galactic extinction predicts too strong a 2175$\Angstrom$ dip (Maiolino et al. 2001b).
The exact nature of the 2175$\Angstrom$ bump, the strongest spectroscopic extinction feature in the Galactic ISM, remains uncertain. It is generally believed to be caused by aromatic carbonaceous (graphitic) materials, very likely a cosmic mixture of PAH molecules (Joblin et al. 1992; Li & Draine 2001b). The fact that the 2175$\Angstrom$ bump is not (or at least rarely) seen in AGNs suggests that its carrier (e.g. PAHs) may have been photodestroyed by energetic photons (e.g. X-ray irradiation) from the central engine.
The 3.4$\mum$ Aliphatic Hydrocarbon Absorption Feature
------------------------------------------------------
The 3.4$\mum$ absorption feature, attributed to the C–H stretching mode in saturated aliphatic hydrocarbon dust, is widely seen in the Galactic diffuse ISM (but never seen in molecular clouds; see Pendleton & Allamandola 2002). This feature is also seen in AGNs (Wright et al. 1996, Imanishi et al. 1997, Mason et al. 2004), closely resembling that of our Galaxy in both peak wavelengths and relative feature strengths of the 3.42$\um$, 3.48$\um$, and 3.51$\um$ subfeatures (corresponding to symmetric and asymmetric stretches of C–H bonds in CH$_2$ and CH$_3$ groups in aliphatic hydrocarbon chains). Mason et al. (2004) argued that the 3.4$\mum$ absorption feature at least in face-on Seyfert 2 galaxies arises in dust local to the active nucleus rather than in the diffuse ISM of the galaxy.
The exact carrier of this feature remains uncertain. So far, among the $>$20 candidate materials proposed over the years since its first detection in the Galactic center sightlines 28 years ago, the experimental spectra of hydrogenated amorphous carbon (Mennella et al. 1999) and the organic refractory residue, synthesized from UV photoprocessing of interstellar ice mixtures (Greenberg et al. 1995), provide the best fit to the observed spectra.
So far, no polarization has been detected for this feature (Adamson et al. 1999, Chiar et al. 2006, Mason et al. 2006), suggesting that the carrier of this feature is either spherical or unaligned or both. Spectropolarimetric measurements for both the 9.7$\mum$ silicate and the 3.4$\mum$ hydrocarbon features for the same sightline (e.g. Chiar et al. 2006) would allow for a direct test of the silicate core-hydrocarbon mantle interstellar dust model (Li & Greenberg 1997, Jones et al. 1990), since this model predicts that the 3.4$\mum$ feature would be polarized if the 9.7$\mum$ feature (for the same sightline) is polarized (Li & Greenberg 2002).
The 9.7$\mum$ and 18$\mum$ Silicate Absorption and Emission Features
--------------------------------------------------------------------
The strongest IR absorption features in the Galactic ISM are the 9.7$\mu$m and 18$\mu$m bands, which are almost certainly due to silicate minerals: they are respectively ascribed to the Si–O stretching and O–Si–O bending modes in some form of silicate material (e.g. olivine Mg$_{2x}$Fe$_{2-2x}$SiO$_4$). The observed interstellar silicate bands are broad and relatively featureless, indicating that interstellar silicates are largely amorphous rather than crystalline (Li & Draine \[2001a\] estimated that the amount of $a<1\,\mu$m crystalline silicate grains in the Galactic diffuse ISM is $<$5% of the solar Si abundance).
The first detection of the silicate [*absorption*]{} feature in AGNs was made at 9.7$\mu$m for the prototypical Seyfert 2 galaxy NGC1068 (Rieke & Low 1975; Kleinmann et al. 1976), indicating the presence of a large column of silicate dust in the line-of-sight to the nucleus. It is now known that most type 2 AGNs display silicate [*absorption*]{} bands (e.g. see Roche et al. 1991, Siebenmorgen et al. 2004), as expected – for a centrally heated optically thick torus viewed edge-on, the silicate features should be in absorption. Spatially resolved mid-IR spectra obtained for NGC1068 (Mason et al. 2006, Rhee & Larkin 2006) and Circinus (Roche et al. 2006) have revealed striking variations in continuum slope, silicate feature profile and depth.
However, it appears that the 9.7$\mu$m silicate absorption profile of AGNs differs from that of the Milky Way. Jaffe et al. (2004) found that the 9.7$\mu$m silicate absorption spectrum of NGC1068 shows a relatively flat profile from 8 to 9$\mu$m and then a sharp drop between 9 and 10$\mu$m; in comparison, the Galactic silicate absorption profiles begin to drop already at $\sim$8$\mu$m. They obtained a much better fit to the 9.7$\mu$m absorption feature of NGC1068 by using the profile of calcium aluminium silicate Ca$_2$Al$_2$SiO$_7$, a high-temperature dust species found in some supergiant stars (Speck et al. 2000). It would be interesting to know if the amount of calcium required to account for the observed absorption is consistent with abundance constraints. Very recently, Roche et al. (2007) reported the detection of a spectral structure near 11.2$\mu$m in NGC3094, indicative of the possible presence of crystalline silicates in AGNs.
For type 1 AGNs viewed face-on, one would expect to see the silicate features in [*emission*]{}, since the silicate dust in the surface of the inner torus wall will be heated to temperatures of several hundred kelvin by the radiation from the central engine, allowing for a direct detection of the 9.7$\mu$m and 18$\mu$m silicate bands emitted from this hot dust. However, their detection (using [*Spitzer*]{}) has only very recently been reported in a number of type 1 AGNs (Hao et al. 2005, Siebenmorgen et al. 2005, Sturm et al. 2005, Weedman et al. 2005, Shi et al. 2006). Siebenmorgen et al. (2005) postulated that the AGN luminosity determines whether the silicate emission bands are prominent or not (i.e., they may be present only in the most luminous AGNs), but this idea was challenged by their detection in the low-luminosity AGN NGC3998, a type 1 LINER galaxy (Sturm et al. 2005).
The 9.7$\mu$m silicate emission profiles of both quasars (high luminosity counterparts of Seyfert 1 galaxies; Hao et al. 2005, Siebenmorgen et al. 2005) and the low-luminosity AGN NGC3998 (Sturm et al. 2005) peak at a much longer wavelength ($\sim$11$\mu$m), inconsistent with “standard” silicate ISM dust (which peaks at $\sim$9.7$\mu$m). The 9.7$\mu$m feature of NGC3998 is also much broader than that of the Galactic ISM (Sturm et al. 2005). The deviations of the silicate emission profiles of type 1 AGNs from that of the Galactic ISM dust may indicate differences in the dust composition, grain size distribution, or radiative transfer effects (Sturm et al. 2005, Levenson et al. 2007). The red tail of the 18$\mu$m silicate feature of NGC3998 is significantly weaker than that of the bright quasars (Sturm et al. 2005), suggesting that there may exist significant environmental variations. Finally, it is worth noting that the 9.7$\mu$m silicate feature of Mkn 231, a peculiar type 1 Seyfert galaxy, is also seen in [*absorption*]{}, peaking at $\sim$10.5$\mu$m (Roche et al. 1983).
The 3.3, 6.2, 7.7, 8.6 and 11.3$\mu$m PAH Emission Features
-----------------------------------------------------------
The distinctive set of “Unidentified Infrared” (UIR) emission features at 3.3, 6.2, 7.7, 8.6, and 11.3$\mu$m, now generally identified with the vibrational modes of PAH molecules (Léger & Puget 1984; Allamandola et al. 1985), is seen in a wide variety of Galactic and extragalactic regions (see Draine & Li 2007). In the Milky Way diffuse interstellar medium (ISM), PAHs, containing $\sim$45 ppm (parts per million, relative to H) of C, account for $\sim$20% of the total power emitted by interstellar dust (Li & Draine 2001b). [*ISO*]{} (Infrared Space Observatory) and [*Spitzer*]{} imaging and spectroscopy have revealed that PAHs are also a ubiquitous feature of external galaxies. Recent discoveries include the detection of PAH emission in a wide range of systems: distant Luminous Infrared Galaxies (LIRGs) with redshift $z$ ranging from 0.1 to 1.2 (Elbaz et al. 2005), distant Ultraluminous Infrared Galaxies (ULIRGs) at redshift $z\sim$2 (Yan et al. 2005), distant luminous submillimeter galaxies at redshift $z\sim$2.8 (Lutz et al. 2005), elliptical galaxies with a hostile environment (containing hot gas of temperature $\sim$10$^7$ K) where PAHs can easily be destroyed through sputtering by plasma ions (Kaneda et al. 2005), faint tidal dwarf galaxies with metallicity $\sim Z_\odot/3$ (Higdon et al. 2006), and galaxy halos (Irwin & Madden 2006, Engelbracht et al. 2006).
However, the PAH features are absent in AGNs, as first noticed by Roche et al. (1991). This is commonly interpreted as the destruction of PAHs by extreme UV and soft X-ray photons in AGNs (Roche et al. 1991; Voit 1991, 1992; Siebenmorgen et al. 2004). Genzel et al. (1998) proposed to use the line-to-continuum ratio of the 7.7$\mum$ PAH feature as a discriminator between starburst and AGN activity in ULIRGs (i.e. whether the dominant luminosity source of ULIRGs is an AGN or a starburst). We should note that the PAH emission features are detected in some Seyfert 2 galaxies, but they are from the circumnuclear star-forming regions, not from the AGNs (e.g. see Le Floc’h et al. 2001, Siebenmorgen et al. 2004).
The Ice Absorption Features
---------------------------
Grains in dark molecular clouds (usually with $A_V>3$ mag) obtain ice mantles consisting of H$_2$O, NH$_3$, CO, CH$_3$OH, CO$_2$, CH$_4$, H$_2$CO and other molecules (with H$_2$O as the dominant species), as revealed by the detection of various ice absorption features (e.g., H$_2$O: 3.1, 6.0$\mu$m; CO: 4.67$\mu$m; CO$_2$: 4.27, 15.2$\mu$m; CH$_3$OH: 3.54, 9.75$\mu$m; NH$_3$: 2.97$\mu$m; CH$_4$: 7.68$\mu$m; H$_2$CO: 5.81$\mu$m; XCN$^{-}$: 4.62$\mu$m). The ice absorption features are also seen in most ULIRGs (e.g. see Spoon et al. 2002), indicating the presence of a large quantity of molecular material in ULIRGs. However, the ice absorption features are not expected in AGNs due to the high dust temperatures (because of the immense bolometric luminosity emitted from the AGN): the dust in the torus, even at a distance of $\sim$100 pc, is too warm ($>$100 K) for ice mantles to survive.
IR Emission Modeling–Inferring Dust Size and Torus Geometry?
============================================================
To constrain the dust size distribution and the size and geometry of the dust torus, various models have been proposed to explain the observed IR emission spectral energy distribution (SED) of AGNs, radiated by the circumnuclear dust heated by the AGN illumination. These models assume a wide range of torus geometries: uniform density annular (cylindrical) rings of a few pc with an extremely large optical depth $\tau_{\rm UV}$$>$1000 (Pier & Krolik 1992, 1993), optically thick plane parallel slabs of a few thousand pc (Laor & Draine 1993), extended tori of hundreds of pc (Granato & Danese 1994, Granato et al. 1997), geometrically thin, optically thick spherical shells (Rowan-Robinson 1995), tapered disks (Efstathiou & Rowan-Robinson 1995, Stenholm 1995), optically thick, flared disks (Manske et al. 1998), clumpy tori (Nenkova et al. 2002), and other more complicated torus geometries (van Bemmel & Dullemond 2003, Schartmann et al. 2005). In order to suppress the 9.7$\mu$m silicate emission feature (which was not detected until very recently by [*Spitzer*]{}; see §3.3), some models hypothesized that the dust in AGNs must be large ($a\gtrsim 10\,\mu$m) or that small silicate grains must be depleted (e.g. see Laor & Draine 1993, Granato & Danese 1994). Some models ascribed the suppression of the 9.7$\mu$m silicate emission feature to clumpiness (Nenkova et al. 2002) or the strong anisotropy of the source radiation (Manske et al. 1998). Apparently, more modeling efforts are required to account for the very recent detection of the 9.7$\mu$m and 18$\mu$m silicate emission features in type 1 AGNs and the recent high resolution IR imaging observations, which seem to show that the torus size is no more than a few parsecs (see Elitzur 2006 and references therein). It is well known that SED modeling alone does not uniquely determine the dust size distribution and the dust spatial distribution.
I thank L.C. Ho and J.M. Wang for inviting me to attend this stimulating conference. I also thank B. Czerny, C.M. Gaskell, S.L. Liang, R. Maiolino, and C. Willott for their comments and/or help in preparing this article. Partial support by NASA/Spitzer theory programs and the University of Missouri Research Board is gratefully acknowledged.
References
==========
Adamson, A.J., et al. 1999, ApJ, 512, 224
Allamandola, L.J., Tielens, A.G.G.M., & Barker, J.R. 1985, , 290, L25
Antonucci, R.R.J. 1993, ARA&A, 31, 473
Baker, J.C., & Hunstead, R.W. 1995, ApJ, 452, L95
Barvainis, R. 1987, , 320, 537
Buchanan, C.L., et al. 2006, AJ, 132, 401
Chiar, J.E., et al. 2006, ApJ, 651, 268
Cardelli, J.A., Clayton, G.C., & Mathis, J.S. 1989, ApJ, 345, 245
Crenshaw, D.M., Kraemer, S.B., Bruhweiler, F.C., & Ruiz, J.R. 2001, ApJ, 555, 633
Crenshaw, D.M., et al. 2002, ApJ, 566, 187
Czerny, B., Li, J., Loska, Z., & Szczerba, R. 2004, MNRAS, 348, 54
Draine, B.T., & Li, A. 2001, ApJ, 551, 807
Draine, B.T., & Li, A. 2007, ApJ, 657, 810
Efstathiou, A., & Rowan-Robinson, M. 1995, MNRAS, 273, 649
Elbaz, D., Le Floc’h, E., Dole, H., & Marcillac, D. 2005, A&A, 434, L1
Elitzur, M. 2006, New Astronomy Review, 50, 728
Engelbracht, C.W., et al. 2006, ApJ, 642, L127
Francis, P.J., et al. 1991, ApJ, 373, 465
Gaskell, C.M., et al. 2004, ApJ, 616, 147
Gaskell, C.M., & Benker, A.J. 2007, ApJ, in press (astro-ph/0711.1013)
Genzel, R., et al. 1998, ApJ, 498, 579
Granato, G.L., & Danese, L. 1994, , 268, 235
Granato, G.L., Danese, L., & Franceschini, A. 1997, , 486, 147
Greenberg, J.M., Li, A., et al. 1995, ApJ, 455, L177
Hao, L., et al. 2005, , 625, L75
Heisler, C.A., Lumsden, S.L., & Bailey, J.A. 1997, Nature, 385, 700
Higdon, S.J., Higdon, J.L., & Marshall, J. 2006, ApJ, 640, 768
Ho, L.C. 1999, Adv. Space Res., 23, 813
Hopkins, P.F., et al. 2004, AJ, 128, 1112
Imanishi, M., et al. 1997, PASJ, 49, 69
Irwin, J.A., & Madden, S.C. 2006, A&A, 445, 123
Joblin, C., Léger, A., & Martin, P. 1992, ApJ, 393, L79
Jones, A.P., Duley, W.W., & Williams, D.A. 1990, QJRAS, 31, 567
Kaneda, H., Onaka, T., & Sakon, I. 2005, ApJ, 632, L83
Kleinmann, D.E., Gillett, F.C., & Wright, E.L. 1976, , 208, 42
Koornneef, J., & Code, A.D. 1981, ApJ, 247, 860
Laor, A., & Draine, B.T. 1993, , 402, 441
Le Floc’h, E., et al. 2001, , 367, 487
Léger, A., & Puget, J.L. 1984, , 137, L5
Lequeux, J., et al. 1982, A&A, 113, L15
Levenson, N.A., et al. 2007, , 654, L45
Li, A., & Draine, B.T. 2001a, ApJ, 550, L213
Li, A., & Draine, B.T. 2001b, ApJ, 554, 778
Li, A., & Greenberg, J.M. 1997, A&A, 323, 566
Li, A., & Greenberg, J.M. 2002, ApJ, 577, 789
Lutz, D., et al. 2005, ApJ, 625, L83
Maiolino, R., Marconi, A., & Oliva, E. 2001a, A&A, 365, 37
Maiolino, R., et al. 2001b, A&A, 365, 28
Manske, V., Henning, Th., & Men’shchikov, A.B. 1998, , 331, 52
Mason, R.E., Wright, G.S., Pendleton, Y.J., & Adamson, A. 2004, ApJ, 613, 770
Mason, R.E., et al. 2006, , 640, 612
Mason, R.E., Wright, G.S., Adamson, A., & Pendleton, Y.J. 2007, ApJ, 656, 798
Mennella, V., Brucato, J.R., Colangeli, L., & Palumbo, P. 1999, ApJ, 524, L71
Nandy, K., et al. 1981, MNRAS, 196, 955
Nenkova, M., Ivezic, Z., & Elitzur, M. 2002, , 570, L9
Osterbrock, D.E., & Ferland, G.J. 2006, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei, 2nd ed., University Science Books
Pendleton, Y.J., & Allamandola, L.J. 2002, ApJS, 138, 75
Pier, E.A., & Krolik, J.H. 1992, , 401, 99; 1993, , 418, 673
Prévot, M.L., et al. 1984, A&A, 132, 389
Rhee, J.H., & Larkin, J.E. 2006, ApJ, 640, 625
Richards, G.T., et al. 2003, AJ, 126, 1131
Rieke, G.H., & Low, F.J. 1975, , 199, L13
Roche, P.F., Aitken, D.K., & Whitmore, B. 1983, MNRAS, 205, P21
Roche, P.F., Aitken, D.K., & Smith, C.H. 1991, MNRAS, 252, 282
Roche, P.F., et al. 2006, , 367, 1689
Roche, P.F., Packham, C., Aitken, D.K., & Mason, R.E. 2007, , 375, 99
Rodríguez-Ardila, A., & Mazzalay, X. 2006, , 367, L57
Rowan-Robinson, M. 1995, MNRAS, 272, 737
Schartmann, M., et al. 2005, A&A, 437, 861
Shi, Y., et al. 2006, ApJ, 653, 127
Siebenmorgen, R., Krügel, E., & Spoon, H.W.W. 2004, , 414, 123
Siebenmorgen, R., Haas, M., Krügel, E., & Schulz, B. 2005, , 436, L5
Shang, Z., et al. 2005, , 619, 41
Speck, A.K., Barlow, M.J., Sylvester, R.J., & Hofmeister, A.M. 2000, , 146, 437
Spoon, H.W.W., et al. 2002, A&A, 385, 1022
Stecher, T.P. 1965, ApJ, 142, 1683
Stenholm, L. 1995, A&A, 290, 393
Sturm, E., et al. 2005, , 629, L21
Tran, H.D. 2003, , 583, 632
Urry, C.M., & Padovani, P. 1995, PASP, 107, 803
van Bemmel, I.M., & Dullemond, C.P. 2003, , 404, 1
Voit, G.M. 1991, , 379, 122; 1992, , 258, 841
Weedman, D.W., et al. 2005, ApJ, 633, 706
Weingartner, J.C., & Murray, N. 2002, , 580, 88
Willott, C.J. 2005, ApJ, 627, L101
Wright, G.S., Bridger, A., Geballe, T.R., & Pendleton, Y. 1996, in New Extragalactic Perspectives in the New South Africa, 143
Yan, L., et al. 2005, , 628, 604
---
abstract: 'We study the anisotropic in-plane optical conductivity of detwinned Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ single crystals for $x$=0, 2.5$\%$ and 4.5$\%$ in a broad energy range (3 meV-5 eV) across their structural and magnetic transitions. For temperatures below the Néel transition, the topology of the reconstructed Fermi surface, combined with the distinct behavior of the scattering rates, determines the anisotropy of the low frequency optical response. For the itinerant charge carriers, we are able to disentangle the evolution of the Drude weights and scattering rates and to observe their enhancement along the orthorhombic antiferromagnetic $a$-axis with respect to the ferromagnetic $b$-axis. For temperatures above $T_s$, uniaxial stress leads to a finite in-plane anisotropy. The anisotropy of the optical conductivity, leading to a significant dichroism, extends to high frequencies in the mid- and near-infrared regions. The temperature dependence of the dichroism at all dopings scales with the anisotropy ratio of the $dc$ conductivity, suggesting the electronic nature of the structural transition. Our findings bear testimony to a large nematic susceptibility that couples very effectively to the uniaxial lattice strain. In order to clarify the subtle interplay of magnetism and Fermi surface topology we compare our results with theoretical calculations obtained from density functional theory within the full-potential linear augmented plane-wave method.'
author:
- 'A. Dusza$^{1*}$, A. Lucarelli$^{1*}$, A. Sanna$^2$, S. Massidda$^2$, J.-H. Chu$^3$, I.R. Fisher$^3$ and L. Degiorgi$^1$'
title: 'Anisotropic in-plane optical conductivity in detwinned Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$'
---
Introduction
============
In many unconventional superconductors, including cuprates and Fe-based pnictides, superconductivity emerges from a complicated soup of competing phases in the normal state when magnetism is suppressed by doping, pressure or other external parameters [@norman; @basov_review]. This multiplicity of phases includes nematicity, defined as the spontaneously broken C$_4$ rotational symmetry of the square lattice, and a novel form of magnetism, arising from either orbital currents or antiferromagnetic fluctuations. In the Fe-based pnictide superconductors (for a review with a comprehensive reference list see Ref. ), nematic correlations and antiferromagnetic fluctuations have also been recently connected with symmetry breaking competing phases. Inelastic neutron scattering experiments revealed anisotropic magnetic excitations [@li], coupled to the structural tetragonal-orthorhombic transition at $T_s$. This structural transition elongates the Fe-Fe distance in the $ab$ plane along the $a$-axis direction and contracts it along the perpendicular $b$-axis. It turns out that below the Néel temperature $T_N$ ($\le T_s$) the spins show ferromagnetic correlations along the shorter orthorhombic $b$-axis and antiferromagnetic stripes along the longer $a$-axis [@li]. The two-fold anisotropy is also evident in quasiparticle interference observed via scanning tunnelling microscopy measurements [@chuang] and further confirmed by angle-resolved-photoemission-spectroscopy (ARPES) data, collected on crystals for which the incident beam size was comparable to the size of a single structural domain [@wang]. Furthermore, quantum oscillations in the parent compound reveal that the reconstructed Fermi surface (FS) comprises several small pockets [@Terashima]. The smallest of these pockets is essentially isotropic in the $ab$-plane, but the other, larger pockets are much more anisotropic.
The first ARPES observations of an anisotropic electronic dispersion [@wang] motivated intensive research activity, including with probes for which any impact of the electronic anisotropy would otherwise be obscured by the formation of dense twin domains. These correspond to adjacent microscopic domains, as small as a few microns, with alternating orthorhombic $a$ and $b$ axes [@tanatar]. Two distinct methods have been employed so far to detwin the specimens: application of uniaxial stress [@chudw; @tanatar] and of an in-plane magnetic field [@chu_magnetic]. The former method, employed here, is superior for achieving almost complete detwinning. Recent ARPES measurements [@zxshen; @kim] of detwinned single crystals of Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ reveal an increase (decrease) in the binding energy of bands with dominant $d_{yz}$ ($d_{xz}$) character on cooling through $T_s$ [@zxshen], leading to a difference in orbital occupancy. The splitting of the $d_{xz}$ and $d_{yz}$ bands is progressively diminished with Co substitution in Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$, reflecting the monotonic decrease in the lattice orthorhombicity $\lbrack 2(a-b)/(a+b)\rbrack$. For temperatures above $T_s$, the band splitting can be induced up to rather high temperatures by uniaxial stress.
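The lattice orthorhombicity used above is a one-line computation. A minimal sketch (the lattice constants below are illustrative round numbers of the order of those reported for an orthorhombic 122 parent compound, not data from this work):

```python
def orthorhombicity(a, b):
    """Lattice orthorhombicity delta = 2(a - b)/(a + b), as defined in the text."""
    return 2.0 * (a - b) / (a + b)

# Illustrative in-plane lattice constants (angstrom) in the orthorhombic phase:
a_lat, b_lat = 5.615, 5.574
delta = orthorhombicity(a_lat, b_lat)  # a fraction of a percent
```

The quantity vanishes by construction in the tetragonal phase ($a=b$), which is why it serves as the structural order parameter tracked with Co substitution.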
Mechanically detwinned crystals also provide a suitable playground in order to explore the intrinsic in-plane anisotropy of the transport properties. Measurements of the $dc$ resistivity as a function of temperature of the single domain parent compounds BaFe$_2$As$_2$, SrFe$_2$As$_2$ and CaFe$_2$As$_2$ (i.e., so called 122 iron pnictides) reveal a modest in-plane $dc$ anisotropy for temperatures below $T_s$, with the resistivity in the ferromagnetic direction larger than along the antiferromagnetic direction [@chudw; @Tanatar; @Blomberg]. Substitution of Co, Ni or Cu suppresses the lattice orthorhombicity [@Prozorov], but in contrast the in-plane resistivity anisotropy is found to initially increase with the concentration of the substituent, before reverting to an isotropic in-plane conductivity once the structural transition is completely suppressed [@chudw; @Kuo]. Perhaps coincidentally, the onset of the large in-plane anisotropy for the cases of Co and Ni substitution occurs rather abruptly at a composition close to the start of the superconducting dome.
For temperatures above $T_s$, there is a remarkably large sensitivity to uniaxial pressure, leading to a large induced in-plane resistivity anisotropy that is not observed for overdoped compositions [@chudw]. There is no evidence in thermodynamic or transport measurements for an additional phase transition above $T_s$ for unstressed crystals, implying that the induced anisotropy is the result of a large nematic susceptibility, rather than the presence of static nematic order. The observation of a large in-plane resistivity anisotropy, at least for the electron-doped "122" Fe arsenides, bears witness to the orthorhombicity of the material, but does not distinguish between anisotropy in the electronic structure and anisotropy in the scattering rate. To this end, reflectivity measurements of detwinned single crystals using polarized light can provide important insight into the effects of the magnetic and structural transitions on the anisotropic charge dynamics and the electronic band structure. Indeed, the counterintuitive anisotropic behavior of $\rho(T)$ is also reflected in the finite-frequency response of the charge carriers, as observed in the optical measurements reported in our previous Letter [@dusza]. Optical measurements of detwinned single crystals of Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ in the underdoped regime reveal large changes in the low-frequency metallic response on cooling through $T_s$ and $T_N$, together with a pronounced optical anisotropy (i.e., $\Delta\sigma_1(\omega)=\sigma_1(\omega,E\parallel a)-\sigma_1(\omega,E\parallel b)$) at high frequencies, defining the linear dichroism [@dusza]. For light polarized along the antiferromagnetic $a$ direction, there is an increase in the scattering rate, but this is accompanied by a dramatic increase in the metallic spectral weight that ultimately leads to a reduction in the $dc$ resistivity, consistent with observations.
For light polarized along the $b$ direction, the dominant effect is a reduction in the metallic spectral weight, consistent with the increase in the $dc$ resistivity. The high frequency dichroism, which is smaller for higher Co concentrations, clearly reveals that changes in the electronic structure are not confined to near the Fermi energy. Similar to $dc$ transport and ARPES measurements, a pronounced optical anisotropy persists at temperatures above $T_s$ for crystals held under uniaxial stress, also anticipating a substantial nematic susceptibility.
In the present work, we expand in greater detail our broad spectral range characterization of the anisotropic optical conductivity of detwinned Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ single crystals under uniaxial pressure. We extensively study the underdoped region at compositions with $x = 0$, $x = 0.025$ and $x = 0.045$ with light polarized along the in-plane orthorhombic $a$ and $b$ axes. We present here the full set of data and their complete phenomenological analysis. Furthermore, in order to clarify the subtle interplay of magnetism and Fermi surface topology we compare directly our optical measurements with theoretical calculations obtained from density functional theory within the full-potential linear augmented plane-wave method (LAPW) [@sanna].
![(color online) Sample-holder set-up including a mechanical clamp (a) and an optical mask (b and c). Uniaxial-pressure is applied parallel to the $b$-axis of the sample (S) by drawing the clamp against the side-surface of the crystal (a and c). The optical mask, attached in tight contact with the clamp device, shapes identically incident and reflected light beams for the tungsten reference mirror M (red arrows) and S (light blue arrows).[]{data-label="Clamp"}](Fig_1_vfin.png){width="6cm"}
Experimental
============
Samples
-------
Single crystals of Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ with $x = 0$, 2.5 and 4.5% were grown using a self-flux method [@chudw]. The crystals have a plate-like morphology of 0.1$\div$0.3 mm thickness, with the $c$-axis perpendicular to the plane of the plates. Crystals were cut into a square shape, approximately 2 mm on a side, oriented such that below $T_s$ the orthorhombic $a/b$ axes are parallel to the sides of the square [@chudw]. Detailed thermodynamic, transport and neutron scattering measurements for the studied dopings of Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ give evidence for structural, magnetic and superconducting phase transitions occurring at different temperatures [@chu; @lester]: for $x = 0$, the coincident structural (tetragonal-orthorhombic) and magnetic transitions, where the system forms antiferromagnetically ordered stripes, occur at $T_s$ = $T_N$ = 135 K, whereas for $x = 0.025$ they develop at $T_s$ = 98 K and $T_N$ = 92 K, respectively. The compound with $x = 0.045$ undergoes first a structural transition at $T_s$ = 66 K, then a magnetic transition at $T_N$ = 58 K and finally a superconducting one at $T_c$ = 15 K.
Technique
---------
It has recently been shown that almost single-domain specimens can be achieved by application of uniaxial pressure in situ [@chudw]. This is crucial in order to reveal the intrinsic anisotropy of the orthorhombic phase. To this goal we have extended the basic cantilever concept originally developed for transport measurements to allow optical measurements under constant uniaxial pressure. The device consists of a mechanical clamp (Fig. \[Clamp\](a)) and an optical mask (Fig. \[Clamp\](b) and \[Clamp\](c)) attached on top of it in tight contact. The pressure device was designed according to the following specific criteria: i) the uniaxial stress is applied to the sample (S) by tightening a screw and drawing the clamp against the side of the crystal (Fig. \[Clamp\](a) and Fig. \[Clamp\](c)). Even though our clamp set-up still lacks precisely tunable pressure, the uniaxial stress was gradually increased so as to observe optical anisotropy. The applied pressure is modest, such that $T_{N}$ is unaffected, and can be adjusted over a limited range (up to approximately 5 MPa [@chudw]). Cooling samples under such uniaxial stress results in a significantly larger population of domains for which the shorter ferromagnetic $b$-axis is oriented along the direction of the applied stress, almost fully detwinning the crystals. ii) The major axis of the tightening screw lies nearby and parallel to the surface of the sample, so that shear- and thermal-stress effects are minimized. The thermal expansion $\Delta L$ of the tightening screw, exerting the uniaxial pressure, can be estimated to be of the order of $\Delta L = \alpha L\,dT = 20\,\mu$m (for screw length $L = 5$ mm, typical metallic thermal expansion coefficient $\alpha = 2\cdot10^{-5}$ K$^{-1}$, and thermal excursion $dT = 200$ K). This corresponds to a relative variation of about 0.4%. Reasonably assuming $\Delta L/L = \Delta p/p$, the influence of thermal expansion effects is then negligible.
iii) The clamp set-up leaves the (001) facet of the single domain samples exposed, enabling us to perform optical reflectivity measurements ($R$($\omega$)). iv) The optical mask guarantees data collection on surfaces of the same dimension for S and reference mirror (M) and therefore on equivalent flat spots.
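The order-of-magnitude estimate in point (ii) can be checked in a few lines, using exactly the numbers quoted in the text:

```python
# Thermal expansion of the tightening screw (values from the text)
alpha = 2e-5   # typical metallic thermal expansion coefficient (K^-1)
L = 5e-3       # screw length (m)
dT = 200.0     # thermal excursion between room temperature and base (K)

dL = alpha * L * dT   # linear expansion of the screw: 2e-5 m = 20 micrometers
rel = dL / L          # relative variation, equated to Delta p / p: 0.004, i.e. 0.4%
```

Since the maximum applied stress is of order 5 MPa, a 0.4% relative change translates into a pressure drift far below the experimental resolution, which is the sense in which the text calls the effect negligible.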
The reflectivity $R(\omega)$ at room temperature was first collected with different spectrometers: a Bruker IFS48 for the mid-infrared (MIR, 500-4000 cm$^{-1}$) and near-infrared (NIR, 4000-7000 cm$^{-1}$) measurements, and a PerkinElmer Lambda 950, capable of measuring the absolute reflectivity from the NIR up to the ultraviolet (UV) range, i.e. 3200-4.8$\times$10$^4$ cm$^{-1}$. The detwinning device was then placed inside our cryostat and finely aligned with micrometric precision within the optical path of the Fourier-transform infrared Bruker Vertex 80v interferometer, so that we could perform optical measurements of $R(\omega)$ at different temperatures in the spectral range from the far-infrared (FIR, $\omega<400$ cm$^{-1}$) up to the MIR, i.e. between 30 and 6000 cm$^{-1}$. Light in all spectrometers was polarized along the $a$ and $b$ axes of the detwinned samples, thus giving access to the anisotropic optical functions.
![(color online) Reflectivity ratio $R(\omega)_{E\parallel a}/R(\omega)_{E\parallel b}$ measured above and below $T_s$ along the $a$- and $b$-axis polarization directions for Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ with $x = 0$ and 0.025 compared with the same ratio for a Cu sample of equivalent surface dimensions and thickness.[]{data-label="CuRatio"}](Fig_2_vfin.png){width="6cm"}
The real part $\sigma$$_{1}(\omega)$ of the optical conductivity was obtained via the Kramers-Kronig transformation of $R(\omega)$ by applying suitable extrapolations at low and high frequencies. For the $\omega\to0$ extrapolation, we made use of the Hagen-Rubens (HR) formula ($R(\omega)=1-2\sqrt{\frac{\omega}{\sigma_{dc}}}$), inserting the $dc$ conductivity values ($\sigma_{dc}$) from Ref. [@chudw], while above the upper frequency limit $R(\omega)\sim\omega^{-s}$ $(2\leqslant s \leqslant 4)$ [@grunerbook].
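The Kramers-Kronig step described above can be sketched numerically. The following is a minimal illustration, not the analysis code actually used: the units ($\omega$ in cm$^{-1}$, $\sigma_1$ in $\Omega^{-1}$cm$^{-1}$) and the Hagen-Rubens form follow the text, while the grid and the simple trapezoidal principal-value integration are our own assumptions.

```python
import numpy as np

def hagen_rubens(w, sigma_dc):
    """Low-frequency extrapolation R(w) = 1 - 2*sqrt(w/sigma_dc), the form
    quoted in the text (units of w and sigma_dc chosen consistently)."""
    return 1.0 - 2.0 * np.sqrt(w / sigma_dc)

def kk_phase(w, R):
    """Kramers-Kronig phase of the complex reflectivity:
    theta(w) = -(w/pi) P-int ln R(w') / (w'^2 - w^2) dw'."""
    lnR = np.log(R)
    theta = np.empty_like(w)
    for i, wi in enumerate(w):
        # subtracting lnR[i] tames the principal-value singularity at w' = wi
        with np.errstate(divide='ignore', invalid='ignore'):
            f = (lnR - lnR[i]) / (w * w - wi * wi)
        f[i] = 0.0  # drop the (finite) limit point; error is O(grid spacing)
        theta[i] = -(wi / np.pi) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))
    return theta

def sigma1_from_R(w, R):
    """Real part of the optical conductivity (Ohm^-1 cm^-1) for w in cm^-1."""
    r = np.sqrt(R) * np.exp(1j * kk_phase(w, R))
    n = (1.0 + r) / (1.0 - r)   # complex refractive index from Fresnel
    eps2 = (n * n).imag
    return w * eps2 / 59.96     # sigma_1 = w * eps_2 / 59.96 (CGS, wavenumbers)
```

With the HR form appended below the lowest measured point and an $R(\omega)\propto\omega^{-s}$ tail above the highest one, this reconstructs a Drude-model conductivity to within a few percent over the interior of the grid; it is meant only to illustrate the procedure.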
Several precautions were taken in order to avoid experimental artifacts: i) The polarizers chosen for each measured frequency range have an extinction ratio greater than 200, thus reducing leakage below our 1% error limit. ii) As a control measurement for the detwinning setup, we collected at different temperatures the optical reflectivity of a Cu sample of comparable surface dimensions and thickness with respect to the pnictide crystals and under equivalent uniaxial pressure. As expected, we could not observe any polarization dependence of the Cu reflectivity from room temperature down to 10 K (see e.g. the data at 250 K in Fig. \[CuRatio\]). The Cu test measurements set at about 1-2% the upper limit of the polarization dependence due to any possible experimental artifacts (i.e. bent surfaces, leakage of the polarizers, etc.), which is notably lower than the anisotropy ratio measured for the iron pnictides (Figs. 2 and 3). iii) Prior to performing optical experiments as a function of the polarization of light, the electrodynamic response of the twinned (i.e., unstressed) samples was first checked with unpolarized light, consistently recovering the same spectra previously presented in Ref. [@lucarelli]. iv) We achieved the same alignment conditions for M and S (Fig. \[Clamp\](c)) by imaging a red laser point source on both spots.
Results
=======
Reflectivity
------------
The three investigated compositions display overall similar features in their optical response, but their polarization and temperature dependences show small yet significant differences, as we clarify in the presentation and discussion of our results below. Figure \[Ref\] presents the optical reflectivity $R(\omega)$ over the whole measured frequency range of detwinned Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ ($x = 0$, $x = 0.025$ and $x = 0.045$) at different temperatures and for the two polarization directions E$\parallel$$a$ and E$\parallel$$b$. As already recognized in the twinned (i.e., unstressed) specimens [@lucarelli], $R(\omega)$ gently increases from the UV to the MIR region, displaying an overdamped-like behavior. Below the MIR energy range, $R(\omega)$ gets progressively steeper, with a sharp upturn at frequencies lower than 200 cm$^{-1}$ (Fig. \[Ref\]). Close to zero frequency, $R(\omega)$ consistently merges with the HR extrapolations calculated with the $\sigma_{dc}$ values from Ref. [@chudw]. For all measured dopings we observed a polarization and temperature dependence of $R(\omega)$ from the FIR up to the MIR-NIR range, while between 5000 and 6000 cm$^{-1}$ the $R(\omega)$ spectra tend to merge together. The optical anisotropy is rather pronounced at low temperatures in the FIR region, when approaching the zero-frequency limit. Interestingly, for $x=0$, $R(\omega)$ increases with decreasing temperature along the $a$-axis, in agreement with the metallic character of the $dc$ transport properties [@chudw]. On the contrary, at low temperatures along the $b$-axis, there is first a depletion of $R(\omega)$ in the FIR energy range below 700 cm$^{-1}$ and then a steeper upturn below 100 cm$^{-1}$, consistently merging with the HR extrapolation. While somewhat common to all compositions, this depletion is less pronounced at the higher doping levels $x = 0.025$ and $x = 0.045$ when entering the magnetically ordered phase at $T<T_{N}$ (Fig. \[Ref\]b and \[Ref\]c, respectively), similarly to what has been previously observed in the twinned specimens [@lucarelli].
The anisotropy, discussed so far in the $dc$-limit of $R(\omega)$ at low temperatures, persists up to temperatures well above all phase transitions for the crystals held under uniaxial pressure, more prominently in the MIR-NIR region than in the FIR one. The black arrows in Fig. \[Ref\] highlight the $R(\omega)$ spectra collected at temperatures close to the respective $T_s$. The MIR-band centered at about 1500 cm$^{-1}$ (dotted-dashed lines and red arrow in Fig. \[Ref\]) is of particular interest for both its temperature and polarization dependence. For $x=0$, we observe an interchange in the polarization dependence of $R(\omega)$ with decreasing temperature (Fig. \[Ref\]a, bottom panels). Such an interchange occurs exactly at the coupled magnetic and structural phase transition at 135 K. At higher doping levels ($x = 0.025$ and $x = 0.045$) the MIR-band observed at 1500 cm$^{-1}$ for $x=0$ is shifted towards lower frequencies (red arrows in bottom panels of Fig. \[Ref\]). Above 150 K the anisotropy of $R(\omega)$ in the MIR region is strongly reduced but, differently from the parent compound, the $a$-axis spectrum of $x = 0.025$ and $x = 0.045$ remains above the $b$-axis one at all temperatures (Fig. \[Ref\]b and \[Ref\]c, bottom panels).
Optical Conductivity
--------------------
Figure \[Sig1\] shows the real part $\sigma_{1}(\omega)$ of the optical conductivity of detwinned Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ ($x = 0$, $x = 0.025$ and $x = 0.045$) for different measured temperatures along both polarization directions. In the visible and UV energy interval $\sigma_{1}(\omega)$ is characterized by polarization-independent broad absorption bands, which overlap with a dominant NIR contribution peaked at about 5000 cm$^{-1}$. Consistent with previous measurements on twinned samples [@lucarelli], these components of $\sigma_{1}(\omega)$ are generally ascribed to electronic interband transitions. These features in the UV-NIR range are not substantially altered by changing temperature, polarization, or doping level.
As already observed for $R(\omega)$, the temperature and doping dependent optical anisotropy in $\sigma_{1}(\omega)$ is mainly evident in the FIR and MIR regions. In the FIR region, there is a strong polarization dependence of the itinerant charge carriers' contribution to $\sigma_1(\omega)$. Along the $a$-axis, $\sigma_{1}(\omega)$ shows a more pronounced metallic behavior, which gets enhanced below $T_N$. Along the $b$-axis, $\sigma_{1}(\omega)$ below $T_N$ is depleted due to the formation of a pseudogap, prior to displaying a metallic-like upturn for $\omega\rightarrow 0$. As formerly observed for the twinned samples [@lucarelli], this depletion of $\sigma_{1}(\omega)$ along the $b$-axis is less evident with increasing doping.
The strong absorption peak dominating $\sigma_{1}(\omega)$ at about 5000 cm$^{-1}$ develops into a pronounced shoulder on its MIR frequency tail at about 1500 cm$^{-1}$. As anticipated in the presentation of the $R(\omega)$ data, this latter MIR-band in $\sigma_{1}(\omega)$ shows a strong polarization and doping dependence, as highlighted in Fig. \[Sig1\] (bottom panels). One recognizes again the already mentioned interchange in the polarization dependence when crossing the structural transition for $x=0$, and its absence at higher dopings. Above $T_N$ in the MIR range, $\sigma_{1}(\omega)$ of $x=0.025$ and $x=0.045$ along the $a$-axis direction lies above the $b$-axis values and remains above them below $T_N$ as well (top and bottom panels of Fig. \[Sig1\]b and \[Sig1\]c, respectively). Interestingly, with increasing doping the maximum of the MIR-band shifts to lower frequencies, indicating that the MIR-band is significantly affected by the doping (dotted-dashed lines in top panels and red arrows in the bottom panels of Fig. \[Sig1\]).
The anisotropy in the optical response in the magnetic state can be anticipated by *ab initio* calculations based on density-functional theory (DFT) as well as dynamical mean-field theory (DMFT) [@yin; @sanna; @sugimoto]. It was first shown that the optical anisotropy of the magnetic state, absent within the local spin density approximation, may result from correlations treated within DMFT [@yin]. Alternatively, DFT calculations of the optical conductivity within the full-potential linear augmented plane-wave (LAPW) method reproduce most of the observed experimental features, in particular an anisotropic magnetic peak located at about 0.2 eV (1600 cm$^{-1}$), which was ascribed to antiferromagnetically ordered stripes [@sanna]. The optical anisotropy, as observed experimentally, was even shown to agree with the solution of a three-dimensional five-orbital Hubbard model within the mean-field approximation in the presence of both orbital and magnetic order [@lv_anis]. Moreover, it has been recently pointed out that interband transitions, whose relevance is manifested by first-principles calculations, give a non-negligible contribution already in the infrared region, spanning the experimental energy interval of the MIR-band [@benfatto]. We will return to the comparison between experiment and theory later on.
Fits
----
![(color online) Optical conductivity along the $b$-axis at 10 K of detwinned Ba(Fe$_{0.975}$Co$_{0.025}$)$_2$As$_2$ compared with the total Drude-Lorentz fit (black dashed line) and the corresponding components: the narrow (D$_N$) and broad (D$_B$) Drude terms, the mid-infrared (MIR) band and the oscillators ($I_1$, $I_2$, $I_3$) fitting the interband transitions. The inset shows the comparison between the measured reflectivity $R(\omega)$ and the resulting fit.[]{data-label="Fits"}](Fig_5_Fits_x025_vfin.png){width="8cm"}
In order to study the various contributions shaping the optical conductivity at different energies, we apply the well-established phenomenological Drude$-$Lorentz approach. Consistent with our previous investigations on twinned samples [@lucarelli], we ascribe two Drude contributions (one narrow and one broad) to the effective metallic part of $\sigma_{1}(\omega)$ and a series of Lorentz harmonic oscillators (h.o.) to all excitations (phononic and electronic) at finite frequencies. Figure \[Fits\] presents all the fitting components for $x = 0.025$ along the $b$-axis direction measured at 10 K, acting here as a representative example. The use of two Drude components in the fit procedure phenomenologically mimics the multi-band scenario and implies the existence of two electronic subsystems, as revealed for a wide range of iron-pnictide compounds [@wu]. The narrow Drude term is relevant at very low frequencies and is obviously tied to the necessary HR extrapolation of $R(\omega)$ for $\omega\rightarrow$0. The broad one acts as a background of $\sigma_1(\omega)$ and dominates the optical conductivity up to the MIR energy interval. As we shall elaborate later on, both Drude terms contribute to the total $dc$ conductivity. Besides the Drude terms, we choose one broad h.o. for the temperature dependent MIR-band and three broad h.o.’s ($I_1$, $I_2$ and $I_3$ in Fig. \[Fits\]) for the strong absorption forming the broad peak centered at about 5000 cm$^{-1}$. The complex dielectric function $\tilde{\varepsilon}=\varepsilon_1(\omega)+i\varepsilon_2(\omega)$ can be expressed as follows: $$\begin{aligned}
\nonumber
\tilde{\varepsilon}=\varepsilon_{\infty}-\frac{\omega^2_{PN}}{\omega^2+i\omega\Gamma_N}-\frac{\omega^2_{PB}}{\omega^2+i\omega\Gamma_B}+ \\
+\frac{S^2_{MIR}}{\omega^2_{MIR}-\omega^2-i\omega\gamma_{MIR}}+
\sum_{j=1}^3\frac{S^2_j}{\omega^2_j-\omega^2-i\omega\gamma_j}\end{aligned}$$ where $\varepsilon_{\infty}$ is the optical dielectric constant, $\omega^2_{PN}$, $\omega^2_{PB}$ and $\Gamma$$_{N}$, $\Gamma$$_{B}$ are respectively the plasma frequencies, defined as $\omega^2_{P}=\frac{4\pi e^2n}{m^*}$, and the widths of the narrow and broad Drude peaks. The latter parameters represent the scattering rates of the itinerant charge carriers, of which $n$, $m^*$ and $e$ are the density, the effective mass and the charge, respectively. The parameters of the $j$th Lorentz h.o. as well as those of the MIR-band are: the center-peak frequency ($\omega_{j}$ and $\omega_{MIR}$), the width ($\gamma_{j}$ and $\gamma_{MIR}$) and the mode strength ($S_j^2$ and $S_{MIR}^2$). The fit constraints are such that the measured reflectivity and the real part of the optical conductivity are simultaneously reproduced by the identical set of fit parameters [@lucarelli], which reduces the degrees of freedom in the choice of parameters. The upper boundary for the temperature dependence of the optical conductivity is found to be close to the NIR peak in $\sigma_1(\omega)$ at about 5000 cm$^{-1}$. Thus, for all temperatures and dopings we fit $R(\omega)$ and $\sigma_1(\omega)$ by varying the parameters of both Drude terms, the MIR-band and the $I_1$ h.o., while keeping constant the parameters associated with the two high frequency oscillators $I_2$ and $I_3$. We systematically adopted our fitting procedure (Fig. \[Fits\]) for both polarization directions and for all measured temperatures from 10 K to 270 K of the studied compositions. The only exception is the MIR-band of $x=0$ along the $b$-axis, where we added one more component in order to achieve a better fit quality. We checked that this fit-variation has negligible impact on the overall trend of the extracted parameters. The remarkable agreement of the fitting results with the measured reflectivity and optical conductivity (see e.g. 
Fig. \[Fits\]) further demonstrates the overall good quality of the fits. Therefore, we are confident that the fit results allow us to identify robust trends in relevant physical parameters.
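For illustration, the Drude$-$Lorentz model above can be sketched numerically; the parameter values in the following Python snippet are arbitrary placeholders rather than our fitted values, and overall unit-conversion factors are omitted.

```python
import numpy as np

def drude_lorentz_eps(w, eps_inf, drude, oscillators):
    """Complex dielectric function: Drude terms plus Lorentz oscillators.
    w           : frequency grid (cm^-1)
    drude       : list of (wp, Gamma) for the narrow and broad Drude terms
    oscillators : list of (S, w0, gamma) for the MIR-band and the I_j h.o.'s
    """
    eps = eps_inf * np.ones_like(w, dtype=complex)
    for wp, G in drude:
        eps -= wp**2 / (w**2 + 1j * w * G)
    for S, w0, g in oscillators:
        eps += S**2 / (w0**2 - w**2 - 1j * w * g)
    return eps

def sigma1(w, eps):
    # real part of the optical conductivity, sigma_1 = w * eps_2 / (4*pi)
    # (Gaussian units; an overall conversion factor is omitted)
    return w * eps.imag / (4 * np.pi)

def reflectivity(eps):
    # normal-incidence reflectivity from the complex refractive index
    n = np.sqrt(eps)
    return np.abs((n - 1) / (n + 1))**2

# Illustrative parameters (cm^-1): two Drude terms, a MIR-band, one interband h.o.
w = np.linspace(10.0, 20000.0, 2000)
eps = drude_lorentz_eps(w, 3.0,
                        [(9000.0, 150.0), (14000.0, 1000.0)],
                        [(8000.0, 1500.0, 2000.0), (30000.0, 5000.0, 4000.0)])
s1, R = sigma1(w, eps), reflectivity(eps)
```

In an actual fit, $R(\omega)$ and $\sigma_1(\omega)$ are reproduced simultaneously by the identical parameter set, e.g. by minimizing the concatenated residuals of both observables with a standard least-squares routine.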
Discussion
==========
We can first exploit our phenomenological fits from the perspective of a so-called spectral-weight ($SW$) analysis, represented by the squared plasma frequencies and mode strengths of the Drude terms and Lorentz h.o.’s, respectively. The Drude-Lorentz procedure thus allows us to disentangle the redistribution of $SW$ in selected energy intervals and tells us how the same $SW$ is reshuffled among the various components as a function of temperature. Of particular interest is the total Drude weight given by $SW_{Drude}=\omega_{PN}^2+\omega_{PB}^2$. Furthermore, we consider the $SW$ encountered in the MIR-band and electron interband transitions, defined as $SW_{MIR}=S_{MIR}^2$ and $SW_{Int}=\sum_{j=1}^{3}S_{j}^2$, respectively. The total spectral weight is then given by the area under the conductivity spectrum and can be expressed as [@grunerbook]: $SW_{Total}=SW_{Drude}+SW_{MIR}+SW_{Int}$. Due to the high fit-quality of $\sigma_1(\omega)$ for the chosen components, $SW_{Total}$ is also equivalent to $\int^{\omega_c}_0\sigma_1(\omega)d\omega$, where $\omega_c$ corresponds to a cutoff frequency and basically sets the upper frequency limit of our measurements. This implies that, if $SW_{Total}$ does not change with temperature within the energy interval between zero and $\omega_c$, any redistribution of the spectral weight will fully occur among the fitting components.
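The bookkeeping above can be illustrated with a minimal numerical check (illustrative parameters only, not our fitted values): the area under a single Drude term is fixed by the plasma frequency alone, independent of the scattering rate, which is why the squared plasma frequencies serve as the Drude spectral weights.

```python
import numpy as np

# A single Drude term: sigma_1(w) = (wp^2 / 4pi) * Gamma / (w^2 + Gamma^2).
# Its integral from 0 to infinity equals wp^2/8 regardless of Gamma.
wp, Gamma = 10000.0, 300.0                    # cm^-1, illustrative values
w = np.linspace(0.0, 5.0e6, 2_000_001)        # cutoff far above Gamma
sig1 = (wp**2 / (4 * np.pi)) * Gamma / (w**2 + Gamma**2)
dw = w[1] - w[0]
sw = (sig1.sum() - 0.5 * (sig1[0] + sig1[-1])) * dw   # trapezoidal rule
# sw is close to wp**2 / 8; changing Gamma leaves it (nearly) unchanged
```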
The lower panels of Fig. \[SW\_Gamma\] show the temperature dependence of $SW_{Total}$ and the spectral weight associated with the two Drude terms, the MIR-band and the three interband excitations ($I_i$). $SW_{Total}$ stays constant with temperature along both polarization directions for all doping values, thus satisfying the $f$-sum rule [@grunerbook]. When crossing the structural and magnetic transitions, the spectral weight of the parent compound along the $a$-axis is redistributed from high to low frequencies, since the interband transition components lose weight in favor of the Drude terms and MIR-band (bottom panel of Fig. \[SW\_Gamma\]a). On the contrary, along the $b$-axis the Drude term loses $SW$ for $T<T_N$ in favor of the MIR-band. At higher dopings ($x = 0.025$ and 0.045) the overall $SW_{Int}$ shows a less pronounced temperature dependence for both lattice directions. The spectral weight variations with temperature mainly occur between the Drude terms and the MIR-band for both directions. The resulting temperature dependence of $SW_{Drude}$, increasing along the $a$- but decreasing along the $b$-axis across $T_N$ and $T_s$ for all dopings, is rather intriguing and unanticipated; its significance with respect to the $dc$ transport properties will be addressed below. For the time being, it is worth emphasizing that the Drude weight anisotropy may strongly depend on the topology and morphology of the reconstructed Fermi surface below $T_s$ (i.e., the anisotropy of the Fermi velocity), as evinced from a five-band Hamiltonian at the mean-field level [@valenzuela]. Also of interest, for $x=0$, is the temperature-independent total Drude weight, which is larger along the $b$-axis than along the $a$-axis for $T>T_s$, thus inverting the polarization dependence observed below $T_s$. This striking behavior for $x=0$ might be compatible with a recent multi-orbital model [@lv_anis]. 
Given the ARPES results [@zxshen], showing that the band splitting diminishes with increasing temperature, one might have anticipated that the Drude weight becomes isotropic above $T_s$. Nonetheless, the Drude weight anisotropy above $T_s$ is suppressed upon doping (Fig. \[SW\_Gamma\]). Experimentally, such a trend of $SW_{Drude}$ remains to be verified under controlled uniaxial pressure conditions, while theoretically it awaits confirmation within doping-dependent models.
From the width at half maximum of the Drude resonance we can extract the scattering rates of the itinerant charge carriers. The broad ($\Gamma_{B}$) and narrow ($\Gamma_{N}$) Drude widths for $x = 0$, 0.025 and 0.045 are shown in the top panels of Fig. \[SW\_Gamma\]. An anisotropy in the scattering rate is most evident for $x = 0$ and $x = 0.025$. For $x = 0$ the Drude scattering rates $\Gamma_{N}$ and $\Gamma_{B}$ increase along the $a$-axis and decrease along the $b$-axis with decreasing temperature across the phase transitions. Above $T_s$ for $x=0$, $\Gamma_N$ along both axes saturates to the identical constant value, while $\Gamma_B$ displays an inversion in the polarization dependence with respect to the situation below $T_s$ (Fig. \[SW\_Gamma\], upper panels), before saturating to temperature-independent values. For $x=0.025$ only $\Gamma_B$ along the $a$-axis undergoes a sudden increase at $T_N$, while all other scattering rates remain almost constant. $\Gamma_B$ above $T_s$ is interchanged between the two polarization directions with respect to below $T_s$, similarly to $x=0$ yet less pronounced. For $x = 0.045$ the $\Gamma_B$ scattering rates display a more metallic behavior with decreasing temperature above $T_N$ and an almost negligible polarization dependence. Below $T_N$ there is a weak upturn to higher values, particularly along the $a$-axis. $\Gamma_N$ remains constant at all temperatures and for both directions.
The overall temperature dependence of the scattering rates below $T_N$, as evinced from the analysis of the optical response, is expected with respect to the well-established magnetic order [@li]. Particularly for $x=0$ and $x=0.025$, the larger scattering rates along the elongated antiferromagnetic $a$-axis than along the shorter ferromagnetic $b$-axis for temperatures below $T_N$ may arise from reduced hopping or from scattering by spin fluctuations with large momentum transfer (i.e., by incoherent spin waves) [@turner; @devereaux]. The anisotropic scattering rate in the paramagnetic state, at least for $x=0$ and $x=0.025$, might also be in agreement with predictions based on interference between scattering by impurities and by critical spin fluctuations in the Ising nematic state [@fernandes2]. Similarly to our previous discussion on the Drude weight, we caution the reader that the trend in the scattering rates, particularly above $T_s$, as well as their doping dependence, should also be verified with tunable uniaxial pressures, in order to guarantee equal experimental conditions. A comprehensive theoretical framework, addressing the different temperature regimes and considering the impact of doping, is also desired.
Having determined the two parameters governing the $dc$ transport properties, it is worth pursuing at this point the compelling comparison between the temperature dependence of the optical anisotropy and the anisotropy ratio of the $dc$ transport properties, defined as $\frac{\Delta\rho}{\rho}$=$\frac{2(\rho_b-\rho_a)}{(\rho_b+\rho_a)}$ (Fig. \[Anis\_Ratio\]) [@devereaux]. From the Drude terms, fitting the effective metallic contribution of $\sigma_1(\omega)$ over a finite energy interval, we can estimate the $dc$ limit of the conductivity ($\sigma_0^{opt}=(\omega_p^N)^2/4\pi\Gamma_N+(\omega_p^B)^2/4\pi\Gamma_B$) more precisely than by simply extrapolating $\sigma_1(\omega)$ to zero frequency. The anisotropy ratio $\frac{\Delta\rho^{opt}}{\rho}$, reconstructed from the optical data, is thus compared in Fig. \[Anis\_Ratio\] to the equivalent quantity from the transport investigation. The agreement in terms of $\frac{\Delta\rho}{\rho}$ between the optical and $dc$ investigations is outstanding for $x$=0.025 and 0.045 at all temperatures. For $x$=0, $\frac{\Delta\rho^{opt}}{\rho}$ is nonetheless slightly larger than the $dc$ transport anisotropy for $T<T_s$. This disagreement might originate from a difference in the applied stress between the optical and $dc$ transport measurements, or from differences in the scattering rates of the samples used for the two types of measurements.
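The reconstruction of $\sigma_0^{opt}$ and of the anisotropy ratio from the fit parameters can be sketched as follows; the parameter values are hypothetical placeholders, and an overall unit-conversion factor is omitted since it cancels in $\frac{\Delta\rho}{\rho}$.

```python
import numpy as np

def sigma_dc(wp_N, G_N, wp_B, G_B):
    # dc limit of the two-Drude conductivity:
    # sigma_0^opt = wp_N^2/(4*pi*G_N) + wp_B^2/(4*pi*G_B)
    return wp_N**2 / (4 * np.pi * G_N) + wp_B**2 / (4 * np.pi * G_B)

def anisotropy_ratio(sig_a, sig_b):
    # Delta rho / rho = 2*(rho_b - rho_a) / (rho_b + rho_a), with rho = 1/sigma
    rho_a, rho_b = 1.0 / sig_a, 1.0 / sig_b
    return 2.0 * (rho_b - rho_a) / (rho_b + rho_a)

# Hypothetical fit parameters (cm^-1) for the two in-plane directions
sig_a = sigma_dc(9000.0, 120.0, 14000.0, 900.0)   # a-axis: more metallic
sig_b = sigma_dc(8000.0, 150.0, 12000.0, 1100.0)  # b-axis
ratio = anisotropy_ratio(sig_a, sig_b)            # positive, since rho_b > rho_a
```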
Significantly, the analysis of the optical properties for all compositions indicates that, in terms of the effect on the $dc$ transport properties (Fig. \[Anis\_Ratio\]), the anisotropy in the Fermi surface parameters, such as the enhancement (depletion) of the total Drude spectral weight along the $a$($b$)-axis, outweighs the anisotropy in the scattering rates (large at some compositions, Fig. \[SW\_Gamma\]) that develops below $T_N$. This is an important result of the optical investigation, which enables us to extract both pieces of information governing the behavior of the $dc$ transport properties.
In order to emphasize the relevant polarization dependence at high frequencies, we calculate the linear dichroism $\Delta\sigma_1(\omega)$, as defined in the Introduction. For the purpose of further enhancing the optical anisotropy, we show in Fig. \[DeltaSig1\] $\widetilde{\Delta}\sigma_1(\omega,T)$, defined as $\Delta\sigma_1(\omega,T)$ from the MIR to the UV for $x = 0$, 0.025 and 0.045 at various temperatures, after subtracting its corresponding room temperature values and normalizing appropriately. The dichroism persists above $T_N$ in the MIR range (Fig. \[DeltaSig1\]), consistent with our direct observations in terms of $R(\omega)$ and $\sigma_1(\omega)$ (Fig. \[Ref\] and \[Sig1\]). This representation highlights once more that the MIR-feature moves towards lower frequencies upon increasing doping.
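The construction of $\widetilde{\Delta}\sigma_1(\omega,T)$ can be sketched on synthetic spectra as follows; normalizing to the maximum absolute value of the room-temperature-subtracted dichroism is our assumption here, as one possible realization of the "appropriate normalization".

```python
import numpy as np

def norm_dichroism(sig_a_T, sig_b_T, sig_a_RT, sig_b_RT):
    """Room-temperature-subtracted linear dichroism,
    [sigma_1^a - sigma_1^b](T) - [sigma_1^a - sigma_1^b](300 K),
    normalized to its maximum absolute value (an assumed convention)."""
    d = (sig_a_T - sig_b_T) - (sig_a_RT - sig_b_RT)
    return d / np.max(np.abs(d))

# Synthetic spectra: a MIR-band that strengthens along a and weakens along b
w = np.linspace(500.0, 8000.0, 1000)
band = np.exp(-((w - 1500.0) / 600.0)**2)
out = norm_dichroism(1.2 * band, 0.8 * band, band, band)
# out peaks near 1500 cm^-1 and vanishes far from the MIR-band
```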
It is especially interesting to compare the temperature dependence of the $dc$ ($\frac{\Delta\rho}{\rho}$) [@chudw] and optical ($\Delta\sigma_1(\omega)$) anisotropy [@dusza]. Two characteristic frequencies, identifying the position of the peaks in $\sigma_1(\omega)$ (Fig. \[Sig1\]), are selected to follow the temperature dependence of $\Delta\sigma_1(\omega)$; namely, $\omega_1$=1500 cm$^{-1}$ and $\omega_2$=4300 cm$^{-1}$ for $x=0$; $\omega_1$=1320 cm$^{-1}$ and $\omega_2$=5740 cm$^{-1}$ for $x=0.025$; $\omega_1$=912 cm$^{-1}$ and $\omega_2$=5182 cm$^{-1}$ for $x=0.045$. It is remarkable that the temperature dependence of $\Delta\sigma_1(\omega)$ at $\omega_1$ and $\omega_2$ follows the temperature dependence of $\frac{\Delta\rho}{\rho}$ in all compositions (Fig. \[Anis\_Ratio\]). $\Delta\sigma_1(\omega_i)$ ($i=1,2$) saturates at constant values well above $T_s$ and then displays a variation for $T < 2T_s$. Here we first underscore that the rather pronounced optical anisotropy, extending up to temperatures higher than $T_s$ for the stressed crystals, clearly implies an important pressure-induced anisotropy in the electronic structure, which is also revealed by ARPES measurements [@zxshen; @wang].
Since the dichroism directly relates to a reshuffling of spectral weight in $\sigma_1(\omega)$ in the MIR-NIR range (Fig. \[Sig1\]a and \[Sig1\]b), $\Delta\sigma_1(\omega)$ at $\omega_1$ is interrelated with that at $\omega_2$ (i.e., the right $y$-axes for $\Delta\sigma_1(\omega_i)$ in Fig. \[Anis\_Ratio\] are inverted between $\omega_1$ and $\omega_2$), so that the behavior of $\Delta\sigma_1(\omega)$ is monotonic as a function of temperature and opposite in sign between $\omega_1$ and $\omega_2$ (Fig. \[Anis\_Ratio\] and \[DeltaSig1\]). For $x$=0.025 (Fig. \[Anis\_Ratio\]) $\Delta\sigma_1(\omega_i)$=0 at $T \gg T_s$. However, for $x$=0 and 0.045, $\Delta\sigma_1(\omega_i)$ is found to be constant but apparently different from zero for $T \gg T_s$. The origin of the finite (but constant) dichroism at high temperatures for these samples is at present unclear, and might reflect a systematic effect due to imperfect experimental conditions (e.g., too strong an applied uniaxial pressure). Nevertheless, the overall temperature dependence behaves in a very similar manner for all compositions. Significantly, the absolute variation of the dichroism across the transitions at the selected frequencies is larger for $x$=0 than for $x$=0.025 and 0.045, contrary to the anisotropy in the $dc$ resistivity [@chudw]. This doping dependence needs to be studied in a controlled pressure regime in order to exclude effects arising from different degrees of detwinning ($T<T_s$) and different magnitudes of induced anisotropy ($T>T_s$). Even so, it is encouraging that, contrary to the $dc$ resistivity, the changes in the electronic structure appear to follow a similar trend with doping as the lattice orthorhombicity [@Prozorov]. Our data might thus reveal a pronounced sensitivity of the electronic properties to structural parameters, like the iron-pnictogen angle $\alpha$ [@calderon], altered by external tunable variables like uniaxial pressure. 
Indeed, changes in $\alpha$ seem to induce relevant modifications in the shape of the Fermi surface and its nesting properties as well as in its orbital makeup, thus implying consequences in terms of the superconducting order parameter, critical temperature and magnetic properties [@calderon].
The origin of the orthorhombic transition has been discussed from the closely related perspectives of spin fluctuations (a so-called spin-induced nematic picture [@xu; @johannes; @fernandes; @fradkin]), and also in terms of a more direct electronic effect involving, for instance, the orbital degree of freedom [@lv_anis; @kruger; @lee; @lv; @devereaux; @valenzuela; @bascones]. The present measurements alone cannot distinguish between these related scenarios, because in all cases some degree of electronic anisotropy is anticipated, and indeed it is likely that both orbital and spin degrees of freedom play a combined role in the real material. Nevertheless, it is instructive to compare the observed dichroism with specific predictions made within models based on orbital order. The two well-defined energy scales $\omega_1$ and $\omega_2$ (Fig. \[Sig1\]) may represent optical transitions between states with the strongest $d_{xz}/d_{yz}$ character, which are separated by about 0.3-0.4 eV. Such an energy splitting is indeed compatible with the theoretical calculations of the anisotropic optical conductivity [@lv_anis] and of the linear dichroism in X-ray absorption spectroscopy [@devereaux]. Nonetheless, the debate about the impact of orbital order on the electronic properties is far from being settled. Valenzuela et al. point out that an increasing degree of orbital order may favor a diminishing anisotropy of the excitation spectrum and, above all, of the Drude weights [@valenzuela], opposite to what we may conclude from our optical experiment. Bascones et al. even claim that the orbital order accompanies the magnetization within a wide range of parameters but is not correlated with the magnetic exchange anisotropy [@bascones]. While these latter calculations might be relevant at low temperatures, it remains to be seen how they can explain the onset of the anisotropy at and above $T_s$, where the spin symmetry is not yet broken.
![image](Fig_9_ExpTheo_vfin.png){width="17cm"}
Finally, we wish to come back to the comparison between our optical results and the outcome of the LAPW calculations [@sanna; @software]. The model uses as initial lattice parameters those determined experimentally from the crystal structure of BaFe$_2$As$_2$, but neglects the small orthorhombic distortion occurring at low temperatures. The magnetic ordering is modeled using a fixed striped phase, where stripes of up spins alternate with stripes of down spins on each Fe plane, compatible with the magnetic configuration determined experimentally below $T_N$ [@li]. Co doping of BaFe$_2$As$_2$ rapidly destabilizes the magnetic phase, which disappears above a Co concentration of about 6%. This Co-induced destabilization of magnetism was simulated by scaling the magnetic moment, calculated within the virtual crystal approximation [@sanna], according to the experimental critical temperature $T_N$ [@chu]. This approach corrects the deficiencies of the local spin density approximation and mimics the two main Co-induced effects: a localized electron doping on the Fe layer and a magnetic to nonmagnetic phase transition. The resulting theoretical Co dopings agree with the experimental ones to within 0.5%.
Figure \[Exp\_Theory\] shows the low temperature measured (top panels) and calculated (bottom panels) optical conductivity along both the $a$- and $b$-axis for the three dopings. For a direct comparison we have normalized all the measured and calculated $\sigma_{1}(\omega)$ to their respective maxima, thus obtaining $\tilde{\sigma}_{1}(\omega)$. We clearly see a fairly good agreement between theory and experiment in the general shape of $\tilde{\sigma}_{1}(\omega)$. In the MIR range the DFT-calculated $\tilde{\sigma}_{1}(\omega)$ finely reproduces the observed spectra, both as far as the MIR-band position and its polarization dependence (black arrows in Fig. \[Exp\_Theory\]) are concerned. Indeed, the predicted enhancement of the MIR-band along the $a$-direction and its depletion along the $b$-direction are fairly close to the experimental findings. Interestingly, the center of the calculated band (black arrows in Fig. \[Exp\_Theory\], bottom panels) shifts to lower frequencies with increasing doping, which is also in agreement with the experiments. This MIR-band is linked to the modeled magnetic stripe configuration, which was shown to correspond to the energy-minimum configuration of these systems [@sanna]. Therefore, the DFT calculation strongly supports a “magnetic origin” of the MIR-band, which would originate from the reconstruction of the Fermi topology in the magnetically ordered state. In this scenario one would reasonably expect the MIR-band to disappear above the magnetic phase transition, contrary to our observations. However, a dynamic antiferromagnetic order due to spin fluctuations could persist in the paramagnetic phase well above the phase transition temperature [@mazin]. The fingerprints of such underlying spin fluctuations would appear frozen to a sufficiently fast probe such as optics, thus explaining the persistence of the MIR-band in our spectra above the phase transition. 
At higher (NIR) frequencies the agreement deteriorates because of the finite $k$-point sampling. The major interband peak of $\tilde{\sigma}_{1}(\omega)$, experimentally observed at about 5000 cm$^{-1}$, is shifted to slightly higher frequencies in the theoretical calculations and shows a steeper rise. We notice that spin-polarized DFT calculations require a smaller renormalization factor than unpolarized ones in order to account for these frequency shifts.
Conclusions
===========
The charge dynamics of detwinned Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ single crystals in the underdoped regime reveals an in-plane temperature and doping dependent optical anisotropy. At low frequencies the optical measurements offer the unique opportunity to disentangle the distinct behaviors of the Drude weights and scattering rates of the itinerant charge carriers, which are both enhanced along the antiferromagnetic $a$-axis with respect to the ferromagnetic $b$-axis. Our findings on such single-domain specimens allow us to shed light on the counterintuitive anisotropic behavior ($\rho_b>\rho_a$) of the $dc$ resistivity. The $dc$ anisotropy below $T_N$ is principally determined by the anisotropy in the low frequency Drude weight (i.e., changes in the electronic structure close to the Fermi energy), outweighing the non-negligible anisotropy of the scattering rates between the $a$- and $b$-axis. Of equal or perhaps greater interest is the temperature regime above $T_N$, for which the Fermi surface is not reconstructed. One would like to understand whether the resistivity anisotropy at high temperatures also originates from the Fermi surface, perhaps due to the difference in the orbital occupancy revealed by ARPES [@devereaux; @lv_anis], or from anisotropic scattering, perhaps associated with incipient spin fluctuations [@fernandes2]. The current optical data may be in partial agreement with both points of view and therefore do not permit a conclusive answer to this question, thus motivating further experiments in order to definitively address the origin of the anisotropy above $T_N$.
The optical anisotropy extends to relatively high frequencies and temperatures above the phase transitions for crystals held under uniaxial stress. The resulting linear dichroism reveals the electronic nature of the structural transition and implies a substantial nematic susceptibility. In order to clarify the subtle interplay of magnetism and Fermi surface topology we elaborate on a comparison of our optical measurements with theoretical calculations obtained from density functional theory within the full-potential LAPW method. The calculations are able to reproduce most of the observed experimental features and, in particular, to identify the MIR-band located at about 1500 cm$^{-1}$ as a magnetic peak, ascribed to antiferromagnetically ordered stripes. The measured large in-plane anisotropy of the optical response and its doping dependence are consistently tracked by the LAPW calculations.\
The authors acknowledge fruitful discussions with S. Kivelson, T. Devereaux, C. Homes, D.N. Basov, R.M. Fernandes, J. Schmalian, W. Lv and D. Lu and valuable help by J. Johannsen in collecting part of the data. This work has been supported by the Swiss National Foundation for the Scientific Research within the NCCR MaNEP pool. This work is also supported by the Department of Energy, Office of Basic Energy Sciences under contract DE-AC02-76SF00515. The work in Cagliari is supported by the Italian MIUR through PRIN2008XWLWF.
$^{*}$ Both authors equally contributed to the present work.
---
abstract: 'The legacy of solar neutrinos suggests that large neutrino detectors should be sited underground. However, to instead go underwater bypasses the need to move mountains, allowing much larger contained water Čerenkov detectors. Reaching a scale of $\sim 5$ Megatons, the size of the proposed Deep-TITAND, would permit observations of “mini-bursts” of neutrinos from supernovae in the nearby universe on a yearly basis. Importantly, these mini-bursts would be detected over backgrounds without the need for optical evidence of the supernova, guaranteeing the beginning of time-domain MeV neutrino astronomy. The ability to identify, to the second, every core collapse would allow a continuous “death watch” of all stars within $\sim 5$ Mpc, making previously-impossible tasks practical. These include the abilities to promptly detect otherwise-invisible prompt black hole formation, provide advance warning for supernova shock-breakout searches, define tight time windows for gravitational-wave searches, and identify “supernova impostors” by the non-detection of neutrinos.'
author:
- 'Matthew D. Kistler'
- 'Hasan Y[ü]{}ksel'
- 'Shin’ichiro Ando'
- 'John F. Beacom'
- Yoichiro Suzuki
date: 'October 10, 2008'
title: 'Core-Collapse Astrophysics with a Five-Megaton Neutrino Detector'
---
Introduction
============
Core-collapse supernovae have long been suspected to be the solution of many long-standing puzzles, including the production of neutron stars and black holes, radioactive isotopes and heavy elements, and cosmic rays [@Baade]. Understanding these issues, and the properties of neutrinos and hypothesized new particles, requires improving our knowledge of supernovae. It is not enough to record their spectacular visual displays, as these do not reveal the dynamics of the innermost regions of the exploding stars, with their extremes of mass and energy density. Moreover, sophisticated simulations of the core collapse of massive stars do not robustly lead to supernova explosions [@Buras:2003sn; @Burrows:2005dv; @Mezzacappa:2005ju], raising the suspicion that crucial physics is missing.
Neutrinos are the essential probe of these dynamics, as they are the only particle that escapes from the core to the observer (gravitational waves may be emitted, but they are energetically subdominant). There is an important corollary to this, namely [*until supernovae besides SN 1987A are detected by neutrinos, our fundamental questions about supernovae will never be decisively answered.*]{} In fact, the most interesting problems–associated with the presence, nature, variety, and frequency of core collapse in massive stars–can only be solved by detecting [*many*]{} supernova neutrino bursts.
![Probabilities to obtain the indicated numbers of $\bar{\nu}_e$ neutrino events (with $E_{e^+} > 18$ MeV) in a 5 Mton detector as a function of the supernova distance. We assume a Fermi-Dirac $\bar{\nu}_e$ spectrum with an average energy of 15 MeV and a total energy of $5\times 10^{52}$ erg. Optical supernovae observed in the last 10 years are noted at their distances; those in red indicate multiple supernovae in the same galaxy.[]{data-label="fig:yields"}](yields){width="3.25in"}
The challenges of supernova neutrino burst detection are that Milky Way sources are rare and that more common distant sources have little flux. The 32 kton Super-Kamiokande (SK) detector is large enough to detect with high statistics a burst from anywhere in the Milky Way or its dwarf companions, but the expected supernova rate is only 1–3 per century, and there is no remedy but patience. Proposed underground detectors [@Nakamura:2003hk; @Jung:1999jq; @deBellefon:2006vq; @Autiero:2007zj], like the $\sim 0.5$-Mton Hyper-Kamiokande (HK), could detect one or two neutrinos from supernovae in some nearby galaxies [@Ando:2005ka]. As shown in Fig. \[fig:yields\], to robustly detect all neutrino bursts within several Mpc, where recent observations show the supernova rate to be at least 1 (2) per year within $\sim 6$ $(10)$ Mpc, requires scaling up the detector mass of SK by about two orders of magnitude, to at least $\sim 5$ Mton.
A recent proposal for the Deep-TITAND detector shows in detail how it might be feasible to build such a large detector in a cost-effective way [@Suzuki:2001rb; @Suzuki]. To avoid the high costs and slow pace of excavating caverns underground, this proposal conceives of a modular 5 Mton undersea detector that could be constructed quickly. Key motivations for such a detector are superior exposure for studies of proton decay, long-baseline neutrinos, and atmospheric neutrinos. To reduce costs, the detector would be built with a shallower depth and lower photomultiplier coverage than SK; these decisions would sacrifice the low-energy capabilities for all but burst detection.
There is a compelling case for a 5 Mton detector based on supernova neutrino detection alone, and the science benefits that we discuss below will hold even if a Milky Way supernova is detected first. On an annual basis, one would expect a burst of $\gtrsim\,$3 events, and every several years, a burst comparable to the $\sim 10$ events from SN 1987A detected by each of Kamiokande-II [@Hirata:1987hu; @Hirata:1988ad] or IMB [@Bionta:1987qt; @Bratton:1988ww]. Indeed, a 5 Mton supernova neutrino detector is one of the most promising prospects for developing an observatory for non-photon time-domain astrophysics. There are no serious uncertainties in the number of sources or the strengths of their signals. The minimal size of the required detector is known now, and it is not out of reach, with costs comparable to those of existing or near-term high-energy neutrino and gravitational-wave observatories.
Before elaborating on details concerning detection rates, we will begin by exploring how the data obtained from multiple neutrino bursts would transform the way that we consider questions about supernovae; these considerations are a major part of our new results. We will then examine recent developments concerning the rate and properties of supernovae observed in the nearby universe. This will lead into our discussion of the detector properties required to measure neutrino bursts from these supernovae, focusing on the Deep-TITAND proposal [@Suzuki:2001rb; @Suzuki], and the quantitative neutrino yields expected.
Discovery Prospects {#prospects}
===================
Our primary interest is in the scientific impact of measuring neutrino “mini-bursts,” detectable signals of 3 or more events within 10 seconds (the observed duration of the SN 1987A neutrino burst), from many supernovae in the nearby universe. As we will show in Sections \[rate\] and \[detection\], the minimum detector size for achieving this purpose is about 5 Mton. We emphasize in advance that such signals can be separated from backgrounds even at shallow depth, so that the presence of a core collapse can be deduced independently of photon-based observations. Additionally, for nearby transients identified through photons, a non-detection in neutrinos means that a conventional supernova neutrino flux was not present. These facts have new and profound implications.
While our principal focus is thus on individual objects, the aggregate data would, of course, also be useful. For science goals that require a large number of accumulated events, the most certain signal is the Diffuse Supernova Neutrino Background (DSNB), which is a steady flux arising from all core-collapse supernovae in the universe (e.g., Ref. [@DSNB] and references therein). In the proposed $\sim 0.5$ Mton HK detector, with added gadolinium to reduce backgrounds by neutron tagging [@Beacom:2003nk], $\sim\,$50–100 DSNB signal events with little background could be collected per year. The ratio of DSNB signal to detector background in Deep-TITAND would be the same as in the background-dominated SK search of Ref. [@Malek:2002ns], which set an upper limit. To reach the smallest plausible DSNB signals, one needs an improvement of about a factor 3 in signal sensitivity and thus a factor of about 10 in exposure. After four years, as in the SK search, the Deep-TITAND exposure would be about 100 times larger than that of Ref. [@Malek:2002ns], thus allowing a robust detection of the DSNB flux. (To measure the spectrum well, HK with gadolinium would be needed.) The fortuitous occurrence of a supernova in the Milky Way, or even Andromeda (M31) or Triangulum (M33), would also give a very large number of neutrino events (see Table \[tab:detectors\]). The physics prospects associated with such yields from a single supernova have been discussed for underground detectors at the 0.5 Mton scale [@MTnu].
  ---------------------------------------------------------------------
  Distance (galaxies)     32 kton      0.5 Mton     5 Mton
                          (SK)         (HK)         (Deep-TITAND)
  ---------------------- ------------ ------------ --------------------
  10 kpc (Milky Way)      $10^4$       $10^5$       $10^6$
  1 Mpc (M31, M33)        $1$          $10$         $10^2$
  3 Mpc (M81, M82)        $10^{-1}$    $1$          $10$
  ---------------------------------------------------------------------
: Approximate neutrino event yields for core-collapse supernovae from representative distances and galaxies, as seen in various detectors with assumed fiducial volumes. Super-Kamiokande is operating, and Hyper-Kamiokande and Deep-TITAND are proposed.[]{data-label="tab:detectors"}
Probing the core collapse mechanism
-----------------------------------
The optical signals of supposed core-collapse supernovae show great diversity [@Zwicky(1940); @Filippenko:1997ub], presumably reflecting the wide range of masses and other properties of the massive progenitor stars. In contrast, the neutrino signals, which depend on the formation of a $\sim 1.4 M_\odot$ neutron star, are presumed to be much more uniform. However, since we have only observed neutrinos from SN 1987A, it remains to be tested whether all core-collapse supernovae do indeed have comparable neutrino emission. The total energy emitted in neutrinos is $\simeq 3 G M^2 / 5 R$, and some variation is expected in the mass $M$ and radius $R$ of the neutron star that is formed, though proportionally much less than in the progenitor stars.
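As a rough check of this energy scale, the Newtonian binding energy $\simeq 3 G M^2 / 5 R$ can be evaluated directly; a minimal sketch, in which the neutron-star mass and radius (1.4 $M_\odot$, 12 km) are illustrative assumptions rather than values from this work:

```python
# Sketch: Newtonian binding energy E ~ 3GM^2/(5R) released as neutrinos.
# The mass and radius below (1.4 M_sun, 12 km) are illustrative assumptions.
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33   # solar mass, g

def binding_energy_erg(mass_msun: float, radius_km: float) -> float:
    """3GM^2/(5R) for a uniform sphere, in erg."""
    m = mass_msun * M_SUN
    r = radius_km * 1.0e5  # km -> cm
    return 3.0 * G * m * m / (5.0 * r)

E = binding_energy_erg(1.4, 12.0)
print(f"{E:.1e} erg")  # of order 10^53 erg, the scale quoted for SN 1987A
```

A modestly smaller radius or larger mass shifts the answer proportionally, which is the origin of the expected (small) supernova-to-supernova variation discussed above.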
With at least $\sim 1$ nearby supernova per year, a wide variety of supernovae can be probed, including less common types. For example, the observational Types Ib and Ic are now believed to be powered by core collapse, despite their original spectroscopic classification, which grouped them with Type Ia supernovae, which are thought to be powered by a thermonuclear runaway without significant neutrino emission. While each of the Types Ib/Ic and Ia is only several times less frequent than Type II, some of each should occur nearby within a reasonable time, so that the commonality of the Type II/Ib/Ic explosion mechanism can be tested.
While the nature of the explosion in the above supernova types is very likely as expected, there are other bright transients observed for which the basic mechanism is much more controversial. For these events, the detection or non-detection of neutrinos could decisively settle debates that are hard to resolve with only optical data. One type of so-called “supernova impostor” is thought to be the outburst of a Luminous Blue Variable (LBV) [@Humphreys], which seem to require a stellar mass of $M_* \gtrsim 20\,M_\odot$. Since this type of outburst affects only the outer layers, with the star remaining afterward, there should be no detectable neutrino emission.
There are several recent examples in nearby galaxies where neutrino observations could have been conclusive, including the likely LBV outburst SN 2002kg in NGC 2403 [@VanDyk:2006ay]. SN 2008S in NGC 6946 [@Prieto:2008bw] and a mysterious optical transient in NGC 300 [@Thompson08] warrant further discussion for another reason. In neither case was a progenitor seen in deep, pre-explosion optical images; however, both were revealed as relatively low-mass stars ($M_* \sim 10\,M_\odot$) by mid-infrared observations made years before the explosions. This suggests that they were obscured by dust expelled from their envelopes, a possible signature of stars dying with cores composed of O-Ne-Mg instead of iron [@Prieto:2008bw; @Thompson08]. As we will address in detail later, these events were sufficiently near for a 5 Mton detector to have identified them as authentic supernovae or impostors.
Measuring the total core collapse rate
--------------------------------------
In the previous subsection, we implicitly considered supernovae for which the optical display was seen. However, as we will calculate, the detection of $\ge 3$ neutrinos is sufficient to establish that a core collapse occurred, including those events not later visible to telescopes. This provides a means of measuring the total rate of true core collapses in the nearby universe. A successful supernova may be invisible simply if it is in a very dusty galaxy, of which there are examples quite nearby, such as NGC 253 and M82. These are supposed to have very high supernova rates, perhaps as frequent as one per decade each, as deduced from radio observations of the number of young supernova remnants [@Muxlow]. However, only a very few supernovae have been seen [@CBAT].
More interestingly, it remains unknown if, as in numerical models of supernova explosions, some core collapses are simply not successful at producing optical supernovae. This can occur if the outgoing shock is not sufficiently energetic to eject the envelope of the progenitor star, in which case one expects the prompt formation of a black hole with very little optical emission [@Heger:2002by]. Indirect evidence for such events follows from a deficit of high-mass supernova progenitors compared to expectations from theory [@Kochanek:2008mp; @Smartt:2008zd], as well as from the existence of black holes recently discovered to have $M_{\rm BH}\gtrsim 15\,M_\odot$ [@Orosz:2007ng]. One way to probe this exotic outcome would be to simply watch the star disappear [@Kochanek:2008mp]. However, a detectable burst of neutrinos should be emitted before the black hole forms (and typically, if the duration of the emission is shorter, the luminosity is higher) [@Burrows:1988ba; @Beacom:2000qy; @Sumiyoshi:2008zw]. Taken together, these would be a dramatic and irrefutable signal of an otherwise invisible event. While the rate of prompt black hole formation probably cannot exceed the visible supernova rate without violating constraints on the DSNB, reasonable estimates indicate that up to $\gtrsim 20\%$ of core collapses may have this fate [@Kochanek:2008mp].
Testing the neutrino signal
---------------------------
By measuring neutrinos from many supernovae, the deduced energy spectra and time profiles could be compared to each other and to theory. In most cases, only several events would be detected, but this is enough to be useful. The highest neutrino energies range up to $\simeq 50$ MeV. The thermal nature of the neutrino spectrum makes it relatively narrow, and since it is falling exponentially at high energies, even a small number of events can help determine the temperature. Recall that for SN 1987A, the Kamiokande-II and IMB detectors collected only $\sim 10$ events each [@Hirata:1987hu; @Bionta:1987qt], but these data strongly restrict the details of the collapsed core.
The time profile is thought to rise quickly, over perhaps at most 0.1 s, and then decline over several seconds, as seen for SN 1987A. The neutrino events collected would most likely be at the early peak of the emission, and hence the most relevant for the question of whether heating by the emergent neutrino flux is adequate for shock revival [@Bethe:1984ux; @Thompson:2002mw; @Murphy:2008dw] or whether $\nu$-$\nu$ many-body effects are important [@nu-nu].
Over time, as many supernovae are detected, the average energy spectrum and time profile will be built up. (For the time profile, there will be some uncertainties in the start times.) If there are large variations from one supernova to the next, then these average quantities will ultimately provide a more useful template for comparison than the theoretical results that must be used at present. If there is no evidence for significant variations between supernovae, then the accumulated data will be equivalent to having detected one supernova with many events. It is quite likely that such a detector would observe a supernova in one of the Milky Way, M31, or M33; the high-statistics yield from these would also provide a point of comparison. Taken together, all of these data will provide new and exacting tests of how supernovae work.
With enough accumulated events, it is expected that neutrino reactions besides the dominant inverse beta decay process will be present in the data. One oddity still remaining from SN 1987A is that the first event in Kamiokande-II seems to be due to $\nu_e + e^-\rightarrow \nu_e + e^-$ scattering and points back to the supernova [@Hirata:1988ad], which is improbable based upon standard expectations [@Beacom:2006]. This can be tested, however, and if it turns out to be ubiquitous, it could be exploited in determining the directionality of the larger future bursts without optical signals, as the inverse beta decay signal is not directional [@Vogel:1999zy].
Since Earth is transparent to supernova neutrinos, the whole sky can be monitored at once. For neutrinos that pass through Earth, particularly those which cross the core, matter-enhanced neutrino mixing can significantly affect the spectrum relative to those which do not [@earth]. Dividing the accumulated spectra appropriately based on optical detections, this would allow a new test of neutrino mixing, sensitive to the sense of the neutrino mass hierarchy. Detecting neutrinos from distant sources would also allow tests of neutrino decay [@decay], the equivalence principle [@equivalence], and other exotic possibilities [@exotic].
Revealing other transient signals
---------------------------------
Detection of a neutrino burst means detection of the instant of core collapse, with a precision of $\sim 1$ second determined by the sampling of the peak of the $\simeq 10$ second time profile. This would provide a much smaller time window in which to search for gravitational wave signals [@LIGO; @GW] from core-collapse supernovae; otherwise, one must rely on the optical signal of the supernova, which might optimistically be determined to a day ($\sim 10^5$ seconds). This is important, since the gravitational-wave signal remains quite uncertain, making searches more difficult. Knowing the instant of core collapse would also be useful for searches for high-energy neutrinos from possible choked jets that do not reach the surface of the star [@Jets], where again the timing information can be used to reduce backgrounds and improve sensitivity.
Once core collapse occurs, the outward appearance of the star initially remains unchanged. Knowing that a signal was imminent would give unprecedented advance warning that photons should soon be on the way, allowing searches to commence for the elusive UV/X-ray signal of supernova shock breakout [@SBO] and also the early supernova light curve. Those signals are expected to emerge within hours and days, respectively. While the neutrino signal is likely not directional, the number of events detected will provide constraints for triggered searches [@Kistler].
Finally, it is possible that such a large detector would find not only core-collapse supernovae in nearby galaxies, but also other types of transients that are presently unknown. In the Milky Way, there would be sensitivity to any transient with a supernova-like neutrino signal, as long as its overall strength is at least $\sim 10^{-6}$ as large as that for a supernova. To be detectable, the key requirement is a $\gtrsim 15$ MeV $\bar{\nu}_e$ component.
![Estimates of the core-collapse supernova rate in the nearby universe, based on that expected from the optical luminosities of known galaxies (line) and supernovae observed within the last decade (bins). Note that SN 2002kg is a likely LBV outburst, while SN 2008S and the NGC 300 transient are of unusual origin. These estimates are all likely to be incomplete.[]{data-label="fig:snrates"}](snrates){width="3.25in"}
Nearby Supernova Rate {#rate}
=====================
Over the past decade, there has been rapid growth in the level of interest among astronomers in measuring the properties of core-collapse supernovae. There is also a renewed interest in completely characterizing the galaxies in the nearby universe, within 10 Mpc. In nearby galaxies, both amateurs and automated surveys (e.g., KAIT [@Li:1999sd]) are finding many supernovae. For these, archival searches have revealed pre-explosion images of about a dozen supernova progenitor stars, allowing a better understanding of which types of massive stars lead to which kinds of core-collapse supernovae (e.g., [@Prieto:2008bw; @Smartt:2008zd; @Li07; @GalYam:2006iy]).
Figure \[fig:snrates\] shows the expected rate of core-collapse supernovae in the nearby universe (dashed line) calculated using the galaxy catalog of Ref. [@Karachentsev:2004dx] (designed to be $\sim$70–80% complete up to 8 Mpc), with a conversion from $B$-band optical luminosity to supernova rate from Ref. [@Cappellaro:1999qy]. The effects of clustering and of incompleteness at large distances can clearly be seen, since the histogram would rise as the distance squared for a smooth universe of identical galaxies. Ultimately, a more accurate result could be obtained by combining the information from star-formation rate measurements in the ultraviolet [@Salim:2004dg], H$\alpha$ [@Halpha], and infrared [@Kennicutt:2003dc], likely leading to a larger prediction for the supernova rates.
Also displayed in Fig. \[fig:snrates\] is the rate deduced from supernovae discovered in this volume in the last 10 years [@CBAT], with distances primarily from Ref. [@Karachentsev:2004dx] (when available; otherwise from [@WEBdist]). While the observed rate is already $\sim 2$ times larger than the above calculation, even this estimate is likely incomplete, as supernova surveys under-sample small galaxies and the Southern hemisphere. As previously mentioned, supernovae with little or no optical signal, e.g., due to direct black hole formation or dust obscuration, would also have been missed. This is particularly important for nearby dusty starburst galaxies with large expected, but low observed, supernova rates, like NGC 253 and M82.
Distance measurements of nearby galaxies also stand to be improved. For example, at the largest distances, SN 1999em, SN 1999ev, SN 2002bu, and SN 2007gr are probably not all truly within 10 Mpc, as some distance estimates put them outside. We emphasize that their inclusion or not does not affect our approximate supernova rates, and barely matters for the neutrino bursts of sufficient multiplicity, which are dominantly from closer supernovae. It would be very helpful to refine distance measurements, not just for star formation/supernova rate estimates, but also to determine the absolute neutrino luminosities once a supernova has been detected.
Overall, there is a strong case that the core-collapse supernova rate within $\sim $ 6 (10) Mpc is at least 1 (2) per year. This can be compared to the estimated Milky Way rate of $2 \pm 1$ per century (see Ref. [@Diehl:2006cf] and references therein), with Poisson probabilities ultimately determining the odds of occurrence, as shown in Fig. \[fig:forecast\].
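The Poisson occurrence probabilities shown in Fig. \[fig:forecast\] follow from $P(\geq 1) = 1 - e^{-RT}$; a minimal sketch, assuming the quoted Galactic rate of $2 \pm 1$ per century and an illustrative 20-year detector lifetime:

```python
import math

def prob_at_least_one(rate_per_century: float, years: float) -> float:
    """Poisson probability of >= 1 Galactic supernova in a given time span."""
    mu = (rate_per_century / 100.0) * years
    return 1.0 - math.exp(-mu)

# Quoted Milky Way rate: 2 +/- 1 per century; 20-yr span is an assumption.
for rate in (1.0, 2.0, 3.0):
    print(f"{rate:.0f}/century over 20 yr: {prob_at_least_one(rate, 20.0):.2f}")
```

Even at the central rate, the chance of a Galactic burst over a realistic detector lifetime is only about one in three, which is the quantitative case for reaching the $\sim$ 1 per year extragalactic rate instead.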
![Probabilities for one or more supernovae in the Milky Way over time spans relevant for the lifetimes of large neutrino detectors, depending on the assumed supernova rate.[]{data-label="fig:forecast"}](forecast){width="3.25in"}
Neutrino Burst Detection {#detection}
========================
A goal of measuring supernova neutrino “mini-bursts” from galaxies at a few Mpc necessitates a large detector, roughly $\sim$100 times the size of SK. We focus on the Deep-TITAND proposal for a 5 Mton (fiducial volume) enclosed water-Čerenkov detector [@Suzuki:2001rb; @Suzuki]. The detector would be constructed in modules sized by Čerenkov light transparency and engineering requirements. We assume a photomultiplier coverage of $20\%$, similar to that of SK-II (half that of the original SK-I and the rebuilt SK-III). As in SK, the detection efficiency at the energies considered here would be nearly unity.
The backgrounds present in deep detectors have been well-characterized by SK and other experiments. Deep-TITAND is proposed to be at a relatively shallow depth of 1000 meters of seawater, which would increase the downgoing cosmic ray muon rate per unit area by a factor $\simeq 100$ compared to SK, which is at a depth of 2700 meters water equivalent. A nearly perfect efficiency for identifying cosmic ray muons in the outer veto or the detector itself is required. This was achieved in SK, where the only untagged muons decaying in the detector were those produced inside by atmospheric neutrinos [@Malek:2002ns]. Simple cylinder cuts around cosmic ray muon tracks would veto all subsequent muon decays while introducing only a negligible detector deadtime fraction.
Low-energy backgrounds include natural radioactivities, solar neutrinos, photomultiplier noise, and beta decays from nuclei produced following spallation by cosmic ray muons. Of these, only the last is depth-dependent, and this would be much larger than in SK (a factor $\simeq 30$ for the higher muon rate per area but lower muon average energy, and a factor $\simeq 30$ for the larger detector area). The high muon rate means that it would not be possible to use the cylinder cuts employed in SK to reduce spallation beta decays without saturating the deadtime fraction (note that these beta decays have lifetimes more than $10^6$ times longer than the muon lifetime). At low energies, the above background rates are large, but the spectrum falls steeply with increasing energy, essentially truncating near 18 MeV [@Malek:2002ns; @Ikeda:2007sa].
This allows for a significant simplification and reduction in the background rate by considering only events with a reconstructed energy greater than 18 MeV (a neutrino energy of 19.3 MeV). Which events to reconstruct would be determined by a simple cut on the number of hit photomultipliers, just as in SK, but with a higher threshold. The backgrounds above this cut are due to atmospheric neutrinos, and thus the rates scale with the detector volume but are independent of depth. The dominant background contribution is from the decays of non-relativistic muons produced by atmospheric neutrinos in the detector, i.e., the so-called invisible muons. The background rate in 18–60 MeV in SK is about 0.2 events/day, of which the energy-resolution smeared tail of the low-energy background is only a minor component [@Malek:2002ns; @Ikeda:2007sa].
Scaling this rate to a 5 Mton detector mass ($\sim 5 \times 10^{-4}$ s$^{-1}$) and considering an analysis window of 10 s duration (comparable to the SN 1987A neutrino signal) allows calculation of the rate of accidental coincidences [@Ikeda:2007sa]. For $N = 3$ events, this corresponds to only about once every five years, and when such a coincidence occurs, examination of the energy and timing of the events will allow further discrimination between signal and background (a subsequent optical supernova would confirm a signal, of course). For $N \ge 4$, accidental coincidences are exceedingly rare ($\sim\,$1 per 3000 years); we therefore require at least $N = 3$ signal events to claim detection of a supernova (a somewhat greater requirement than in Ref. [@Ando:2005ka], where a smaller detector was assumed). Since the backgrounds observed by SK in this energy range are from atmospheric neutrinos, we expect no correlated clusters of background events.
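The accidental-coincidence estimate can be sketched as follows: each background event opens a 10 s window, and a fake $N$-fold burst requires $N-1$ further accidentals inside it. This sliding-window approximation, using the background rate quoted above, reproduces the orders of magnitude in the text:

```python
import math

B_RATE = 5.0e-4   # background rate above 18 MeV in 5 Mton, s^-1 (from the text)
WINDOW = 10.0     # burst search window, s
YEAR = 3.156e7    # seconds per year

def poisson_ge(k: int, mu: float) -> float:
    """P(>= k) for a Poisson mean mu."""
    return 1.0 - sum(mu**n * math.exp(-mu) / math.factorial(n) for n in range(k))

def accidental_bursts_per_year(n: int) -> float:
    """Each background event opens a window; a fake n-fold burst needs
    >= n-1 further accidentals inside it (sliding-window approximation)."""
    return B_RATE * YEAR * poisson_ge(n - 1, B_RATE * WINDOW)

print(f"N=3 fakes: one per {1.0 / accidental_bursts_per_year(3):.0f} yr")
print(f"N=4 fakes: one per {1.0 / accidental_bursts_per_year(4):.0f} yr")
```

The steep $\mu^{N-1}$ suppression is why raising the multiplicity cut from 3 to 4 buys roughly three orders of magnitude in background rejection.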
To estimate detection prospects, for the $\bar{\nu}_e$ flavor we assume a Fermi-Dirac spectrum with an average energy of 15 MeV and a total energy of $5\times 10^{52}$ erg. The dominant interaction for the neutrino signal is inverse-beta decay, $\bar{\nu}_e + p\rightarrow n + e^+$, where $E_{e^+} \simeq E_{\bar{\nu}_e} - 1.3$ MeV and the positron direction is nearly isotropic [@Vogel:1999zy]. Combining the emission spectrum, cross section, and number of free target protons in 5 Mton of water, we find that the average number of neutrino events (for $E_{e^+} > 18$ MeV) from a burst at distance $D$ is $$\mu (D; E_{e^+} > 18 ~ \mathrm{MeV}) \simeq 5 \left(\frac{D}{3.9 ~ \mathrm{Mpc}}\right)^{-2}.$$ This is the key normalization for the supernova signal. In Table \[tab:yields\], we list recent nearby supernovae within 6 Mpc, with type, host galaxy name, distance, and the expected neutrino yields $\mu$ in a 5 Mton detector. As can be seen in Fig. 3 of Ref. [@Ando:2005ka], our $E_{e^+} > 18$ MeV threshold still allows us to detect $\sim 70\%$ of the total supernova signal.
The probability to detect $\geq N$ neutrino events from a given core collapse is then $$P( \geq N; D) = \sum_{n = N}^{\infty} P_n [\mu (D)] = \sum_{n=N}^\infty \frac{\mu^n(D)}{n!}e^{-\mu (D)},$$ where $P_n(\mu)$ represents the Poisson probability. $P(\geq N; D)$ is shown in Fig. \[fig:yields\] as a function of $D$ for several values of $N$. From this figure, we see, for example, that from a 4 Mpc supernova, we have an excellent chance ($\gtrsim 90\%$) to get more than 3 neutrino events. For 8 Mpc, like those shown in Fig. \[fig:snrates\], there is still a $\lesssim 10\%$ chance to get $\ge 3$ events.
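The yield normalization and the Poisson sum above can be evaluated directly; a minimal sketch (the function names are illustrative):

```python
import math

def expected_events(distance_mpc: float) -> float:
    """Mean nu-bar_e yield (E_e+ > 18 MeV) in 5 Mton: mu = 5 (D/3.9 Mpc)^-2."""
    return 5.0 * (distance_mpc / 3.9) ** -2

def prob_detect(distance_mpc: float, n_min: int = 3) -> float:
    """Poisson probability of >= n_min events from a single burst."""
    mu = expected_events(distance_mpc)
    return 1.0 - sum(mu**n * math.exp(-mu) / math.factorial(n)
                     for n in range(n_min))

print(f"P(>=3) at 4 Mpc: {prob_detect(4.0):.2f}")  # close to unity
print(f"P(>=3) at 8 Mpc: {prob_detect(8.0):.2f}")  # of order 10%
```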
  SN       Type      Host             D \[Mpc\]   $\nu$ events
  -------- --------- ---------------- ----------- --------------
  2002hh   II-P      NGC 6946         5.6         2.4
  2002kg   IIn/LBV   NGC 2403         3.3         6.8
  2004am   II-P      NGC 3034 (M82)   3.53        5.9
  2004dj   II-P      NGC 2403         3.3         6.8
  2004et   II-P      NGC 6946         5.6         2.4
  2005af   II-P      NGC 4945         3.6         5.7
  2008S    IIn       NGC 6946         5.6         2.4
  2008bk   II-P      NGC 7793         3.91        4.8
  2008?    II?       NGC 300          2.15        16.0
  -------- --------- ---------------- ----------- --------------
: Recent core-collapse supernova candidates within 6 Mpc, with their expected neutrino event yields ($E_{e^+} > 18$ MeV) in a 5 Mton detector.[]{data-label="tab:yields"}
For a particular supernova rate, $R_{{\rm SN},i}$, we obtain the expected total rate of $N$-tuplet detections from distances $D_i$ as $$R_{N,{\rm burst}} = \sum_i R_{{\rm SN},i} P_N [\mu (D_i)].$$ In Fig. \[fig:multiplicities\], we show this as an annual rate, $R_{N,{\rm burst}}$, plotted versus $N$. For the supernova rate $R_{{\rm SN},i}$, we have adopted three different models: (i) all supernova candidates shown in Fig. \[fig:snrates\] (20 in total); (ii) same as (i), except excluding SN 2002kg, SN 2008S, and the NGC 300 transient as exceptional events (17 in total); (iii) the catalog-based rate estimate (line in Fig. \[fig:snrates\]). As the detection criterion is $N \geq 3$, the annual rate of detectable mini-bursts is obtained by summing $R_{N,{\rm burst}}$ for $N \ge 3$, which yields 0.8, 0.6, and 0.4 supernovae per year, for supernova rate models (i), (ii), and (iii), respectively. Adding supernovae from beyond 10 Mpc would not change the rate of $N \ge 3$ multiplets, only increasing the number of unremarkable lower-$N$ multiplets (which, as shown, are already dominated by supernovae in the 8–10 Mpc range).
The total neutrino event counts, $N_{\rm total}$, can be obtained from $R_{N,{\rm burst}}$ by $$N_{\rm total} = \sum_{N = 3}^{\infty} N R_{N,{\rm burst}},$$ which are 48, 31, and 22 per decade, for rate estimates (i), (ii), and (iii), respectively. Since each burst is triggered with $E_{e^+} > 18$ MeV events, one would also look for somewhat lower-energy events in the same time window, potentially raising the total yield by $\simeq 20\%$.
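The two sums above can be sketched as follows; the list `mus` of per-core-collapse Poisson means is a placeholder, to be filled with one $\mu(D_i)$ per expected supernova (weighted by the adopted rate model):

```python
from math import exp, factorial

def pmf(n, m):
    """Poisson probability of exactly n events with mean m."""
    return m**n * exp(-m) / factorial(n)

def burst_stats(mus, N_min=3, n_max=100):
    """Expected number of >= N_min-fold bursts and total events in them,
    given per-core-collapse Poisson means `mus`."""
    bursts = sum(1.0 - sum(pmf(n, m) for n in range(N_min)) for m in mus)
    events = sum(n * pmf(n, m) for m in mus for n in range(N_min, n_max))
    return bursts, events
```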
![Frequency of neutrino mini-bursts expected with a 5 Mton detector. The bins with $N = 3$ or more can be used for burst detection because the background rate is small enough. Three different estimates of the supernova rate are shown, as labeled.[]{data-label="fig:multiplicities"}](multiplicities){width="3.25in"}
Conclusions
===========
The $\sim 10$ neutrino events associated with SN 1987A in each of the Kamiokande-II and IMB detectors [@Hirata:1987hu; @Bionta:1987qt] were the first and, thus far, only detection of neutrinos from a supernova. This detection showed that we can learn a great deal even from a small number of events, and revealed that an immense amount of energy is released in the form of neutrinos ($> 10^{53}$ erg) during a core collapse. Measuring “mini-bursts” of neutrino events from multiple supernovae would allow for the study of the core-collapse mechanism of a diverse range of stellar deaths, including optically-dark bursts that appear to be relatively common [@Kochanek:2008mp; @Smartt:2008zd].
This would be made possible by a $\sim\,$5 Mton scale water Čerenkov detector [@Suzuki:2001rb; @Suzuki], which has the special advantages of being able to trigger on supernovae using neutrinos alone, and of guaranteeing detection if neutrinos are produced with the expected flux. Moreover, for burst detection, a relatively high low-energy background rate can be tolerated, significantly decreasing the required detector depth, so that construction could be relatively quick and inexpensive. Our conservative estimates show that the occurrence rate of mini-bursts that give $\ge 3$ neutrino events is likely $\sim$1 yr$^{-1}$ or higher.
In conclusion, we wish to reiterate that, even if a supernova occurs in the Milky Way tomorrow, the important problems discussed in Section \[prospects\] will remain unresolved, and can only be addressed by a suitable “census” of core collapses in the nearby universe. The possibilities mentioned here almost certainly do not exhaust the scientific potential of such an instrument. As is now almost commonplace in the business of observing supernovae with photons, it would be surprising [*not*]{} to find new and unexpected phenomena.
[**Acknowledgments:**]{} We thank Shunsaku Horiuchi, Chris Kochanek, Y. Ohbayashi, José Prieto, Stephen Smartt, Michael Smy, Kris Stanek, Todd Thompson, and Mark Vagins for helpful discussions. This work was supported by Department of Energy grant DE-FG02-91ER40690 (MDK); National Science Foundation CAREER grant PHY-0547102 to JFB (HY and JFB); and by the Sherman Fairchild Foundation at Caltech (SA).
[99]{}
W. Baade and F. Zwicky, Proc. Nat. Acad. Sci. [**20**]{}, 254 (1934); [**20**]{}, 259 (1934). R. Buras, M. Rampp, H. T. Janka and K. Kifonidis, Phys. Rev. Lett. [**90**]{}, 241101 (2003). A. Burrows, [*et al.*]{}, Astrophys. J. [**640**]{}, 878 (2006). A. Mezzacappa, Ann. Rev. Nucl. Part. Sci. [**55**]{} (2005) 467. K. Nakamura, Int. J. Mod. Phys. A [**18**]{}, 4053 (2003). C. K. Jung, AIP Conf. Proc. [**533**]{}, 29 (2000). A. de Bellefon [*et al.*]{}, hep-ex/0607026. D. Autiero [*et al.*]{}, JCAP [**0711**]{}, 011 (2007). S. Ando, J. F. Beacom and H. Yuksel, Phys. Rev. Lett. [**95**]{}, 171101 (2005). Y. Suzuki [*et al.*]{} \[TITAND WG\], hep-ex/0110005. Y. Suzuki, “Future large volume water Cherenkov detectors,” talk at Twenty Years After SN1987A, http://sn1987a-20th.physics.uci.edu/.
K. Hirata [*et al.*]{}, Phys. Rev. Lett. [**58**]{}, 1490 (1987). K. S. Hirata [*et al.*]{}, Phys. Rev. D [**38**]{}, 448 (1988). R. M. Bionta [*et al.*]{}, Phys. Rev. Lett. [**58**]{}, 1494 (1987). C. B. Bratton [*et al.*]{}, Phys. Rev. D [**37**]{}, 3361 (1988). D. H. Hartmann and S. E. Woosley, Astropart. Phys. [**7**]{}, 137 (1997); S. Ando, K. Sato and T. Totani, Astropart. Phys. [**18**]{}, 307 (2003); L. E. Strigari, M. Kaplinghat, G. Steigman and T. P. Walker, JCAP [**0403**]{}, 007 (2004); C. Lunardini, Astropart. Phys. [**26**]{}, 190 (2006); H. Yuksel and J. F. Beacom, Phys. Rev. D [**76**]{}, 083007 (2007); C. Volpe and J. Welzel, arXiv:0711.3237. J. F. Beacom and M. R. Vagins, Phys. Rev. Lett. [**93**]{}, 171101 (2004). M. Malek [*et al.*]{} \[Super-Kamiokande Collaboration\], Phys. Rev. Lett. [**90**]{}, 061101 (2003). R. Tomas, M. Kachelriess, G. Raffelt, A. Dighe, H. T. Janka and L. Scheck, JCAP [**0409**]{}, 015 (2004); G. L. Fogli, E. Lisi, A. Mirizzi and D. Montanino, JCAP [**0504**]{}, 002 (2005); S. Choubey, N. P. Harries and G. G. Ross, Phys. Rev. D [**76**]{}, 073013 (2007); H. Minakata, H. Nunokawa, R. Tomas and J. W. F. Valle, arXiv:0802.1489. F. Zwicky, Rev. Mod. Phys. [**12**]{}, 66 (1940). A. V. Filippenko, Ann. Rev. Astron. Astrophys. [**35**]{}, 309 (1997). R. M. Humphreys and K. Davidson, Publ. Astron. Soc. Pac. [**106**]{}, 1025 (1994). S. D. Van Dyk [*et al.*]{}, astro-ph/0603025. J. L. Prieto [*et al.*]{}, Astrophys. J. [**681**]{}, L9 (2008). T. A. Thompson [*et al.*]{}, arXiv:0809.0510. T. W. B. Muxlow [*et al.*]{}, Mon. Not. Roy. Astron. Soc. [**266**]{}, 455 (1994). Central Bureau for Astronomical Telegrams (CBAT), http://www.cfa.harvard.edu/iau/lists/Supernovae.html
A. Heger, C. L. Fryer, S. E. Woosley, N. Langer and D. H. Hartmann, Astrophys. J. [**591**]{}, 288 (2003). C. S. Kochanek [*et al.*]{}, Astrophys. J. [**684**]{}, 1336 (2008). S. J. Smartt [*et al.*]{}, arXiv:0809.0403. J. A. Orosz [*et al.*]{}, Nature [**449**]{}, 872 (2007); A. H. Prestwich [*et al.*]{}, Astrophys. J. [**669**]{}, L21 (2007); J. M. Silverman and A. V. Filippenko, Astrophys. J. [**678**]{}, L17 (2008). A. Burrows, Astrophys. J. [**334**]{}, 891 (1988). J. F. Beacom, R. N. Boyd and A. Mezzacappa, Phys. Rev. D [**63**]{}, 073011 (2001). K. Sumiyoshi, S. Yamada and H. Suzuki, arXiv:0808.0384. H. A. Bethe and J. R. Wilson, Astrophys. J. [**295**]{}, 14 (1985). T. A. Thompson, A. Burrows and P. A. Pinto, Astrophys. J. [**592**]{}, 434 (2003). J. W. Murphy and A. Burrows, arXiv:0805.3345. A. B. Balantekin and H. Yuksel, New J. Phys. [**7**]{}, 51 (2005); G. G. Raffelt and G. Sigl, Phys. Rev. D [**75**]{}, 083002 (2007); G. L. Fogli, E. Lisi, A. Marrone and A. Mirizzi, JCAP [**0712**]{}, 010 (2007); H. Duan, G. M. Fuller, J. Carlson and Y. Z. Qian, Phys. Rev. Lett. [**100**]{}, 021101 (2008); B. Dasgupta and A. Dighe, Phys. Rev. D [**77**]{}, 113002 (2008); S. Chakraborty, S. Choubey, B. Dasgupta and K. Kar, arXiv:0805.3131; J. Gava and C. Volpe, arXiv:0807.3418. J. F. Beacom and L. E. Strigari, Phys. Rev. C [**73**]{}, 035807 (2006). P. Vogel and J. F. Beacom, Phys. Rev. D [**60**]{}, 053003 (1999). C. Lunardini and A. Y. Smirnov, Nucl. Phys. B [**616**]{}, 307 (2001); K. Takahashi and K. Sato, Phys. Rev. D [**66**]{}, 033006 (2002); M. Lindner, T. Ohlsson, R. Tomas and W. Winter, Astropart. Phys. [**19**]{}, 755 (2003); A. Mirizzi, G. G. Raffelt and P. D. Serpico, JCAP [**0605**]{}, 012 (2006); B. Dasgupta, A. Dighe and A. Mirizzi, arXiv:0802.1481. J. A. Frieman, H. E. Haber and K. Freese, Phys. Lett. B [**200**]{}, 115 (1988); M. Lindner, T. Ohlsson and W. Winter, Nucl. Phys. B [**622**]{}, 429 (2002); J. F. Beacom and N. F. Bell, Phys. Rev. 
D [**65**]{}, 113009 (2002); S. Ando, Phys. Rev. D [**70**]{}, 033004 (2004). M. J. Longo, Phys. Rev. Lett. [**60**]{}, 173 (1988); L. M. Krauss and S. Tremaine, Phys. Rev. Lett. [**60**]{}, 176 (1988); M. M. Guzzo, H. Nunokawa and R. Tomas, Astropart. Phys. [**18**]{}, 277 (2002). E. W. Kolb and M. S. Turner, Phys. Rev. D [**36**]{}, 2895 (1987); J. Baker, H. Goldberg, G. Perez and I. Sarcevic, Phys. Rev. D [**76**]{}, 063004 (2007). LIGO Collaboration, arXiv:0711.3041; F. Acernese [*et al.*]{} \[Virgo Collaboration\], Class. Quant. Grav. [**24**]{}, S381 (2007). K. Kotake, S. Yamada and K. Sato, Phys. Rev. D [**68**]{}, 044023 (2003); C. L. Fryer, D. E. Holz and S. A. Hughes, Astrophys. J. [**609**]{}, 288 (2004); C. D. Ott, A. Burrows, L. Dessart and E. Livne, Phys. Rev. Lett. [**96**]{}, 201102 (2006); H. Dimmelmeier, C. D. Ott, H. T. Janka, A. Marek and E. Mueller, Phys. Rev. Lett. [**98**]{}, 251101 (2007). S. Razzaque, P. Meszaros and E. Waxman, Phys. Rev. Lett. [**93**]{}, 181101 (2004) \[Erratum-ibid. [**94**]{}, 109903 (2005)\]; S. Ando and J. F. Beacom, Phys. Rev. Lett. [**95**]{}, 061103 (2005); M. Kowalski and A. Mohr, Astropart. Phys. [**27**]{}, 533 (2007); S. Horiuchi and S. Ando, Phys. Rev. D [**77**]{}, 063007 (2008). S. W. Falk and W. D. Arnett, Astrophys. J. Suppl. [**33**]{}, 515 (1977); R. I. Klein and R. A. Chevalier, Astrophys. J. [**223**]{}, L109 (1978). M. D. Kistler [*et al.*]{}, in preparation.
W. D. Li [*et al.*]{}, AIP Conf. Proc. [**522**]{}, 103 (2000). W. D. Li [*et al.*]{}, Astrophys. J. [**661**]{}, 1013 (2007). A. Gal-Yam [*et al.*]{}, Astrophys. J. [**656**]{}, 372 (2007). I. D. Karachentsev [*et al.*]{}, Astron. J. [**127**]{}, 2031 (2004). E. Cappellaro, R. Evans and M. Turatto, Astron. Astrophys. [**351**]{}, 459 (1999). S. Salim [*et al.*]{}, Astrophys. J. Suppl. [**173**]{}, 267 (2007). P. A. James [*et al.*]{}, arXiv:0802.4421; R. C. Kennicutt [*et al.*]{}, arXiv:0807.2035. R. C. Kennicutt [*et al.*]{}, Publ. Astron. Soc. Pac. [**115**]{}, 928 (2003). R. B. Tully, *Nearby Galaxies Catalog* (Cambridge Univ. Press, 1988); G. Paturel [*et al.*]{}, Astron. Astrophys. [**412**]{}, 45 (2003). R. Diehl [*et al.*]{}, Nature [**439**]{}, 45 (2006). M. Ikeda [*et al.*]{} \[Super-Kamiokande Collaboration\], Astrophys. J. [**669**]{}, 519 (2007).
---
abstract: 'In this paper, we consider the solutions of Einstein gravity in the presence of a generalized Maxwell theory, namely the power Maxwell invariant. First, we investigate the analogy of nonlinear charged black hole solutions with the Van der Waals liquid–gas system in the extended phase space, where the cosmological constant appears as pressure. Then, we plot the isotherm $P$–$V$ diagrams and study the thermodynamics of the AdS black hole in the (grand canonical) canonical ensemble in which the (potential) charge is fixed at infinity. Interestingly, we find that the phase transition occurs in both the canonical and grand canonical ensembles, in contrast to the RN black hole in Maxwell theory, which admits a phase transition only in the canonical ensemble. Moreover, we calculate the critical exponents and find that their values are the same as those in mean field theory. Besides, we find that in the grand canonical ensemble the universal ratio $\frac{P_{c}v_{c}}{T_{c}}$ is independent of the spacetime dimension.'
author:
- 'S. H. Hendi$^{1,2}$[^1] and M. H. Vahidinia$^{1,3}$[^2]'
title: |
Extended phase space thermodynamics and\
$P$–$V$ criticality of black holes with a nonlinear source
---
Introduction
============
Theoretically, one may expect the cosmological constant term to arise from the vacuum expectation value of a quantum field, and hence it can vary. Therefore, it may be considered in the first law of thermodynamics together with its conjugate [@Gibbons1; @BrownPLB1987; @CaldarelliCQG2000]. By this generalization, the cosmological constant and its conjugate can be interpreted as the geometrical pressure and volume of a black object system, respectively. Moreover, this approach leads to an interesting conjecture on a reverse isoperimetric inequality for black holes, in contrast to the Euclidean version of the isoperimetric inequality. Regarding this inequality conjecture, some black hole processes may be restricted [@RayCQG2009; @Gibbons2; @PVpapers].
Furthermore, the extension of the thermodynamic phase space has dramatic effects on the study of the famous phase transition of black holes in AdS space [@MyersPRD1999; @Banerjee:2011au; @Wu2012] and improves the analogy between small/large black holes and the Van der Waals liquid/gas phase transitions. Indeed, the AdS charged black holes exhibit an interesting phase transition with, qualitatively, the same critical behavior as the Van der Waals model [@PVpapers].
Taking into account the above statements, we should note that the charge of the black hole plays a crucial role in this phase transition. Therefore, it is important to know the effects of any modification of the electromagnetic field. Indeed, some characteristic features of the universality class of the phase transition, such as the values of the critical exponents or the universal ratio $\frac{P_{c}v_{c}}{T_{c}}$, may depend on the electromagnetic source or the spacetime dimension. On the other hand, considering the strong electromagnetic field in regions near point-like charges, Dirac suggested that one may have to use a generalized nonlinear Maxwell theory in those regions [@Dirac]. Similar behavior may occur in the vicinity of neutron stars and black objects, so there is also an astrophysical motivation to consider nonlinear electromagnetic fields [@Bialynicka]. In addition, within the framework of quantum electrodynamics, it was shown that quantum corrections lead to nonlinear properties of the vacuum which affect the photon propagation [@Heisenberg; @Delphenich; @Schwinger; @Stehle]. Moreover, the effects of a Born–Infeld (BI) source on the thermodynamics and phase transitions of black holes [@ThermoBI] have been studied. Besides, in the context of AdS/CFT, some authors have considered the role of a BI source in shear viscosity [@Sun08] and holographic superconductors [@AdSCFTBI].
These observations suggest that it is worthwhile to study the effects of nonlinear electrodynamics (NLEDs) on the phase transitions of black holes in the extended phase space. In this direction, the effects of a nonlinear electromagnetic field on static and rotating AdS black holes in the extended phase space have been analyzed [@PVnonlinear]. It has been shown that for BI black holes, one obtains the same qualitative behavior as for RN black holes. Indeed, the BI electromagnetic field does not have any effect on the values of the critical exponents, but it changes the universal ratio $\rho_{c}=\frac{P_{c}v_{c}}{T_{c}}$ [@PVnonlinear].
Although BI theory is a specific model in the context of NLEDs, the recent interest in NLEDs theories is mainly due to their emergence in the low-energy limit of heterotic string theory, or as an effective action accounting for the effects of loop corrections in QED, where quartic corrections of the Maxwell field strength appear [@Kats].
In the last five years, a class of NLEDs has been introduced, the so-called power Maxwell invariant (PMI) field (for more details, see [@PMIpapers1; @PMIpapers2]). The PMI field is significantly richer than the Maxwell field, and in the special case ($s=1$) it reduces to the linear electromagnetic source. The black hole solutions of the Einstein-PMI theory and their interesting thermodynamic and geometric properties have been examined before [@PMIpapers1; @PMIpapers2]. In addition, in the context of the AdS/CFT correspondence, the effects of the PMI source on the strongly coupled dual gauge theory have been investigated [@AdSCFTPMI].
The bulk action of Einstein-PMI gravity has the following form [@PMIpapers2] $$I_{b}=-\frac{1}{16\pi }\int_{M}d^{n+1}x\sqrt{-g}\left( R+\frac{n(n-1)}{l^{2}}%
+\mathcal{L}_{PMI}\right) , \label{Action}$$where $\mathcal{L}_{PMI}=(-\mathcal{F})^{s}$ and $\mathcal{F}=F_{\mu \nu
}F^{\mu \nu }$. Before we proceed, we provide some reasonable motivations for considering this form of NLEDs.
*First*, among NLEDs theories, the PMI theory is a toy model generalizing Maxwell theory, to which it reduces for $s=1$. One of the most important properties of the PMI model in $(n+1)$-dimensions occurs for $%
s=(n+1)/4$, where the PMI theory becomes conformally invariant and so the trace of the energy-momentum tensor vanishes, the same as for Maxwell theory in four dimensions [@PMIpapers1]. Considering this value for the nonlinearity parameter $s$, one obtains an inverse square law for the electric field of charged pointlike objects in arbitrary dimensions (the same as the Coulomb field in four dimensions). Furthermore, it has been shown that there is an interesting relation between the solutions of a class of pure $F(R)$ gravity and those of a conformally invariant Maxwell source ($%
s=(n+1)/4$) in Einstein gravity [@CIMFR].
*Second*, we should note that considering the $E_{8}\times
E_{8}$ heterotic string theory, the $SO(32)$ gauge group has a $U(1)$ subgroup. It has been shown [@StrinNL] that, taking into account a constant dilaton, the effective Lagrangian has a Gauss-Bonnet term as well as a quadratic Maxwell invariant in addition to the Einstein-Maxwell Lagrangian. Since, unlike the quadratic Maxwell invariant, the Gauss-Bonnet term becomes a topological invariant and does not give any contribution in four dimensions, it is natural to investigate Einstein-NLEDs in four dimensions. Taking into account the PMI theory as a NLEDs Lagrangian and expanding it for $\mathcal{F\longrightarrow F}_{0}$ (where $\mathcal{F}_{0}$ is an unknown constant to be fixed), we find $$\mathcal{L}_{PMI}\simeq -a_{1}\mathcal{F}+(s-1)\left[ a_{0}+a_{2}(-\mathcal{F%
})^{2}+a_{3}(-\mathcal{F})^{3}+...\right] . \label{Expand}$$In other words, one can consider a series expansion of $\mathcal{L}_{PMI}$ near a constant $\mathcal{F}_{0}$ and obtain Eq. (\[Expand\]), in which the constants $a_{i}$ depend on $s$ and $\mathcal{F%
}_{0}$. In order to obtain $\mathcal{F}_{0}$ and also have a consistent series expansion with linear Maxwell Lagrangian ($s=1$), one should set $a_{1}=1$. Taking into account $a_{1}=1$ and obtaining $\mathcal{F}_{0}$, we are in a position to get a new series expansion for $\mathcal{L}_{PMI}$ $$\mathcal{L}_{PMI}\simeq -\mathcal{F}+(s-1)\left[ b_{0}+b_{2}(-\mathcal{F}%
)^{2}+b_{3}(-\mathcal{F})^{3}+...\right] , \label{Expand2}$$where the $b_{i}$ depend only on $s$. Although $\mathcal{L}_{PMI}=(-%
\mathcal{F})^{s}$ can lead to Eq. (\[Expand2\]) by a series expansion, working with Eq. (\[Expand2\]) is more complicated and we postpone the study of this scenario to another paper.
*Third*, taking into account the applications of the AdS/CFT correspondence to superconductivity, it has been shown that the PMI theory has crucial effects on the condensation, as well as on the critical temperature of the superconductor and its energy gap [@AdSCFTPMI].
Motivated by the results mentioned above, we consider the PMI theory to investigate the effects of nonlinearity on the extended phase space thermodynamics and $P$–$V$ criticality of the solutions. Moreover, to better understand the role of nonlinearity, we relax the conformally invariant constraint and take $s$ as an arbitrary constant. This gives us a broader perspective on the universal behavior of large/small black hole phase transitions. In particular, we want to understand the sensitivity of the critical exponents, the universal ratio, and other thermodynamic properties to the nonlinearity parameter $s$.
The outline of this paper is as follows: In Sec. \[BH\], we consider spherically symmetric black hole solutions of Einstein gravity in the presence of the PMI source. Regarding the cosmological constant as a thermodynamic pressure, we study the thermodynamic properties and obtain the Smarr mass relation. In Sec. \[Cano\], we investigate the analogy of black holes with the Van der Waals liquid–gas system in the canonical ensemble, fixing the charge at infinity. In this ensemble we find the free energy and plot the coexistence curve of small/large black holes. Then, we calculate the critical exponents and find that they match the mean field values (the same as for a Van der Waals liquid). Moreover, we consider the special case $s=n/2$ as a BTZ-like solution, study its phase transition, and show that the critical exponents are the same as in the former case. In Sec. \[GCano\], we consider the possibility of the phase transition in the grand canonical ensemble and find that, in contrast to RN black holes, the phase transition occurs for $s
\neq 1$. Finally, we finish this work with some concluding remarks.
Extended phase-space thermodynamics of black holes with PMI source {#BH}
==================================================================
We consider a spherically symmetric spacetime as $$ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega _{d-2}^{2},
\label{Metric}$$ where $d\Omega _{d}^{2}$ stands for the standard element on $S^{d}$. Considering the field equations following from the variation of the bulk action with Eq. (\[Metric\]), one can show that the metric function $f(r)$, gauge potential one–form $A$ and electromagnetic field two–form $F$ are given by [@PMIpapers2] $$\begin{aligned}
f(r) &=&1+\frac{r^{2}}{l^{2}}-\frac{m}{r^{n-2}}+\frac{(2s-1)^{2}\left( \frac{%
(n-1)(2s-n)^{2}q^{2}}{(n-2)(2s-1)^{2}}\right) ^{s}}{%
(n-1)(n-2s)r^{2(ns-3s+1)/(2s-1)}}, \label{metfunction} \\
A &=&-\sqrt{\frac{n-1}{2(n-2)}}qr^{(2s-n)/(2s-1)}dt, \label{A} \\
F&=&dA. \label{dA}\end{aligned}$$ The power $s \neq n/2$ denotes the nonlinearity parameter of the source which is restricted to $s>1/2$ [@PMIpapers2], and the parameters $m$ and $q$ are, respectively, related to the ADM mass $M$ and the electric charge $%
Q $ of the black hole $$\begin{aligned}
M &=&\frac{\omega _{n-1}}{16\pi }(n-1)m, \label{Mass} \\
Q &=&\frac{\sqrt{2} (2s-1)s\; \omega _{n-1}}{8\pi }\left( \frac{n-1}{n-2}%
\right) ^{s-1/2}\left( \frac{\left( n-2s\right) q}{2s-1}\right) ^{2s-1},
\label{Charge}\end{aligned}$$ where $\omega _{n-1}$ is given by $$\omega _{n-1}=\frac{2\pi ^{\frac{n}{2}}}{\Gamma \left( \frac{n}{2}\right) }.
\label{Omega}$$ It has been shown [@PMIpapers2] that Eqs. (\[Metric\]) and (\[metfunction\]) describe a black hole with a Cauchy horizon ($r_{-}$) and an event horizon ($r_{+}$). The event horizon radius of this black hole can be calculated numerically by finding the largest real positive root of $%
f(r=r_{+})=0$. Using the surface gravity relation, we can obtain the temperature of the black hole solutions as $$T=\frac{f^{\prime }(r_{+})}{4\pi }=\frac{n-2}{4\pi r_{+}}\left( 1+\frac{n}{%
n-2}\frac{r_{+}^{2}}{l^{2}}-\frac{(2s-1)\left( \frac{(n-1)(2s-n)^{2}q^{2}}{%
(n-2)(2s-1)^{2}}\right) ^{s}}{(n-1)(n-2)r_{+}^{2(ns-3s+1)/(2s-1)}}\right).
\label{T}$$
The electric potential $\Phi $ is measured at infinity with respect to the horizon, while the black hole entropy $S$ is determined from the area law. It is easy to show that $$\begin{aligned}
\Phi &=&\sqrt{\frac{n-1}{2(n-2)}}\frac{q}{r_{+}^{(n-2s)/(2s-1)}},
\label{Phi} \\
S &=&\frac{\omega _{n-1}r_{+}^{n-1}}{4}. \label{S}\end{aligned}$$ Now, as it was considered before [@PVpapers], we interpret $\Lambda $ as a thermodynamic pressure $P$, $$P=-\frac{1}{8\pi }\Lambda =\frac{n(n-1)}{16\pi l^{2}}, \label{PLambda}$$ where its corresponding conjugate quantity is the thermodynamic volume [@Gibbons2] $$V=\frac{\omega _{n-1}{r_{+}}^{n}}{n}. \label{volrp}$$ Considering the obtained quantities, one can show that they satisfy the following Smarr formula $$M=\frac{n-1}{n-2}TS+\frac{ns-3s+1}{s(2s-1)(n-2)}\Phi Q-\frac{2}{n-2}VP.
\label{Smarr}$$ It has been shown that Eq. (\[Smarr\]) may be obtained by a scaling dimensional argument [@scaling; @RayCQG2009]. In addition, the (extended phase-space) first law of thermodynamics can be written as $$dM=TdS+\Phi dQ+VdP. \label{firstLaw}$$ In what follows, we shall study the analogy of the liquid–gas phase transition of the Van der Waals fluid with the phase transition in black hole solutions in the presence of PMI source.
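As a numerical consistency check of the Smarr relation (\[Smarr\]), one can evaluate both sides from the expressions above. The sketch below does this; the sample parameter values are illustrative choices of ours (any $s>1/2$, $s\neq n/2$ works):

```python
import math

def smarr_residual(n, s, q, l, rp):
    """M - [(n-1)/(n-2) TS + (ns-3s+1)/(s(2s-1)(n-2)) Phi Q - 2/(n-2) VP]
    for the PMI black hole; should vanish identically."""
    w = 2 * math.pi ** (n / 2) / math.gamma(n / 2)          # omega_{n-1}
    Aq = (n - 1) * (2*s - n)**2 * q**2 / ((n - 2) * (2*s - 1)**2)
    ex = 2 * (n*s - 3*s + 1) / (2*s - 1)
    # f(r_+) = 0 fixes the mass parameter m
    m = rp**(n - 2) * (1 + rp**2 / l**2
                       + (2*s - 1)**2 * Aq**s / ((n - 1) * (n - 2*s) * rp**ex))
    M = w * (n - 1) * m / (16 * math.pi)
    T = (n - 2) / (4 * math.pi * rp) * (1 + n / (n - 2) * rp**2 / l**2
        - (2*s - 1) * Aq**s / ((n - 1) * (n - 2) * rp**ex))
    S = w * rp**(n - 1) / 4
    Phi = math.sqrt((n - 1) / (2 * (n - 2))) * q / rp**((n - 2*s) / (2*s - 1))
    Q = (math.sqrt(2) * (2*s - 1) * s * w / (8 * math.pi)
         * ((n - 1) / (n - 2))**(s - 0.5)
         * ((n - 2*s) * q / (2*s - 1))**(2*s - 1))
    P = n * (n - 1) / (16 * math.pi * l**2)
    V = w * rp**n / n
    rhs = ((n - 1) / (n - 2) * T * S
           + (n*s - 3*s + 1) / (s * (2*s - 1) * (n - 2)) * Phi * Q
           - 2 / (n - 2) * V * P)
    return M - rhs

print(smarr_residual(n=3, s=0.75, q=0.5, l=1.0, rp=1.0))   # ~ 0
```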
Canonical ensemble {#Cano}
==================
In order to study the phase transition, one can select an ensemble in which the black hole charge is fixed at infinity. Considering the fixed charge as an extensive parameter, the corresponding ensemble is called the canonical ensemble.
Equation of state
-----------------
Using Eqs. (\[PLambda\]) and (\[T\]) for a fixed charge $Q$, one may obtain the equation of state $P(V,T)$, $$P=\frac{(n-1)}{4r_{+}}T-\frac{(n-1)(n-2)}{16\pi r_{+}^{2}}+\frac{1}{16\pi }%
\frac{(2s-1)\left( \frac{(n-1)(2s-n)^{2}q^{2}}{(n-2)(2s-1)^{2}}\right) ^{s}}{%
r_{+}^{2s(n-1)/(2s-1)}}, \label{state}$$where $r_{+}$ is a function of the thermodynamic volume $V$ \[see Eq. (\[volrp\])\]. Following [@PVpapers], we identify the geometric quantities $%
P $ and $T$ with physical pressure and temperature of system by using dimensional analysis and $l_{P}^{n-1}=G_{n+1}\hbar /c^{3}$ as $$\lbrack \mbox{Press}]=\frac{\hbar c}{l_{p}^{n-1}}[P],\quad \lbrack %
\mbox{Temp}]=\frac{\hbar c}{k}[T]. \label{dimless1}$$Therefore, the physical pressure and physical temperature are given by $$\begin{aligned}
\mbox{Press} &=&\frac{\hbar c}{l_{p}^{n-1}}P=\frac{\hbar c}{l_{p}^{n-1}}%
\frac{(n-1)T}{4r_{+}}+\dots \notag \\
&=&\frac{k\mbox{Temp}(n-1)}{4l_{p}^{n-1}r_{+}}+\dots \;. \label{dimless2}\end{aligned}$$Now, one could compare them with the Van der Waals equation [@PVpapers], and identify the specific volume $v$ of the fluid with the horizon radius as $v=\frac{4r_{+}l_{P}^{n-1}}{n-1}$, and in geometric units ($l_{P}=1$, $%
r_{+}=\left( n-1\right) v/4$), the equation of state (\[state\]) can be written in the following form $$\begin{aligned}
P &=&\frac{T}{v}-\frac{(n-2)}{\pi (n-1)v^{2}}+\frac{1}{16\pi }\frac{\kappa
q^{2s}}{v^{2s(n-1)/(2s-1)}}, \label{StateV} \\
\kappa &=&\frac{4^{2s(n-1)/(2s-1)}(2s-1)\left( \frac{(n-1)(2s-n)^{2}}{%
(n-2)(2s-1)^{2}}\right) ^{s}}{(n-1)^{2s(n-1)/(2s-1)}}, \label{kappa}\end{aligned}$$Considering Eq. (\[StateV\]), we plot the $P-V$ isotherm diagram in Fig. \[FPV\]. This figure shows that, similar to the Van der Waals gas, there is a critical point, which is a point of inflection on the critical isotherm. The pressure and volume at the critical point are known as the critical pressure and the critical volume, respectively. Above the critical point, and for large volumes and low pressures, the isotherms lose their inflection points and approach equilateral hyperbolas, the so-called isotherms of an ideal gas. The slope of the isotherm passing through the critical point vanishes there and, since the critical point is a point of inflection, $$\begin{aligned}
\frac{\partial P}{\partial v} &=&0, \label{dpdv} \\
\quad \frac{\partial ^{2}P}{\partial v^{2}} &=&0. \label{d2pdv2}\end{aligned}$$Using Eqs. (\[dpdv\]) and (\[d2pdv2\]) with the equation of state ([StateV]{}), we will be able to calculate the critical parameters $$\begin{aligned}
v_{c} &=&\left[ \frac{\kappa s(n-1)^{2}(2ns-4s+1)q^{2s}}{16(n-2)(2s-1)^{2}}%
\right] ^{(2s-1)/[2(ns-3s+1)]}, \label{Vc} \\
T_{c} &=&\frac{4(n-2)(ns-3s+1)\left[ \frac{\kappa s(n-1)^{2}(2ns-4s+1)q^{2s}%
}{16(n-2)(2s-1)^{2}}\right] ^{(1-2s)/[2(ns-3s+1)]}}{\pi (n-1)(2ns-4s+1)},
\label{Tc} \\
P_{c} &=&\frac{(n-2)(ns-3s+1)}{\pi s(n-1)^{2}\left[ \frac{\kappa
s(n-1)^{2}(2ns-4s+1)q^{2s}}{16(n-2)(2s-1)^{2}}\right] ^{(2s-1)/(ns-3s+1)}}.
\label{Pc}\end{aligned}$$These relations lead us to obtain the following universal ratio $${\rho }_{c}=\frac{P_{c}v_{c}}{T_{c}}=\frac{2ns-4s+1}{4s(n-1)}.
\label{UniversalRatio}$$Note that for $s=-2/(n-5)$, with arbitrary spacetime dimension, one recovers the ratio $\rho _{c}=3/8$ characteristic of a Van der Waals gas.
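The universal ratio (\[UniversalRatio\]) can be checked numerically from the inflection conditions (\[dpdv\]) and (\[d2pdv2\]). The sketch below takes $n=3$, $s=3/4$ as a sample and sets $\kappa q^{2s}=1$ for illustration (the ratio is independent of $q$):

```python
import math

n, s = 3, 0.75
a = (n - 2) / (math.pi * (n - 1))     # coefficient of 1/v^2 in Eq. (StateV)
b = 1.0 / (16.0 * math.pi)            # kappa q^(2s)/(16 pi), with kappa q^(2s) = 1
e = 2 * s * (n - 1) / (2 * s - 1)     # power of 1/v in the charge term (= 6 here)

# Inflection point of P(v) = T/v - a/v^2 + b/v^e, from dP/dv = d^2P/dv^2 = 0:
v_c = (e * (e - 1) * b / (2 * a)) ** (1 / (e - 2))
T_c = 2 * a / v_c - e * b / v_c ** (e - 1)
P_c = T_c / v_c - a / v_c**2 + b / v_c**e

rho = P_c * v_c / T_c
rho_formula = (2*n*s - 4*s + 1) / (4 * s * (n - 1))   # Eq. (UniversalRatio)
print(rho, rho_formula)                               # both 5/12 for this case
```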
$%
\begin{array}{cc}
\epsfxsize=7cm \epsffile{PVs34n3.eps} & \epsfxsize=7cm \epsffile{PVs2n5.eps}%
\end{array}
$
Free energy
------------
The thermodynamic behavior of a system is governed by thermodynamic potentials such as the free energy. It is known that the free energy of a gravitational system may be obtained by evaluating the Euclidean on-shell action. In order to calculate it, we use the counterterm method to cancel divergences. Furthermore, to make the action well-defined, one should add the Gibbons-Hawking boundary term to the bulk action. In addition, in order to fix the charge on the boundary (working in the canonical ensemble), we should consider a boundary term for the electromagnetic field. So the total action is [@DEhShakVah] $$I=I_{b}+I_{ct}-\frac{1}{8\pi }\int_{\partial M}d^{n}x~\sqrt{\gamma }~K-\frac{%
s}{4\pi }\int_{\partial M}d^{n}x~\sqrt{\gamma }(-\mathcal{F})^{s-1}~n_{\mu
}F^{\mu \nu }A_{\nu }, \label{FullAction}$$where $I_{ct}$ is the counterterm action, and $\gamma _{ij}$ and $K$ denote the induced metric and extrinsic curvature of the boundary. Using Eq. (\[FullAction\]), it is straightforward to calculate the on-shell value of the total action $$I=\frac{\beta \omega _{n-1}}{16\pi }\left( 1-{\frac{r_{+}^{2}}{{l}^{2}}}+{%
\frac{\left( 2s-1\right) (2sn-4s+1)\Psi ^{s}r_{+}^{2}}{\left( n-1\right)
\left( n-2s\right) }}\right) r_{+}^{n-2}, \label{Onshell}$$where $$\Psi =\left( \frac{n-1}{n-2}\right) \left( \frac{2s-n}{2s-1}\right)
^{2}q^{2}r_{+}^{-\frac{2(n-1)}{2s-1}},$$ and $\beta $ is the periodic Euclidean time, which is related to the inverse of the Hawking temperature. Using the fact that $G=I\beta
^{-1}$ with Eq. (\[PLambda\]), the (fixed charge) free energy in the extended phase space may be written as $$G(T,P)=\frac{\omega _{n-1}}{16\pi }\left( {1}-\frac{16\pi Pr_{+}^{2}}{n(n-1)}%
+{\frac{\left( 2s-1\right) (2sn-4s+1)\Psi ^{s}r_{+}^{2}}{\left( n-1\right)
\left( n-2s\right) }}\right) {r}_{+}^{n-2}.$$
$%
\begin{array}{cc}
\epsfxsize=7cm \epsffile{GS65n4.eps} & \epsfxsize=7cm \epsffile{GS34n3.eps}%
\end{array}
$
$%
\begin{array}{cc}
\epsfxsize=7cm \epsffile{COPTplot.eps} & \epsfxsize=7cm %
\epsffile{COPconformal345.eps}%
\end{array}
$
The behavior of the free energy is displayed in Fig. \[FG\]. The characteristic swallowtail behavior of the free energy shows that a first order phase transition happens between large and small charged black holes. Using the fact that the free energy, temperature, and pressure of the system are constant during the phase transition, one can plot the coexistence curve of the two phases, large and small charged black holes, in the PMI theory (see Fig. \[PT\]). Along this curve, small and large black holes (with different horizon radii) have the same temperature and pressure.
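The swallowtail reflects the fact that, for $P<P_{c}$, the temperature is non-monotonic in the volume, so three black hole branches coexist, while for $P>P_{c}$ it is monotonic. A minimal numerical check (taking $n=3$, $s=3/4$ and setting $\kappa q^{2s}=1$ for illustration; the chosen pressures straddle $P_{c}\approx 0.077$ in these units):

```python
import math

n, s = 3, 0.75
a = (n - 2) / (math.pi * (n - 1))
b = 1.0 / (16.0 * math.pi)
e = 2 * s * (n - 1) / (2 * s - 1)

def dT_dv(v, P):
    # From T(v, P) = P v + a/v - b/v^(e-1): dT/dv = P - a/v^2 + (e-1) b/v^e
    return P - a / v**2 + (e - 1) * b / v**e

def n_turning_points(P, v_lo=0.3, v_hi=12.0, steps=20000):
    """Count sign changes of dT/dv on a grid, i.e. turning points of T(v)."""
    vs = [v_lo + (v_hi - v_lo) * i / steps for i in range(steps + 1)]
    signs = [dT_dv(v, P) > 0 for v in vs]
    return sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))

print(n_turning_points(0.04), n_turning_points(0.2))   # 2 below P_c, 0 above
```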
### Critical exponents
One of the most important characteristics of the phase transition is the value of its critical exponents. So, following the approach of [@PVnonlinear], we calculate the critical exponents $\alpha
$, $\beta $, $\gamma $, $\delta $ for the phase transition of $(n+1)$-dimensional charged black holes with an arbitrary $s$. In order to obtain the critical exponent $\alpha $, we consider the horizon entropy $S$ and rewrite it in terms of $T$ and $V$. Using Eqs. (\[S\]) and (\[volrp\]), we have $$S=S(T,V)=\frac{1}{4}\Bigl[\omega _{n-1}\Bigl(nV\Bigr)^{n-1}\Bigr]^{\frac{1}{n}}.
\label{Ent}$$Obviously, this is independent of $T$, so the specific heat vanishes ($%
C_{V}=0$), and hence $\alpha =0$. To obtain other exponents, we study equation of state (\[StateV\]) in terms of reduced thermodynamic variables $$p=\frac{P}{P_{c}},\quad \nu =\frac{v}{v_{c}},\quad \tau =\frac{T}{T_{c}}.
\label{Reduced}$$So, Eq. (\[StateV\]) translates into the following reduced equation of state $$p=\frac{4(n-1)s\tau }{(2ns-4s+1)\nu }-\frac{n-1}{(ns-3s+1)\nu ^{2}}+\frac{%
(2s-1)^{2}}{(2ns-4s+1)(ns-3s+1)\nu ^{\frac{2s(n-1)}{2s-1}}}. \label{statesd}$$To study this equation, we slightly generalize the argument of [@PVnonlinear] to the nonlinear Maxwell theory. Indeed, we can rewrite the equation of state (\[statesd\]) as $$p=\frac{1}{\rho _{c}}\frac{\tau }{\nu }+f(\nu ,s), \label{general}$$where $\rho _{c}$ stands for the critical ratio and $$f(\nu ,s)=\frac{1}{s\left( 1-4{\rho }_{c}\right) }\left( \frac{1}{\nu ^{2}}-%
\frac{\left( \frac{2s-1}{n-1}\right) ^{2}}{4s{\rho }_{c}\nu ^{\frac{2s(n-1)}{%
2s-1}}}\right) .$$In contrast to [@PVnonlinear], where the corresponding function is independent of $s$, here $f(\nu ,s)$ depends on both $\nu $ and $s$. As we will see, however, the nonlinearity parameter $s$ does not play any dramatic role and does not change the critical exponents. Following the method of Ref. [@PVnonlinear], one may define two new parameters $t$ and $\omega $ via $$\tau =t+1,\quad \nu =(\omega +1)^{1/\epsilon }, \label{omegat}$$where $\epsilon $ is a positive parameter. Now we can expand (\[statesd\]) near the critical point to obtain $$p=1+At-Bt\omega -C\omega ^{3}+O(t\omega ^{2},\omega ^{4}),
\label{generalexpansion}$$with $$A=\frac{1}{\rho _{c}},\quad B=\frac{1}{\epsilon \rho _{c}},\quad C=\frac{%
2s(n-1)}{3\epsilon ^{3}(2s-1)}. \label{ABC}$$We consider a fixed $t<0$ and differentiate Eq. (\[generalexpansion\]) to obtain $$dP=-P_{c}(Bt+3C\omega ^{2})d\omega . \label{dPgeneral}$$Now, we denote the volumes of the small and large black holes by $\omega _{s}$ and $\omega _{l}$, respectively, and apply Maxwell’s equal area law. One obtains $$\begin{aligned}
p &=&1+At-Bt\omega _{l}-C\omega _{l}^{3}=1+At-Bt\omega _{s}-C\omega _{s}^{3}
\notag \\
0 &=&\int_{\omega _{l}}^{\omega _{s}}\omega dP.\end{aligned}$$ This equation leads to a unique non-trivial solution $$\omega _{s}=-\omega _{l}=\sqrt{\frac{-Bt}{C}},$$ and therefore we can find $$\eta =V_{c}(\omega _{l}-\omega _{s})=2V_{c}\omega _{l}\propto \sqrt{-t}\quad
\Rightarrow \quad \beta =\frac{1}{2}.$$ Now, we should calculate the next exponent, $\gamma $. In order to obtain it, one should consider Eq. (\[generalexpansion\]). After some manipulation one can obtain $$\kappa _{T}=-\frac{1}{V}\frac{\partial V}{\partial P}\Big |_{T}\propto \frac{%
1}{P_{c}}\frac{1}{Bt}\quad \Rightarrow \quad \gamma =1.$$Next, we calculate the final exponent, $\delta $. To do this, we obtain the shape of the critical isotherm $t=0$ from Eq. (\[generalexpansion\]), i.e., $$p-1=-C\omega ^{3}\quad \Rightarrow \quad \delta =3.$$We conclude that the thermodynamic exponents associated with the nonlinear charged black holes in any dimension $n\geq 3$ with arbitrary nonlinearity parameter $s\neq n/2$ coincide with those of the Van der Waals fluid (the same as the critical exponents of the linear Maxwell case).
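The symmetry behind this construction is easy to check numerically. The following Python sketch (not part of the paper; the coefficients $A$, $B$, $C$ are illustrative positive numbers, not the expressions of Eq. (\[ABC\])) verifies that for $p=1+At-Bt\omega -C\omega ^{3}$ the roots $\omega _{s}=-\omega _{l}$ give equal pressures and satisfy the equal area law, and that the coexistence width scales as $\sqrt{-t}$, i.e. $\beta =1/2$.

```python
import math

# Near-critical expansion: p = 1 + A*t - B*t*w - C*w**3, t = tau - 1 < 0.
# A, B, C are illustrative numbers here, not the expressions in Eq. (ABC).
A, B, C = 2.0, 3.0, 1.5
t = -0.01                       # fixed reduced temperature below critical

def p(t, w):
    return 1.0 + A * t - B * t * w - C * w ** 3

def dp_dw(t, w):
    # dP/dw in units of P_c, from dP = -P_c*(B*t + 3*C*w^2) dw
    return -(B * t + 3.0 * C * w ** 2)

w_l = math.sqrt(-B * t / C)     # large black hole
w_s = -w_l                      # small black hole

# Equal pressures on the two coexisting branches.
assert abs(p(t, w_l) - p(t, w_s)) < 1e-12

# Maxwell equal-area law: the integral of w dP between the roots vanishes
# (midpoint rule; the odd integrand cancels over the symmetric range).
N = 100000
h = (w_s - w_l) / N
area = sum((w_l + (k + 0.5) * h) * dp_dw(t, w_l + (k + 0.5) * h)
           for k in range(N)) * h
assert abs(area) < 1e-8

# Order parameter ~ sqrt(-t), i.e. beta = 1/2: quadrupling -t doubles w_l.
assert abs(math.sqrt(-B * (4 * t) / C) / w_l - 2.0) < 1e-12
```

The check is independent of the particular values of $A$, $B$ and $C$, which is the point of the argument: the exponents follow from the form of the expansion alone.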
Equation of state for the BTZ-like black holes
----------------------------------------------
So far, we have investigated the phase transition of black holes in the presence of the nonlinear PMI source with nonlinearity $s\neq n/2$. Interestingly, for $s=n/2$ the solutions (the so-called BTZ-like black holes) have different properties, and cannot be obtained as a special limit of the solutions for general $s$. As we will see, for $s=n/2$ the charge term in the metric function is logarithmic and the electromagnetic field is proportional to $r^{-1}$ (logarithmic gauge potential). In spite of some differences, this special higher dimensional solution is similar to the charged BTZ solution, and reduces to the original BTZ black hole for $n=2$.
Considering the metric (\[Metric\]) and the field equations of the bulk action (\[Action\]) with $s=n/2$, we can find that the metric function $%
f(r)$ and the gauge potential may be written as $$\begin{aligned}
f(r) &=&1+\frac{r^{2}}{l^{2}}-\frac{m}{r^{n-2}}-\frac{2^{n/2}q^{n}}{r^{n-2}}%
\ln \left( \frac{r}{l}\right) , \label{Vbtz} \\
A &=&q\ln \left( \frac{r}{l}\right) dt. \label{Abtz}\end{aligned}$$ Straightforward calculations show that the BTZ-like spacetime has a curvature singularity located at $r=0$, which is covered by an event horizon. The temperature of this black hole can be obtained as [@BTZlike] $$T=\frac{n-2}{4\pi r_{+}}\left( 1+\frac{n}{n-2}\frac{r_{+}^{2}}{l^{2}}-\frac{%
2^{n/2}q^{n}}{(n-2)r_{+}^{n-2}}\right) . \label{TBTZ}$$
In this section, we will investigate the analogy between the liquid–gas phase transition of the Van der Waals fluid and the phase transition in the BTZ-like black hole solutions [@BTZlike]. Following the same approach and using Eqs. (\[PLambda\]) and (\[TBTZ\]) for a fixed charge $Q$, we obtain $$P=\frac{(n-1)}{4r_{+}}T-\frac{(n-1)(n-2)}{16\pi r_{+}^{2}}+\frac{1}{16\pi }%
\frac{2^{n/2}(n-1)q^{n}}{r_{+}^{n}}. \label{P1}$$
Using Eqs. (\[dimless1\]) and (\[dimless2\]) with the fact that in geometric units $v=\frac{4r_{+}}{n-1}$, Eq. (\[P1\]) may be rewritten as $$\begin{aligned}
P &=&\frac{T}{v}-\frac{(n-2)}{\pi (n-1)v^{2}}+\frac{1}{16\pi }\frac{\kappa
^{\prime }q^{2s}}{v^{n}}, \label{P2} \\
\kappa ^{\prime } &=&\frac{2^{5n/2}}{(n-1)^{n-1}}.\end{aligned}$$
Now, we plot the isotherm $P-V$ diagram in Fig. \[FPVbtz\]. The behavior of these plots is the same as that of the Van der Waals gas. In order to find the critical quantities, one may use Eqs. (\[dpdv\]) and (\[d2pdv2\]). These relations yield $$\begin{aligned}
v_{c} &=&\left[ \frac{\kappa ^{\prime }n(n-1)^{2}q^{n}}{32(n-2)}\right]
^{1/(n-2)}, \label{Vc2} \\
T_{c} &=&\frac{2(n-2)^{2}\left[ \frac{\kappa ^{\prime }n(n-1)^{2}q^{n}}{%
32(n-2)}\right] ^{-1/(n-2)}}{\pi (n-1)^{2}}, \label{Tc2} \\
P_{c} &=&\frac{(n-2)^{2}}{\pi n(n-1)\left[ \frac{\kappa ^{\prime
}n(n-1)^{2}q^{n}}{32(n-2)}\right] ^{2/(n-2)}}. \label{Pc2}\end{aligned}$$Having the critical quantities at hand, we are in a position to obtain the following universal ratio $${\rho }_{c}=\frac{P_{c}v_{c}}{T_{c}}=\frac{n-1}{2n}. \label{ratio}$$It is notable that only for $n=4$ ($5$-dimensional BTZ-like black holes) can one recover the ratio $\rho _{c}=3/8$ characteristic of a Van der Waals gas, whereas for higher dimensional Reissner–Nordström black holes this ratio is recovered only in four dimensions [@PVnonlinear]. In addition, considering $s=n/2$ in Eq. (\[UniversalRatio\]), one can obtain ${\rho }_{c}$ of the BTZ-like black holes.
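As a sanity check, the critical point of the BTZ-like equation of state (\[P2\]) can be located numerically and compared against Eqs. (\[Vc2\]) and (\[ratio\]). The following Python sketch (not part of the paper; the values $n=4$, $q=1$ are illustrative) finds the critical point by bisection and recovers $\rho _{c}=(n-1)/2n=3/8$.

```python
import math

# BTZ-like equation of state, Eq. (P2): P = T/v - A/v^2 + B/v^n,
# with A = (n-2)/(pi*(n-1)) and B = kappa'*q^n/(16*pi).
# Illustrative values: n = 4 (5-dimensional), q = 1.
n, q = 4, 1.0
kappa = 2 ** (5 * n / 2) / (n - 1) ** (n - 1)
A = (n - 2) / (math.pi * (n - 1))
B = kappa * q ** n / (16 * math.pi)

def P(T, v):
    return T / v - A / v ** 2 + B / v ** n

# Critical point: dP/dv = d2P/dv2 = 0.  Eliminating T leaves
#   g(v) = A/v - n*(n-1)*B/(2*v**(n-1)) = 0,
# which we solve by bisection rather than in closed form.
def g(v):
    return A / v - n * (n - 1) * B / (2 * v ** (n - 1))

lo, hi = 1e-3, 1e3              # g(lo) < 0 < g(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
v_c = 0.5 * (lo + hi)
T_c = 2 * A / v_c - n * B / v_c ** (n - 1)   # from dP/dv = 0
P_c = P(T_c, v_c)

# Recover the closed form of Eq. (Vc2) and the universal ratio Eq. (ratio).
v_c_paper = (kappa * n * (n - 1) ** 2 * q ** n / (32 * (n - 2))) ** (1 / (n - 2))
assert abs(v_c - v_c_paper) < 1e-6
assert abs(P_c * v_c / T_c - (n - 1) / (2 * n)) < 1e-9   # = 3/8 for n = 4
```

Repeating the run with other values of $q$ rescales $v_{c}$, $T_{c}$ and $P_{c}$ individually but leaves the ratio at $(n-1)/2n$.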
(figure panels: PVbtzn4.eps, PVbtzn5.eps)
### Critical exponents of BTZ-like black holes
In order to obtain the critical exponents of the phase transition, we follow the same procedure as in [@PVnonlinear]. The horizon entropy $S(T,V)$ is the same as in Eq. (\[Ent\]); it is independent of $T$ and hence $\alpha =0$. In addition, we can use Eq. (\[Reduced\]) to rewrite Eq. (\[P2\]) in the form $$p=\frac{2n\tau }{(n-1)\nu }-\frac{n}{(n-2)\nu ^{2}}+\frac{2}{(n-1)(n-2)\nu
^{n}}. \label{statesd2}$$
It is straightforward to show that the thermodynamic exponents associated with the BTZ-like black holes in arbitrary dimension coincide with those of the Van der Waals fluid (the same as the critical exponents of the PMI case).
Grand canonical ensemble {#GCano}
========================
In addition to the canonical ensemble, one can work with a fixed electric potential at infinity. Fixing this intensive quantity corresponds to working in the grand canonical ensemble. It is worth noting that, for the linear Maxwell field, criticality cannot occur in the grand canonical ensemble [@PVpapers].
Equation of state
-----------------
In this section, we study the critical behavior of charged black holes in the grand canonical (fixed $\Phi $) ensemble. We take $q=\Phi
r_{+}^{(n-2s)/(2s-1)}$ with $v=\frac{4r_{+}}{n-1}$ to rewrite Eq. (\[state\]) in the following form $$P=\frac{T}{v}-\frac{(n-2)}{(n-1)\pi v^{2}}+\frac{2s-1}{%
16\pi }\left( \frac{4\sqrt{2}(n-2s)\Phi }{(2s-1)(n-1)v}\right) ^{2s}. \label{State Grand}$$In the Maxwell theory ($s=1$), Eq. (\[State Grand\]) reduces to $$Pv^{2}=Tv-\frac{(n-2)}{(n-1)\pi }+\frac{2}{\pi }\frac{(n-2)^{2}}{(n-1)^{2}}%
\Phi ^{2}.$$
Clearly, this is a quadratic equation and does not show any criticality. Interestingly, in contrast to the Maxwell field ($s=1$), the PMI theory admits a phase transition in this ensemble, and one can study the fixed-potential $P-V$ phase transition of the black holes in the extended phase space. The general behavior of the isotherm $P-V$ diagram at fixed potential is the same as in the fixed-charge ensemble, as displayed in Fig. \[PotPV\]. Applying Eqs. (\[dpdv\]) and (\[d2pdv2\]) to the equation of state, it is easy to calculate the critical point in the grand canonical ensemble $$\begin{aligned}
v_{c} &=&\frac{4\sqrt{2}(n-2s)}{(2s-1)(n-1)}\left[ \frac{2s(2s-n)^{2}}{%
(n-2)(n-1)}\right] ^{\frac{1}{2(s-1)}}\Phi ^{\frac{s}{s-1}}, \\
T_{c} &=&\frac{(s-1)(n-2)}{\pi (n-2s)}\left[ \frac{(n-2)(n-1)}{%
s2^{s}(n-2s)^{2}}\right] ^{\frac{1}{2(s-1)}}\Phi ^{\frac{-s}{s-1}}, \\
P_{c} &=&\frac{(s-1)(2s-1)^{2}2^{(4-5s)/(s-1)}}{s\pi }\left[ \frac{%
(n-2s)^{2}s^{\frac{1}{s}}}{(n-1)(n-2)}\right] ^{\frac{-s}{s-1}}\Phi ^{\frac{%
-2s}{s-1}}.\end{aligned}$$Note that, in contrast to the canonical ensemble, for $s>\frac{n}{2}$ or $s<1$ the quantities $v_{c}$, $T_{c}$ or $P_{c}$ become negative, so there is no physical phase transition in these cases. Using the values of $v_{c}$, $T_{c}$ and $P_{c}$, we obtain the following universal ratio $${\rho }_{c}=\frac{P_{c}v_{c}}{T_{c}}=\frac{2s-1}{4s}. \label{ratio2}$$This universal ratio is independent of $n$, and for $s=2$ one recovers the ratio $\rho _{c}=3/8$ characteristic of a Van der Waals gas. Although Eq. (\[ratio2\]) does not depend on the spacetime dimension, one can take $s=n/2$ to recover Eq. (\[ratio\]) of the BTZ-like black holes. Likewise, setting $n=2s$ in Eq. (\[UniversalRatio\]) yields Eq. (\[ratio2\]).
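The dimension-independence of this ratio can also be checked numerically from the fixed-potential equation of state (\[State Grand\]). The Python sketch below (not part of the paper; $n=5$, $s=2$, $\Phi =0.1$ are illustrative values satisfying $1<s<n/2$) locates the critical point by bisection and recovers $\rho _{c}=(2s-1)/4s=3/8$.

```python
import math

# Fixed-potential equation of state: P = T/v - A/v^2 + B/v^(2s), with
# A = (n-2)/((n-1)*pi) and
# B = (2s-1)/(16*pi) * (4*sqrt(2)*(n-2s)*Phi/((2s-1)*(n-1)))**(2s).
# Illustrative values with 1 < s < n/2: n = 5, s = 2, Phi = 0.1.
n, s, Phi = 5, 2, 0.1
m = 2 * s
A = (n - 2) / ((n - 1) * math.pi)
B = (2 * s - 1) / (16 * math.pi) * (
    4 * math.sqrt(2) * (n - 2 * s) * Phi / ((2 * s - 1) * (n - 1))) ** m

def P(T, v):
    return T / v - A / v ** 2 + B / v ** m

# dP/dv = d2P/dv2 = 0; eliminating T leaves a single positive root of
#   g(v) = A/v - m*(m-1)*B/(2*v**(m-1)),
# again found by bisection.
def g(v):
    return A / v - m * (m - 1) * B / (2 * v ** (m - 1))

lo, hi = 1e-6, 1e3
for _ in range(300):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
v_c = 0.5 * (lo + hi)
T_c = 2 * A / v_c - m * B / v_c ** (m - 1)   # from dP/dv = 0
P_c = P(T_c, v_c)

# Universal ratio of Eq. (ratio2): (2s-1)/(4s), independent of n.
assert abs(P_c * v_c / T_c - (2 * s - 1) / (4 * s)) < 1e-6   # 3/8 for s = 2
```

Changing $n$ (keeping $1<s<n/2$) moves $v_{c}$, $T_{c}$ and $P_{c}$ but leaves the ratio fixed, in line with the statement above.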
(figure panels: PotPS65n3.eps, PotPS54n4.eps)
Free energy
------------
By ignoring the surface term of the PMI action and fixing the potential on the boundary, $\delta A_{\mu }|_{\partial M}=0$, one can find the on-shell action corresponding to the free energy in the grand canonical ensemble. We therefore take the action as $$I=I_{b}+I_{ct}-\frac{1}{8\pi }\int_{\partial M}d^{n}x~\sqrt{\gamma }~K.$$ Now we can find the free energy $G=I\beta ^{-1}$ as $$G(T,P)=\frac{\omega _{n-1}}{16\pi }\left( {1}-\frac{16\pi Pr_{+}^{2}}{n(n-1)}%
+{\frac{2^{s}\left( 2s-1\right) ^{2-2s}\Phi ^{2s}r_{+}^{2-2s}}{\left(
n-1\right) \left( 2s-n\right) ^{1-2s}}}\right) {r}_{+}^{n-2}.$$ We now look for the phase transition: Fig. \[PotG\] shows that, in contrast to the Maxwell case, there is a first-order phase transition. In other words, the nonlinearity parameter $s$ affects the existence of the phase transition in the grand canonical ensemble.
(figure panels: PotGS65n3.eps, PotGS54n4.eps)
Concluding Remarks
==================
In this paper, we have considered the cosmological constant and its conjugate quantity as thermodynamic variables and investigated the thermodynamic properties of a class of charged black hole solutions. As a first step, we have introduced the black hole solutions of Einstein-$\Lambda $ gravity in the presence of the PMI source.
Then, we have used the Hawking temperature as an equation of state and calculated the critical parameters $T_{c}$, $v_{c}$ and $P_{c}$. We have plotted the isotherm ($P$–$V$) diagram of charged black holes in PMI theory and found that the overall behavior of this diagram is the same as that of the Van der Waals gas. Also, we have obtained the free energy of the gravitational system through the use of the Euclidean on-shell action in order to investigate its thermodynamic behavior.
Furthermore, we have calculated the critical exponents of the phase transition and concluded that the thermodynamic exponents associated with the nonlinear charged black holes in arbitrary dimension coincide with those of the Van der Waals fluid (the mean field theory).
Also, we have applied the same procedure for the BTZ-like black holes to obtain their phase transition. Calculations showed that thermodynamic behaviors of BTZ-like black holes are the same as PMI ones.
Moreover, we have studied the grand canonical ensemble in which the potential, instead of charge, should be fixed on the boundary. In contrast to the Maxwell case [@PVpapers], here one sees a phase transition. We have also computed the universal ratio $\frac{P_{c}v_{c}}{T_{c}}=\frac{2s-1}{4s}$ and found that it does not depend on the spacetime dimensions.
Finally, we have found that $v_{c}$, $T_{c}$ and $P_{c}$ have different dependencies on $n$ and $s$ for the PMI and BTZ-like black holes in the canonical ensemble, but when $n=2s$ both reduce to the universal ratio $\rho_{c}=\frac{P_{c}v_{c}}{T_{c}}=\frac{2s-1}{4s}$ that we found in the grand canonical ensemble.
It would be interesting to investigate the underlying reasons for this universality and to understand why the ratio in the grand canonical ensemble is independent of the spacetime dimension. Moreover, in statistical physics it is known that a universality class of criticality is characterized by the dimension of space, the order parameters and the fluctuations [@Gold]. However, it is not clear which features of gravitational theories and black holes determine the universality class of the phase transition or modify the critical exponents. As we found in this paper, and as has been shown in [@PVnonlinear], the critical exponents do not change under substantial modifications of the matter fields (such as the PMI modification) or changes of the spacetime dimension. In addition, the geometry of the spacetime also seems irrelevant to the critical exponents, as one can see for slowly rotating black holes [@PVnonlinear]. Interestingly, these critical exponents remain in the mean-field class even when one considers corrections to the gravity action [@GB]. It is also worthwhile to ask whether there is a holographic interpretation of the extended phase space thermodynamics and this universal classification; perhaps a holographic approach would lead to a better understanding of this problem. We leave these interesting questions for future study.
Acknowledgement
===============
We thank an anonymous referee for useful comments. M. H. Vahidinia would like to thank A. Montakhab, S. Jalali, P. Manshour and A. Moosavi for useful discussions. S. H. Hendi wishes to thank Shiraz University Research Council. This work has been supported financially by Research Institute for Astronomy & Astrophysics of Maragha (RIAAM), Iran.
[99]{} G. Gibbons, R. Kallosh and Barak Kol, Phys. Rev. Lett. **77**, 4992 (1996).
J.D. Brown and C. Teitelboim, Phys. Lett. B **195**, 177 (1987).
M. M. Caldarelli, G. Cognola and D. Klemm, Class. Quantum Gravit. **17**, 399 (2000).
D. Kastor, S. Ray and J. Traschen, Class. Quantum Gravit. **26**, 195011 (2009).
M. Cvetic, G. Gibbons, D. Kubiznak and C. Pope, Phys. Rev. D **84**, 024037 (2011).
B. P. Dolan, Class. Quantum Gravit. **28**, 125020 (2011);
B. P. Dolan, Class. Quantum Gravit. **28**, 235017 (2011);
D. Kubiznak and R. B. Mann, JHEP **07**, 033 (2012);
B. P. Dolan, \[arXiv:1209.1272\].
C. Niu, Y. Tian and X. Wu, Phys. Rev. D **85**, 024017 (2012).
A. Chamblin, R. Emparan, C. Johnson and R. Myers, Phys. Rev. D **60**, 064018 (1999);
A. Chamblin, R. Emparan, C. Johnson and R. Myers, Phys. Rev. D **60**, 104026 (1999).
R. Banerjee and D. Roychowdhury, JHEP **1111**, 004 (2011).
P. A. M. Dirac, *Lectures on Quantum Mechanics*, Yeshiva University, Belfer Graduate School of Science, New York (1964).
Z. Bialynicka-Birula and I. Bialynicka-Birula, Phys. Rev. D **2**, 2341 (1970).
W. Heisenberg and H. Euler, Z. Phys. **98**, 714 (1936); *Translation by*: W. Korolevski and H. Kleinert, *Consequences of Dirac’s Theory of the Positron*, \[physics/0605038\];
H. Yajima and T. Tamaki, Phys. Rev. D **63**, 064007 (2001).
D. H. Delphenich, *Nonlinear electrodynamics and QED*, \[arXiv: hep-th/0309108\];
D. H. Delphenich, *Nonlinear optical analogies in quantum electrodynamics*, \[arXiv: hep-th/0610088\].
J. Schwinger, Phys. Rev. **82**, 664 (1951).
P. Stehle and P. G. DeBaryshe, Phys. Rev. **152**, 1135 (1966).
Y. S. Myung, Y. W. Kim and Y. J. Park, Phys. Rev. D **78**, 044020 (2008);
S. Fernando, Phys. Rev. D **74**, 104032 (2006);
O. Miskovic and R. Olea, Phys. Rev. D **77**, 124048 (2008);
Y. S. Myung, Y. W. Kim and Y. J. Park, Phys. Rev. D **78**, 084002 (2008);
R. Banerjee and D. Roychowdhury, Phys. Rev. D **85**, 044040 (2012);
R. Banerjee and D. Roychowdhury, Phys. Rev. D **85**, 104043 (2012).
R. G. Cai and Y. W. Sun, JHEP **09**, 115 (2008);
X. H. Ge, Y. Matsuo, F. W. Shu, S. J. Sin and T. Tsukioka, JHEP **10**, 009 (2008).
S. Gangopadhyay and D. Roychowdhury, JHEP **1205**, 002 (2012);
D. Roychowdhury, Phys. Rev. D **86**, 106009 (2012);
S. Gangopadhyay and D. Roychowdhury, JHEP **1205**, 156 (2012).
S. Gunasekaran, R. B. Mann and D. Kubiznak, \[arXiv:1208.6251\].
Y. Kats, L. Motl and M. Padi, JHEP **0712**, 068 (2007);
D. Anninos and G. Pastras, JHEP **0907**, 030 (2009);
R. G. Cai, Z. Y. Nie and Y. W. Sun, Phys. Rev. D **78**, 126007 (2008);
N. Seiberg and E. Witten, JHEP **09**, 032 (1999);
E. Fradkin and A. Tseytlin, Phys. Lett. B **163**, 123 (1985);
R. Matsaev, M. Rahmanov and A. Tseytlin, Phys. Lett. B **193**, 205 (1987);
E. Bergshoff, E. Sezgin, C. Pope and P. Townsend, Phys. Lett. B **188**, 70 (1987);
A. Tseytlin, Nucl. Phys. B **276**, 391 (1986);
D.J. Gross and J. H. Sloan, Nucl. Phys. B **291**, 41 (1987).
M. Hassaine and C. Martinez, Phys. Rev. D **75**, 027502 (2007);
S. H. Hendi and H. R. Rastegar-Sedehi, Gen. Relativ. Gravit. **41**, 1355 (2009);
S. H. Hendi, Phys. Lett. B **677**, 123 (2009);
M. Hassaine and C. Martinez, Class. Quantum Gravit. **25**, 195023 (2008);
H. Maeda, M. Hassaine and C. Martinez, Phys. Rev. D **79**, 044012 (2009);
S. H. Hendi and B. Eslam Panah, Phys. Lett. B **684**, 77 (2010);
S. H. Hendi, Prog. Theor. Phys. **124**, 493 (2010);
S. H. Hendi, Eur. Phys. J. C **69**, 281 (2010);
S. H. Hendi, Phys. Rev. D **82**, 064040 (2010).
J. Jing, Q. Pan and S. Chen, JHEP **1111**, 045 (2011);
D. Roychowdhury, Phys. Lett. B **718**, 1089 (2013).
S. H. Hendi, Phys. Lett. B **690**, 220 (2010);
D. J. Gross and J. H. Sloan, Nucl. Phys. B **291**, 41 (1987);
W. A. Chemissany, M. de Roo and S. Panda, JHEP **08**, 037 (2007).
J. P. Gauntlett, R. C. Myers and P. K. Townsend, Class. Quantum Gravit. **16**, 1 (1999).
M. H. Dehghani, C. Shakouri and M. H. Vahidinia, Phys. Rev. D. **87**, 084013 (2013).
S. H. Hendi, Eur. Phys. J. C **71**, 1551 (2011).
N. Goldenfeld, *“Lectures on Phase Transitions and the Renormalization Group,”* (Westview Press, New York, 1992).
S. W. Wei and Y. X. Liu, Phys. Rev. D **87**, 044014 (2013);
R. G. Cai, L. M. Cao, L. Li and R. Q. Yang, \[arXiv:1306.6233\].
[^1]: email address: hendi@shirazu.ac.ir
[^2]: email address: vahidinia@shirazu.ac.ir
---
abstract: 'We present 237 new spectroscopically confirmed pre-main-sequence K and M-type stars in the young Upper Scorpius subgroup of the Sco-Cen association, the nearest region of recent massive star formation. Using the Wide-Field Spectrograph at the Australian National University 2.3m telescope at Siding Spring, we observed 397 kinematically and photometrically selected candidate members of Upper Scorpius, and identified new members by the presence of Lithium absorption. The HR-diagram of the new members shows a spread of ages, ranging from $\sim$3-20Myr, which broadly agrees with the current age estimates of $\sim$5-10Myr. We find a significant range of Li 6708Å equivalent widths among the members, and a minor dependence of HR-diagram position on the measured equivalent width of the Li 6708Å line, with members that appear younger having more Lithium. This could indicate the presence of either populations of different ages, or a spread of ages in Upper Scorpius. We also use Wide-Field Infrared Survey Explorer data to infer circumstellar disk presence in 25 of the members on the basis of infrared excesses, including two candidate transition disks. We find that 11.2$\pm$3.4% of the M0-M2 spectral type (0.4-0.8M$_{\odot}$) Upper Sco stars display an excess that indicates the presence of a gaseous disk.'
author:
- |
\
\
bibliography:
- 'master\_reference.bib'
title: 'New Pre-main-Sequence Stars in the Upper Scorpius Subgroup of Sco-Cen'
---
stars: pre-main-sequence - stars: formation - open clusters and association: individual: Sco-Cen - surveys - protoplanetary disks
Introduction {#intro}
============
The Scorpius-Centaurus-Lupus-Crux Association (Sco OB2, Sco-Cen) is the nearest location to the Sun with recent high-mass star formation [@zeeuw99]. Young OB associations, such as Sco-Cen, provide an incredible laboratory in the form of a primordial group of stars directly after formation, which can be exploited in the study of the output of star formation, including searches for young exoplanets. The obvious prerequisite for such study is a level of completeness in the identification of association members that is not yet attained in Sco-Cen in any mass regime other than the most massive B-type stars. Sco-Cen contains approximately 150 B-type stars [@myfirstpaper] which have typically been split into three subgroups: Upper Scorpius, Upper-Centaurus-Lupus (UCL) and Lower-Centaurus-Crux (LCC), with only the B, A and F-type membership of Sco-Cen being considered relatively complete, with some 800 members. Even in this high-mass regime, there is expected to be a $\sim$30% contamination by interlopers in the kinematic membership selections, mainly due to the lack of precision radial velocity measurements for these objects [@myfirstpaper]. Additionally, in light of the upcoming high-precision GAIA proper motions and parallaxes, a well-characterised, spectroscopically confirmed Sco-Cen membership will be instrumental in illuminating the substructure of the association.
Unfortunately, Sco-Cen is poorly characterised for its proximity, the reason for which is the enormous area of sky the association inhabits at low Galactic latitudes ($\sim80^\circ\times25^\circ$ or $\sim150\times50$pc). IMF extrapolation from the high-mass members implies, with any choice of IMF law, that Sco-Cen is expected to have $\sim 10^4$ PMS G, K and M-type members, most of which are, as yet, undiscovered. This implies that the vast majority of PMS ($<$20Myr) stars in the solar neighbourhood are in Sco-Cen [@preibisch02], making Sco-Cen an ideal place to search for young, massive planetary companions. Although some work has been done in illuminating the lower-mass population of Sco-Cen (see @preibisch08), the late-type membership of Sco-Cen cannot be considered complete in any spectral-type or colour range. A more complete picture of the late-type membership of Sco-Cen is the primary requirement for determining the age spread, structure, and star formation history of the association, for illuminating the properties of star formation, and for embarking on further searches for young exoplanets to better define their population statistics.
The age of the Sco-Cen subgroups has been contentious. Upper Scorpius has long been considered to be $\sim$5Myr old, however recent work has shown that it may be as old as 11Myr [@geus92; @pecaut12]. Similarly, B, A and F-type UCL and LCC members have main-sequence turn off/on ages of $\sim16-18$Myr, while studies of the incomplete sample of lithium-rich G, K and M-type members show a variety of mass-dependent age estimates. The HR-diagram age for the known K-type stars in UCL and LCC is $\sim$12Myr, the few known M-type stars indicate a significantly younger age of $\sim$4Myr, most likely due to a bias produced by a magnitude limited sample, and the G-type members have an age of $\sim$17Myr, which is consistent with the more massive stars [@preibisch08; @song12]. There is also a positional trend in the age of the PMS stars of the older subgroups, with stars closer to the Galactic Plane appearing significantly younger than objects further north. This is almost certainly the result of as yet undiscovered and un-clarified substructure within the older subgroups, which may have a very complex star-formation history.
The above is clear motivation for the identification of the full population of the Sco-Cen association, a task that will require significant observational and computational effort to complete. In this paper, we describe a new search for PMS members of the Upper Scorpius region of the Sco-Cen association. We have used statistical methods to select a sample of likely Upper Scorpius members from all-sky data, and have conducted a spectroscopic survey to determine youth and membership in the Sco-Cen association using the Wide-Field Spectrograph instrument at the Australian National University 2.3m telescope.
Selection of Candidate Members
==============================
We have selected candidate Upper Scorpius members using kinematic and photometric data from UCAC4, 2MASS, USNO-B and APASS [@ucac4; @2mass; @usnob; @apass]. A purely kinematic selection of the low-mass members of Sco-Cen is not sufficient to assign membership to G, K and M-type stars because the quality of the astrometric data available would produce an interloper contamination much higher than would be acceptable for future studies using Sco-Cen as an age benchmark. In order to clearly separate young Upper Scorpius members from field stars, spectroscopic follow-up is needed to identify stellar youth indicators. We employed two separate selection methods to prioritise targets based on kinematic and photometric data.
The first selection was based on the Bayesian Sco-Cen membership selection of @myfirstpaper, which uses kinematic and spatial information to assign membership probabilities. We further developed this method to apply to K and M-type stars, in order to properly treat the absence of a parallax measurement. We took the proper motions from the UCAC4 catalog [@ucac4] and photometry from 2MASS and APASS [@2mass; @apass], and used the photometry and a pre-main-sequence isochrone [@siess00] to estimate each candidate member’s distance. We then treated the proper motion and estimated distance together to calculate the membership probability. This selection was magnitude limited, covering all stars in the UCAC4 catalog with 10$<$V$<$16, and comprised $\sim$2000 candidate members with membership probability greater than 2%. For a more complete explanation of the Bayesian selection, including information from [@myfirstpaper] and the changes adopted for use with the K and M-type star data, see Appendix \[bayesapp\].
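The distance-modulus step of this estimate can be sketched as follows. This is a minimal illustration only: the magnitudes are hypothetical, extinction is neglected, and the full Bayesian treatment (including the isochrone interpolation) is described in Appendix \[bayesapp\].

```python
def photometric_distance_pc(m_app, M_abs):
    """Distance (pc) from the distance modulus m - M = 5*log10(d/10 pc),
    neglecting extinction.  M_abs would come from a pre-main-sequence
    isochrone evaluated at the star's observed colour."""
    return 10.0 ** ((m_app - M_abs + 5.0) / 5.0)

# Hypothetical candidate: apparent V = 14.0 whose colours place it at an
# isochrone absolute magnitude M_V = 8.2 (illustrative values only).
d = photometric_distance_pc(14.0, 8.2)
assert 140.0 < d < 150.0   # ~145 pc, plausible for an Upper Scorpius member
```

An estimated distance of this kind, combined with the UCAC4 proper motion, is what feeds the membership probability described above.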
The second selection was based on the selection used for the Coma-Ber cluster in the study of @kraushillenbrand_comaber, and was designed to select targets for the Upper Scorpius field of the Kepler K2 campaign. Targets which were both placed above the main-sequence based on photometric distance estimates, and had proper-motions consistent with Upper Scorpius membership were deemed to be potential members and included in the observing sample. This selection spanned F to late M-type stars, with targets falling on Kepler silicon prioritized for spectroscopic follow-up. This selection is considerably more conservative than the Bayesian selection, and includes a much larger number of candidates. Where the two selections overlap, we have $>$90% of the Bayesian selected stars included in the sample. Our final, combined sample was then drawn from both of the above selections; we include a candidate in the final target list if it was identified by either method. Figure \[selection\_pm\] displays the proper motions of the selected stars from both samples.
![The proper motions of the candidate Upper Scorpius members selected by both the @kraushillenbrand_comaber selection method (black points) and the Bayesian method (purple circles).[]{data-label="selection_pm"}](sample_pm-eps-converted-to.pdf){width="45.00000%"}
In light of the currently ongoing Galactic Archaeology Survey, using the HERMES spectrograph on the Anglo-Australian Telescope [@zuckerhermes2012], which will obtain high-resolution optical spectra in the coming years for all stars in the Sco-Cen region of sky, down to V=14, we have decided to primarily observe targets in our sample fainter than this limit. While our selection methods identified candidate Upper Scorpius stars across the entire subgroup $(342^\circ<l<360^\circ, 10^\circ<b<30^\circ)$, in our observations we strongly favored candidate members which fell upon the Kepler K2 field 2 detector regions, which cover the majority of the centre of Upper Scorpius with rectangular windows. As such, the spatial distribution of this sample will not reflect the true substructure of Upper Scorpius. We observed all the targets in our K2 sample with Kepler interpolated V magnitudes of $(\sim13.5<V_{jk}<15)$, as well as some further brighter targets. In total, we obtained optical spectra for 397 candidate Upper Scorpius K and M-type stars. The full list of observed candidate targets, including both those stars determined to be members and non-members, can be found in Table \[obstable\_lowmass\], along with proper motions, computed Bayesian membership probability, integration time, and SNR in the continuum near H-$\alpha$.
------------- ------------- ------- ------- ------- -------------- -------------- -------- -------------------- ------- ----- ----
R.A. Decl. V K $\mu_\alpha$ $\mu_\delta$ T
(J2000.0) (J2000.0) MJD (mag) (mag) (mas/yr) (mas/yr) Source P$_{\mathrm{mem}}$ (sec) SNR M?
15 39 06.96 -26 46 32.1 56462 12.5 8.7 -35.3 -41.7 a 31 90 131 Y
15 37 42.74 -25 26 15.8 56462 13.5 9.7 -14.6 -26.7 a 85 90 80
15 35 32.30 -25 37 14.1 56462 11.7 8.4 -9.0 -22.9 a 69 90 116
15 41 31.21 -25 20 36.3 56462 10.0 7.2 -16.9 -28.7 a 86 90 151 Y
------------- ------------- ------- ------- ------- -------------- -------------- -------- -------------------- ------- ----- ----
Spectroscopy with WiFeS
=======================
The Wide-Field Spectrograph (WiFeS) instrument on the Australian National University 2.3m telescope is an integral field, or imaging, spectrograph, which provides a spectrum for each of a number of spatial pixels across the field of view using an image-slicing configuration. The field of view of the instrument is 38$\times$25 arcseconds, and is made up of 25 slitlets which are each one arcsecond in width and 38 arcseconds in length. The slitlets feed two 4096$\times$4096 pixel detectors, one for the blue part of the spectrum and the other for the red, providing a total wavelength coverage of 330 - 900nm, depending on the specific gratings used for the spectroscopy. Each 15 micron pixel corresponds to 1$\times$0.5 arcseconds on sky.
There are a number of gratings offered to observers for use with WiFeS. For identification of Upper Scorpius members, we required intermediate-resolution spectra of our candidate members, with a minimum resolution of $\sim$3000 at the Li 6708Å line, and so selected the R7000 grating for the red arm and the B3000 grating for the blue arm, which was used solely for spectral-typing. This provided $\lambda / \Delta\lambda\sim7000$ spectra covering the lithium 6708Å and H-$\alpha$ spectroscopic youth indicators. A dichroic, which splits the red and blue light onto the two arms of the detector, can be positioned at either 4800Å or 5600Å. For the first three successful observing nights, we used the dichroic at 4800Å, which produced a single joined spectrum from 3600 to 7000Å. For the remaining 7 nights, we positioned the dichroic at 5600Å, which produces two separate spectra, with the blue arm covering 3600 to 4800Å and the red arm covering 5300 to 7000Å. This change was made to accommodate poor weather backup programs being simultaneously carried out, which will be the subject of future publications. To properly identify members, we required a $3\sigma$-detection of a 0.1Å equivalent width Li line, which corresponds to a signal-to-noise ratio of at least 30 per pixel. In order to achieve this, we took exposures of 5 minutes for R=13 stars (approximately type M3 in Upper Scorpius), and binned by 2 pixels in the y-axis, to create 1$\times$1" spatial pixels and reduce overheads. With overheads we were able to observe 10 targets an hour in bright time, or $\sim$80-90 targets per completely clear night.
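The quoted signal-to-noise requirement can be sanity-checked with a standard weak-line equivalent-width error estimate, $\sigma_{EW}\approx\Delta\lambda\sqrt{N_{\mathrm{pix}}}/\mathrm{SNR}$. Both this formula and the assumed number of pixels across the line are our assumptions, not taken from the text:

```python
import math

# Rough weak-line equivalent-width uncertainty (a Cayrel-style estimate;
# this formula and n_pix are assumptions for illustration, not from the
# paper): sigma_EW ~ dlam * sqrt(n_pix) / SNR.
dlam = 0.78          # Angstrom per pixel in the R7000 red arm
n_pix = 2            # assumed pixels spanned by the Li 6708A line at R~7000
snr = 30.0           # continuum signal-to-noise per pixel

sigma_ew = dlam * math.sqrt(n_pix) / snr
ew_limit = 3.0 * sigma_ew        # 3-sigma detection limit
assert ew_limit < 0.12           # ~0.11 A, consistent with the 0.1 A target
```

With these assumed numbers, SNR $\approx$ 30 per pixel is indeed roughly what a $3\sigma$ detection of a 0.1Å line requires.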
In total we obtained 18 nights of time using WiFeS, split over 2013 and 2014; however, the majority of the 2013 nights were unusable due to weather. Our first two observing runs, in June 2013 and April 2014, yielded one half-night of observations each, and our final observing run yielded seven partially clear nights. During our first two nights, we positioned the dichroic at 5600Å, and during the June 2014 observing run, positioned the dichroic at 4800Å, which provides more of the red arm, because this mode was deemed better for obtaining radial velocities of B, A and F-type Sco-Cen stars, which we observed as backup targets during poor weather, and will be the subject of a future publication.
Data Reduction
==============
The raw WiFeS data were initially reduced with a pre-existing Python data reduction software package called the “WiFeS PyPeline”, which was provided to WiFeS observers. The purpose of the software is to transform the CCD image, which consists of a linear spectrum for each spatial pixel of the WiFeS field of view, into a data cube. This involves bias subtraction, flat-fielding, bad pixel and cosmic ray removal, sky subtraction, wavelength calibration, flux calibration, reformatting into the cube structure, and interpolation across each pixel to produce a single wavelength scale for the entire image. On each night, we observed at least one flux standard from @bessellflux99, which are included in the data reduction pipeline as flux calibrator objects. Once this process is complete, the user is left with a single cube for each object observed, with dimensions 25”$\times$38”$\times$3650 wavelength units. For the grating resolutions and angles used in our observations, we obtained spectral coverage from $3200-5500$Å in increments of 1.3Å in the blue arm, and $5400-7000$Å in increments of 0.78Å in the red arm.
\
Following the standard WiFeS reduction procedure, we continued with a further custom reduction, the aim of which was to measure the centroid position of the target object at each wavelength, such that the presence of H-$\alpha$-emitting low-mass stellar companions, outflows, and H-$\alpha$-bright planetary-mass companions could be detected by the measurement of a wavelength-dependent centroid shift. This consisted of determining a best-fit point spread function (PSF) model for the spatial image in a clean section of the spectrum, and then measuring the centroid shift of this PSF at each wavelength along the spectrum. Additional benefits of this approach are a more accurate sky subtraction, and an integrated spectrum of each object, which can be used to measure equivalent widths of key spectral lines. The results of the centroid measurements and any detected companions will be reported in a further publication.
We first cut out a 10” by 10” wide window (10$\times$10 pixels), centered on the target. The vast majority of the stellar flux is contained within the central 3” by 3” region of the windowed image, and so the adopted width of 10” allows a clear region of background around the target; Figure \[subim\_example\] provides an illustration of the data. We then fit a Moffat point spread function [@racine96] to a region of the spectral continuum which does not include any spectral features, but is close to the H-$\alpha$ line. This region consisted of 400 spectral units, spanning $6368-6544$Å. Figure \[spect\_example\] displays the spectral region used for the initial PSF fit, as well as the H-$\alpha$ and Li 6708Å lines for one target in our sample, 1RXS J153910.3-264633, which shows strong indications of youth.
The particular model that we fit to the spatial image is given by:
$$\mathrm{PSF} = \mathrm{S} + \mathrm{F} \frac{(2^{\frac{1}{\beta}}-1)(\beta-1)}{\pi w^2(1 + (2^{\frac{1}{\beta}}-1)(\theta/w)^2)^{\beta}},
\label{moffat}$$
where S indicates the sky contribution to the flux, $\beta$ is an integer parameter that determines the strength of the wings of the Moffat PSF, $\theta$ is the distance from the centre of the profile, $w$ is the half width of the Moffat PSF, and F is the stellar flux. Given that we have a two dimensional PSF, and that each dimension has a different Moffat function half width, we require two different values of $w$. We create this two dimensional Moffat profile by scaling $\theta$ appropriately;
$$\theta^2 = w_x^2(x-x_0)^2 + w_y^2(y-y_0)^2,
\label{moff_scale}$$
where $w_x$ and $w_y$ are the PSF width parameters in each dimension, $(x,y)$ is the position of a given point on the image, and $(x_0,y_0)$ is the image centroid. Inserting this value of $\theta^2$ into a Moffat function with width $w=1$, so that $(\theta/w)^2 = \theta^2$, thus produces the desired asymmetric two-dimensional profile.
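As a concrete illustration, the model of equations \[moffat\] and \[moff\_scale\] can be written down directly. This is a minimal sketch rather than the authors' code; equation \[moff\_scale\] is interpreted as defining $\theta^2$, so that the profile stays quadratic in the pixel offsets, and the normalization follows equation \[moffat\] with $w=1$ after the scaling.

```python
import numpy as np

def moffat_psf(x, y, x0, y0, wx, wy, flux, sky, beta=4.0):
    """Elliptical Moffat profile: Eq. [moffat] evaluated with the scaled
    coordinate of Eq. [moff_scale] interpreted as theta^2."""
    c = 2.0 ** (1.0 / beta) - 1.0
    # Eq. [moff_scale]: anisotropic scaling of the radial coordinate.
    theta_sq = wx ** 2 * (x - x0) ** 2 + wy ** 2 * (y - y0) ** 2
    # Normalization as in Eq. [moffat] with w = 1 after the scaling.
    return sky + flux * c * (beta - 1.0) / np.pi / (1.0 + c * theta_sq) ** beta
```

At the centroid the profile peaks at $S + F\,(2^{1/\beta}-1)(\beta-1)/\pi$, and for $\beta=4$ the wings fall off as the eighth power of distance, much shallower than a Gaussian.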
![The full WiFeS integrated spectrum produced by first processing with the WiFeS Pypeline, and then our spectro-astrometric analysis, for the stars USco 48, a known member of the Upper Scorpius subgroup, and 2MASS J16232454-1717270, a high-probability candidate and a new member identified in our survey. The USco 48 spectrum is an example of the data from the 4800Å dichroic setup, and the 2MASS J16232454-1717270 spectrum an example of the 5600Å dichroic setup.[]{data-label="fullspec_example"}](fullspec_examples-eps-converted-to.pdf){width="48.00000%"}
We found that $\beta=4$, a value which describes most telescope PSFs, yielded the closest fit to our data. We also attempted to fit a Gaussian profile to the spatial images, in the same format as the Moffat profile described in equation \[moffat\]; however the Gaussian model produced consistently poorer fits to the data than the Moffat model, particularly in the wings of the PSF, with typical values of $\chi^2_r\sim 4$ for the Gaussian model fit and $\chi^2_r\sim 2$ for the Moffat model. On the basis of the goodness of fit difference, we adopted the Moffat model exclusively in our analysis. For each target observed, we used the continuum spectral region between $6368-6544$Å to determine the parameters of the Moffat PSF that most closely reproduced the spatial images. We then fixed the half width parameters in each dimension, and fit our PSF model to each individual wavelength element image along the spectrum to determine $S$, $F$ and the centroid position for each wavelength. This process provides two useful characteristics, the first of which is the integrated spectrum $(F)$ of the target (see Figure \[fullspec\_example\]), with the sky component $(S)$ subtracted out. Using the cleaned output spectra, we then computed equivalent widths of both the Li 6708Å and H-$\alpha$ lines for each observed star. The second useful characteristic is the centroid position of the star image at each wavelength interval in the spectrum. This can be used to detect accreting stellar and substellar companions by the measurement of a centroid shift in the H-$\alpha$ line image. An analysis of the centroid positions will be presented in a future publication.
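The per-wavelength fitting step described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: $\beta$ and the widths are fixed (here to arbitrary assumed values standing in for the continuum fit), and because $S$ and $F$ enter the model linearly, each trial centroid needs only a linear least-squares solve; the coarse centroid grid search stands in for the actual optimizer used.

```python
import numpy as np

# Fixed-width elliptical Moffat (beta = 4); the per-slice free parameters
# are sky S, flux F, and the centroid (x0, y0), as described in the text.
BETA, WX, WY = 4.0, 0.8, 0.6   # widths assumed here; fixed from the continuum fit
C = 2.0 ** (1.0 / BETA) - 1.0

def psf_shape(xx, yy, x0, y0):
    """Unit-flux Moffat shape, with theta^2 scaled as in Eq. [moff_scale]."""
    theta_sq = WX ** 2 * (xx - x0) ** 2 + WY ** 2 * (yy - y0) ** 2
    return C * (BETA - 1.0) / np.pi / (1.0 + C * theta_sq) ** BETA

def fit_slice(img, grid_step=0.05):
    """Fit one wavelength slice: grid-search the centroid around the peak
    pixel; S and F are linear, so each trial is a linear least squares."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    cy, cx = np.unravel_index(np.argmax(img), img.shape)
    best = None
    for x0 in np.arange(cx - 1.0, cx + 1.0 + 1e-9, grid_step):
        for y0 in np.arange(cy - 1.0, cy + 1.0 + 1e-9, grid_step):
            A = np.column_stack([np.ones(img.size),
                                 psf_shape(xx, yy, x0, y0).ravel()])
            (s, f), *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
            r = np.sum((A @ np.array([s, f]) - img.ravel()) ** 2)
            if best is None or r < best[0]:
                best = (r, s, f, x0, y0)
    return best[1:]   # S, F, x0, y0

# Synthetic 10x10 pixel slice with a known centroid, then recover it.
yy, xx = np.mgrid[0:10, 0:10].astype(float)
truth = (2.0, 500.0, 4.3, 5.1)   # S, F, x0, y0
img = truth[0] + truth[1] * psf_shape(xx, yy, truth[2], truth[3])
S, F, x0, y0 = fit_slice(img)
```

Running this over every slice of the cube yields both the sky-subtracted integrated spectrum $F(\lambda)$ and the sub-pixel centroid track used for the spectro-astrometry.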
Spectral Typing
---------------
We spectral type the reduced spectra created by the centroid-fitting procedure using spectral template libraries as reference. It is also important to incorporate extinction into the spectral-typing procedure for Upper Scorpius, given the typical values of $0.5<$A$_V<2.0$. If an extinction correction is omitted, spectral typing will produce systematically later spectral types for the members. A combination of two template libraries was chosen for the spectral typing, with spectral types earlier than M0 taken from the @pickles98 spectral template library, and the M-type templates taken from the more recent @bochanski_templates.
To carry out the spectral typing, we first computed reduced $\chi^2$ values for each data spectrum on a two-dimensional grid of interpolated template spectra and extinction, with spacing of half a spectral sub-type and 0.1 magnitudes in $E(B-V)$. This was done by first interpolating the template spectra onto the wavelength scale of the data, and then applying the particular amount of extinction according to the @savage_mathis79 extinction law. We also removed the H-$\alpha$ region in the data spectra, because the significantly stronger H-$\alpha$ emission prevalent in young stars is not adequately reproduced by the templates. The spectral type–extinction point on the grid with the smallest reduced $\chi^2$ was then used as a starting point for least-squares fitting with the IDL fitting package MPFIT. The fitting procedure used the same methodology as the grid calculations, with the addition of interpolation between template spectra to produce spectral sub-type models for use in the fitting.
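A skeletal version of this grid search might look like the following. Everything here is illustrative: the two toy "templates" and the simple power-law reddening function stand in for the real template libraries and the @savage_mathis79 law, and the reduced $\chi^2$ is replaced by a plain sum of squared residuals.

```python
import numpy as np

wave = np.linspace(5400.0, 7000.0, 200)   # toy wavelength grid (Angstroms)

def redden(flux, ebv):
    # Toy wavelength-dependent attenuation standing in for the
    # Savage & Mathis (1979) extinction law (illustrative only).
    return flux * 10.0 ** (-0.4 * 3.1 * ebv * (5500.0 / wave))

# Two toy "templates" standing in for the Pickles/Bochanski libraries,
# already interpolated onto the data wavelength scale.
templates = {"M0": np.ones_like(wave),
             "M2": 1.0 + 0.3 * (wave - 5400.0) / 1600.0}

data = redden(templates["M2"], 0.3)        # pretend observed spectrum
ebv_grid = np.round(np.arange(0.0, 1.01, 0.1), 2)

# Grid search: pick the (template, E(B-V)) pair minimizing the residuals.
chi2, best_type, best_ebv = min(
    (float(np.sum((data - redden(t, e)) ** 2)), name, e)
    for name, t in templates.items() for e in ebv_grid)
```

In the real procedure the grid minimum then seeds a continuous least-squares fit (MPFIT), with interpolation between templates supplying the half-sub-type models.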
We find the limiting factor in spectral-typing our young Sco-Cen stars to be the fact that the spectral template libraries are built from field stars, and so are not ideal for fitting young, active stars. Hence, while we typically have spectral type fits better than half a spectral sub-type, we report spectral types to the nearest half sub-type, and values of A$_V$ with typical uncertainties of 0.2 magnitudes.
The New Members
===============
------- ------------------- ------------- ------------- -------- ------------------- --------------- ------------------------------------ ------ ------------------
  Name    2MASS               R.A. (J2000.0)   Decl. (J2000.0)   EW(Li) (Å)   $\sigma_{\mathrm{EW(Li)}}$ (Å)   EW(H$\alpha$) (Å)   $\sigma_{\mathrm{EW(H\alpha)}}$ (Å)   SpT    A$_{\mathrm{V}}$ (mag)
RIK-1 J15390696-2646320 15 39 06.96 -26 46 32.1 0.46 0.02 -1.22 0.03 M0.5 0.2
RIK-2 J15413121-2520363 15 41 31.21 -25 20 36.3 0.40 0.01 -2.70 0.04 K2.5 0.1
RIK-3 J15422621-2247458 15 42 26.21 -22 47 46.0 0.46 0.04 -3.08 0.07 M1.5 0.3
RIK-4 J15450970-2512430 15 45 09.71 -25 12 43.0 0.61 0.02 -2.02 0.04 M1.5 0.4
------- ------------------- ------------- ------------- -------- ------------------- --------------- ------------------------------------ ------ ------------------
Table \[obs\_res\_tab\] lists both the Li 6708Å and H$\alpha$ equivalent widths, and the estimated spectral types and extinctions for the new Upper Sco members, and Figure \[newmems\_pos\] shows the spatial positions of the new members. We have defined a star as an Upper Scorpius member if the measured equivalent width of the Li 6708Å line was more than $1\sigma$ above 0.1Å. While this Li threshold is low, it is significantly larger than the field Li absorption, and is in keeping with previous surveys. The use of this threshold is further justified given the effects of episodic accretion on Li depletion in the latest models [@baraffe10]. In general, the vast majority of the identified members have Li 6708Å equivalent widths significantly larger than 0.2Å and so are bona fide young stars. In total we identify 257 stars as members based on their Li 6708Å absorption, 237 of which are new.
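The membership criterion stated above reduces to a one-line test; the 0.1Å threshold and the $1\sigma$ margin are taken directly from the text, and the example values are those of RIK-1 from Table \[obs\_res\_tab\].

```python
def is_li_member(ew_li, sigma_ew, threshold=0.1):
    """Membership test from the text: EW(Li) must exceed the
    0.1 Angstrom threshold by more than 1 sigma."""
    return ew_li - sigma_ew > threshold

# RIK-1 from Table [obs_res_tab]: EW(Li) = 0.46 +/- 0.02 A -> member.
# A marginal star at 0.11 +/- 0.02 A would fail the test.
```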
The proper motions of the new members, which were calculated from various all-sky catalogs or taken from the UCAC4 catalog, are shown in Figure \[pm\_all\]. The members have proper motions that overlap those of the Upper Scorpius B, A and F-type members (blue crosses), although a significant spread is seen. This is consistent with the average uncertainty of $\sim$2-3mas/yr for the K and M-type proper motions.
Figure \[ewli\_all\] displays the lithium equivalent widths for the identified members as a function of spectral type. The majority of our members are M-type, and we see a sequence of equivalent width with a peak at spectral type M0, and a systematically smaller equivalent width in the M2-M3 range compared to earlier or later M-type members. This is expected, as the mid-M range is modelled to have a faster lithium depletion timescale [@dantona94].
Interestingly, we also observe a clear spread in the equivalent width of the lithium 6708Å line. Figure \[ewli\_zoom\] shows just the M0 to M5 spectral type range. At each spectral type we see a typical spread of $\sim$0.4Å in Li equivalent width, and a median uncertainty in the equivalent width measurements of $\sim$0.03Å. This implies a $\sim$10$\sigma$ spread in EW(Li) at each spectral type. Whether or not this spread is caused by an age spread in Upper Scorpius is difficult to determine: we have examined the behaviour of EW(Li) as a function of spatial position, both in equatorial and Galactic coordinate frames, and found no significant trend. We note that a similar spread of EW(Li) for M-type Upper Scorpius members was observed by @preibisch01. Given the lack of correlation with spatial position, if the EW(Li) spread is caused by an age spread among the members, then the different age populations overlap spatially and may not be resolvable without sub-milliarcsecond parallaxes.
In Figure \[ewha\_all\], we display the measured H-$\alpha$ equivalent widths for the members. The majority of the PMS members show some level of H-$\alpha$ emission, with a clear sequence of increasing emission with spectral type. In combination with the presence of lithium, this is a further indicator of the youth of these objects. Of our 257 members, $\sim$95% show H-$\alpha$ emission with 1Å$<$EW(H-$\alpha$)$<$10Å, and only 11 of the members do not show emission in H-$\alpha$. All 11 of these members without H-$\alpha$ emission are earlier than M0 spectral type. There are also 35 non-members with H-$\alpha$ emission. Given the values of EW(H$\alpha$) for the M-type members we have identified, the majority of them appear to be weak-lined T-Tauri stars and $\sim$10% are Classical T-Tauri stars (CTTS) with EW(H$\alpha$) $>$ 10Å. This proportion agrees with previous studies of Upper Scorpius members [@walter94; @PZ99; @preibisch01], which find a CTTS fraction of between 4 and 10% for K and M-type Upper Scorpius stars.
The Efficiency of the Bayesian Selection Algorithm
==================================================
The selection methods we have used to create our target list provide a significant improvement in member detection rate when compared with what can be achieved from simple color-magnitude cuts. We see a large overall identification rate of $\sim$65% for our sample of observed stars. Using the membership probabilities computed for the stars we have observed, we expected that 73$\pm$7% of the observed stars would be members, which agrees with the observed member fraction of 68%. We also find that, as a function of computed membership probability, the fraction of members identified among the sample behaves as expected. Figure \[bayes\_hist\] displays the membership fraction as a function of probability.
Given that our probabilities have been empirically verified to provide a reasonable picture of Upper Scorpius membership, we can derive an estimate for the expected number of M-type members in the subgroup by summation of the probabilities. We find that the total expected number of Upper Scorpius members in the $\sim$0.2 to 1.0[M$_{\odot}$ ]{}range, or late-K to $\sim$M5 spectral type range, is $\sim$2100$\pm$100 members. This agrees with initial mass function estimates which indicate that there are $\sim$1900 members with masses smaller than 0.6[M$_{\odot}$ ]{}in Upper Scorpius [@preibisch02].
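Summing the membership probabilities as described can be sketched as follows; the uncertainty shown is the simple Bernoulli-sum (Poisson-binomial) standard deviation, which is an assumption, since the text does not state how its quoted uncertainty was derived.

```python
import math

def expected_members(probs):
    """Expected number of members, and its standard deviation, treating
    each star as an independent Bernoulli trial with the given
    membership probability."""
    n = sum(probs)
    var = sum(p * (1.0 - p) for p in probs)   # Var of a sum of Bernoullis
    return n, math.sqrt(var)

# e.g. 100 stars, each with a 73% membership probability:
n, sigma = expected_members([0.73] * 100)   # 73 expected, sigma ~ 4.4
```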
![$\mathrm{EW}(\mathrm{H-}\alpha)$ for the new members (black) and the non-members (red). The members follow a clear sequence with H-$\alpha$ increasing with spectral-type. In the K spectral types, we see that non-members show H-$\alpha$ absorption which is generally stronger than that seen in the members, some of which show weak emission.[]{data-label="ewha_all"}](EWHA_all_linear-eps-converted-to.pdf){width="47.00000%"}
The HR-Diagram of the Members
=============================
With the spectral types and extinctions we have determined for the members using the @bochanski_templates and @pickles98 spectral libraries, we can place them on an HR diagram in the model parameter space. There is significant variability in synthesized photometry between different models for PMS stars, making comparison in color-magnitude space difficult. Furthermore, the most reliable magnitudes for M-type stars are the near-IR 2MASS photometry, which show minimal variation in the M-type regime, where the PMS is near vertical. Instead, we use the spectral types and the empirical temperature scale and J-band bolometric corrections for 5-30Myr stars produced by @pecaut13, and we further correct for extinction using our fitted values of A$_V$ from the spectral typing process and the @savage_mathis79 extinction law. The resulting HR diagram can be seen in Figure \[cmd\_all\]. We have also superimposed five BT-Settl [@btsettl] isochrones of ages 1, 3, 5, 10 and 20Myr onto the HR diagram at the typical Upper Scorpius distance of 140pc [@myfirstpaper]. These particular models were chosen because they were used by @pecaut13 in the generation of their temperature scale, and so any relative systematic differences between the models and the temperature scale will most likely be minimized.
Upon initial inspection, it appears that for a given temperature range, the Upper Scorpius members inhabit a significant spread of bolometric magnitudes. This is most likely dominated by the distance spread of the Upper Scorpius subgroup, which has members at distances between 100 and 200pc, corresponding to a spread in bolometric magnitude of $\sim$1.5mag between the nearest and furthest reaches of Upper Scorpius. Using the distance distribution of the @myfirstpaper high-mass membership for Upper Scorpius, we find that the expected spread in bolometric magnitude due to distance which encompasses 68% of members is approximately $+0.33$ and $-0.54$ magnitudes. Similarly, unresolved multiple systems can bias the sample towards appearing younger, via an increase in bolometric magnitude of up to $\sim$0.7mag for individual stars.
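The quoted $\sim$1.5 mag spread follows directly from the distance modulus; a minimal check:

```python
import math

def dmag(d_pc, d_ref=140.0):
    """Bolometric-magnitude offset of a star at d_pc relative to the
    reference distance of 140 pc (distance-modulus difference)."""
    return 5.0 * math.log10(d_pc / d_ref)

# Members between 100 and 200 pc span 5 log10(2) ~ 1.5 mag in total.
spread = dmag(200.0) - dmag(100.0)
```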
![Fraction of stars identified as members plotted against membership probability computed with our Bayesian selection algorithm. The red line represents the ideal fraction of detected members. We see a very close agreement between the computed membership probability and the fraction of stars which were confirmed as members.[]{data-label="bayes_hist"}](bayes_results_hist-eps-converted-to.pdf){width="45.00000%"}
In the later spectral types, beyond $\log{T_{\mathrm{eff}}}=3.52$, we also begin to see the effects of the magnitude limit of our survey, which operated primarily in the range $13.5<$V$<15$, so that only the brightest, and hence nearest and potentially youngest, late M-type members in our original target list were identified, although significant Li depletion at these temperatures is not expected to occur until ages beyond 50Myr. Even with the distance spread blurring the PMS in Upper Scorpius, we can see that most of the earlier M-type members appear to be centered around the 5-10Myr age range.
We have also indicated the measured EW(Li) values for the members on the HR diagram as a color gradient, with darker color indicating a smaller EW(Li). The scale encompasses a range of $0.3<$EW(Li)$<0.7$Å, with values outside this range set to the corresponding extreme color. There is a marginal dependence of HR-diagram position on EW(Li): we see that, in particular for the earlier M-type members, the larger values of EW(Li) (light orange) are more clustered around the 3-5Myr position, while the smaller values of EW(Li) (dark red) are clustered closer to 5-10Myr. This could indicate the presence of a spread of ages, or populations of different ages, in the Upper Scorpius subgroup.
![HR diagram for the Upper Scorpius members we have identified, with bolometric corrections and effective temperatures taken from the @pecaut13 young star temperature-color scale. The blue lines are the BT-Settl isochrones [@btsettl] of ages 1, 3, 5, 10 and 20Myr placed at the typical distance to Upper Scorpius of 140pc. The color of each point indicates the measured EW(Li) for the star, with darker color indicating a lower EW(Li). The color range spans $0.3<$EW(Li)$<0.7$ linearly, with values outside this range set to the corresponding extreme color. The uncertainties are determined by the accuracy of our spectral typing methods, which is typically half a spectral sub-type.[]{data-label="cmd_all"}](mems_hrd-eps-converted-to.pdf){width="48.00000%"}
There is some other evidence of different age populations in the Upper Scorpius subgroup: the existence of very young B-type stars, such as $\tau$-Sco and $\omega$-Sco, which have well measured temperatures and luminosities that indicate an age of $\sim$2-5Myr [@simondiaz06], supports a young population in Upper Scorpius. The B0.5 binary star $\delta$-Sco is also likely to be quite young ($\sim$5Myr) [@code76]. @pecaut12 place it on the HR diagram at an age of $\sim$10Myr; however, due to the rapid rotation and possible oblate-spheroid nature of the primary, the photometric prescriptions for determining the effective temperature and reddening of the primary used by @pecaut12 are likely to fail for this object. The spectral type is more consistent with a temperature of $\sim$30000K. Additionally, the presence of other evolved B-type stars is evidence for an older population [@pecaut12]. Furthermore, the recent age estimate of 13Myr for the F-type members of Upper Scorpius by @pecaut12 further supports an older population in the subgroup. If the dependence of HR-diagram position on EW(Li) that we observe among our members is real, then this also supports multiple age populations in Upper Scorpius.
\
\
Disk Candidates
===============
We have also obtained Wide-field Infrared Survey Explorer (WISE) infrared photometry [@wise10], from the AllWISE version of the catalog, for the observed candidate members, in order to determine the prevalence of circumstellar disks among our new members. The identification of new populations of disk-bearing stars is valuable because it extends the current samples used in the study of disk property measurements and disk evolution. The AllWISE catalog provides photometry in four bands, W1, W2, W3 and W4, with effective wavelengths of 3.4, 4.6, 12 and 22$\mu$m respectively. The W2 and W3 photometry is effective for tracing the presence of an inner disk, while excess in the W4 band photometry can indicate the presence of a colder outer disk or a transitional disk.
We queried the AllWISE catalog at the positions of the 397 stars we observed from our sample, including the 237 new members, with a search radius of 5". The search returned 395 matches with varying levels of photometric quality. We then placed each star on three spectral type-color diagrams incorporating 2MASS [@2mass] K-band photometry: K-W2, K-W3, and K-W4. Past studies have used both K and W1 as the base photometry for building color-color diagrams [@carpenter06; @carpenter09; @rizzuto12; @luhman10; @luhman2012_disk]. Typically, the presence of a disk within $\sim3$AU of a host star increases the brightness at IR wavelengths, with $\sim$5$\mu$m being the approximate wavelength beyond which the disk dominates in brightness. Both the W1 and K bands are at wavelengths long enough that reddening is not a significant issue, but shorter than the expected point of disk domination. We found that examining the WISE bands relative to the K magnitude produced a better separation of disk-bearing stars from photospheric emission, and so we report the analysis in terms of this methodology.
Figure \[excess\_figs\] displays the three spectral type-color diagrams. We excluded any WISE photometry in a given band that was flagged as having a signal-to-noise ratio of $<4$, as a non-detection, or as being contaminated by any type of image artifact in the catalog. This resulted in the exclusion of 56, 10 and 312 objects in the W2, W3, and W4 bands respectively. The primary source of the exclusions for the W4 band was non-detection or low signal-to-noise at 22$\mu$m, and most of the exclusions in the W2 and W3 bands were due to contamination by image artifacts. To reduce contamination by extended sources we also excluded any object flagged as being near a known extended source or having significantly poor photometry fits; there were eight such objects. The WISE band images for these stars were then inspected visually to gauge the extent of contamination. We found that three of the objects were not significantly affected by the nearby extended source, and so included them in the analysis. After excluding these objects we were left with 333, 379 and 77 objects with photometry of sufficient quality in the W2, W3 and W4 bands respectively.
Due to the $\sim$10Myr age of Upper Scorpius, the majority of members no longer possess a disk, providing sufficient numbers of stars to clearly identify photospheric emission. Hence the photosphere color can be determined from the clustered sequence in the spectral type-color diagrams. We fit a straight line in each of the K-W3 and K-W4 colors, and a disjointed line in K-W2, and then place a boundary where the photospheric sequence ends. For K-W3 the boundary line is given by the points (K0,0.27) and (M5,0.8), and for K-W4 the points (K0,0.56) and (M5,1.6). The sloped part of the boundary line for K-W2 is defined by the points (M0,0.21) and (M5,0.46), and the flat section by K-W2 $=0.21$, for spectral types earlier than M0. These boundaries are shown as black lines in Figure \[excess\_figs\]. Stars with colors redder than these boundaries we deem to display an excess in the particular WISE band. Upon inspection, we find that our placement of the end of the photospheric sequence is closely consistent with that of [@luhman2012_disk]. In the K-W4 color, we find that for stars of spectral type later than $\sim$M2, the photospheric emission in W4 is undetectable by WISE.
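The boundary test can be sketched as follows for the K-W4 color. The numeric spectral-type coding (K0 $=0$, one unit per sub-type, M0 $=8$) is an assumed convention for illustration, while the two anchor points are those quoted in the text.

```python
def spt_code(spt):
    """Assumed numeric scale: K0 = 0, one unit per sub-type, M0 = 8."""
    return {"K": 0.0, "M": 8.0}[spt[0]] + float(spt[1:])

def kw4_boundary(spt):
    """Photosphere/excess boundary in K-W4: the straight line through
    (K0, 0.56) and (M5, 1.6) quoted in the text."""
    x0, y0 = spt_code("K0"), 0.56
    x1, y1 = spt_code("M5"), 1.60
    return y0 + (y1 - y0) * (spt_code(spt) - x0) / (x1 - x0)

def has_w4_excess(spt, k_minus_w4):
    """Flag a star as excess-bearing if it is redder than the boundary."""
    return k_minus_w4 > kw4_boundary(spt)
```

The K-W3 boundary works identically with anchors (K0, 0.27) and (M5, 0.8), and the K-W2 case adds the flat section at 0.21 for types earlier than M0.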
For those stars which displayed excesses in any combination of WISE bands, we visually inspected the images to exclude the possibility that the excesses could have been caused by the presence of close companions or nebulosity. We also found that in a few cases background structure in the W4 image could cause the appearance of an excess, although this effect was largely mitigated by our signal-to-noise cutoff. We rejected 23 of the excess detections after inspection, 12 of which were caused by background structure or nearby nebulosity, and 11 of which were due to blending with nearby stars. We further excluded any object which shows an excess in only the K-W2 color, as such an excess is likely produced by unresolved multiplicity. After these rejections, 27 stars remained with reliable excess detections. Additionally, a single object, 2MASS J16194711-2203112, displayed an excess in K-W3, but had a W4 detection with signal-to-noise of 3.5. Upon inspection of the corresponding W4 band image, we included it as exhibiting an excess in K-W4.
  2MASS   R.A. (J2000.0)   Decl. (J2000.0)   M   E   D   W2   W3   W4
We can classify the disk types by the amount of excess displayed in different colors compared to the photosphere. We adopt disk type criteria in the E(K-W4), E(K-W3) space consistent with those described in @luhman10 and @luhman2012_disk, which identify four different categories of disk: full or primordial disks, transition disks, evolved disks, and debris or evolved transition disks. Primordial or full disks exhibit strong emission across the entire IR spectral range. Transition disks are structurally different in that they have a significant cleared inner hole, which is visible as weaker emission at the shorter IR wavelengths, but still relatively bright emission at the longer IR wavelengths. Evolved disks do not show a gap in IR emission, but have started to become thinned and appear fainter at all IR wavelengths than unevolved full disks, with a steady decline in IR excess with age [@carpenter09]. Debris disks and evolved transition disks have similar IR SEDs, showing only weak excesses at the longer IR wavelengths. Figure \[wk4wk3\] shows both E(K-W4) and E(K-W3) for the stars identified as displaying an excess. The lines in Figure \[wk4wk3\] bound the different regions populated by the various disk types. We classify all objects with excesses in W3 and W4 beneath the dashed line to be debris or evolved transition disk candidates, and the objects above the solid line to be full disks. Stars with excesses between these two lines we classify as evolved disk candidates. Finally, we identify the two objects with a large W4 excess, but W3 excesses too small to be classified as full disks, as transition disk candidates. Table \[excess\_table\] lists the excess status for the stars with detected excesses.
In total, we identify 26 of the Upper Scorpius members with spectral types later than K0 as displaying a disk-indicating excess, and one star without significant lithium absorption that also displays an excess. This latter object is an F4.5 spectral type star, HD-145778, with $EW(Li)=0.09\pm0.2$. The presence of some lithium absorption, combined with the presence of a disk, means that this object can be considered a member of Upper Scorpius. We have included it as a member at the end of Table \[obs\_res\_tab\]. HD-145778 is not in the Hipparcos catalog [@lindegren97], potentially explaining why it was not included in past memberships.
Due to the WISE detection limit in the W4 band, we are almost certainly unable to identify the vast majority of the evolved transitional and debris disks, which show only a small color excess in K-W4. Indeed, we only detect two such disks in our sample, one of which, USco 41, was previously identified with Spitzer photometry [@carpenter09], when significantly more are expected from previous statistics [@carpenter09; @luhman2012_disk]. Furthermore, it is likely that a number of evolved disks around stars of spectral type later than $\sim$M3 are not detected here. For this reason it is difficult to meaningfully estimate the disk or excess fraction for our entire sample. In the M0 to M2 spectral type range, where we expect the majority of the full, evolved and transitional disks to be detectable by WISE, we have 11 disks, 6 of which are full, 4 evolved and 1 transitional. Excluding all those members flagged for extended emission, confusion with image artifacts, or unreliable excesses, we find an excess fraction of 11.2$\pm$3.4%. @carpenter09 found a primordial disk fraction for M-type Upper Scorpius members of $\sim$17%, and @luhman2012_disk find excess fractions of 12% and 21% for K-type and M0 to M4-type members respectively. Given the strong increase in excess fraction towards the late M-type members, and the potential for some missed evolved disks due to the WISE detection limits, we find that our excess fraction estimate is consistent with these past results.
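A quoted fraction with an uncertainty of this kind is consistent with a simple binomial estimate; the sample size of 98 in the example below is hypothetical, chosen only so that 11 detections give $\sim$11.2%, since the text does not state the exact denominator.

```python
import math

def excess_fraction(n_excess, n_total):
    """Excess fraction with a simple binomial standard error."""
    p = n_excess / n_total
    return p, math.sqrt(p * (1.0 - p) / n_total)

# 11 excess stars in a hypothetical clean sample of 98 -> ~11.2% +/- 3.2%.
frac, err = excess_fraction(11, 98)
```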
Conclusions
===========
We have conducted a spectroscopic survey of 397 candidate K and M-type members of the Upper Scorpius association chosen through statistical methods, and revealed 237 new PMS members among the sample based on the presence of Li absorption. We also identify 25 members in our sample with WISE near-infrared excesses indicative of the presence of a circumstellar disk, and classify these disks on the basis of their color excess in different WISE bands. We find that the members show a significant spread in EW(Li), and upon placing the members on an HR diagram, we find that there is a potential age spread, with a small correlation between EW(Li) and HR-diagram position. This could indicate the presence of a distribution of ages, or multiple populations of different ages, in Upper Scorpius.
---
abstract: 'We have performed a near-infrared photometric monitoring of 39 galactic young star clusters and star-forming regions, known as [*NIP of Stars*]{}, between the years 2009–2011, using the Swope telescope at Las Campanas Observatory (Chile) and the RetroCam camera. The primary objective of the campaign is to perform a census of photometric variability of such clusters and to discover massive eclipsing binary stars. In this work, we describe the general idea, the implementation of the survey, and the first preliminary results of some of the observed clusters. This monitoring program is complementary to the [*Vista Variables in the Vía Láctea*]{} ([*VVV*]{}), as the brightest sources observed in [*NIP of Stars*]{} are saturated in [*VVV*]{}.'
author:
- 'R. Barbá$^{1,2}$, N. Morrell$^{3}$, G. Gunthardt$^{2,4}$, S. Torres Robledo$^{2}$, M. Jaque$^{1,2}$, M. Soto$^{2}$, G. Ferrero$^{5}$, J. Arias$^{2}$, A. Román-Lopes$^{2}$, R. Gamen$^{5}$, and J. Astudillo Hormazabal$^{2}$'
title: 'Southern near-infrared photometric monitoring of Galactic young star clusters ([*NIP of Stars*]{})'
---
POSTER PRESENTATION

We have carried out near-infrared photometric monitoring of 39 galactic young clusters and star-forming regions, known as [*NIP of Stars*]{}, between 2009 and 2011, using the Swope telescope at Las Campanas Observatory (Chile) and the RetroCam camera. The primary objective of the campaign is to carry out a census of photometric variability in these clusters and to discover massive eclipsing binaries. In this work we present the general idea of the project and the implementation of a semi-automated pipeline, as well as preliminary results for one of the observed clusters. This monitoring program is complementary to the [*Vista Variables in the Vía Láctea*]{} ([*VVV*]{}) survey, since the brightest sources observed in [*NIP of Stars*]{} saturate in [*VVV*]{}.
Motivation and sample selection
===============================
Mass, mass loss and rotation are among the most important parameters governing stellar evolution. In the context of stellar masses, massive O- and WR-type binaries are key objects because they enable us to determine minimum masses from the solution of their radial-velocity (RV) curves, and, in cases where the orbital inclination can be constrained (for example through eclipses), absolute masses. Knowing the multiplicity of massive stars is important because this factor has a deep impact on stellar evolution, the initial mass function, and the energy balance of their environment, and it provides clues about their origins (Zinnecker & Yorke, 2007).
The vast majority of studies of massive eclipsing binaries come from observations in the optical range, so the sample is limited to very few objects with relatively low reddening. It is surprising that, of the 370 O-type stars counted in [*The Galactic O Star Catalog*]{} ([*GOS*]{}, Maíz Apellániz et al. 2004), only 38 are eclipsing or ellipsoidal variables. Of this group, no more than fifteen systems have reliable light and RV curves. The panorama in the infrared is much worse: there are only five massive eclipsing systems with published data. In conclusion, all our knowledge about the absolute masses of O- and WN-type stars is derived from only a few tens of objects, a situation which poses challenges.
The primary objective of this project is to conduct a census of photometric variability in a set of young galactic open clusters and star-forming regions affected by large extinction ($A_{\rm V} = 6-30$). Among those variable stars, we are especially interested in the massive eclipsing binaries, which can be observed spectroscopically to determine absolute stellar parameters. We have selected thirty-nine galactic young clusters and star-forming regions following these criteria: a) clusters must be more or less resolved at a scale of one arcsecond, with an uncrowded background, to obtain reliable photometry (thus, clusters like Arches are discarded); b) some of their massive members must have spectral classification; c) previous studies must indicate the presence of at least five stars with spectral type earlier than B0; d) such stars must be in the $H$-magnitude range $8 < H < 12$.
This NIR photometric monitoring program is complementary to the [*Vista Variables in the Vía Láctea*]{} ([*VVV*]{}, Minniti et al. 2010) survey, as the brightest sources of [*NIP of Stars*]{} are saturated in [*VVV*]{} images.
Observing campaigns and pipeline
================================
The observations were carried out using the RetroCam camera attached to the Swope 1-meter telescope at Las Campanas Observatory (Chile) during three seasons from 2009 to 2011. Thirty-eight observing nights out of a total of seventy-three had photometric conditions; ten nights were completely lost due to bad weather. The RetroCam camera (Hamuy et al. 2006) consists of a one-megapixel Rockwell Hawaii-1 HgCdTe array, with a spatial scale of $0\farcs54$ per pixel, which provides a $9'\times9'$ field-of-view (FOV). This spatial resolution is about four times better than that of the Two-Micron All Sky Survey images ([*2MASS*]{}, Cutri et al. 2003). The monitoring campaign was performed preferentially in the $H_{\rm C}$ filter, and occasionally in $J_{\rm S}$ and $Y_{\rm C}$, as this camera does not have a $K$-band filter.
For the reduction of hundreds of thousands of observations, we have implemented a semi-automated pipeline, based in part on the procedures used in the [*Carnegie Supernova Project*]{} ([*CSP*]{}, Hamuy et al. 2006). The requirements of our project are much more severe than those of [*CSP*]{} in terms of background subtraction in areas with a high density of stars and very bright nebulosities. The pipeline is built from a series of [*IRAF*]{} scripts, shell scripts in [*gawk*]{}, and [*FORTRAN*]{} code, makes use of the [*SExtractor*]{} code (Bertin & Arnouts, 1996), and is structured in the [*Python*]{} programming language. In a second stage, we plan to obtain astrometric solutions using the [*Swarp*]{} and [*Scamp*]{} codes (Bertin, 2006), and the photometric zero-points using the [*2MASS*]{} and [*VVV*]{} surveys. Reduced images and metadata are stored in a database managed by [*MySQL*]{}. Figure 1 shows an example of $H_{\rm C}$ images.
![Left: RetroCam $H_{\rm C}$ image of the star-forming region IRAS 16177–5018 obtained in July 2009; the FOV is about $10'$. This is an example of an infrared cluster with a relatively uncrowded background. Right: the core of this cluster.[]{data-label="fig:ab1"}](IRAS_1.eps "fig:"){width=".25\textwidth"} ![Left: RetroCam $H_{\rm C}$ image of the star-forming region IRAS 16177–5018 obtained in July 2009; the FOV is about $10'$. This is an example of an infrared cluster with a relatively uncrowded background. Right: the core of this cluster.[]{data-label="fig:ab1"}](IRAS_2.eps "fig:"){width=".25\textwidth"}
First steps in the photometric analysis
=======================================
In a first stage, we are performing aperture photometry of a small set of objects in order to check the photometric stability of the observations. This process is yielding excellent results, with relative errors comparable to those obtained from well-exposed CCD images in the optical range.
Figure 2 shows results of the differential photometry for five $H_{\rm C}$ mosaics of the young clusters Danks 1 and 2, relative to a $H_{\rm C}$ mosaic used as the reference image. A mean $\sigma$ of 0.02 mag is obtained for the instrumental magnitude difference over twenty images of Danks 1 and 2, in the range $14-16$ mag (S/N $> 200$). About 5% of the sources show variability greater than $2\sigma$: these stars are potential variables. After this stage of photometric characterization of the sample, we plan to design a pipeline to perform automated [*point-spread-function*]{} photometry, and we will start to test the [*image-subtraction algorithm*]{} (Alard & Lupton, 1998).
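The $2\sigma$ selection described above can be sketched as follows; the differential magnitudes and the function name are illustrative only, not part of the actual pipeline:

```python
import numpy as np

def flag_variables(delta_mag, sigma=0.02, threshold=2.0):
    """Flag stars whose differential instrumental magnitude deviates by
    more than `threshold` * sigma from the reference mosaic."""
    delta_mag = np.asarray(delta_mag)
    return np.abs(delta_mag) > threshold * sigma

# Illustrative differential magnitudes (mag); not real NIP of Stars data.
dm = [0.005, -0.012, 0.055, 0.018, -0.047, 0.002]
flags = flag_variables(dm)  # the 0.055 and -0.047 entries exceed 2*sigma
```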
As a pilot case, we have observed the massive eclipsing binary FO15 (O5.5V + O9V, $P=1.41$ d) in the Carina Nebula, with the aim of evaluating the quality of the differential photometry procedures. Figure 3 shows the phased light-curve of FO15 in the $Y_{\rm C}$ band, which can be compared with that published by Niemela et al. (2006) using optical [*All-Sky Automated Survey*]{} ([*ASAS*]{}, Pojmański 2003) observations. Even though the aperture photometry of FO15 was done without the appropriate photometric calibrations, the superior quality of the NIR light-curve is clear.
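Phasing a light curve on a published ephemeris amounts to reducing each observation time modulo the period. A minimal sketch; the period is that of Niemela et al. (2006), while the observation times and reference epoch are made up for illustration:

```python
import numpy as np

def phase_fold(jd, period, t0=0.0):
    """Fold observation times (days) on a given period; phase in [0, 1)."""
    return np.mod((np.asarray(jd) - t0) / period, 1.0)

P = 1.41  # FO15 orbital period in days (Niemela et al. 2006)
# Hypothetical Julian dates; t0 is an arbitrary reference epoch.
jd = np.array([2454100.10, 2454100.80, 2454101.51, 2454102.92])
phase = phase_fold(jd, P, t0=2454100.10)
```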
[**Acknowledgments.**]{} [We acknowledge support from DIULS PR09101 and FONDECYT 3110188. We thank the Director and staff of LCO for the use of their facilities.]{}
![Left: differential instrumental magnitude for five $H_{\rm C}$ mosaics of the Danks 1 and 2 clusters with respect to a reference mosaic. The typical $\sigma$ for each star is about 0.02 mag. Sources with $\Delta H_{\rm C} > 2\sigma$ are potential variables. Right: distribution of $\sigma$ (green histogram) for the differential photometry of twenty $H_{\rm C}$ images of the same clusters. Variable star candidates are those beyond the $2\sigma$ interval. The fitted curve (in red) corresponds to a Gaussian function with $\sigma=0.020$.[]{data-label="fig:ab2"}](magnitud_ref.eps "fig:"){width=".48\textwidth"} ![Left: differential instrumental magnitude for five $H_{\rm C}$ mosaics of the Danks 1 and 2 clusters with respect to a reference mosaic. The typical $\sigma$ for each star is about 0.02 mag. Sources with $\Delta H_{\rm C} > 2\sigma$ are potential variables. Right: distribution of $\sigma$ (green histogram) for the differential photometry of twenty $H_{\rm C}$ images of the same clusters. Variable star candidates are those beyond the $2\sigma$ interval. The fitted curve (in red) corresponds to a Gaussian function with $\sigma=0.020$.[]{data-label="fig:ab2"}](A_mag.eps "fig:"){width=".48\textwidth"}
![Near-infrared light-curve of the massive eclipsing binary FO15 in the Carina Nebula. The photometric data were phased using the ephemeris published by Niemela et al. (2006).[]{data-label="fig:ab3"}](fo15-nir-curve.eps){width="7.0cm"}
Alard, C. & Lupton, R.H. 1998, , 503, 325.
Bertin, E. 2006, ASP Conf. Ser., Vol. 351, Eds.: C. Gabriel et al., 112.
Bertin, E. & Arnouts, S. 1996, Astron. Astrophys. Supp. Ser., 317, 393.
Cutri, R.M. et al. 2003, “2MASS All-Sky Catalog of Point Sources”, University of Massachusetts and Infrared Processing and Analysis Center, Pasadena.
Hamuy, M. et al. 2006, , 118, 2.
Maíz Apellániz, J., Walborn, N.R., Galué, H., Wei, L.H. 2004, , 151, 103.
Minniti, D. et al. 2010, New Astronomy, 15, 433.
Niemela, V.S. et al. 2006, , 367, 1450.
Pojmański, G. 2003, Acta Astron., 53, 341.
Zinnecker, H. & Yorke, H.W. 2007, , 45, 481.
---
abstract: 'We report a novel synthesis method for, and structural and magnetic characterization of, the fluoroperovskite $\mathrm{NaFeF_3}$. We have developed a wet-chemical method that allows preparation of large volumes of air-sensitive fluoroperovskites with high purity. $\mathrm{NaFeF_3}$ has a Néel temperature ($T_N$) of 90 K and a Weiss constant ($\theta$) of -124 K, corresponding to dominant antiferromagnetic interactions. Below $T_N$, a slight difference is observed between zero-field and field cooled samples, indicating spin-canting and weak ferromagnetism. AC magnetometry confirms that weak ferromagnetism is inherent to $\mathrm{NaFeF_3}$ and not due to impurities. From powder neutron diffraction data, we describe the magnetic structure precisely as a weakly canted G-type (magnetic space group $Pn''ma''$). A ferromagnetic component is allowed in $Pn''ma''$; however, this component may be absent in zero magnetic field and is too small to be confirmed on the basis of powder neutron diffraction data.'
author:
- 'Fabian L. M. Bernal'
- Bruno Gonano
- Fredrik Lundvall
- 'David S. Wragg'
- Helmer Fjellvåg
- Fabien Veillon
- 'Wojciech A. Sławiński'
- 'Øystein S. Fjellvåg'
bibliography:
- 'NaFeF3\_arXiv.bib'
title: ' Canted antiferromagnetism in high purity $\mathrm{NaFeF_3}$ prepared by a novel wet-chemical synthesis method '
---
Introduction {#sec:intro}
============
Fluoroperovskites display rich structural chemistry, strongly ionic bonding character (due to the high electronegativity of the fluoride anions), and corresponding localized electron magnetism [@FLUOR; @FLUOR2]. They exhibit a wide range of properties that can be utilized in e.g. data storage, computer processors, spintronics, multiferroics and batteries [@SPINTRON1; @SPINTRON2; @MultiF1; @MultiF2].
${\rm NaFeF_3}$ has attracted attention as a low-cost cathode material [@NF3SIB; @NF3SIB2]. Its advantages in this application include the natural abundance of its constituent elements, intrinsic anion stability and a theoretical capacity of 197 ${\rm mAhg^{-1}}$ for a one-electron process. ${\rm NaFeF_3}$ nanoplates in particular show good capacity retention compared to other metal fluoride and composite cathode materials, with 50% retained capacity for Na after 200 cycles at 0.2 A g$^{-1}$ [@NCATH].
The ${\rm NaFeF_3}$ fluoroperovskite has intriguing phase relations under high pressure. At ambient temperature and pressure, the compound adopts the orthorhombic perovskite ${\rm GdFeO_3}$-type crystal structure with space group $Pnma$. At room temperature it transforms into a corrugated layered ${\rm CaIrO_3}$-type post-perovskite (pPv) structure at 9 GPa [@ME1]. A second structural phase transition, from pPv to a post-post-perovskite (ppPv) phase with the ${\rm Sb_2S_3}$-type crystal structure, occurs at 20 GPa [@ME2]. The remarkable structural flexibility of ${\rm NaFeF_3}$ has made it an interesting candidate for studies and simulations of extreme environments, including the Earth’s interior and exoplanets [@PPV].
The electronic configuration of the ${\rm Fe^{2+}}$ ions in ${\rm NaFeF_3}$ is high-spin (HS) $t_{2g}^4 e_g^2$. They follow the spin-only model with $S=2$ and a theoretical paramagnetic moment of $\mu_{eff}=4.90~\mu_B$; orbital contributions usually lead to slightly higher observed values of $\mu_{eff}$.
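The quoted spin-only value follows from $\mu_{eff}=\sqrt{n(n+2)}~\mu_B$ with $n$ unpaired electrons; for completeness:

```python
import math

def spin_only_moment(n_unpaired):
    """Spin-only effective moment, mu_eff = sqrt(n(n+2)), in Bohr magnetons."""
    return math.sqrt(n_unpaired * (n_unpaired + 2))

# High-spin Fe(2+), d6 -> t2g^4 eg^2, leaves 4 unpaired electrons (S = 2).
mu = spin_only_moment(4)  # sqrt(24) ~ 4.90 mu_B, as quoted in the text
```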
${\rm Fe^{2+}}$ is air sensitive and oxidizes easily to ${\rm Fe^{3+}}$. Controlling the anaerobic chemistry of ${\rm Fe^{2+}}$ ions is a prerequisite for synthesis of single phase ${\rm NaFeF_3}$. We have previously utilized a solid state synthesis under inert conditions [@ME1], yielding a sample with $<0.5$ % Fe metal impurity. Impurities, either from an incomplete solid state reaction or from the iron reactor, may introduce inaccuracy to magnetometric studies and disguise the intrinsic magnetic behavior of ${\rm NaFeF_3}$. Indeed, to the best of our knowledge there are no neutron powder diffraction (NPD) studies on the magnetic ordering in ${\rm NaFeF_3}$, probably because the air sensitivity of ${\rm Fe^{2+}}$ makes it very difficult to produce sample volumes sufficient for NPD experiments by conventional solid state methods.
In this article we describe a wet-chemistry synthesis method for ${\rm NaFeF_3}$, which produces iron-free material in substantial volumes. Using the high purity ${\rm NaFeF_3}$ samples, we study the intrinsic magnetic properties of the compound. Further, the magnetic structure is precisely described based on neutron powder diffraction data.
Experimental {#sec:exp}
============
Synthesis of $\mathrm{NaFeF_3}$ {#sec:syn}
-------------------------------
${\rm NaFeF_3}$ was synthesised on a Schlenk line equipped with flexible hoses. Two polycarbonate vials of 85 and 200 ml volume (denoted A and B, respectively) were used for the reaction. Vial A was filled with 2 g of Fe ($\sim$0.035 mol, 99.999 % pure) and vial B with 0.08 mol ($\sim$3.35 g) NaF. The vials were closed tightly with silicone rubber septa, connected through the hoses to the Schlenk line and thoroughly flushed with Ar. The Ar flow was maintained throughout the reaction to ensure inert conditions. A needle was placed in each septum to vent the excess gas from the vials. 10 mL of HCl (37 %) and 20 mL ${\rm H_2O}$ were degassed, mixed and added to vial A. 20 mL of degassed ${\rm H_2O}$ was carefully injected (using a syringe first evacuated and flushed with Ar) onto the NaF in vial B. Both vials were placed in an oil-bath under constant Ar flow at 90 $^\circ$C until the oxidation of Fe metal to ${\rm FeCl_2}$ was complete. The ${\rm FeCl_2}$ solution from vial A was then quickly transferred to vial B with an Ar-flushed syringe. The contents of vial B were stirred constantly during injection to mix the two solutions. Thereafter, vial B was cooled to 80 $^\circ$C and the contents stirred for 30 to 60 minutes. ${\rm NaFeF_3}$ appeared as a beige precipitate. The product was washed repeatedly with degassed water and MeOH under flowing Ar, decanting the liquid after each wash. Finally the solid product was removed from the vial, filtered, washed thoroughly with degassed MeOH, and dried under vacuum overnight before storage in the Ar atmosphere of a glove box. Phase purity was confirmed by powder X-ray diffraction (PXRD) and magnetometry.
Powder X-ray diffraction {#sec:sc}
------------------------
PXRD data for $\mathrm{NaFeF_3}$ were collected at the Norwegian National Resource Centre for X-ray Diffraction, Scattering and Imaging (RECX) on a Bruker D8 Advance diffractometer in capillary mode using $\mathrm{Cu_{K\alpha 1}}$ radiation (1.540598Å) selected by a Ge(111) focusing monochromator, and a LynxEye XE detector system.
Magnetic characterization {#sec:mc}
-------------------------
Magnetometry experiments were performed with a 9 T Physical Property Measurement System (PPMS, Quantum Design) on 48 mg of polycrystalline powder. Temperature-dependent DC magnetic susceptibility $\chi(T)$ measurements were conducted between 2 and 300 K under zero-field-cooled and subsequently field-cooled conditions (ZFC and FC, respectively). The magnetic susceptibility, given in emu $\mathrm{mol^{-1}\,Oe^{-1}}$, is calculated as $\chi = M/H$, where $M$ is the magnetization and $H$ the magnetic field (10 kOe). Isothermal field-dependent measurements $M(H)$ were collected at 2 K, as well as half-loop isothermal measurements at 70, 120 and 300 K, all up to 90 kOe. AC measurements were carried out with frequencies ranging from 100 Hz to 10 kHz in a 10 Oe field.
Neutron Powder Diffraction {#sec:npd}
--------------------------
NPD data for $\mathrm{NaFeF_3}$ were measured on the WISH instrument at the ISIS pulsed neutron and muon source (UK) [@WISH]. Diffraction patterns were collected at selected temperatures between 2 and 297 K, below and above $T_N$, and the data were reduced with the Mantid software [@MANTID]. Data from the four highest-resolution detector banks were used, as the lowest-resolution bank did not contain any unique information.
The magnetic refinements were carried out in the magnetic space group $Pn'ma'$, which allows non-zero values for $M_x$, $M_y$ and $M_z$, using the Jana2006 software [@JANA]. The background (5-term Legendre polynomial), peak shape, atomic positions (according to symmetry restrictions), isotropic thermal displacement parameters for each element type, lattice parameters and scale parameters were refined. The derived values for $M_x$ and $M_z$ are given in [Table \[tab:moment\]]{}.
Results {#sec:RES}
=======
Synthesis procedure and crystal structure
-----------------------------------------
Fluoroperovskites are typically prepared by solid-state reactions and may contain magnetic impurities that make them appear as weak ferromagnets, both below and above the Néel temperature, e.g. $\mathrm{\alpha}$-Fe impurities in $\mathrm{NaFeF_3}$ [@ME1]. The air-sensitive chemistry of fluoroperovskites is demanding. We now benefit from a new wet-chemical method that bypasses the challenges of conventional solid-state reactions by always working under inert conditions on a Schlenk line. This wet-chemistry approach is ideal for the synthesis of air-sensitive fluorides, as recently shown for the extremely air-sensitive ${\rm Cr^{2+}}$ [@ME3]. $\mathrm{NaFeF_3}$ was prepared by means of this wet-chemical method, and high purity was confirmed by XRD and NPD.
$\mathrm{NaFeF_3}$ adopts the distorted orthorhombic $\mathrm{GdFeO_3}$ perovskite structure with space group $Pnma$ and Glazer tilt $\mathrm{a^-b^+a^-}$, [Figure \[fig:str\]]{}. The relation to the ideal cubic perovskite is given by $a \approx c \approx \sqrt{2}a_{c}$ and $b \approx 2a_{c}$, where $a_{c}$ is the lattice parameter of the cubic perovskite. A weak Jahn-Teller distortion is present in the system, originating from the high-spin $d^6$ electron configuration of $\mathrm{Fe^{2+}}$, and results in slight differences in the bond lengths: the Fe-F1 bonds adopt a medium length (2.0744(3) Å), while Fe-F2 forms two short and two long bonds (2.0564(6) Å and 2.0795(6) Å, respectively), [Figure \[fig:str\]]{}. Bond length values are from NPD at 297 K, see below.
![image](NaFeF3_red_times_comb3.png)
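One common way to quantify how weak such a Jahn-Teller distortion is, is the octahedral distortion parameter $\Delta d = \frac{1}{6}\sum_i \left[(d_i - \langle d \rangle)/\langle d \rangle\right]^2$; this measure is not used explicitly in this work, but evaluating it on the 297 K bond lengths quoted above gives a sense of scale:

```python
def octahedral_distortion(bonds):
    """Delta_d = (1/6) * sum(((d - <d>) / <d>)**2) over the six M-X bonds."""
    mean = sum(bonds) / len(bonds)
    return sum(((d - mean) / mean) ** 2 for d in bonds) / len(bonds)

# Fe-F bond lengths (Angstrom) at 297 K from the NPD refinement:
# two medium Fe-F1 bonds, two short and two long Fe-F2 bonds.
bonds = [2.0744, 2.0744, 2.0564, 2.0564, 2.0795, 2.0795]
delta_d = octahedral_distortion(bonds)  # of order 1e-5: a very weak distortion
```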
Magnetic properties
-------------------
Variable-temperature DC magnetization measurements were carried out on a polycrystalline sample between 2 and 300 K under a 10 kOe field ([Figure \[fig:mag\]]{}). The data are consistent with long-range antiferromagnetic ordering. A sharp decrease in the molar magnetic susceptibility is associated with a Néel transition at 90 K. From the inverse susceptibility $\chi^{-1}$ curve in the paramagnetic region (100 to 300 K), we calculate a paramagnetic moment of $\mu_{eff} = 5.58~\mu_B$. This is in good agreement with typically observed values for ${\rm Fe^{2+}}$ (5.0-5.6 $\mu_B$). From the Curie-Weiss fit, we extract a Weiss temperature of $\theta = -124$ K (data measured in a field of 10 kOe), confirming the dominant antiferromagnetic nature of ${\rm NaFeF_3}$.
![Temperature dependency of the magnetic susceptibility $\chi(T)$ at $H$ = 10 kOe measured in ZFC-FC mode (left axis) and inverse $\chi^{-1}$ (right axis).[]{data-label="fig:mag"}](chi_NaFeF3)
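The Curie-Weiss analysis amounts to a linear fit of $\chi^{-1}(T)$ in the paramagnetic regime, with $\mu_{eff} = \sqrt{8C}~\mu_B$ in CGS molar units. A sketch on synthetic data generated from the quoted values, not on the measured curve:

```python
import numpy as np

def curie_weiss_fit(T, chi):
    """Fit chi^-1 = (T - theta)/C; return (mu_eff in mu_B, theta in K)."""
    slope, intercept = np.polyfit(T, 1.0 / np.asarray(chi), 1)
    C = 1.0 / slope              # Curie constant, emu K mol^-1 Oe^-1
    theta = -intercept * C       # Weiss temperature
    mu_eff = 2.828 * np.sqrt(C)  # mu_eff = sqrt(8*C) in Bohr magnetons
    return mu_eff, theta

# Synthetic paramagnetic-regime data built from the values quoted in the
# text (mu_eff = 5.58 mu_B, theta = -124 K); not the measured susceptibility.
T = np.linspace(100, 300, 50)
C_true = (5.58 / 2.828) ** 2
chi = C_true / (T - (-124.0))
mu_eff, theta = curie_weiss_fit(T, chi)
```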
We note a significant difference between the FC and ZFC curves at low temperatures, as well as a minor hysteresis. This might indicate a transition to a spin-glass-like state at low temperatures. However, AC magnetization measurements ([Figure \[fig:X\]]{}a) in a 10 Oe field show no variation of the Néel temperature with frequency for $\chi'$, refuting this hypothesis.
![ Temperature dependence of the (*a*) real ($\chi'$) and (*b*) imaginary ($\chi''$) part of the AC magnetic susceptibility at different frequencies (100 Hz, 1 kHz and 10 kHz). []{data-label="fig:X"}](X_AC_all)
The appearance of magnetic hysteresis may be explained by the presence of a small ferromagnetic moment originating from spin canting, resulting in weak ferromagnetism. For a purely antiferromagnetic transition, the imaginary component $\chi''$ is expected to be zero in AC measurements [@ACmag]. In fact, for $\chi''$ ([Figure \[fig:X\]]{}b) we observe a strong peak for a 10 kHz AC field. The peak is less pronounced for a 1 kHz AC field, while only noise is observed at 100 Hz. The peak starts to grow in intensity at the Néel temperature, indicating that it is associated with the magnetic transition in $\mathrm{NaFeF_3}$, and that we can exclude effects caused by an impurity such as $\mathrm{\alpha}$-Fe. It is therefore evident that spin canting results in weak ferromagnetism in $\mathrm{NaFeF_3}$, inherent to the compound. We note that the ferromagnetism is very weak; based on DC ([Figure \[fig:mag\]]{}) and AC ([Figure \[fig:X\]]{}) magnetometry, it should be very close to zero in the absence of a magnetic field. The maximum of the peak is at $\sim$32 K, indicating that the spin canting develops further below the Néel temperature.
Isothermal field-dependent magnetic measurements above the Néel temperature ($T_N$ = 90 K), at 120 and 300 K, show a linear behaviour associated with a paramagnetic state ([Figure \[fig:hyst\]]{}). The low magnetization observed at 90 kOe ($0.25~\mu_B$/Fe at 2 K) confirms the dominant antiferromagnetic behavior of the system. However, below the Néel temperature, a slight hysteresis is observed. The hysteresis is clear at 2 K (inset in [Figure \[fig:hyst\]]{}), while it is less prominent at 70 K. The presence of hysteresis below $T_N$ supports the conclusion that weak ferromagnetism is intrinsic to the compound.
![Isothermal $M$ versus $H$ curves with an applied field 0 T $\to$ 9 T $\to$ 0 T recorded at 2, 70, 120 and 300 K. Insert: $M$ versus $H$ curve at 2 K to show the full symmetry.[]{data-label="fig:hyst"}](hyst_NAFeF3)
Neutron diffraction and magnetic structure
------------------------------------------
Neutron diffraction was carried out between 2 and 297 K to investigate possible ordering of the ${\rm Fe^{2+}}$ magnetic moments. At 2 K, strong additional reflections from long-range magnetic ordering are evident, e.g. two strong reflections at 4.49 and 4.58 Å and one weaker reflection at 7.89 Å ([Figure \[fig:NPD\]]{} and [Figure \[fig:NPD\_RT\]]{}). These reflections were indexed according to the unit cell of the crystal structure of $\mathrm{NaFeF_3}$, and correspond to a $\Gamma$-point magnetic propagation vector $k = (0, 0, 0)$. However, several of the observed magnetic reflections break the symmetry extinctions of space group $Pnma$. We hence evaluated the possible magnetic structures for the $\Gamma$-point representations ([Table \[tab:irrep\]]{}) by Rietveld refinements against the 2 K NPD data, and found $\Gamma^+_4$ to precisely describe the magnetic ordering. This corresponds to the magnetic space group $Pn'ma'$ ([Figure \[fig:magstr\]]{}).
![Measured, calculated and difference curve from Rietveld refinement of the magnetic structure of $\mathrm{NaFeF_3}$ at 2 K for the second detector bank. The green ticks indicate reflections allowed by the magnetic symmetry (space group $Pn'ma'$).[]{data-label="fig:NPD"}](40704_pattern)
![Magnetic structure of the $\mathrm{NaFeF_3}$ with magnetic space group $Pn'ma'$. The magnetic moments of the iron atoms are antiferromagnetically ordered relative to their nearest neighbors, yielding G-type antiferromagnetism. Iron atoms are shown in orange and the bonds to fluorine as pale grey lines. Sodium atoms are removed for clarity.[]{data-label="fig:magstr"}](40704_new_orientation4.png)
  ----------------- -------------- -------------- -------------- --------------
  Fe site           $\Gamma^+_1$   $\Gamma^+_2$   $\Gamma^+_3$   $\Gamma^+_4$
  $(x, y, z)$       $x$ $y$ $z$    $x$ $y$ $z$    $x$ $y$ $z$    $x$ $y$ $z$
  (0.0, 0.0, 0.5)   \+ \+ \+       \+ \+ \+       \+ \+ \+       \+ \+ \+
  (0.0, 0.5, 0.5)   \- \+ \-       \+ \- \+       \+ \- \+       \- \+ \-
  (0.5, 0.5, 0.0)   \+ \- \-       \- \+ \+       \+ \- \-       \- \+ \+
  (0.5, 0.0, 0.0)   \- \- \+       \- \- \+       \+ \+ \-       \+ \+ \-
  ----------------- -------------- -------------- -------------- --------------

  : Basis functions of the one-dimensional $\Gamma$-point irreducible representations found by decomposition of the magnetic representation for the iron site in $\mathrm{NaFeF_3}$. The + and - symbols denote the relative sign of the magnetic moment along $x$, $y$, and $z$ on the respective site. []{data-label="tab:irrep"}
The magnetic ordering of $\Gamma^+_4$ can be described as A$_x$ antiferromagnetic ordering along \[100\], F$_y$ ferromagnetic ordering along \[010\] and G$_z$ antiferromagnetic ordering along \[001\], corresponding to the refined parameters $M_x$, $M_y$ and $M_z$, respectively ([Table \[tab:irrep\]]{}). In our magnetic Rietveld refinements ([Table \[tab:str\_2K\]]{}), we find a large value for the G$_z$ component ($M_z$) and a small value for the A$_x$ component ($M_x$). When refining the ferromagnetic F$_y$ component, a value of $M_y = 0.75(3)~\mu_B$ is obtained for the 2 K NPD data. However, since F$_y$ scattering coincides with nuclear peak positions, such small values of $M_y$ yield only very minor changes to the calculated diffraction pattern and agreement factors. Furthermore, there are strong correlations with other parameters of the refinements, e.g. the thermal displacement parameter of Fe. The ZFC magnetization measurement furthermore suggests that the ferromagnetic moment should be almost absent at low temperatures without a magnetic field. As a consequence, the NPD data cannot be used to claim the existence of a F$_y$ component, and $M_y$ was therefore fixed to zero during the final refinements.
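The sign patterns of [Table \[tab:irrep\]]{} translate directly into moment vectors for the four Fe sites. A sketch for $\Gamma^+_4$; the numerical $M_x$, $M_z$ values below are placeholders standing in for the refined values of [Table \[tab:moment\]]{}:

```python
import numpy as np

# Sign pattern of the Gamma_4^+ basis functions (from Table tab:irrep):
# rows = the four Fe sites, columns = (x, y, z) components.
SIGNS_G4 = np.array([
    [+1, +1, +1],  # Fe at (0,   0,   1/2)
    [-1, +1, -1],  # Fe at (0,   1/2, 1/2)
    [-1, +1, +1],  # Fe at (1/2, 1/2, 0)
    [+1, +1, -1],  # Fe at (1/2, 0,   0)
])

def site_moments(Mx, My, Mz):
    """Moment vectors (mu_B) of the four Fe sites for A_x, F_y, G_z order."""
    return SIGNS_G4 * np.array([Mx, My, Mz])

# M_y fixed to zero as in the final refinement; M_x, M_z are placeholders.
m = site_moments(Mx=0.3, My=0.0, Mz=4.2)
```

With these signs, the $x$ and $z$ components cancel over the four sites (antiferromagnetic A$_x$ and G$_z$ modes), while only a non-zero $M_y$ would leave a net ferromagnetic moment.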
  ------ -------------- ------------- ------------ ------------- ------ -------------------
  Site   Multiplicity   $x$           $y$          $z$           Occ.   U$_{iso}$ (Å$^2$)
  Na     4              0.0544(2)     0.25         0.9826(3)     1      0.0175(4)
  Fe     4              0.5           0            0             1      0.00438(17)
  F1     4              0.45061(16)   0.25         0.11401(16)   1      0.0105(3)
  F2     8              0.29707(11)   0.06114(8)   0.68918(11)   1      0.0096(2)
  ------ -------------- ------------- ------------ ------------- ------ -------------------

  : Structural parameters of $\mathrm{NaFeF_3}$ at 2 K from Rietveld refinement of NPD data. []{data-label="tab:str_2K"}
Complete structural details obtained from the Rietveld refinements of the NPD data are given in [Table \[tab:str\_2K\]]{} and [Table \[tab:moment\]]{}. The refined model has a dominant G$_z$-type antiferromagnetic structure with moments aligned parallel to \[001\]. There are weak indications of a small canting parallel to \[100\], as given by the A$_x$ component ([Figure \[fig:magstr\]]{}). The magnetic moments are antiferromagnetically oriented with respect to their nearest neighbors along $\mathrm{[010]}$, $\mathrm{[101]}$ and $\mathrm{[\bar{1}01]}$. Neighboring spins are aligned close to the equatorial plane of the octahedra (defined by the four F2 atoms, [Figure \[fig:str\]]{}) in the crystallographic (010)-plane.
At 2 K the Rietveld refinements give an ordered magnetic moment of $4.246(11)~\mu_B$, slightly lower than the theoretical spin-only value for $\mathrm{Fe^{2+}}$ of $4.90~\mu_B$. The value is in good agreement with that of $\mathrm{Fe^{2+}}$ in $\mathrm{KFeF_3}$, $4.42~\mu_B$ [@GTYPE]. The reduction relative to the spin-only value can be accounted for by a slight hybridization between iron and fluorine, effectively reducing the number of magnetically ordered electrons. A very weak additional ferromagnetic component would not change this picture, but should not be neglected, see below.
The refinements (and the magnetic peak intensities) show that the ordered magnetic moment decreases upon heating from 2 K. This is evidenced in the refined values for $M_x$ and $M_z$ ([Figure \[fig:str\_prm\]]{}a). The magnetic reflections in the NPD data disappear at 95 K, consistent with the Néel temperature found by magnetometry.
We note the presence of a strong magnetostructural coupling in $\mathrm{NaFeF_3}$, evidenced by major changes in the lattice parameters at the Néel ordering temperature ([Figure \[fig:str\_prm\]]{}b and c). At the transition, we observe a rather smooth contraction of the unit cell volume; however, this average behavior masks a contraction of the $a$-axis together with an expansion of the $b$- and $c$-axes.
![ Temperature dependence of structural parameters derived from Rietveld refinements of NPD; (a) the total magnetic moment, the inset show the $M_x$ and $M_z$ components, (b) lattice parameter $a$ and cell volume, and (c) lattice parameter $b$ and $c$. The Néel temperature of 90 K is indicated by the purple dashed line. []{data-label="fig:str_prm"}](Str_prm_NaFeF3)
Already at 120 K, well above the ordering temperature, we observe a change in the temperature dependence of the $b$-axis ([Figure \[fig:str\_prm\]]{}). Indeed, in the difference intensity plot between the NPD patterns at 95 and 120 K (close to, and above, the magnetic ordering temperature, respectively), we observe a broad peak at around 4.8 Å ([Figure \[fig:NPD\_diff\]]{}), which corresponds to the position of the (011) and (110) magnetic Bragg reflections in the ordered phase. The broad peak at 4.8 Å is thus interpreted as originating from short-range magnetic ordering just above the Néel temperature. ${\rm Fe^{2+}}$ is reported to display magnetostrictive behavior and diffuse scattering above $T_N$ in $\mathrm{Rb_2FeF_4}$, and we note that the diffuse scattering for $\mathrm{NaFeF_3}$ above $T_N$ also coincides with a tensile effect on the lattice [@MAGSTRICRP1].
![Plot of the difference between NPD patterns at 95 and 120 K measured on the low resolution detector bank. A broad peak around 4.8 Å is marked by a red arrow, which corresponds to the (011)- and (110)-magnetic Bragg reflections and indicates short-range magnetic order appearing at 95 K.[]{data-label="fig:NPD_diff"}](Difference_NPD)
As discussed above, due to a weak Jahn-Teller distortion, $\mathrm{NaFeF_3}$ adopts short, medium and long Fe-F bonds, [Figure \[fig:str\]]{}. Considering the variation of the Fe-F bond lengths between 2 and 297 K in the NPD experiment, we observe a clear trend: the bond lengths converge towards a common value as the temperature approaches 297 K ([Figure \[fig:NPD\_bond\]]{}). In the cubic perovskite $\mathrm{KFeF_3}$, the Fe-F bond length adopts a value of 2.06 Å, which is also the value expected for the Fe-F bond lengths in $\mathrm{NaFeF_3}$ [@KFEF3]. On this basis it is tempting to suggest that $\mathrm{NaFeF_3}$ may undergo transitions to higher-symmetry structures at elevated temperatures [@ME1; @ME2]. This is, however, beyond the scope of the present work.
![Variation of the Fe-F bond lengths derived from Rietveld refinements of NPD between 2 and 297 K. Uncertainties are smaller than the symbol size. []{data-label="fig:NPD_bond"}](NPD_Bond_length_NaFeF3)
Discussion {#sec:DIS}
==========
We highlight that our new wet-chemical synthesis protocol has allowed preparation of $\mathrm{NaFeF_3}$ of very high purity, which is a prerequisite for clarifying its intrinsic properties. Importantly, we compared the ZFC-FC magnetic behaviour of phase-pure samples with reported data for samples made by a conventional solid-state method [@ME1]. For $\mathrm{NaFeF_3}$ prepared by solid-state methods, iron impurities easily overwhelm the weak antiferromagnetic or paramagnetic signal, resulting in overestimated magnetization values. This is mirrored by the ferromagnetic behaviour measured at 300 K [@ME1], which is not intrinsic to pure $\mathrm{NaFeF_3}$ prepared by our wet-chemical method. Note that the very weak ferromagnetism observed in AC susceptibility below the Néel temperature of our phase-pure $\mathrm{NaFeF_3}$ samples is intrinsic, but its value is close to zero.
Although NPD cannot unambiguously establish the possible existence of a symmetry-allowed ferromagnetic component along \[010\] (F$_y$) according to the magnetic space group $Pn'ma'$, such a component is clearly indicated by a strong peak in the imaginary component $\chi ''$ of the AC susceptibility. Considering the magnetic Fe - F - Fe interactions of $\mathrm{NaFeF_3}$ in light of the Goodenough-Kanamori-Anderson (GKA) rules, antiferromagnetism is expected [@GKA]. Furthermore, a collinear G-type magnetic structure was predicted by DFT for $\mathrm{NaFeF_3}$ [@ME1], also with moments arranged along \[001\] (G$_z$-type). However, our Rietveld analysis included an additional weak A$_x$-component, which implies canting of the antiferromagnetic order. The latter component is close to the detection limit of the analysis.
G-type magnetic ordering is observed in several other fluoroperovskites, e.g. $\mathrm{KMF_3}$ (M = Mn, Fe, Co, and Ni) [@GTYPE]. However, due to the small sodium cation and the weakly Jahn-Teller active $\mathrm{Fe^{2+}}$, the structure of $\mathrm{NaFeF_3}$ is significantly distorted and the F-Fe-F bond angles deviate from 180$^\circ$. As a consequence, the interactions may deviate from the GKA rules.
Correspondingly, one must consider other magnetic interactions as the origin of the weak ferromagnetism. For compounds with a $d^6$ electron configuration, both Jahn-Teller and spin-orbit interaction mechanisms contribute to stabilization of the system [@KK]. If spin-orbit coupling is present in $\mathrm{NaFeF_3}$, Dzyaloshinskii-Moriya interactions may occur. Such interactions give rise to ferromagnetic exchange and may thus be the origin of the weak ferromagnetism in $\mathrm{NaFeF_3}$ [@DINTER; @MINTER].
Conclusion {#sec:CON}
==========
In summary, we have developed a wet-chemical synthesis protocol that allows preparation of $\mathrm{NaFeF_3}$ in large quantities and of high purity. As a consequence, we have been able to investigate the intrinsic magnetic properties of $\mathrm{NaFeF_3}$ without additional magnetic contributions from impurities like $\mathrm{\alpha}$-Fe that would interfere with the analysis. Magnetic susceptibility and powder neutron diffraction analyses show that $\mathrm{NaFeF_3}$ has a Néel temperature of 90 K. AC magnetometry clearly shows the presence of weak ferromagnetism below the ordering temperature, supported by field-dependent DC measurements. Neutron diffraction data describe the compound as a weakly canted G$_z$-type antiferromagnet with a minor A$_x$-component allowed by symmetry. The magnetic space group also allows an F$_y$ component; however, this is almost absent at zero field and is too weak to be proven by the current analysis. The temperature variation of the Fe-F bonds suggests a possible structural phase transition to a higher-symmetry structure above 300 K.
Acknowledgements {#sec:AK}
===============
We thank Serena Margadonna (Swansea University, Swansea, UK) for providing project support via the Research Council of Norway project 214260. This work was partially performed within the RIDSEM-project, financed by the Research Council of Norway (Project No. 272253). The U.K. Science and Technology Facilities Council (STFC) is thanked for allocating beamtime at the ISIS Facility. We also thank Pascal Manuel for help during the NPD experiment, and Asbjørn Slagtern Fjellvåg and Vincent Hardy for discussions regarding magnetic properties.
![image](40729_pattern)
  Temperature (K)   $x$   $y$   $z$   $M_x$     $M_y$   $M_z$
  ----------------- ----- ----- ----- --------- ------- ----------
  2                 0.5   0     0     0.42(1)   0       4.224(4)
  5                 0.5   0     0     0.41(1)   0       4.223(4)
  10                0.5   0     0     0.41(1)   0       4.223(4)
  15                0.5   0     0     0.41(1)   0       4.223(4)
  20                0.5   0     0     0.41(1)   0       4.210(4)
  25                0.5   0     0     0.41(1)   0       4.189(4)
  30                0.5   0     0     0.40(1)   0       4.153(4)
  35                0.5   0     0     0.39(1)   0       4.108(4)
  40                0.5   0     0     0.38(1)   0       4.048(4)
  45                0.5   0     0     0.36(1)   0       3.964(4)
  50                0.5   0     0     0.34(1)   0       3.875(4)
  55                0.5   0     0     0.32(1)   0       3.743(3)
  60                0.5   0     0     0.30(1)   0       3.624(3)
  65                0.5   0     0     0.28(1)   0       3.468(3)
  70                0.5   0     0     0.25(1)   0       3.288(3)
  75                0.5   0     0     0.22(1)   0       3.045(3)
  80                0.5   0     0     0.19(1)   0       2.733(3)
  85                0.5   0     0     0.15(2)   0       2.256(3)
  90                0.5   0     0     0.09(4)   0       1.354(4)
---
abstract: 'By solving radiative transfer equations, we examine the three-dimensional radiative properties of a magnetohydrodynamic accretion flow model and confront them with the observed spectrum of Sgr A\*, in the vicinity of the supermassive black hole at the Galactic centre. As a result, we find that the core of the radio emission is larger than the size of the event horizon shadow and that its peak location is shifted from the gravitational centre. We also find that self-absorbed synchrotron emission from the superposition of thermal electrons within a few tens of Schwarzschild radii can account for the low-frequency spectra below the critical frequency $\nu_{c}\approx 10^{12}$ Hz. Above the critical frequency, synchrotron self-Compton emission by thermal electrons can account for the variable emission seen in recent near-infrared observations. In contrast to the previous study by Ohsuga et al. (2005), we find that the X-ray spectra produced by bremsstrahlung emission of thermal electrons for different mass accretion rates can be consistent with both the flaring state and the quiescent state of Sgr A\* observed by [*Chandra*]{}.'
author:
- |
Y. Kato$^{1}$[^1], M. Umemura$^{2}$, K. Ohsuga$^{3}$\
$^{1}$Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Sagamihara,\
Kanagawa 229-8510, Japan\
$^{2}$Center for Computational Sciences, University of Tsukuba, 1-1-1 Ten-nodai, Tsukuba 305-8577, Japan\
$^{3}$National Astronomical Observatory of Japan, Osawa, Mitaka, Tokyo 181-8588, Japan
date: 'Accepted 1988 December 15. Received 1988 December 14; in original form 1988 October 11'
title: 'Three-dimensional Radiative Properties of Hot Accretion Flows onto the Galactic Centre Black Hole'
---
\[firstpage\]
accretion, accretion discs — black hole physics — (magnetohydrodynamics) MHD — plasma — radiation transfer — Galaxy: centre
Introduction
============
How emission arises from accreting material in the Galactic centre (GC) is a fundamental question for understanding the nature of the mass accretion processes feeding a supermassive black hole (SMBH). When the mass accretion rate is much less than the critical value, $\dot{M}_{\rm crit}\equiv L_{\rm Edd}/c^{2}$, where $L_{\rm Edd}\approx 1.3\times 10^{38} \left(M_{\rm BH}/M_{\odot}\right)\,{\rm erg\,s^{-1}}$ is the Eddington luminosity and $c$ is the speed of light, the radiative loss of the accreting gas is inefficient; most of the energy generated by turbulent viscosity is therefore stored as thermal energy of the gas and advected onto the SMBH. The accretion flow consequently becomes hot and geometrically thick. This type of accretion flow is well known as an advection-dominated accretion flow (ADAF: Ichimaru 1977; Narayan & Yi 1994, 1995; Abramowicz et al. 1995) or a radiatively inefficient accretion flow (RIAF: Yuan et al. 2003; see also Kato, Fukue & Mineshige 2008). The ADAF/RIAF model is quite successful in reproducing the high-energy emission of low-luminosity active galactic nuclei (AGN) and of the GC source Sgr A\*, a compact radio source (Balick & Brown 1974; Bower et al. 2004; Shen et al. 2005; Doeleman et al. 2008). Stellar dynamics has revealed the mass of the SMBH in the GC to be $\approx 4\times 10^{6} M_{\odot}$ (e.g., Sch[ö]{}del et al. 2002; Ghez et al. 2003, 2008; Gillessen et al. 2009), and it turns out that the low-luminosity material at the GC is associated with Sgr A\*, whose luminosity is $L_{\rm Sgr A*}\approx 10^{-8} L_{\rm Edd}$.
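As a rough numerical check of the scales quoted above, the Eddington luminosity and critical accretion rate can be evaluated for the GC black hole. This is a sketch, not part of the paper; the constants and function names are ours.

```python
# Sketch: evaluate L_Edd and Mdot_crit = L_Edd / c^2 for M_BH ~ 4e6 M_sun,
# using the approximation L_Edd ~ 1.3e38 (M_BH / M_sun) erg/s quoted above.
# CGS units; all names are ours, not from the paper.

M_SUN = 1.989e33          # solar mass, g
C = 2.998e10              # speed of light, cm/s
SEC_PER_YR = 3.156e7      # seconds per year

def eddington_luminosity(m_bh_msun):
    """Eddington luminosity in erg/s for a black hole mass in M_sun."""
    return 1.3e38 * m_bh_msun

def critical_accretion_rate(m_bh_msun):
    """Critical accretion rate Mdot_crit = L_Edd / c^2, in g/s."""
    return eddington_luminosity(m_bh_msun) / C**2

m_bh = 4.0e6
l_edd = eddington_luminosity(m_bh)         # ~5.2e44 erg/s
mdot = critical_accretion_rate(m_bh)       # g/s
mdot_msun_yr = mdot * SEC_PER_YR / M_SUN   # ~9e-3 M_sun/yr

# The observed luminosity L_SgrA* ~ 1e-8 L_Edd is then ~5.2e36 erg/s.
l_sgr_a = 1.0e-8 * l_edd
```

This makes explicit just how sub-Eddington Sgr A\* is: the accretion rates discussed later in the paper ($\sim 10^{-7}\,M_\odot\,{\rm yr^{-1}}$) sit several orders of magnitude below $\dot{M}_{\rm crit}$.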
After the discovery of the magneto-rotational instability (MRI: Balbus & Hawley 1991, 1998), which can drive magnetohydrodynamic (MHD) turbulence as a source of viscosity for the accretion process, non-radiative MHD simulations of accretion flows have come to be accepted as a realistic model of ADAF/RIAF (Stone & Pringle 2001; Hawley & Krolik 2001). Many numerical MHD studies have revealed hot and geometrically thick accretion flows with a variety of complex motions (Matsumoto et al. 1999; Hawley 2000; Machida et al. 2000), outflows and jets (Igumenshchev et al. 2003; Proga & Begelman 2003; Kato et al. 2004), and oscillations (Kato 2004). The recently detected variability of the X-ray and near-infrared (NIR) emission of Sgr A\* (Baganoff et al. 2001, 2003; Ghez et al. 2004) may be induced by such multi-dimensional structure of the flow. The nonlocal nature of radiative processes is essential for testing MHD models; therefore, in order to clarify the structure and time variability of accretion flows, a full three-dimensional radiative transfer treatment of MHD accretion flows is indispensable.
The pioneering work examining the MHD model of accretion flows was done by Ohsuga, Kato & Mineshige (2005: hereafter OKM05). OKM05 assumed a cylindrical distribution of electron temperature by adopting the balance equation between radiative cooling and heating via Coulomb collisions, regardless of the gas temperature in the MHD model. They reconstructed for the first time a multi-band spectrum of the MHD model consistent with the observed spectra in the flaring state of Sgr A\*. However, the spectra in the quiescent state could not be reconstructed simultaneously in the radio and X-ray bands. In the context of the MHD model, the issue is thus what makes the difference between the flaring state and the quiescent state of Sgr A\*. We expect that the determination of the electron temperature is the key to understanding the occurrence of the two distinct states.
Moscibrodzka, Proga, Czerny & Siemiginowska (2007: hereafter MPCS07) presented the spectral features of the axisymmetric MHD flows of Proga & Begelman (2003). In contrast to OKM05, they calculated the electron temperature distribution by solving the heating-cooling balance equation at each grid point at a given time in the simulation. Moreover, they took into account advective energy transport and compressive heating in the balance equation. It turns out that the heating of electrons via Coulomb collisions is not always the dominant term in the balance equation. Unfortunately, MPCS07 failed to reproduce the radio and X-ray spectra with emission from thermal electrons alone. They concluded that a contribution of non-thermal electrons offers a much better representation of the spectral variability of Sgr A\*.
One difficulty in theoretical studies of the radiative features of hot accretion flows is that radiative energy transfer plays a critical role in determining the electron temperature. For example, radiative heating/cooling via synchrotron emission/absorption and Compton processes may dominate compressional and collisional heating in the energy balance equation at high electron temperature (Rees et al. 1982). This makes everything rather more complicated than in the case of MPCS07. Conversely, once we know the properties of the two-temperature plasma in MHD accretion flows that can reconstruct the radiation spectrum using a few relevant parameters, we can constrain the physics of the heating mechanism.
In this study, we investigate the radiative signatures of radiatively inefficient accretion flows in long-term 3-D global MHD simulations. By confronting them with the observed spectra of Sgr A\* in the flaring and quiescent states, we derive constraints on the electron temperature that are concordant with the observations. We also discuss the heating mechanism of electrons in magnetized accretion flows. In §2 we present our 3-D MHD model of accretion flows and describe the method of the radiative transfer calculation. We then present our results in §3. The final section is devoted to a summary and discussion.
Numerical Models and Methods
============================
Setup of simulations
--------------------
The physical model we use in this study is based on 3-D resistive MHD calculations with a pseudo-Newtonian potential (see Kato, Mineshige & Shibata 2004; Kato 2004 for more details). We pick the last 40 snapshots of our calculations from $t=30,000\,r_{\rm s}/c$ to $32,000\,r_{\rm s}/c$ with $dt=50\,r_{\rm s}/c$, where $r_{\rm s}$ and $c$ are the Schwarzschild radius and the speed of light, respectively. Note that the flow has evolved over more than 600 rotation periods at the innermost stable circular orbit (ISCO), where the Keplerian rotation period is about $50\,r_{\rm s}/c$; our MHD model is therefore supposed to be in a quasi-steady state. In our radiative transfer calculations, we use uniform meshes of $(N_{\rm x},N_{\rm y},N_{\rm z}) = (100, 100, 100)$ in Cartesian coordinates in a simulation box $|x, y, z| \leq 100\,r_{\rm s}$.
MHD model
---------
The quasi-steady MHD accretion disc has a hot, geometrically thick structure with a sub-thermal magnetic field. Fig. \[fig1:eps\] displays the spatial distribution of the density, the gas temperature, and the strength of the magnetic field in our MHD model. The density is normalized by the initial maximum density $\rho_{0}$, and the strength of the magnetic field is proportional to $\left(\rho/\rho_{0}\right)^{1/2}$ for the same black hole mass $M_{\rm BH}$. As can be seen, the MHD disc has non-axisymmetric structures (close to $m=1$, where $m$ is the azimuthal mode number) in density, gas temperature, and magnetic field strength simultaneously. Moreover, filamentary structures of cold and dense gas can be seen in the left and middle panels. Note that the gas temperature is relatively high in the funnel region along the $z$-axis because the centrifugal barrier prevents the penetration of accreting material. In the right panel, the MHD turbulence induced by the MRI and the differential rotation in the flow generate strong magnetic field regions within approximately $30\,r_{\rm s}$.
Electron temperature
--------------------
In our MHD model, it is assumed that the entire magnetic energy released in the diffusion region is thermalized instantly; the production of non-thermal electrons accelerated by magnetic reconnection is therefore neglected for self-consistency. In the previous study OKM05, the electron temperature was determined by local thermal equilibrium between radiative cooling and electron heating via Coulomb coupling in a cylindrical region. However, this assumption is invalid: in a preliminary radiative transfer calculation coupled with the energy balance equation, we found that the heating rate from Coulomb coupling cannot balance the cooling rate of electrons in each computational cell. Therefore, another heating mechanism for electrons, such as turbulent heating, must be invoked for physical reasoning. Similar conclusions have been reached by Sharma, Quataert, Hammett & Stone (2007: hereafter SQHS07). For this reason, we introduce a new parameter, $f_{\rm ep}\equiv T_{\rm e}/T_{\rm p}$, the ratio of the electron temperature to the proton temperature, so that the electron temperature is determined from the gas temperature $T_{\rm gas}$ by using $f_{\rm ep}$. We assume that $f_{\rm ep}$ is spatially uniform for simplicity. The electron temperature is thus obtained as: $$T_{\rm e} = \min{\left(\frac{f_{\rm ep}}{1+f_{\rm ep}}T_{\rm gas},\,m_{\rm e}c^2/k_{\rm b}\right)}$$ where $T_{\rm gas}$ is the gas temperature derived from the MHD simulation. We assume here that the electron temperature cannot exceed the temperature corresponding to the electron rest-mass energy, due to pair annihilation.
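The electron-temperature prescription above is simple enough to sketch directly; the following is our illustration (constants in CGS, function name ours), not code from the paper.

```python
# Sketch of the prescription in the text:
# T_e = min( f_ep/(1+f_ep) * T_gas , m_e c^2 / k_B ).
# CGS constants; names are ours.

M_E = 9.109e-28   # electron mass, g
C = 2.998e10      # speed of light, cm/s
K_B = 1.381e-16   # Boltzmann constant, erg/K

# Electron rest-mass temperature, ~5.93e9 K: the cap imposed by
# pair annihilation in the text.
T_REST = M_E * C**2 / K_B

def electron_temperature(t_gas, f_ep):
    """Electron temperature from the one-fluid gas temperature,
    capped at the rest-mass temperature."""
    return min(f_ep / (1.0 + f_ep) * t_gas, T_REST)

# Example: near the virial temperature T_gas ~ 1e12 K with f_ep = 0.25,
# the uncapped value 0.2 * 1e12 = 2e11 K exceeds T_REST, so the cap applies.
t_e_hot = electron_temperature(1.0e12, 0.25)
# For a cooler cell the cap is inactive:
t_e_cool = electron_temperature(1.0e9, 1.0)   # = T_gas / 2
```

Note that for the hottest cells the cap, not $f_{\rm ep}$, sets $T_{\rm e}$; the parameter mainly shapes the bulk of the flow.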
Parameters for radiation transfer calculation
---------------------------------------------
There are only three model parameters, $\rho_{0}$, $f_{\rm ep}$, and $M_{\rm BH}$, needed to perform the radiative transfer calculation in our study. The mass of the SMBH in the GC is fixed at $M_{\rm BH}=3.6\times 10^{6} M_{\odot}$ (e.g., Sch[ö]{}del et al. 2002). The other model parameters we choose here are as follows: (a) $\rho_{0}=8\times 10^{-15} {\rm g\,cm^{-3}}$ and $f_{\rm ep}=0.25$, (b) $\rho_{0}=8\times 10^{-15} {\rm g\,cm^{-3}}$ and $f_{\rm ep}=1$, (c) $\rho_{0}=8\times 10^{-16} {\rm g\,cm^{-3}}$ and $f_{\rm ep}=0.25$, and (d) $\rho_{0}=8\times 10^{-16} {\rm g\,cm^{-3}}$ and $f_{\rm ep}=1$. Note that these parameters are derived by fitting the observed broadband spectra. Because the flow velocity is identical in all models, a larger density corresponds to a higher mass accretion rate $\dot{M}$. The relation between $\rho_{0}$ and $\dot{M}$ at the ISCO can be described as follows: $$\dot{M}\approx 2.5\times 10^{-7}\left({\rho_{0}\over 8\times 10^{-15}\,{\rm g\,cm^{-3}}}\right) M_{\odot}\,{\rm yr}^{-1}.$$ This relation indicates that the mass accretion rate of our models is smaller than that inferred from the quiescent X-ray emission measured with [*Chandra*]{}, $\dot{M}\sim 10^{-6}\,M_{\odot}\,{\rm yr}^{-1}$ at the Bondi radius (Baganoff et al. 2003), but is consistent with that estimated from the Faraday rotation in the millimeter band, $\dot{M}\sim 10^{-7} - 10^{-8}\,M_{\odot}\,{\rm yr}^{-1}$ (Bower et al. 2003, 2005).
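The linear $\rho_{0}\rightarrow\dot{M}$ scaling above maps the two chosen densities onto the two accretion rates used throughout the paper; a minimal sketch (the function name is ours):

```python
# Sketch of the linear rho_0 -> Mdot scaling quoted above.

def mdot_at_isco(rho0_cgs):
    """Mass accretion rate at the ISCO in M_sun/yr, per
    Mdot ~ 2.5e-7 * (rho_0 / 8e-15 g cm^-3) M_sun/yr."""
    return 2.5e-7 * (rho0_cgs / 8.0e-15)

# The two densities used in models (a)/(b) and (c)/(d):
mdot_high = mdot_at_isco(8.0e-15)   # 2.5e-7 M_sun/yr
mdot_low  = mdot_at_isco(8.0e-16)   # 2.5e-8 M_sun/yr
```

Both values fall within the $10^{-7} - 10^{-8}\,M_\odot\,{\rm yr^{-1}}$ range inferred from Faraday rotation, as stated in the text.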
In the radiative transfer calculation, we treat synchrotron emission/absorption, (inverse-)Compton scattering, and bremsstrahlung emission/absorption of the thermal electrons. Non-thermal electrons produced by collisions of protons via $\pi$-decay (Mahadevan et al. 1998) are not taken into account, because we focus only on the radiative properties of thermal electrons (Loeb & Waxman 2007).
Radiative transfer calculation
------------------------------
We solve the following radiative transfer equation with electron scattering by using a Monte-Carlo method: $$\mbox{\boldmath$n$}\cdot\nabla\mbox{\boldmath$I$}_{\nu} = \chi_{\nu}(\mbox{\boldmath$S$}_{\nu} - \mbox{\boldmath$I$}_{\nu})$$ where $\mbox{\boldmath$I$}_{\nu}(x,y,z,\theta,\phi)$ is the specific intensity at the position $(x,y,z)$ in the direction $(\theta,\phi)$ with the frequency $\nu$, and $\chi_{\nu}(x,y,z,\theta,\phi) = n_{\rm e}\sigma_{\nu} + \kappa_{\nu}$ is the extinction coefficient, where $n_{\rm e}$, $\sigma_{\nu}$, and $\kappa_{\nu}$ are the electron number density, the scattering cross-section, and the absorption coefficient, respectively, and $$\mbox{\boldmath$S$}_{\nu} = {\varepsilon_{\nu}\over 4\pi\chi_{\nu}} + \oint\varphi(\nu, \mbox{\boldmath$n$}; \nu', \mbox{\boldmath$n$}') \alpha_{\nu'}\mbox{\boldmath$I$}_{\nu'}(\mbox{\boldmath$n$}')d\mbox{\boldmath$\Omega$}'$$ is the source function, where $\varepsilon_{\nu}$, $\alpha_{\nu}$, and $\varphi(\nu, \mbox{\boldmath$n$}; \nu', \mbox{\boldmath$n$}')$ are the local emissivity, the scattering albedo, and the phase function, respectively. The coordinate system is shown in Fig. \[fig2:eps\].
To generate photon packets, we randomly select a position weighted by the local emissivity as follows: $$\sum_{\rm i=1}^{\rm k-1}\varepsilon_{\nu}({\rm i}) < R_{1}\sum_{\rm i=1}^{\rm Nmesh}\varepsilon_{\nu}({\rm i}) < \sum_{\rm i=1}^{\rm k}\varepsilon_{\nu}({\rm i})$$ where $\varepsilon_{\nu}({\rm i})$ is the emissivity at the position index ${\rm i}$ and $R_{\rm 1}$ is a random number distributed uniformly in the interval $[0, 1]$. In our study, all pseudo-random numbers $R_{\rm j}$ are generated by using the Mersenne Twister method (Matsumoto & Nishimura 1998). The direction of the generated photon packets, $\mbox{\boldmath$n$}=(\sin{\theta}\cos{\phi}, \sin{\theta}\sin{\phi},\cos{\theta})$, is also determined by using the random numbers $R_{2}=(\cos{\theta} + 1)/2$ and $R_{3}=\phi/2\pi$. The frequency domain of the photon packets ranges from $10^{3}$ to $10^{25}$ Hz and is uniformly divided into 100 bins in logarithmic scale. In this study, $N_{\rm p} = 10^{7}$ photon packets are generated in every frequency bin in order to obtain statistically significant results.
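The packet-generation steps above can be sketched in a few lines. This is our illustration, not the authors' code: the cumulative-sum inequality becomes a `searchsorted` lookup, and NumPy's default generator (PCG64) stands in for the Mersenne Twister used in the paper.

```python
# Sketch of Monte-Carlo packet generation: emissivity-weighted cell
# selection and isotropic direction sampling, per the formulas above.
import numpy as np

def sample_emission_cell(emissivity, r1):
    """Pick cell k satisfying the cumulative-sum inequality:
    sum_{i<k} eps_i < R1 * sum_i eps_i <= sum_{i<=k} eps_i."""
    cdf = np.cumsum(emissivity)
    return int(np.searchsorted(cdf, r1 * cdf[-1]))

def sample_direction(r2, r3):
    """Isotropic unit vector n = (sin t cos p, sin t sin p, cos t)
    from R2 = (cos t + 1)/2 and R3 = phi / (2 pi)."""
    cos_t = 2.0 * r2 - 1.0
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = 2.0 * np.pi * r3
    return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

rng = np.random.default_rng(42)   # PCG64 here; the paper uses Mersenne Twister
eps = np.array([0.0, 3.0, 1.0, 0.0])           # toy per-cell emissivities
k = sample_emission_cell(eps, rng.random())    # zero-emissivity cells are never picked
n = sample_direction(rng.random(), rng.random())  # unit vector
```

Weighting the cell choice by emissivity ensures bright regions emit proportionally more packets, which is what makes the estimator unbiased.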
In order to evaluate the escape probability of an emerging photon packet, the optical depth is computed by direct integration along the photon packet trajectory: $$\tau_{\nu}(l)=\int_{0}^{l} \left(n_{\rm e}\sigma_{\nu} + \kappa_{\nu}\right)ds,$$ where $l$ is the distance between the origin of the photon packet and the computational boundary along the trajectory. Here, we use the Klein-Nishina scattering cross-section $\sigma_{\rm KN}$ for $\sigma_{\nu}$ (Rybicki & Lightman 1979), and for $\kappa_{\nu}$ the synchrotron, free-free, and bound-free self-absorption coefficient given by Kirchhoff's law assuming local thermal equilibrium (LTE) in every mesh cell, $$\kappa_{\nu}=\frac{\varepsilon^{\rm sy}_{\nu} + \varepsilon^{\rm ff}_{\nu} + \varepsilon^{\rm bf}_{\nu}}{4\pi B_{\nu}}$$ where $\varepsilon^{\rm sy}_{\nu}$, $\varepsilon^{\rm ff}_{\nu}$, and $\varepsilon^{\rm bf}_{\nu}$ are the synchrotron, free-free, and bound-free emissivities, respectively (Pacholczyk 1970; Stepney & Guilbert 1983), and $B_{\nu}$ is the Planck function. Accordingly, the escape probability of a generated photon packet is written as: $$w(l)=\exp{(-\tau_{\nu}(l))},$$ and the remaining fraction $1 - w(l)$ of the photon packet interacts with the gas. The scattering position, at a distance $l_{\rm s}$ from the original position, is determined by using a random number as follows: $$R_{4} = \left[1 - w(l_{\rm s})\right]/\left[1 - w(l)\right]$$ and the scattering albedo is given by: $$\alpha_{\nu} = \frac{n_{\rm e}\sigma_{\nu}}{n_{\rm e}\sigma_{\nu} + \langle\kappa_{\nu}\rangle}$$ where $\langle\kappa_{\nu}\rangle$ is the mean absorption coefficient along the photon packet trajectory. To determine the phase function for the Compton scattering process, $\varphi(\nu, \mbox{\boldmath$n$}; \nu', \mbox{\boldmath$n$}')$, we follow the method of Pozdnyakov et al. (1977). We repeat the same procedure for each scattered photon packet until either the photon leaves the computational box or $w(l) < \epsilon$, where $\epsilon=10^{-5}$ in this study. Note that in the above calculation photons can neither penetrate nor be generated in the region within $2\,r_{\rm s}$.
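The escape-probability bookkeeping above can be made concrete in a short sketch. This is our illustration (piecewise-constant extinction per ray segment is assumed for simplicity; names are ours): $\tau(l)$ is accumulated along the ray, $w(l)=e^{-\tau(l)}$, and the interaction depth is drawn by inverting $R_4 = [1 - w(l_{\rm s})]/[1 - w(l)]$.

```python
# Sketch of the optical-depth / escape-probability steps above.
import numpy as np

def optical_depth(chi, ds):
    """tau(l) = sum_i chi_i * ds_i along the packet trajectory,
    assuming a constant extinction chi_i on each segment of length ds_i."""
    return float(np.sum(np.asarray(chi) * np.asarray(ds)))

def escape_probability(tau):
    """w(l) = exp(-tau(l))."""
    return np.exp(-tau)

def sample_interaction_depth(tau_total, r4):
    """Invert R4 = (1 - exp(-tau_s)) / (1 - exp(-tau_total)) for tau_s,
    the optical depth at which the packet interacts."""
    return -np.log(1.0 - r4 * (1.0 - np.exp(-tau_total)))

# Toy ray with three segments of unit length:
tau = optical_depth([0.1, 0.4, 0.5], [1.0, 1.0, 1.0])  # tau = 1.0
w = escape_probability(tau)                            # fraction escaping
tau_s = sample_interaction_depth(tau, 0.5)             # lies in (0, tau)
```

By construction `sample_interaction_depth` always returns a depth inside the ray, so the interacting fraction $1 - w(l)$ is distributed consistently with the attenuation law.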
Finally, the computed radiation field is accumulated in a data array of $(N_{\rm x},N_{\rm y},N_{\rm z},N_{\theta},N_{\phi},N_{\nu}) = (100, 100, 100, 3, 4, 5)$ in order to generate synthetic images in a given frequency band and viewing angle. The computed spectrum of escaping photons at a fixed viewing angle is stored in a separate data array.
Results
=======
Mapping of escaping photons
---------------------------
We investigate the spatial distribution of the emergent radiation in order to explore the radiative nature of magnetized accretion flows. Fig. \[fig3:eps\] shows synthetic images of models (a), (b), (c), and (d) in different frequency bands at the viewing angle ($\theta$, $\phi$)$=$($\pi/3$, $\pi/4$). In Fig. \[fig3:eps\], the upper half of the images corresponds to the models with high-density plasma ($\dot{M}\approx 2.5\times 10^{-7} M_{\odot}\,{\rm yr}^{-1}$), whereas the lower half corresponds to those with low-density plasma ($\dot{M}\approx 2.5\times 10^{-8} M_{\odot}\,{\rm yr}^{-1}$). Each density model has two sub-categories: two-temperature plasma with $f_{\rm ep}=0.25$ and one-temperature plasma with $f_{\rm ep}=1$. A basic feature of all images is that the core of the emission region is larger than the size of the event horizon shadow and smaller than a radius of $50\,r_{\rm s}$, except for the image of model (b) (namely high-density, one-temperature plasma) in the millimeter band $10^{10} - 10^{11}$ Hz.
A distinctive non-axisymmetric feature is seen in all models in the millimeter band. The non-axisymmetric structure of the magnetized accretion flows is also visible in Fig. \[fig1:eps\]. The apparent size of the emission region in models (a) and (d) looks similar, whereas that in models (b) and (c) looks quite different. This is because the differences in density and temperature between models (a) and (d) compensate each other. On the other hand, the non-axisymmetric feature cannot be seen in the sub-millimeter band ($10^{11} - 10^{12}$ Hz) except for model (b); instead there is a nearly spherical emission region around the gravitational centre. Again, the apparent size of the emission region in models (a) and (d) looks quite similar. In both the millimeter and sub-millimeter bands, the core of the emission region corresponds to the region of strong magnetic field (see Fig. \[fig1:eps\]c), and the peak of the emission region is slightly shifted from the gravitational centre.
All models display a disc-like structure in the X-ray band $10^{17} - 10^{19}$ Hz. The apparent sizes of the emission region in the high-density models \[(a) and (b)\], and in the low-density models \[(c) and (d)\], look similar to each other. This is because the X-ray emission is produced by bremsstrahlung, which is sensitive to density rather than to electron temperature. Interestingly, an asymmetric structure can be seen only when the electron temperature is equal to the proton temperature \[models (b) and (d)\]. This feature is most likely produced by Compton scattering of low-energy photons generated by synchrotron emission (known as the synchrotron self-Compton process: SSC).
Spectral energy distribution (SED)
----------------------------------
In addition to the spatial distributions of the radiative fields, we calculate the SED of our MHD model. In Fig. \[fig4:eps\], the resultant SED is shown from the radio to the gamma-ray band, with the observed spectra of Sgr A\* superimposed. The overall spectrum consists of several radiative processes: self-absorbed synchrotron for $\nu\lsim 10^{12}$ Hz, synchrotron for $10^{12}\lsim\nu\lsim 10^{14}$ Hz, synchrotron self-Compton for $10^{14}\lsim\nu\lsim 10^{17}$ Hz, and thermal bremsstrahlung for $\nu\gsim 10^{17}$ Hz. As can be seen, the time variability differs between bands. In the optically thick region below the critical frequency $\nu_{\rm c}\approx 10^{12}$ Hz, there is only a small time variation; as the frequency increases towards the critical frequency, the time variation becomes larger. Note that the variability at the lowest frequency edge $\sim 10^{9}$ Hz depends on the number of photon packets used in the calculation; because of the strong self-absorption of the synchrotron process, we cannot exclude statistical errors in those frequency bands. Estimated from higher-frequency regions, the variability is about a factor of two in magnitude. In the optically thin region, on the other hand, the time variation becomes prominent in the low-frequency band $10^{12} - 10^{17}$ Hz and negligible in the high-energy band $\gsim 10^{17}$ Hz.
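The decomposition of the model SED into frequency bands can be collected into a small lookup; the band edges are taken from the text, while the function itself is our illustrative sketch.

```python
# Sketch: dominant radiative process per frequency band in the model SED,
# using the band edges quoted in the text (function name is ours).

def dominant_process(nu_hz):
    """Dominant emission mechanism at frequency nu (Hz) in the model SED."""
    if nu_hz < 1e12:
        return "self-absorbed synchrotron"
    if nu_hz < 1e14:
        return "synchrotron"
    if nu_hz < 1e17:
        return "synchrotron self-Compton"
    return "thermal bremsstrahlung"

radio = dominant_process(2.3e11)   # below nu_c: self-absorbed synchrotron
nir = dominant_process(1.4e14)     # NIR band: synchrotron self-Compton
xray = dominant_process(1e18)      # X-ray band: thermal bremsstrahlung
```

This ordering (optically thick synchrotron, synchrotron, SSC, bremsstrahlung) is also what determines which bands inherit the strong time variability discussed above.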
The resultant SED of model (a) nicely fits the observed SED in the flaring state over the whole frequency range (Fig. \[fig4:eps\]a). The difference between the emergent spectra and the total emissivity spectra below the critical frequency is induced by synchrotron self-absorption, whereas that in the X-ray band is caused by the effect of the viewing angle. Unlike the previous ADAF studies and OKM05, the low-frequency bump in the millimeter band $10^{10} - 10^{11}$ Hz is reconstructed successfully without invoking non-thermal electrons. This is a main product of introducing the parameter $f_{\rm ep}$, which can account for a spatially structured electron temperature distribution.
In order to explain the two orders of magnitude difference in X-ray emission between the flaring and quiescent states, the density needs to be reduced by at least one order of magnitude, as in model (c) (see Fig. \[fig4:eps\]c). The resultant SED of model (c) can fit the X-ray spectra in the quiescent state; however, it under-predicts the radio to IR spectra for the two-temperature plasma with $f_{\rm ep}=0.25$. The remaining option is to increase the electron temperature so as to compensate for the reduction of density, as in model (d) with one-temperature plasma, $f_{\rm ep}=1$ (see Fig. \[fig4:eps\]d). It turns out that the resultant SED of model (d) can successfully reconstruct the X-ray spectra of the quiescent state as well as the radio and IR spectra. Although the underlying physics of the electron heating is not clear, this is an outstanding result for the theory of hot accretion flows.
It is interesting to note that the X-ray variability does not depend strongly on the model parameters. Indeed, all models except model (b) show very weak X-ray variability. In model (b), soft X-ray photons at $\approx 10^{17}$ Hz are strongly contaminated by SSC photons because both the density and the electron temperature are increased, so that the scattering coefficient becomes large. Moreover, most of the variability in the ultraviolet band is produced by scattered photons and is correlated with the IR variability generated by synchrotron emission in all models. Note that the amplitude of the variability (the difference between the maximum and minimum power of escaping photons) in the IR and UV bands does not change much over the range of our model parameters. Therefore, we suspect that the variability in the recent NIR observations (Eckart et al. 2005) is strongly affected by either the structure or the dynamics of the accretion flows.
Summary and Discussion
======================
We have investigated the three-dimensional radiative features of radiatively inefficient accretion flows modeled by MHD simulations. The synthetic images of all models show that the core of the emission region in Sgr A\* is larger than the size of the event horizon shadow and smaller than a radius of $100\,r_{\rm s}$. We have found that a non-axisymmetric structure of $m=1$ is associated with an elongated core emission in the millimeter band. Remarkably, the peak location of the core emission in the millimeter band $\approx 10^{11}$ Hz is slightly shifted from the gravitational centre. This is consistent with the baseline-correlated flux density diagram of the recent VLBI observation at 230 GHz (Doeleman et al. 2008).
We have also demonstrated for the first time that our 3-D MHD model with different densities (namely, different mass accretion rates) can reproduce the observed broadband spectra, including both the X-ray quiescent and flaring states, simultaneously. We have found that the X-ray flaring state corresponds to a relatively high mass accretion rate with a [*weak*]{} coupling between the electron and proton temperatures, $f_{\rm ep}=0.25$, whereas the X-ray quiescent state corresponds to a relatively low mass accretion rate with a [*strong*]{} coupling between the electron and proton temperatures, $f_{\rm ep}=1$. This is the opposite of what one would expect if only Coulomb coupling were responsible for electron heating. We discuss this issue in the following.
The heating mechanism of electrons in hot accretion flows has been investigated by many groups (Bisnovatyi-Kogan & Lovelace 1997; Quataert 1998; Gruzinov 1998; Blackman 1999; Quataert & Gruzinov 1999; Medvedev 2000). In ADAF/RIAF models, the turbulent viscosity primarily heats protons, and the hot protons then heat electrons via the Coulomb coupling between them. When the proton temperature reaches the virial temperature $\approx 10^{12}$ K, the electron-proton Coulomb relaxation time becomes much larger than the dynamical time (Spitzer 1956; Stepney 1983). As a result, the electron temperature decouples from the proton temperature. This is a well-known understanding of the physics of hot accretion flows (Rees et al. 1982; Narayan et al. 1995). However, the spectra of hot accretion flows modeled by MHD simulations indicate that the coupling ratio $f_{\rm ep}$ is close to unity in both the flaring and quiescent states. Moreover, $f_{\rm ep}$ increases when the mass accretion rate decreases. This is the opposite of what plasma physics suggests, because the electron-proton Coulomb relaxation time increases when the density decreases, as $t_{\rm relax}\propto T_{\rm e}^{3/2}/\rho_{0}$. Therefore, our results imply that an alternative heating mechanism for electrons is required in order to keep the electron temperature close to the proton temperature.
Recently, SQHS07 have found that a significant fraction of the dissipative energy generated by turbulent viscosity can be received directly by electrons. This can naturally explain the discrepancy in $f_{\rm ep}$ between our results and ADAF/RIAF models, because the fraction of energy received by electrons is typically assumed to be $\delta\sim m_{\rm e}/m_{\rm p}\sim 10^{-3}$ in ADAF models (e.g., Narayan et al. 1998). Although the detailed physics underlying the viscous heating of electrons is still in dispute, their results are worth implementing as subgrid physics of electron heating for 3-D global MHD simulations in the near future.
Acknowledgments {#acknowledgments .unnumbered}
===============
YK would like to thank S. Mineshige for numerous useful conversations, M. Miyoshi and M. Tsuboi for fruitful discussions on the radio spectrum of Sgr A\*, and H. Hirashita and K. Yoshikawa for stimulating discussions. The radiation transfer code was developed on the [*FIRST*]{} simulator at the Center for Computational Sciences, University of Tsukuba. Radiation transfer computations were carried out on the [*FIRST*]{} simulator and XT4 at the Center for Computational Astrophysics (CfCA), National Astronomical Observatory of Japan (NAOJ). MHD computations were carried out on VPP5000 at the Astronomical Data Analysis Center of the National Astronomical Observatory, Japan (yyk27b). This work was supported in part by the [*FIRST*]{} project based on Grants-in-Aid for Specially Promoted Research by MEXT (16002003, MU), a Grant-in-Aid for Scientific Research (S) by JSPS (20224002, MU), and a Grant-in-Aid for Scientific Research of MEXT (20740115, KO).
[99]{} Abramowicz, M. A., Chen, X., Kato, S., Lasota, J.-P., & Regev, O. 1995, ApJ, 438, L37
Baganoff, F. K., et al. 2001, Nature, 413, 45
Baganoff, F. K., et al. 2003, ApJ, 591, 891
Balbus, S. A., & Hawley, J. F. 1991, ApJ, 376, 214
Balbus, S. A., & Hawley, J. F. 1998, Reviews of Modern Physics, 70, 1
Balick, B., & Brown, R. L. 1974, ApJ, 194, 265
Bisnovatyi-Kogan, G. S., & Lovelace, R. V. E. 1997, ApJ, 486, L43
Blackman, E. G. 1999, MNRAS, 302, 723
Bower, G. C., Wright, M. C. H., Falcke, H., & Backer, D. C. 2003, ApJ, 588, 331
Bower, G. C., Falcke, H., Herrnstein, R. M., Zhao, J.-H., Goss, W. M., & Backer, D. C. 2004, Science, 304, 704
Bower, G. C., Falcke, H., Wright, M. C., & Backer, D. C. 2005, ApJ, 618, L29
Doeleman, S. S., et al. 2008, Nature, 455, 78
Ghez, A. M., et al. 2003, ApJ, 586, L127
Ghez, A. M., et al. 2004, ApJ, 601, L159
Ghez, A. M., et al. 2008, ApJ, 689, 1044
Gillessen, S., Eisenhauer, F., Trippe, S., Alexander, T., Genzel, R., Martins, F., & Ott, T. 2009, ApJ, 692, 1075
Gruzinov, A. V. 1998, ApJ, 501, 787
Hawley, J. F. 2000, ApJ, 528, 462
Hawley, J. F., & Krolik, J. H. 2001, ApJ, 548, 348
Ichimaru, S. 1977, ApJ, 214, 840
Igumenshchev, I. V., Narayan, R., & Abramowicz, M. A. 2003, ApJ, 592, 1042
Kato, Y., Mineshige, S., & Shibata, K. 2004, ApJ, 605, 307
Kato, Y. 2004, PASJ, 56, 931
Kato, S., Fukue, J., & Mineshige, S. 2008, Black-Hole Accretion Disks — Towards a New Paradigm — (Kyoto: Kyoto University Press)
Loeb, A., & Waxman, E. 2007, Journal of Cosmology and Astro-Particle Physics, 3, 11
Matsumoto, R. 1999, Numerical Astrophysics, 240, 195
Matsumoto, M., & Nishimura, T. 1998, ACM Trans. on Modeling and Computer Simulation, 8, 3
Machida, M., Hayashi, M. R., & Matsumoto, R. 2000, ApJ, 532, L67
Mahadevan, R. 1998, Nature, 394, 651
Moscibrodzka, M., Proga, D., Czerny, B., & Siemiginowska, A. 2007, A&A, 474, 1 (MPCS07)
Medvedev, M. V. 2000, ApJ, 541, 811
Narayan, R., & Yi, I. 1994, ApJ, 428, L13
Narayan, R., Yi, I., & Mahadevan, R. 1995, Nature, 374, 623
Ohsuga, K., Kato, Y., & Mineshige, S. 2005, ApJ, 627, 782 (OKM05)
Pacholczyk, A. G. 1970, Series of Books in Astronomy and Astrophysics (San Francisco: Freeman)
Proga, D., & Begelman, M. C. 2003, ApJ, 592, 767
Pozdnyakov, L. A., Sobol, I. M., & Syunyaev, R. A. 1977, Sov. Astron., 21, 708
Quataert, E. 1998, ApJ, 500, 978
Quataert, E., & Gruzinov, A. 1999, ApJ, 520, 248
Rees, M. J., Begelman, M. C., Blandford, R. D., & Phinney, E. S. 1982, Nature, 295, 17
Rybicki, G. B., & Lightman, A. P. 1979, Radiative Processes in Astrophysics (New York: Wiley-Interscience)
Sch[ö]{}del, R., et al. 2002, Nature, 419, 694
Sharma, P., Quataert, E., Hammett, G. W., & Stone, J. M. 2007, ApJ, 667, 714 (SQHS07)
Shen, Z.-Q., Lo, K. Y., Liang, M.-C., Ho, P. T. P., & Zhao, J.-H. 2005, Nature, 438, 62
Spitzer, L. 1956, Physics of Fully Ionized Gases (New York: Interscience Publishers)
Stepney, S. 1983, MNRAS, 202, 467
Stepney, S., & Guilbert, P. W. 1983, MNRAS, 204, 1269
Stone, J. M., & Pringle, J. E. 2001, MNRAS, 322, 461
Yuan, F., Quataert, E., & Narayan, R. 2003, ApJ, 598, 301
[^1]: E-mail: kato.yoshiaki@isas.jaxa.jp
---
abstract: 'We present a sparse estimation and dictionary learning framework for compressed fiber sensing based on a probabilistic hierarchical sparse model. To handle severe dictionary coherence, selective shrinkage is achieved using a Weibull prior, which can be related to non-convex optimization with $\ell_p$-norm constraints for $0\!<\!p\!<\!1$. In addition, we leverage the specific dictionary structure to promote collective shrinkage based on a local similarity model. This is incorporated in the form of a kernel function in the joint prior density of the sparse coefficients, thereby establishing a Markov random field-relation. Approximate inference is accomplished using a hybrid technique that combines Hamiltonian Monte Carlo and Gibbs sampling. To estimate the dictionary parameter, we pursue two strategies, relying on either a deterministic or a probabilistic model for the dictionary parameter. In the first strategy, the parameter is estimated based on alternating estimation. In the second strategy, it is jointly estimated along with the sparse coefficients. The performance is evaluated in comparison to an existing method in various scenarios using simulations and experimental data.'
author:
- Christian Weiss
- ' Abdelhak M. Zoubir'
bibliography:
- './bibliography.bib'
title: Dictionary Learning Strategies for Compressed Fiber Sensing Using a Probabilistic Sparse Model
---
Introduction
============
Fiber sensors are versatile devices with broad applicability [@Kersey1997; @Culshaw2008; @Nakazaki2009; @Yamashita2009]. They are of high interest in smart structures to sense and react to the environment [@Measures1992; @Udd1996]. For quasi-distributed sensing based on wavelength-division multiplexing (WDM), fiber Bragg grating (FBG) sensors are often employed due to their sensitivity to strain and temperature. An FBG describes a local variation of the refractive index and reflects light at a certain wavelength, called *Bragg wavelength*. Typically, a number of detuned FBGs is imprinted into the core of an optical fiber. Fiber interrogation is performed using broadband light sources or wavelength-tunable lasers. The latter feature higher local signal-to-noise ratios. However, in order to monitor time-varying perturbations, the laser has to sweep quickly through the tuning range. This requires high-speed analog-to-digital converters (ADCs) and produces large amounts of data.\
*Compressed sensing* (CS) [@Baraniuk2007; @Candes2008; @Eldar2012] can help to alleviate these problems by taking samples in form of projections into a low-dimensional subspace. The original signal can be reconstructed by exploiting the sparsity of the signal with respect to an adequate dictionary [@Candes2005; @Candes2008a]. This task strongly resembles the *sparse synthesis* problem with redundant dictionaries. Besides greedy methods, such as Orthogonal Matching Pursuit (OMP) [@Pati1993], $\ell_1$-minimization is a popular method to solve the sparse reconstruction problem [@Tibshirani1994; @Donoho2003; @Candes2011]. It relies on the *restricted isometry property* (RIP), which essentially states that unique sparse solutions can be recovered by restricting the $\ell_1$-norm instead of the $\ell_0$-norm [@Donoho2001; @Candes2006]. Redundant dictionaries can yield highly sparse representations that allow for estimating quantities at high resolution directly in the sparse domain [@Donoho2003; @Malioutov2005]. However, redundancy causes inter-column coherence and it is likely that the required RIP conditions are no longer fulfilled [@Donoho2003; @Rauhut2008; @Candes2011]. The $\ell_p$-norm, with $0\!<\!p\!<\!1$, offers a trade-off to avoid an NP-hard combinatorial problem imposed by the $\ell_0$-norm, while a unique solution might still be retrieved [@Chartrand2007; @Chartrand2008].\
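The $\ell_1$-based reconstruction described above can be sketched with a minimal iterative soft-thresholding (ISTA) solver. The random dictionary, the dimensions, and the regularization weight below are illustrative assumptions, not values from this work.

```python
import numpy as np

def ista(B, y, lam=0.05, n_iter=500):
    """Minimal ISTA for min_x 0.5*||y - B x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(B, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(B.shape[1])
    for _ in range(n_iter):
        g = x + B.T @ (y - B @ x) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
M, N, K = 40, 120, 3                          # compressed size, atoms, sparsity
B = rng.standard_normal((M, N)) / np.sqrt(M)  # incoherent random dictionary
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = [2.0, -1.5, 1.0]
y = B @ x_true + 0.01 * rng.standard_normal(M)
x_hat = ista(B, y)
```

For the coherent, shift-invariant dictionaries arising in CFS, such plain $\ell_1$ recovery degrades, which is what motivates the $\ell_p$-related prior and structured shrinkage proposed below.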
Dictionaries can be classified as parametric or non-parametric. Non-parametric dictionaries are typically learned from training data and often used if no analytical model is available. While they can yield sparser representations of certain data realizations [@Mairal2008a], non-parametric dictionaries usually lack an interpretable structure and are inefficient in terms of storage. Parametric dictionaries, in turn, rely on an analytical model for the observed signal. Their analytic form offers an efficient implementation and a means to obtain optimality proofs and error bounds [@Rubinstein2010]. They are also favorable in terms of scalability and storage-efficiency. *Translation-invariant* dictionaries represent an important sub-class of parametric dictionaries, that can be used to estimate the translation coefficients of localized waveforms. Nonetheless, due to the complexity of natural signals, some model parameters might be unknown or contain uncertainty. Parametric *Dictionary Learning* (DL) addresses this problem with the aim of estimating these parameters from the measured data. Herein, statistical DL methods, such as maximum likelihood (ML) or maximum *a posteriori* (MAP) estimation, are commonly employed [@Rubinstein2010]. In order to solve the resulting optimization problem, *alternating estimation* (AE) is a frequently used sub-optimal paradigm that iteratively optimizes a local objective function. In a Bayesian setting, the Expectation Maximization (EM) algorithm is a popular variant of AE-based estimation [@Rubinstein2010].\
A model for the sparse coefficients can be of deterministic or probabilistic nature. While the deterministic case is often assumed in sparse estimation [@Donoho2003; @Candes2011], a probabilistic model offers high flexibility to take model deviations and measurement errors into account. Moreover, a hierarchical structure can be used to incorporate additional uncertainty in prior assumptions. Sparsity can either be promoted by continuous distributions, resulting in *weakly sparse* models, or by discrete mixtures, leading to *strongly sparse* models. A prominent example of discrete mixtures are *Spike & Slab* models [@Ishwaran2005a]. They are based on binary activations and yield strongly sparse representations. Continuous sparse priors, such as a Gaussian scale mixture or the double-exponential (Laplace) prior, feature high excess kurtosis with heavy tails and a narrow peak around zero. Besides sparsity, additional knowledge of the signal, e.g. correlation, can be incorporated [@Eldar2010; @Zhang2011a].\
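The distinction between strongly and weakly sparse models can be illustrated numerically: a Spike & Slab draw contains exact zeros via its binary activations, whereas a Laplace draw has no exact zeros but concentrates mass near zero. The activation probability and scale below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# strongly sparse: Spike & Slab -- exact zeros via binary activations
active = rng.random(n) < 0.1                  # assumed activation probability
spike_slab = np.where(active, rng.standard_normal(n), 0.0)

# weakly sparse: Laplace draws have no exact zeros, only many small values
laplace = rng.laplace(0.0, 1.0, n)

frac_exact_zero = np.mean(spike_slab == 0.0)      # ~0.9
frac_near_zero = np.mean(np.abs(laplace) < 0.1)   # ~1 - exp(-0.1)
```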
For many practical models, evaluating the posterior distribution is not feasible and approximate methods, such as *Markov Chain Monte Carlo (MCMC)* or variational Bayes methods, have to be used to accomplish inference. Variational methods use rather simple analytic functions to approximate the posterior distribution by factorization, which is favorable in terms of scalability and computational costs but leads to a deterministic approximation [@Seeger2008; @Bishop2006]. MCMC methods attempt to sample the posterior distribution, where subsequent samples form a *Markov chain* [@Bishop2006]. *Hamiltonian Monte Carlo* (HMC) is a powerful MCMC technique, that is especially suitable for sampling high-dimensional spaces in the presence of correlation [@Neal2011]. However, MCMC performance is generally limited by the available computation time, thereby relying on a stochastic approximation. Another application of MCMC is found in non-convex optimization, where stochastic gradient MCMC has gained popularity for large-scale Bayesian learning [@Chen2014; @Chen2015; @Chen2016].\
In the present work, we consider the problem of *Compressed Fiber Sensing* (CFS) with highly coherent translation-invariant dictionaries and imperfectly known parameters. For the sparse coefficients, a weakly sparse hierarchical model is considered. We also establish a relation between this model and non-convex optimization with $\ell_p$-norm constraints for $0\!\!<\!\!p\!\!<\!\!1$. In order to alleviate the problem of dictionary coherence, we leverage additional structure of the dictionary and achieve augmented sparsity by establishing a *Markov random field* (MRF) relation among the sparse coefficients. For dictionary learning, we pursue two different strategies: In the first strategy, we consider a deterministic dictionary parameter, that is estimated using a Monte Carlo EM algorithm. In the second strategy, a probabilistic hierarchical model for the dictionary parameter is considered, leading to a full Bayesian formulation and joint estimation of the sparse coefficients and the dictionary parameter. In both strategies, approximate inference is accomplished using a hybrid MCMC method based on Gibbs sampling and HMC. Finally, we use simulations and real data to compare the proposed methods to previous work, where a deterministic model is considered for the sparse coefficients and the dictionary parameter. For the deterministic case, we derive the Cram[é]{}r-Rao bound (CRB) to assess the performance gain achieved by a probabilistic model.
Contributions
-------------
- We propose a probabilistic model for the sparse coefficients, where a Weibull prior is used to promote (weak) sparsity. Additional collective shrinkage is achieved by establishing an MRF-relation among the sparse coefficients based on a bivariate kernel function in the joint prior density. This helps to moderate the impact of severe dictionary coherence and can be used in general sparse synthesis problems with similar dictionary structure. We also establish a relation to non-convex optimization with constraints on the $\ell_p$-norm for $0<p<1$.
- For dictionary learning, we investigate two conceptually different strategies, assuming either a deterministic (***S1***) or a stochastic (***S2***) dictionary parameter. In both strategies, the noise level can be jointly estimated along with the sparse coefficients. We further highlight advantages, disadvantages and limitations to offer support in choosing an adequate method for practical systems.
- To accomplish inference in these models, we use a hybrid MCMC method, combining HMC and Gibbs sampling. We show its applicability and efficacy in the considered sampling problem for CFS.
- We use simulations to evaluate the performance of the proposed sparse estimation and DL methods for various scenarios of different CS sample sizes, SNRs and CS matrices. These results are compared to an existing method in [@Weiss2016], where the sparse coefficients and the dictionary parameter are assumed to be deterministic. In addition, we provide a real-data example to verify the practical applicability of ***S1*** and ***S2***.
- We derive the Cram[é]{}r-Rao bound for jointly estimating the sparse coefficients and the dictionary parameter in the deterministic case. It is a valid bound for the competing method in [@Weiss2016], and serves to assess the achieved performance gain of our probabilistic approach.
Related Work
============
There exists little work addressing the combination of CS and DL for the application of WDM-based distributed fiber-optic sensing. In [@Weiss2013], a model for the received sensor signal is presented, from which a redundant shift-invariant parametric dictionary is created. These works focus on the aspect of CS and sparse estimation in the case of uncertain dictionary parameters. The authors use AE-based estimation to determine the dictionary parameters, where a pre-processing routine accounts for severe dictionary coherence. Unlike our approach, these works use a deterministic model for the sparse coefficients and dictionary parameters.\
Weakly sparse models have been widely used in the literature. A comprehensive analysis of different hierarchical sparse prior models is provided in [@Mohammad-Djafari2012]. The general problem of choosing the prior in weakly sparse models for sparse regression is addressed in [@Polson2010], where the authors describe various properties of different shrinkage priors and illuminate the selection problem from two perspectives: prior distributions and penalty functions. The work in [@Mohamed2012] also investigates Bayesian methods with different sparse models in comparison to classical $\ell_1$-minimization. Seeger [@Seeger2008] found that the Laplace prior is able to shrink most components close to zero, while allowing for selected components to become sufficiently large. This effect, termed *selective shrinkage*, is most noticeable for heavy-tailed priors, e.g. the Student’s $t$-prior [@Seeger2008] or the *horseshoe* prior in [@Carvalho2010; @Polson2010]. Based on these findings, we select a sparsity prior that resembles a positive version of the horseshoe prior. Other works, that focus on the perspective of penalized regression, report higher sparsity levels by penalizing the $\ell_p$-norm with $0<p<1$ instead of the $\ell_1$-norm [@Gupta2013]. The authors in [@Chartrand2007] show that the RIP requirements for the dictionary can be relaxed in this case. It is also pointed out in [@Chartrand2007; @Chartrand2008] that non-convex CS with $\ell_p$-norm penalization requires fewer measurements than standard CS, which is based on the $\ell_1$-norm. We rely on these results and show a relation between the considered sparsity prior and non-convex optimization with $\ell_p$-norm constraints.\
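The selective-shrinkage argument can be made concrete by comparing scalar proximal operators: soft thresholding (the $\ell_1$/Laplace case) biases large coefficients by a constant, whereas an $\ell_p$ penalty with $p=0.5$ barely shrinks large coefficients while still annihilating small ones. The penalty weight below is an arbitrary choice for illustration, and the non-negativity restriction matches the setting of this paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def prox(g, lam, p):
    """argmin_{x >= 0} 0.5*(x - g)**2 + lam * x**p  (non-negative coefficient)."""
    res = minimize_scalar(lambda x: 0.5 * (x - g) ** 2 + lam * x ** p,
                          bounds=(0.0, abs(g) + 1.0), method="bounded")
    # keep the boundary solution x = 0 if it is better (objective is non-convex)
    return res.x if res.fun < 0.5 * g ** 2 else 0.0

lam = 0.5
bias_l1 = 5.0 - prox(5.0, lam, 1.0)   # soft threshold: constant bias lam
bias_lp = 5.0 - prox(5.0, lam, 0.5)   # l_0.5: much smaller bias on large inputs
small = prox(0.2, lam, 0.5)           # small inputs are shrunk exactly to zero
```

Large coefficients survive the $\ell_{0.5}$ penalty almost unbiased, while small ones are set to zero, which is precisely the selective shrinkage behavior sought from the prior.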
There exist several approaches to exploit additional structure of the signal. One example is *block sparsity* [@Eldar2010]. A block sparse Bayesian learning framework is proposed in [@Zhang2011a], pointing out how correlation can be exploited in regularization algorithms. Wakin *et al.* introduce the concept of *joint sparsity* for signal recovery in distributed CS theory. In [@Malioutov2005], temporal correlation across subsequent CS measurements is considered, while the authors in [@Chen2013a] use correlation to achieve smoothness. Another related concept is proposed in [@Altmann2015], where a truncated multivariate Ising MRF model is used to describe the correlation between adjacent pixels for image processing. Different from these works, we use the ideas of MRFs [@Murphy2012] and exploit correlation to achieve collective shrinkage among the sparse coefficients.\
A comparative analysis in [@Mohamed2012] suggests that MCMC methods are powerful for inference in sparse models. In [@Neal2011], the benefits of HMC and Gibbs sampling in hierarchical models are outlined. It is also shown that HMC can be more effective than a Gibbs sampler for sampling high-dimensional spaces in the presence of correlation. According to these results, we consider a hybrid MCMC method that combines HMC and Gibbs sampling for inference in our hierarchical model, where the sparse coefficients are high-dimensional and correlated. For parametric DL, the Monte Carlo EM algorithm in ***S1*** represents one variant of the frequently applied AE-based estimation. Comparable to ***S2*** is the Bayesian framework for sparse estimation and DL in [@Hansen2014]. However, the authors use a Gaussian prior without correlation.
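As a sketch of why HMC suits correlated, high-dimensional targets, the following minimal sampler (leapfrog integration plus a Metropolis correction) draws from a strongly correlated bivariate Gaussian. The step size, trajectory length, and target are ad-hoc illustrative choices, not the sampler used in this work.

```python
import numpy as np

def hmc(logp_grad, x0, n_samples=2000, eps=0.15, n_leap=20, seed=1):
    """Minimal Hamiltonian Monte Carlo sampler (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.size)          # resample momentum
        x_new, p_new = x.copy(), p.copy()
        _, g = logp_grad(x_new)
        p_new += 0.5 * eps * g                   # leapfrog: initial half step
        for i in range(n_leap):
            x_new += eps * p_new
            lp_new, g = logp_grad(x_new)
            if i < n_leap - 1:
                p_new += eps * g
        p_new += 0.5 * eps * g                   # final half step
        # Metropolis accept/reject corrects the discretization error
        lp0, _ = logp_grad(x)
        h0 = -lp0 + 0.5 * p @ p
        h1 = -lp_new + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h0 - h1)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# target: zero-mean bivariate Gaussian with strong correlation
rho = 0.9
Sinv = np.linalg.inv(np.array([[1.0, rho], [rho, 1.0]]))

def logp_grad(x):
    return -0.5 * x @ Sinv @ x, -Sinv @ x

S = hmc(logp_grad, np.zeros(2))
```

The momentum-driven trajectories move along the correlated ridge in few iterations, where a coordinate-wise Gibbs sampler would take small, strongly autocorrelated steps.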
Outline
-------
In Section \[sec:problem\_and\_model\], the signal model for CFS is introduced, and in Section \[sec:CRB\], the CRB for joint estimation of the deterministic sparse coefficients and dictionary parameters is derived. Section \[sec:weak\_sparsity\_model\] details the sparsity and local similarity model, while Section \[sec:approx\_inference\] describes the hybrid MCMC method for approximate inference in this model. The parametric DL strategies ***S1*** and ***S2*** are described in Section \[sec:PDL\]. Section \[sec:performance\] shows the working principle along with a performance analysis of the proposed and an existing method based on simulations and experimental data. A discussion of the results and findings is given in Section \[sec:discussion\]. Section \[sec:conclusion\] concludes this work.
Signal Model {#sec:problem_and_model}
============
In order to determine the quantity and nature of impairments at the FBGs in a WDM-based fiber sensor, the time delays of the reflections from the individual FBGs need to be estimated. We adopt the model in [@Weiss2013; @Weiss2015; @Weiss2016], where CS-based acquisition is employed to reduce the number of samples to be stored and processed. The CS measurements are described by $$\label{eq:basic_model}
\mathbf{y} = \boldsymbol{\Phi}\mathbf{A}(\theta)\mathbf{x} + \mathbf{n}\,,$$ where $\boldsymbol{\Phi} \in \mathbb{R}^{M\times L}$ is the CS sampling matrix and $\mathbf{n}\in \mathbb{R}^{M}$ is a Gaussian noise component with independent and identically distributed (i.i.d.) entries, $n_m \sim \mathcal{N}(0,\sigma_n^2)$, $m=1,\dots,M$. The vector $\mathbf{x} \in \mathbb{R}^N$ is sparse with $K$ significant components, and $\theta\!\in\!\mathbb{R}$ is a scalar dictionary parameter. The matrix $\mathbf{A}(\theta)$ represents a redundant shift-invariant dictionary and its columns, called *atoms*, represent FBG reflections on a dense grid of delays. The indices of the $K$ significant components in $\x$ indicate the desired reflection delays. They are collected in the set $\mathcal{S}\! =\! \{i_1,\dots,i_K\}$. We can write the full data likelihood function for this model by $$\label{eq:Gauss_likelihood}
\hspace{0.05cm} {p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\theta \hspace{0.8pt} )} = (\sqrt{2\pi}\sigma_n)^{-M}{\mathrm{exp}\text{$\left( \!-\frac{1}{2\sigma_n^2}\|\y-\boldsymbol{\Phi}\mathbf{A}(\theta)\x\|_2^2 \right)$} }\!.\hspace{-0.3cm}$$ The $i$-th dictionary atom, $i\!=\!1,\dots,N$, is defined as $$\label{eq:dict_atoms_elements}
[\mathbf{a}_i]_l(\theta) = r(lT_d - i\delta t, \theta),\ \ l=1,\dots,L \,,$$ where the generating function, $r(lT_d - i\delta t, \theta)$, describes the reflection from a single FBG, incrementally shifted by $\delta t$ and sampled with a design sampling period, $T_d$. In order to specify the dictionary parameter in CFS according to [@Weiss2016], we write $$\label{eq:sensor_signa_IFT_model}
r(t,\theta) = \int_{-\infty}^{\infty} \text{e}^{j2\pi f t} H_{\text{\tiny LP}}(f,\theta)\,i_{\text{ph}}(f)\, \text{d}f \,.$$ Herein, $i_{\text{ph}}(f)$ is the received photocurrent in the frequency domain, and $H_{\text{\tiny LP}}(f,\theta)$ is the transfer function of a lowpass filter, that models a limited *effective bandwidth* of the receiver circuitry. This bandwidth is described in terms of a positive dictionary parameter, $\theta\in\mathbb{R}_+$. As an auxiliary parameter, it accounts for different indistinguishable sources of uncertainty, that all contribute to the broadening in the temporal response of the FBG reflections. A detailed model for $i_{\text{ph}}(f)$ is
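A minimal numerical illustration of the model in (\[eq:basic\_model\]): the generating function $r$ is replaced here by a hypothetical Gaussian pulse (the paper instead derives it from the photocurrent model above), and $\boldsymbol{\Phi}$ is a random Gaussian CS matrix; all dimensions are arbitrary.

```python
import numpy as np

def make_dictionary(L=256, N=200, Td=1.0, dt=1.0, theta=0.05, t0=20.0):
    """Shift-invariant dictionary: atom i samples r(l*Td - i*dt, theta).
    r is a stand-in Gaussian pulse; theta controls its (inverse) width."""
    grid = np.arange(L) * Td
    A = np.empty((L, N))
    for i in range(N):
        t = grid - (t0 + i * dt)
        A[:, i] = np.exp(-0.5 * (theta * t) ** 2)
    return A / np.linalg.norm(A, axis=0)        # unit-norm atoms

rng = np.random.default_rng(3)
L, N, M = 256, 200, 80
A = make_dictionary(L, N)
Phi = rng.standard_normal((M, L)) / np.sqrt(M)  # CS sampling matrix
x = np.zeros(N)
x[[30, 90, 150]] = [1.0, 0.8, 0.6]              # K = 3 reflection delays
y = Phi @ A @ x + 0.01 * rng.standard_normal(M) # compressed measurements

coherence = A[:, 99] @ A[:, 100]                # adjacent atoms nearly collinear
```

The last line exposes the severe inter-column coherence of such redundant shift-invariant dictionaries, which is the core difficulty addressed by the proposed prior.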
The CRB for joint estimation of ($\x,\theta$) in CFS {#sec:CRB}
====================================================
We derive the CRB for jointly estimating the deterministic parameters ($\x,\theta$). This is a valid bound for the model considered in [@Weiss2016] and can be used to assess the relative performance gain achieved by the proposed probabilistic sparse model and DL strategies. Although the Bayesian CRB in [@Bobrovsky1987] can be empirically determined, we found that this bound is very loose, due to the high information content in the considered sparsity prior. Therefore, and in regard of the comparative analysis with the deterministic case in Section \[sec:performance\], the non-Bayesian CRB is more useful in this case.\
The constrained CRB for estimating $\x$ with sparsity constraints has been derived in [@Ben-Haim2010]. However, this derivation does not assume uncertainty in the dictionary. It is based on locally balanced sets and involves the projection of the *Fisher Information matrix* (FIM), $\boldsymbol{\mathcal{I}}(\x)$, onto a low-dimensional subspace spanned by the so-called *feasible directions*. Any estimator, $\hat{\x}$, for which the constrained CRB is a valid lower bound, must be unbiased with respect to these directions. The projection matrix can be created from the unit vectors corresponding to the non-zero coefficients in $\x$, that is $\mathbf{U}=[\mathbf{e}_{i_1},\dots,\mathbf{e}_{i_K}]$ with $i_k\in \mathcal{S}$. For a Gaussian likelihood as in (\[eq:Gauss\_likelihood\]), the FIM can be derived from the expected value of the Hessian matrix of the log-likelihood function, i.e. [@Kay1993; @Ben-Haim2010] $$\boldsymbol{\mathcal{I}}(\x) = - {\mathbb{E}_{\y}\hspace{4.0pt} }{\nabla_{\hspace{-1.5pt}\x}}^2 \log {p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\theta \hspace{0.8pt} )} = \frac{1}{\sigma_n^2}\mathbf{B}^{\!\top}\mathbf{B},$$ with $\mathbf{B} = \boldsymbol{\PHI}\mathbf{A}$. Further, we define the reduced FIM by $\boldsymbol{\mathcal{I}}_K = \mathbf{U}^{\!\top}\boldsymbol{\mathcal{I}}(\x)\,\mathbf{U}$. Then, given that $\x$ is exactly $K$-sparse, the constrained CRB for a *known* dictionary becomes [@Ben-Haim2010] $$\text{Cov}(\hat{\x}) \succeq \mathbf{U}\,\boldsymbol{\mathcal{I}}_K^{-1}\mathbf{U}^{\!\top},\quad \|\x\|_0 = K.$$ Based on these results, we derive the CRB for the joint parameters $\boldsymbol{\gamma} = (\x,\theta)$. First, we derive the Fisher information for $\theta$, given that $\x$ is known. It is given by $$\begin{aligned}
\nonumber \mathcal{I}(\theta) &=& - {\mathbb{E}_{\y}\hspace{4.0pt} } {\frac{\partial^2 }{\partial \theta^2}}\log {p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\theta \hspace{0.8pt} )}\\[0.0cm]
\nonumber &=& {\mathbb{E}_{\y}\hspace{4.0pt} } {\frac{\partial^2 }{\partial \theta^2}}\frac{1}{2\sigma_n^2} (\y-\PHI\mathbf{A}(\theta)\x)^{\!\top} ( \y-\PHI\mathbf{A}(\theta)\x)\\[0.0cm]
\label{eq:Fisher_theta} &=& \frac{1}{\sigma_n^2}\,\x^{\!\top}\mathbf{A}'(\theta)^{\!\top}\PHI^{\!\top}\PHI\mathbf{A}'(\theta)\x .\end{aligned}$$ Herein, $\mathbf{A}'(\theta)$ denotes the (element-wise) derivative of $\mathbf{A}(\theta)$ with respect to $\theta$. Next, we have to take into account that $\x$ and $\theta$ share some mutual information. Therefore, we define the combined FIM: $$\boldsymbol{\mathcal{I}}(\boldsymbol{\gamma}) = \left(
\begin{matrix}
\boldsymbol{\mathcal{I}}(\x) & -{\mathbb{E}_{\y}\hspace{4.0pt} }\mathbf{u}\\[0.1cm]
-{\mathbb{E}_{\y}\hspace{4.0pt} }\mathbf{u}^{\!\top} & \mathcal{I}(\theta)
\end{matrix} \right),$$ where $\mathbf{u}=[u_1,\dots,u_N]^{\!\top}$ and $u_i = {\frac{\partial }{\partial x_i}}{\frac{\partial }{\partial \theta}}\log {p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\theta \hspace{0.8pt} )}$. Since the partial derivatives can be interchanged, the off-diagonal elements are identical. In order to complete the definition of $\boldsymbol{\mathcal{I}}(\boldsymbol{\gamma})$, we determine $$\begin{aligned}
\nonumber -{\mathbb{E}_{\y}\hspace{4.0pt} \!u_i } &=& -{\mathbb{E}_{\y}\hspace{4.0pt} }\frac{\partial^2}{\partial x_i\partial\theta} \log{p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\theta \hspace{0.8pt} )}\\
&=& \frac{1}{\sigma_n^2}\x^{\!\top}\mathbf{A}'(\theta)^{\!\top}\PHI^{\!\top}\PHI\,\mathbf{a}_i(\theta).\end{aligned}$$ The reduced FIM is obtained by appending $\theta$ to the set of feasible directions, i.e. $\mathbf{\tilde{U}} = \left[\begin{smallmatrix}\mathbf{U} & \mathbf{0}\\ \mathbf{0}^{\!\top} & 1\end{smallmatrix}\right]$, such that the coordinate $\theta$ is included. Hence, $\boldsymbol{\mathcal{I}}_{K+1}=\mathbf{\tilde{U}}^{\!\top}\boldsymbol{\mathcal{I}}(\boldsymbol{\gamma})\mathbf{\tilde{U}}$. To obtain the inverse, we apply the matrix inversion lemma twice [@Higham2002] $$\boldsymbol{\mathcal{I}}_{K+1}^{-1} =
\left(\begin{matrix}
\left(\boldsymbol{\mathcal{I}}_K-\frac{\mathbf{v}\mathbf{v}^{\!\top}}{\mathcal{I}(\theta)}\right)^{-1} &
-\frac{1}{\breve{b}}\boldsymbol{\mathcal{I}}_K^{-1}\mathbf{v} \\[0.2cm]
-\frac{1}{\breve{b}}\mathbf{v}^{\!\top}\boldsymbol{\mathcal{I}}_K^{-1} &
\frac{1}{\breve{b}}
\end{matrix}\right),$$ where $\breve{b} = \mathcal{I}(\theta)-\mathbf{v}^{\!\top}\boldsymbol{\mathcal{I}}_K^{-1}\mathbf{v}$, and $$\left(\boldsymbol{\mathcal{I}}_K-\frac{\mathbf{v}\mathbf{v}^{\!\top}}{\mathcal{I}(\theta)}\right)^{-1} = \boldsymbol{\mathcal{I}}_K^{-1}+\frac{1}{\breve{b}}\boldsymbol{\mathcal{I}}_K^{-1}\mathbf{v}\mathbf{v}^{\!\top}\boldsymbol{\mathcal{I}}_K^{-1}\,.$$ The constrained CRB for the joint parameters in $\boldsymbol{\gamma}$ becomes $$\text{Cov}(\boldsymbol{\gamma})\ \succeq\ \mathbf{\tilde{U}}\boldsymbol{\mathcal{I}}_{K+1}^{-1}\mathbf{\tilde{U}}^{\!\top},\quad \|\x\|_0 = K\,.$$ Finally, a lower bound for the *mean squared error (MSE)* in the joint setting is obtained by updating the individual estimation errors to account for the information shared between $\x$ and $\theta$: $$\begin{aligned}
\hspace{-0.4cm} {\text{MSE}\hspace{1.0pt}(\hat{\x})} &\!\!\! \geq&\!\!\! ({\text{Tr}\hspace{3.0pt}\ \boldsymbol{\mathcal{I}}_{K}^{-1} }) + \frac{1}{\breve{b}} \mathbf{v}^{\!\top}\boldsymbol{\mathcal{I}}_{K}^{-1}\boldsymbol{\mathcal{I}}_{K}^{-1}\mathbf{v}\\[0.1cm] \hspace{-0.4cm} {\text{MSE}\hspace{1.0pt}(\hat{\theta})}&\!\!\! \geq&\!\!\! \frac{1}{\breve{b}}\, =\ \mathcal{I}(\theta)^{-1} + \frac{\mathbf{v}^{\!\top}\boldsymbol{\mathcal{I}}_K\mathbf{v}}{\mathcal{I}(\theta)\,(\,\mathcal{I}(\theta)-\mathbf{v}^{\!\top}\boldsymbol{\mathcal{I}}_K\mathbf{v})}.\end{aligned}$$
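A numerical sketch of the bounds derived above, using a hypothetical Gaussian-pulse dictionary and a central finite difference for $\mathbf{A}'(\theta)$; all dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

def atoms(theta, L=128, N=60, dt=2.0, t0=10.0):
    """Hypothetical Gaussian atoms standing in for the FBG reflections."""
    t = np.arange(L)[:, None] - (t0 + dt * np.arange(N))[None, :]
    return np.exp(-0.5 * (theta * t) ** 2)

rng = np.random.default_rng(7)
L, N, M, sig, theta = 128, 60, 50, 0.05, 0.2
Phi = rng.standard_normal((M, L)) / np.sqrt(M)
S = [15, 40]                                    # support (K = 2)
x = np.zeros(N); x[S] = [1.0, 0.7]

B = Phi @ atoms(theta)                          # B = Phi A(theta)
eps = 1e-6                                      # central difference for A'(theta)
dA = (atoms(theta + eps) - atoms(theta - eps)) / (2 * eps)

I_x = B.T @ B / sig**2                          # FIM of x for known theta
w = Phi @ dA @ x                                # Phi A'(theta) x
I_th = (w @ w) / sig**2                         # Fisher information of theta
v = (w @ B[:, S]) / sig**2                      # cross terms -E[u_i], i in S

I_K = I_x[np.ix_(S, S)]                         # reduced FIM on the support
I_Kinv = np.linalg.inv(I_K)
b = I_th - v @ I_Kinv @ v                       # Schur complement (breve b)
mse_x = np.trace(I_Kinv) + (v @ I_Kinv @ I_Kinv @ v) / b
mse_th = 1.0 / b
```

Both joint bounds exceed their known-parameter counterparts, quantifying the price of the mutual information shared between $\x$ and $\theta$.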
Probabilistic sparse model {#sec:weak_sparsity_model}
==========================
Regarding the model in (\[eq:basic\_model\]), the data can be explained in different ways. On the one hand, many non-zero components in $\mathbf{x}$ and a large bandwidth, $\theta$, result in many narrow temporal peaks that can yield a good approximation of the observed reflections. On the other hand, it is known that the sensing fiber contains $K$ FBGs, so we expect exactly $K$ reflections. Therefore, a more useful explanation is given by $K$ significant elements in $\mathbf{x}$ with a smaller value of $\theta$, such that $\mathcal{S}$ correctly indicates the reflection delays. Nevertheless, even for a suitable value of $\theta$, the signal $\mathbf{x}$ is usually not exactly sparse but contains many small elements close to zero, e.g. due to measurement noise. In a strongly sparse model, these contributions are not taken into account, which impacts the positions of non-zero elements in $\x$. Hence, it may lead to incorrectly estimated reflection delays. This motivates a weakly sparse model, where the $K$ most significant components indicate the reflection delays. When $\x$ and $\theta$ are both unknown, the reflection delays can only be estimated when prior information on sparsity is incorporated, since $\theta$ depends on $\mathbf{x}$ and vice versa. Severe dictionary coherence aggravates this problem and results in several non-zero components with moderate amplitudes around the true significant elements. The coherence level is even stronger when the dimensionality of the acquired data is further reduced by CS. Thus, an adequate sparse model for $\x$ must compensate for this effect. Classic $\ell_1$-minimization can be interpreted as a MAP estimation problem, where $\mathbf{x}$ has i.i.d. entries with a Laplace prior. However, the required performance guarantees for $\ell_1$-minimization, essentially the RIP [@Candes2005; @Candes2008a], are no longer fulfilled in the case of strong dictionary coherence.
According to [@Chartrand2007; @Chartrand2008], the RIP conditions can be relaxed for $\ell_p$-minimization when $0<p<1$. Therefore, we use a prior with a stronger selective shrinkage effect that can be related to constraints on the $\ell_p$-norm in non-convex optimization. Yet, specific characteristics of the signal have to be considered. The measured reflection signal is proportional to the optical power, and the dictionary atoms essentially model the optical power reflected from the individual FBGs. Thus, the prior must also account for the non-negativity of the data. Due to these restrictions, we choose a Weibull prior that resembles a positive version of the horseshoe prior and induces the required selective shrinkage effect: $$\hspace{0.25cm} x_i \sim p(x_i) = {\mathcal{W}(\hspace{0.8pt} x_i \hspace{1.2pt} \boldsymbol{|} \hspace{1.3pt} \lWeibull,\kWeibull \hspace{0.8pt} )}\,, \ x_i > 0, \ i=1,\dots,N, \hspace{-0.3cm}
$$ where $\!\lambda_w,k_w$ are the scale and shape parameters, respectively. Then, the joint prior density of $\mathbf{x}$ is given by $$\label{eq:joint_x_Weibull}
{p(\hspace{0.8pt} \mathbf{x} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \kWeibull,\lWeibull \hspace{0.8pt} )} = \frac{\kWeibull}{\lWeibull^{\kWeibull}} \prod_{i=1}^{N} x_i^{\kWeibull-1}\,{\mathrm{exp}\text{$\left( \!-\lWeibull^{-\kWeibull} \sum_{i=1}^{N}x_i^{\kWeibull}\! \right)$} }\!.\hspace{-0.1cm}$$ Fig. \[fig:kernel\_and\_impact\_on\_pdf\] (top left) shows qualitatively the shape of the considered prior in the bivariate case.\
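As a quick numerical check of the joint prior above, the per-element Weibull log-density and its sum over i.i.d. entries can be sketched as follows (function names are illustrative, not from the paper; a minimal sketch, assuming the standard Weibull parameterization with scale $\lambda_w$ and shape $k_w$):

```python
import numpy as np

def weibull_logpdf(x, k_w, lam_w):
    # Elementwise Weibull log-density: log k_w - k_w*log(lam_w)
    # + (k_w - 1)*log(x) - (x/lam_w)^k_w, defined for x > 0.
    x = np.asarray(x, dtype=float)
    return (np.log(k_w) - k_w * np.log(lam_w)
            + (k_w - 1.0) * np.log(x) - (x / lam_w) ** k_w)

def joint_log_prior(x, k_w, lam_w):
    # Joint log-prior of i.i.d. Weibull entries, i.e. the log of the
    # product form in the equation above.
    return float(np.sum(weibull_logpdf(x, k_w, lam_w)))
```

For $k_w<1$ the density diverges at the origin, which produces the selective shrinkage toward zero discussed in the text.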
Based on (\[eq:joint\_x\_Weibull\]) and (\[eq:Gauss\_likelihood\]), we can relate the problem to constrained ML estimation. First, let us consider an interpretation in terms of MAP estimation as in [@Mohammad-Djafari2012], by calculating $\text{arg}\max_{\x}\, \log\,{p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\theta \hspace{0.8pt} )}{p(\hspace{0.8pt} \mathbf{x} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \kWeibull,\lWeibull \hspace{0.8pt} )}$ or, equivalently, $$\begin{aligned}
\nonumber\hspace{-1.2cm} &&\hspace{-0.0cm} \text{arg}\min_{\hspace{-0.3cm}\x}\ -\log\,{p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\theta \hspace{0.8pt} )}{p(\hspace{0.8pt} \mathbf{x} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \kWeibull,\lWeibull \hspace{0.8pt} )}\ = \\
\hspace{-1.2cm} &&\hspace{-0.0cm} \text{arg}\min_{\hspace{-0.3cm}\x}\, \|\y\!-\!\boldsymbol{\Phi}\mathbf{A}(\theta)\x\|_2^2
+ \mu_1\!\sum_{i=1}^N\log(x_i) + \mu_2\!\sum_{i=1}^{N}x_i^{\kWeibull}\!,\end{aligned}$$ where $\mu_1=(1-\kWeibull)$ and $\mu_2=\lWeibull^{-\kWeibull}$ with $\kWeibull<1$, such that $\mu_1,\mu_2 >0$. In order to formulate a related constrained ML problem, let us define two functions, $$\label{eq:constr_fcts}
g_1 = \sum_{i=1}^{N}x_i^{\kWeibull}-\lambda_1^{\kWeibull}\quad\ \ \text{and}\quad\ \ g_2 = \sum_{i=1}^N\log(x_i)-\lambda_2,$$ where $\lambda_1,\lambda_2 \in \mathbb{R}_+$ are related to the coefficients $\mu_1,\mu_2$, respectively. The functions in (\[eq:constr\_fcts\]) can represent inequality constraints of the form $g_1 \leq 0$ and $g_2\leq 0$ that account for the impact of the prior by restricting the search space. A constrained version of the ML problem can be formulated as $$\begin{aligned}
\label{eq:optProblem_costFct}
\hspace{0.1cm}\text{arg}\min_{\hspace{-0.3cm}\x \succ \mathbf{0}} &\ \ \, \|\y-\boldsymbol{\Phi}\mathbf{A}(\theta)\x\|_2^2&\\[0.0cm]
\label{eq:optProblem_constr_1}
\text{s.t.}& \ \ \|\mathbf{x}\|_{k_w} &\leq\ \lambda_1 \\[0.1cm] \label{eq:optProblem_constr_2}
\text{and}& \ \ \sum_{i=1}^{N}\log(x_i)& \leq\ \lambda_2 \,.\end{aligned}$$ In this non-convex problem, $\|\mathbf{x}\|_p = (\sum_{i=1}^{N}|x_i|^p)^{1/p}$ denotes the $\ell_p$-norm with $p=\kWeibull$. The hyperparameters $\lambda_1, \lambda_2$ control the shrinkage effects. Fig. \[fig:kernel\_and\_impact\_on\_pdf\] (top right) depicts the search space restricted by the constraints (\[eq:optProblem\_constr\_1\])-(\[eq:optProblem\_constr\_2\]); the constraint boundaries are shown for fixed values of $\lambda_1$ and $\lambda_2$ in the bivariate case.
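The MAP objective above can be evaluated directly for a candidate $\x$. The sketch below assumes a precomputed product $\boldsymbol{\Phi}\mathbf{A}(\theta)$ (called `PhiA` here, an illustrative name) and keeps the paper's penalty weights $\mu_1=1-k_w$ and $\mu_2=\lambda_w^{-k_w}$:

```python
import numpy as np

def map_cost(x, y, PhiA, k_w, lam_w):
    # Negative log-posterior up to constants:
    # ||y - PhiA x||^2 + mu1 * sum(log x_i) + mu2 * sum(x_i^k_w),
    # with mu1 = 1 - k_w and mu2 = lam_w**(-k_w).
    mu1 = 1.0 - k_w          # requires k_w < 1 so that mu1 > 0
    mu2 = lam_w ** (-k_w)
    r = y - PhiA @ x
    return float(r @ r + mu1 * np.sum(np.log(x)) + mu2 * np.sum(x ** k_w))
```

Such a cost function could be plugged into any non-convex solver over $\x \succ \mathbf{0}$; the paper instead samples the posterior, which avoids committing to a single local optimum.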
Local covariance model for augmented sparsity
---------------------------------------------
In analogy to the concept of block sparsity [@Eldar2010], we can use the specific sparse structure of the signal with respect to the shift-invariant dictionary for CFS to exploit sparsity among groups of variables. The signal contains only $K$ reflections that arrive at temporally separated delays, indicated by the significant components in $\x$. Therefore, we can assume that a significant coefficient is always surrounded by larger groups of non-significant coefficients and any two significant components are always well separated. Also, it is likely that the amplitudes of adjacent non-significant coefficients are similarly close to zero. Borrowing from the ideas of MRFs [@Murphy2012], such local similarity can be modeled by a prior on the differential coefficients, $\Delta\x$, where $\Delta x_i = x_{i+1} - x_{i},\, i=1,\dots,N-1$. It restricts the variation of adjacent amplitudes and establishes a MRF relation between neighboring coefficients in $\x$. Then, non-significant coefficients with larger amplitudes are pulled down to match the majority with almost-zero amplitudes, which promotes additional *collective* shrinkage. However, if a significant coefficient follows a non-significant one (or vice versa), the model should allow for larger changes. Therefore, the differential variation must be locally specified, dependent on the respective amplitudes, in order to avoid undesired shrinkage or equalization.
\
To this end, we define a kernel function for all adjacent pairs of sparse coefficients, i.e. $\forall\ i=1,\dots,N-1$, with hyperparameter $\lambda_{\text{\tiny$\Delta$}}$: $$\label{eq:kernel}
\mathcal{K}(x_{i},x_{i+1}\,\boldsymbol{|}\,\lambda_{\text{\tiny$\Delta$}})\ =\ {\mathrm{exp}\text{$\left( -\lambda_{\text{\tiny$\Delta$}}\frac{|x_{i+1}-x_{i}|}{f_{\text{\tiny$\mathcal{K}$}}(x_i,x_{i+1})} \right)$} }\,.\vspace{0.05cm}$$ The bivariate function $\!f_{\text{\tiny$\mathcal{K}$}}$ controls the similarity level between adjacent coefficients. Within the scope of this work, we consider cases where this function takes the form $f_{\text{\tiny$\mathcal{K}$}}(x_i,x_{i+1})$, $i=1,\dots,N-1$, with positive constants $r\leq 1$, $N_x < \infty$. The kernel factors can be incorporated in ${p(\hspace{0.8pt} \!\x \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} k_w,\!\lambda_w\! \hspace{0.8pt} )}$ to yield a modified joint prior density, $$\hspace{-2.3cm}\tilde{p}(\mathbf{x}\,\boldsymbol{|}\,\kWeibull,\lWeibull,\lambda_{\text{\tiny$\Delta$}})\ \ =\ \ \frac{1}{Z_{\text{\tiny$\mathcal{K}$}}} {\mathcal{W}(\hspace{0.8pt} x_N \hspace{1.2pt} \boldsymbol{|} \hspace{1.3pt} \kWeibull,\lWeibull \hspace{0.8pt} )}$$ $$\label{eq:modified_joint_x} \hspace{2.417cm} \times \prod_{i=1}^{N-1}\!\mathcal{K}(x_i,x_{i+1}\,\boldsymbol{|}\,\lambda_{\text{\tiny$\Delta$}})\, {\mathcal{W}(\hspace{0.8pt} x_i \hspace{1.2pt} \boldsymbol{|} \hspace{1.3pt} \kWeibull,\lWeibull \hspace{0.8pt} )},\hspace{-0.6cm}$$ with normalization constant $Z_{\text{\tiny$\mathcal{K}$}}$. For any $\alpha,\beta\in \mathbb{R}_+$, it holds that $0<\mathcal{K}(\alpha,\beta\,\boldsymbol{|}\,\lambda_{\text{\tiny$\Delta$}}) = \mathcal{K}(\beta,\alpha\,\boldsymbol{|}\,\lambda_{\text{\tiny$\Delta$}}) \leq 1$ and $$\begin{aligned}
\hspace{-0.5cm} \left.\tilde{p}(\mathbf{x}\,\boldsymbol{|}\,\kWeibull,\lWeibull,\lambda_{\text{\tiny$\Delta$}})\right|_{Z_{\text{\tiny$\mathcal{K}$}}=1}
\!\!&\leq&\! {p(\hspace{0.8pt} \mathbf{x} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \kWeibull,\lWeibull \hspace{0.8pt} )}
$$ is bounded. Hence, there exists a positive constant that normalizes (\[eq:modified\_joint\_x\]) to make $\tilde{p}(\mathbf{x}\,\boldsymbol{|}\,\kWeibull,\lWeibull,\lambda_{\text{\tiny$\Delta$}})$ a proper density. Fig. \[fig:kernel\_and\_impact\_on\_pdf\] (bottom) visualizes the function $\mathcal{K}(x_{i},x_{i+1}\,\boldsymbol{|}\,\lambda_{\text{\tiny$\Delta$}})$ and its impact on the original prior in the bivariate case.\
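The kernel and the modified log-prior can be sketched numerically. The concrete form of $f_{\text{\tiny$\mathcal{K}$}}$ is passed in as a function argument, since the paper leaves it configurable; choosing e.g. `f_K = max` is an assumption made only for the test below:

```python
import numpy as np

def kernel(x_i, x_ip1, lam_d, f_K):
    # Similarity kernel K(x_i, x_{i+1} | lam_d) from the equation above;
    # symmetric in its arguments and bounded by (0, 1].
    return np.exp(-lam_d * abs(x_ip1 - x_i) / f_K(x_i, x_ip1))

def modified_log_prior(x, k_w, lam_w, lam_d, f_K):
    # Unnormalized log of the modified joint prior p~(x | k_w, lam_w, lam_d):
    # sum of Weibull log-densities plus the pairwise kernel log-factors.
    x = np.asarray(x, dtype=float)
    lw = np.sum(np.log(k_w) - k_w * np.log(lam_w)
                + (k_w - 1.0) * np.log(x) - (x / lam_w) ** k_w)
    lk = sum(np.log(kernel(x[i], x[i + 1], lam_d, f_K))
             for i in range(len(x) - 1))
    return float(lw + lk)
```

Since each kernel factor only lowers the density for dissimilar neighbors, the modified prior penalizes isolated moderate-amplitude components, which is exactly the collective shrinkage described above.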
In view of constrained ML estimation, the modified prior density in (\[eq:modified\_joint\_x\]) can be related to the optimization problem in (\[eq:optProblem\_costFct\])-(\[eq:optProblem\_constr\_2\]) by imposing additional constraints $$\frac{|x_{i+1}-x_{i}|}{f_{\text{\tiny$\mathcal{K}$}}(x_i,x_{i+1})} \leq \mu_i,\quad i=1,\dots,N-1\,.$$ Fig. \[fig:kernel\_and\_impact\_on\_pdf\] (top right) depicts a bivariate example. In order to show the MRF relation between the coefficients, we calculate the conditional densities $\forall\ x_i,\, i=1,\dots,N$. To this end, we conveniently define $\mathbf{x}_{\setminus i}$ as the vector $\x$ with the $i$-th element removed and get $$\hspace{-0.5cm}\tilde{p}(x_i\,\boldsymbol{|}\,\mathbf{x}_{\setminus i},\kWeibull,\lWeibull,\lambda_{\text{\tiny$\Delta$}})\ =\ \,
\tilde{p}(x_i\,\boldsymbol{|}\,x_{i-1},x_{i+1},\kWeibull,\lWeibull,\lambda_{\text{\tiny$\Delta$}})$$\
$$\label{eq:px_i_cond}
\hspace{0.4cm}\propto\ {\mathcal{W}(\hspace{0.8pt} x_i \hspace{1.2pt} \boldsymbol{|} \hspace{1.3pt} \kWeibull,\lWeibull \hspace{0.8pt} )}\,\mathcal{K}(x_{i-1},x_{i}\,\boldsymbol{|}\,\lambda_{\text{\tiny$\Delta$}})\,\mathcal{K}
(x_{i},x_{i+1}\,\boldsymbol{|}\,\lambda_{\text{\tiny$\Delta$}}),\hspace{-0.1cm}$$ $$\label{eq:px_1_cond}
\hspace{-0.80cm} \tilde{p}(x_1\boldsymbol{|}\mathbf{x}_{\setminus 1},\!\kWeibull,\!\lWeibull,\!\lambda_{\text{\tiny$\Delta$}}\!)\ \ \ \, \propto\ {\mathcal{W}(\hspace{0.8pt} \!x_1\! \hspace{1.2pt} \boldsymbol{|} \hspace{1.3pt} \kWeibull,\!\lWeibull\! \hspace{0.8pt} )}\,\mathcal{K}(x_{1},\!x_{2}\boldsymbol{|}\lambda_{\text{\tiny$\Delta$}}\!),$$ $$\label{eq:px_N_cond}
\hspace{-0.75cm} \tilde{p}(x_N\boldsymbol{|}\mathbf{x}_{\setminus N}\!,\kWeibull,\!\lWeibull,\!\lambda_{\text{\tiny$\Delta$}}\!)\ \, \propto
{\mathcal{W}(\hspace{0.8pt} \!x_N\! \hspace{1.2pt} \boldsymbol{|} \hspace{1.3pt} \kWeibull,\!\lWeibull\! \hspace{0.8pt} )}\,\mathcal{K}(x_{N\text{-}1},\!x_{N}\boldsymbol{|}\lambda_{\text{\tiny$\Delta$}}\!)\hspace{-0.01cm},
$$ where dependencies appear only between directly adjacent coefficients.\
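The Markov property in the conditionals above is easy to verify numerically: the unnormalized conditional log-density of an interior $x_i$ involves only its direct neighbors. The sketch below (illustrative names; $f_{\text{\tiny$\mathcal{K}$}}$ is again passed in) mirrors (\[eq:px\_i\_cond\])-(\[eq:px\_N\_cond\]):

```python
import numpy as np

def cond_logdensity(i, x, k_w, lam_w, lam_d, f_K):
    # Unnormalized log of p~(x_i | x_{\i}, k_w, lam_w, lam_d):
    # the Weibull factor for x_i plus the kernel factors that involve
    # its direct neighbors only (Markov property).
    xi = x[i]
    lp = (np.log(k_w) - k_w * np.log(lam_w)
          + (k_w - 1.0) * np.log(xi) - (xi / lam_w) ** k_w)
    if i > 0:                              # left neighbor, if it exists
        lp += -lam_d * abs(xi - x[i - 1]) / f_K(x[i - 1], xi)
    if i < len(x) - 1:                     # right neighbor, if it exists
        lp += -lam_d * abs(x[i + 1] - xi) / f_K(xi, x[i + 1])
    return float(lp)
```

Changing any coefficient outside the immediate neighborhood of $x_i$ leaves this conditional unchanged, which is the defining property of the MRF structure.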
In order to account for deviations from prior assumptions, we consider randomization of the hyperparameters and assign conjugate inverse Gamma priors to the scale parameters $\lambda_w$ and $\lambda_{\text{\tiny$\Delta$}}$. Finally, given $\lambda_w$ and a normalization constant $Z_{\kWeibull}$, the shape parameter, $\kWeibull > 0$, is assigned the conjugate prior distribution according to [@Fink1997]: $$\label{eq:kw_prior}
\hspace{0.05cm} {p(\hspace{0.8pt} \kWeibull \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} a',b',(d\,')^{k_w}\!\!,\lWeibull \hspace{0.8pt} )} = \frac{\kWeibull^{a'}}{Z_{\kWeibull}} {\mathrm{exp}\text{$\left( \!\!-b'\kWeibull\! - \frac{(d\,')^{\kWeibull}}{\lWeibull}\! \right)$} }, \hspace{-0.15cm}$$ Fig. \[fig:factor\_graph\_localSimilarity\] shows a factor graph for the complete sparsity model with randomized hyperparameters.
Approximate Inference: Hybrid MCMC {#sec:approx_inference}
==================================
In order to accomplish inference in the sparse model, we apply a hybrid MCMC technique, i.e. HMC within Gibbs sampling. The reasons for using HMC are twofold: Firstly, it only requires an analytic expression for the posterior density to be sampled. Secondly, it is efficient in sampling high-dimensional spaces in the presence of correlation. However, as pointed out in [@Neal2011], it can be more efficient to sample the hyperparameters separately, as their posterior distributions are often highly peaked and require a small step size in the HMC algorithm, which limits the general performance. Therefore, we employ an outer Gibbs sampler for approximate inference of the latent variables. In each iteration, $\tilde{p}(\x\,\boldsymbol{|}\,\lWeibull,\kWeibull,\lambda_{\text{\tiny$\Delta$}})$ is sampled using HMC, while all other variables are fixed. Since we are also interested in estimating the noise variance, $\sigma_n^2$, it is assigned an inverse Gamma ($\text{Inv-}\Gamma$) prior and sampled along with the other variables. The resulting model is summarized below: $$\begin{aligned}
\nonumber \mathbf{x}\,\boldsymbol{|}\,\kWeibull,\lWeibull,\lambda_{\text{\tiny$\Delta$}}&\sim& \tilde{p}(\x\,\boldsymbol{|}\,\kWeibull,\lWeibull,\lambda_{\text{\tiny$\Delta$}})\,\ \ \quad\qquad \; \text{in}\ (\ref{eq:modified_joint_x}),\\
\nonumber \lWeibull &\sim& \text{Inv-}\Gamma(\lWeibull\,\boldsymbol{|}\,a,b),\\
\nonumber \kWeibull\,\boldsymbol{|}\,\lWeibull &\sim& {p(\hspace{0.8pt} \kWeibull \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} a',b',(d\,')^{k_w},\lWeibull \hspace{0.8pt} )}\; \; \; \; \text{in}\ (\ref{eq:kw_prior}),\\
\nonumber \lambda_{\text{\tiny$\Delta$}} &\sim& \text{Inv-}\Gamma(\lambda_{\text{\tiny$\Delta$}}\,\boldsymbol{|}\,a'',b'')\\
\label{eq:sparse_model_summary} \sigma_n^2 &\sim& \text{Inv-}\Gamma(\sigma_n^2\,\boldsymbol{|}\,a_\sigma,b_\sigma)\,.\end{aligned}$$ We also define $\zeta\in\mathcal{C} = \{\kWeibull,\lWeibull,\lambda_{\text{\tiny$\Delta$}},\sigma_n^2\}$ as a representative variable with corresponding positive, real-valued parameters $a_\zeta\in\{a,a'',a_\sigma\}$ and $b_\zeta\in\{b,b'',b_\sigma\}$ that belong to the respective density functions. Further, the set $\mathcal{C}_{\setminus\zeta}$ denotes the set $\mathcal{C}$ without the respective variable $\zeta$. Fig. \[fig:dependency\_graph\] shows a graphical model that helps to visualize the dependencies in this model. Herein, $\theta$ and $\boldsymbol{\Xi}$ are only valid for strategy $\textbf{\emph{S2}}$, which is discussed in Section \[sec:PDL\]. For the particular model in (\[eq:sparse\_model\_summary\]), we assume that the variables $\x, \sigma_n^2$ and $\theta$ are mutually independent. Gibbs sampling requires the full conditional distributions for each parameter of interest. Based on these assumptions, we obtain the relation $$\begin{aligned}
{p(\hspace{0.8pt} \zeta \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \y,\x,\mathcal{C}_{\setminus\zeta} \hspace{0.8pt} )} & \propto & {p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\mathcal{C} \hspace{0.8pt} )}\,{p(\hspace{0.8pt} \zeta \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\mathcal{C}_{\setminus\zeta} \hspace{0.8pt} )}\\
& \propto & {p(\hspace{0.8pt} \y \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \x,\mathcal{C} \hspace{0.8pt} )}\, {p(\hspace{0.8pt} \zeta \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \mathcal{C}_{\setminus\zeta} \hspace{0.8pt} )}\, \tilde{p}(\x\,\boldsymbol{|}\,\mathcal{C}) \,.\end{aligned}$$ Since the prior distributions are all conjugate to the Gaussian likelihood function in (\[eq:Gauss\_likelihood\]), a simple calculation yields the posterior distributions of the parameters involved in the Gibbs sampling procedure. For $\zeta\in\mathcal{C}_{\setminus\kWeibull}$, we obtain $$\label{eq:posterior_zeta_no_kw}
\zeta\,\boldsymbol{|}\,{\y,\x,\mathcal{C}_{\setminus\zeta}} \, \sim\ \text{Inv-}\Gamma(\zeta\,\boldsymbol{|}\,a_\zeta+{\text{\scriptsize$\frac{M}{2}$}},\ b_\zeta +\text{\scriptsize$\frac{1}{2}$}\,)\ \tilde{p}(\x\,\boldsymbol{|}\,\mathcal{C}), $$ and for $\kWeibull$, we obtain $$\label{eq:posterior_kw}
\kWeibull\,\boldsymbol{|}\,{\y,\x,\mathcal{C}_{\setminus\kWeibull}}\ \sim\ \ {p(\hspace{0.8pt} \kWeibull \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \tilde{a}\,',\, \tilde{b}\,',\, \tilde{c}\,' \hspace{0.8pt} )}\
\tilde{p}(\x\,\boldsymbol{|}\,\mathcal{C}),$$ with parameters $\tilde{a}\,' = a\,'+N$, $\ \tilde{b}\,'=b\,'+\sum_{i=1}^{N}\log(x_i)$, and $\tilde{c}\,' = (d\,')^{\kWeibull}+\sum_{i=1}^{N}x_i^{\kWeibull}$. Samples of the posterior variables can be obtained using Metropolis Hastings [@Bishop2006] or HMC.\
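A single Gibbs draw of the noise variance illustrates the conjugate Inv-$\Gamma$ updates used here. This sketch assumes the standard Gaussian-likelihood form of the update, with posterior shape $a_\sigma + M/2$ and scale $b_\sigma + \frac{1}{2}\|\y-\boldsymbol{\Phi}\mathbf{A}(\theta)\x\|_2^2$ (the compact notation in the text absorbs the residual term); `PhiA` and the function name are illustrative:

```python
import numpy as np

def sample_noise_var(y, PhiA, x, a_sigma, b_sigma, rng):
    # One Gibbs draw of sigma_n^2 from its Inv-Gamma full conditional:
    # shape = a_sigma + M/2, scale = b_sigma + 0.5 * ||y - PhiA x||^2.
    r = y - PhiA @ x
    shape = a_sigma + 0.5 * len(y)
    scale = b_sigma + 0.5 * float(r @ r)
    # Inv-Gamma(shape, scale) sample via the reciprocal of a Gamma draw.
    return scale / rng.gamma(shape)
```

The draws for $\lWeibull$ and $\lambda_{\text{\tiny$\Delta$}}$ follow the same pattern with their respective shape/scale parameters.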
![Factor graph of the complete sparse model with local similarity.[]{data-label="fig:factor_graph_localSimilarity"}](./sparse_model_full.pdf){width="0.63\columnwidth"}
The sparse coefficients are sampled using HMC. We briefly describe the idea of this method according to [@Neal2011], adapted to our model for $\x$:\
Within the framework of HMC, the sampling process is described in terms of *Hamilton dynamics*, a concept known from classical physics. It is used to describe the trajectory of a physical system in phase space, based on its potential and kinetic energy. HMC assigns to every sparse coefficient, $x_i$, an associated momentum variable, $\xi_i$, $i=1,\dots,N$, that is responsible for the sampling dynamics. The posterior density to be sampled is related to the potential energy, given by [@Neal2011] $$U(\x\,\boldsymbol{|}\,\y,\mathcal{C})\ =\ -\log \tilde{p}(\x\,\boldsymbol{|}\,\y,\mathcal{C}) - \log(Z_u)\,,$$ where $Z_u$ is a suitable normalization constant. Since $\y$ and $\mathcal{C}$ are fixed, we may drop them and write $U(\x)$ instead. The kinetic energy, $K(\boldsymbol{\xi})$, depends only on the auxiliary variables $\boldsymbol{\xi}\!=\![\xi_1,\dots,\xi_N\!]$. A standard choice for $K(\boldsymbol{\xi})$ corresponds to independent particles in free space with mass $m_i$, i.e. $K(\boldsymbol{\xi}) = \sum_{i=1}^{N} \frac{\xi_i^2}{2m_i}$. The dynamics of the sampling process are governed by the *Hamiltonian function*, which is given by $\mathcal{H}(\x,\boldsymbol{\xi})\! =\!U(\x)\! +\! K(\boldsymbol{\xi})$ and represents the total system energy. The joint density of $\!(\x,\boldsymbol{\xi})\!$ is defined by [@Neal2011] $$\label{eq:HMC_canonical_density}
p(\x,\boldsymbol{\xi}) = \frac{1}{Z_c} \text{e}^{-\frac{\mathcal{H}(\x,\boldsymbol{\xi})}{T_{\text{\scriptsize sys}}}}\!
= \ \tilde{p}(\x\,\boldsymbol{|}\,\y,\mathcal{C}) \,\prod_{i=1}^{N}\mathcal{N}(\,\xi_i\,\boldsymbol{|}\,0,m_i\,).
$$ Herein, $T_{\text{\scriptsize sys}}$ is called the *system temperature* and $Z_c$ is a normalization constant. The last equation is obtained by setting $T_{\text{\scriptsize sys}}=1$ and $Z_u = Z_c$, while the Gaussian density arises from the special choice of the kinetic energy term. In HMC, a proposal for a new sample is obtained by the final points ($x_i^*,\xi_i^*$) of a trajectory described by Hamilton’s equations of motion. They are calculated $\forall\ (x_i,\xi_i)$, $i\!=\!1,\dots,N$, according to [@Neal2011]: $$\hspace{0.0cm} \frac{\mathrm{d}x_i}{\mathrm{d}t} = \frac{\xi_i}{m_i}\,, \qquad \frac{\mathrm{d}\xi_i}{\mathrm{d}t} = \frac{{\frac{\partial }{\partial x_i}}\tilde{p}(\x\,\boldsymbol{|}\,\y,\mathcal{C}) }{\tilde{p}(\x\,\boldsymbol{|}\,\y,\mathcal{C}) }\,.$$ A Metropolis update decides whether a proposed sample is accepted or rejected, with acceptance probability [@Neal2011] $$\text{P}(\text{accept}) = \min_{}\,(\,1 ,\ {\mathrm{exp}\text{$\left( -\mathcal{H}(x_i^*,\xi_i^*) + \mathcal{H}(x_i,\xi_i) \right)$} }\ )\,.$$
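A generic HMC step with the common leapfrog discretization of Hamilton's equations can be sketched as below. This is not the paper's full sampler (which uses the NUTS variant in *Stan*); it is a minimal illustration on an arbitrary differentiable log-density, with unit masses unless specified:

```python
import numpy as np

def hmc_step(x, logp, grad_logp, rng, eps=0.1, n_leap=20, m=1.0):
    # One HMC transition: draw momentum, simulate the dynamics with the
    # leapfrog integrator, then accept/reject with the Metropolis rule.
    xi = rng.normal(0.0, np.sqrt(m), size=x.shape)      # momentum ~ N(0, m)
    H0 = -logp(x) + 0.5 * np.sum(xi**2) / m             # initial total energy
    x_new, xi_new = x.copy(), xi.copy()
    xi_new += 0.5 * eps * grad_logp(x_new)              # initial half step
    for _ in range(n_leap - 1):
        x_new += eps * xi_new / m
        xi_new += eps * grad_logp(x_new)
    x_new += eps * xi_new / m
    xi_new += 0.5 * eps * grad_logp(x_new)              # final half step
    H1 = -logp(x_new) + 0.5 * np.sum(xi_new**2) / m
    if rng.random() < min(1.0, np.exp(H0 - H1)):
        return x_new
    return x

# Usage: sample a 2-D standard normal target.
rng = np.random.default_rng(1)
logp = lambda x: -0.5 * np.sum(x**2)
grad = lambda x: -x
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, logp, grad, rng)
    samples.append(x.copy())
samples = np.array(samples)
```

Because the leapfrog integrator is volume-preserving and time-reversible, the Metropolis correction makes the chain exactly invariant for the target despite the discretization error.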
Parametric DL strategies for CFS {#sec:PDL}
================================
In this section, we present two strategies for parametric dictionary learning in CFS. In the first, we follow the ideas of hybrid Bayesian inference [@Yuan2009; @Yuan2015] and AM-based DL [@Beck2013], where $\theta$ is a deterministic parameter that is estimated using the Monte Carlo EM algorithm in [@Bishop2006]. In the second, we pursue a full Bayesian approach and consider a probabilistic model for $\theta$. Herein, approximate inference is accomplished by extending the Gibbs sampler in Section \[sec:approx\_inference\] to jointly estimate $(\x,\theta,\sigma_n^2)$. Fig. \[fig:dependency\_graph\] depicts the dependency graph for both strategies, where $\theta, \boldsymbol{\Xi}$ belong exclusively to ***S2***.\
As pointed out in [@Yuan2009; @Yuan2015], hybrid and full Bayesian strategies have their individual advantages in certain situations. For small sample sizes, Bayesian methods can be superior if good prior knowledge is available [@Yuan2009]. Nonetheless, they are often computationally more complex and insufficient prior information can lead to a small-sample bias, even if a non-informative prior is used [@Yuan2009]. In CFS, the sample size is small and only vague prior knowledge of $\theta$ is available. Therefore, we investigate the performance of both DL strategies based on our probabilistic sparse model. The computational complexity of both strategies is comparable. It is dominated by HMC, i.e. by sampling the high-dimensional vector $\x$ in each iteration of the Gibbs sampler. Regarding $\theta$, the following prior knowledge is assumed: In ***S1***, we roughly restrict the range of values $\theta$ can take, while in ***S2***, we define a non-informative prior over the same range. Recall that $\theta$ effectively describes the filter characteristics of the lowpass filter. To create the dictionary for a certain value of $\theta$ using (\[eq:dict\_atoms\_elements\]), the inverse Fourier transform in (\[eq:sensor\_signa\_IFT\_model\]) has to be evaluated for each atom. Thus, the dictionary is not a simple function of $\theta$ and we restrict ourselves to a discrete set of parameters, with lower and upper bound, $\theta_{\text{\tiny min}}$ and $\theta_{\text{\tiny max}}$, respectively. Since the bandwidth should be positive and bounded, we have $0 < \theta_{\text{\tiny min}}$ and $\theta_{\text{\tiny max}} < \infty$. Then, the set $\Theta$ contains the discrete values $\theta_r$, $r=1,\dots,\Rtheta$.
Hybrid DL: iterative estimation of $\theta$ and $(\x,\mathcal{C})$ (***S1***)
-----------------------------------------------------------------------------
The dictionary parameters in the CFS problem can be iteratively estimated using a Monte Carlo EM algorithm. First, an initial value, $\theta^{(0)}\!$, has to be chosen. In subsequent iterations with indices $d\!=\!1,\dots,d_{\text{\tiny max}}$, we obtain joint samples $\{\x_l,\mathcal{C}_l\}^{(d)},\,l=1,\dots,L_{\text{\tiny MC}}$, by Gibbs sampling and HMC according to Section \[sec:approx\_inference\]. Then, we determine the posterior expectation of $\zeta\in \mathcal{C}$, using the previous estimate $\hat{\theta}^{(d-1)}$: $$\begin{aligned}
\hat{\zeta}^{(d)} &=& \int_{\text{dom}(\zeta)} \zeta \,{p(\hspace{0.8pt} \zeta \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \y,\hat{\theta}^{(d-1)} \hspace{0.8pt} )}\,\mathrm{d}\zeta \\
\label{eq:posterior_mean_zeta} &\approx&\!\! \frac{1}{L_{\text{\tiny MC}}}\sum_{l=1}^{L_{\text{\tiny MC}}}\ \zeta_{\,l}^{(d)}\, {p(\hspace{0.8pt} \zeta_l^{(d)} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \y,\hat{\theta}^{(d-1)} \hspace{0.8pt} )},\end{aligned}$$ where $\text{dom}(\zeta)$ is the domain of $\zeta$. The current estimates of the reflection delays, $\hat{\mathcal{S}}^{(d)}$, are determined by identifying the indices of the $K$ largest elements in the posterior mean of $\x$, denoted by $\hat{\x}^{(d)}$. It is obtained by exchanging $\zeta_{\,l}^{(d-1)}$ with $\x_l^{(d-1)}\!\!$ in (\[eq:posterior\_mean\_zeta\]). Besides, we also estimate the amplitudes of the significant components in $\x$. They can be useful to assess the sparsity level of the solution and to determine the amount of optical power reflected from the FBGs. Since the posterior of $\x$ is multimodal with one narrow peak around zero and another peak at some larger amplitude, the MAP is more suitable for this task. It is given by $$\begin{aligned}
\label{eq:EM_MAP_x}
\hspace{-0.5cm} \{\mathbf{\hat{x}},\hat{\mathcal{C}}\,\}_{\text{\tiny MAP}}^{(d)}\hspace{-0.1cm} &=&\hspace{0.05cm} \text{arg}\max_{\hspace{-0.25cm}\x,\mathcal{C}}\ \log {p(\hspace{0.8pt} \x,\mathcal{C} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \y,\hat{\theta}^{(d-1)} \hspace{0.8pt} )}\\[0.2cm]\label{eq:EM_MAP_x_approx} \hspace{-0.5cm} &\hspace{-2.8cm}\approx&\hspace{-0.6cm} \text{arg}\hspace{-1.08cm}\max_{\hspace{-0.45cm}\{\x_j,\mathcal{C}_j\}\in\{\x_l,\,\mathcal{C}_l\}^{(d)}_{l=1,..,L_{\text{\tiny MC}}}}\hspace{-0.5cm} \log {p(\hspace{0.8pt} \{\x_j,\mathcal{C}_j\}^{(d)} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \y,\hat{\theta}^{(d-1)} \hspace{0.8pt} )}.
$$ However, the estimates of $\mathcal{S}$ obtained from $\hat{\x}_{\text{\tiny MAP}}^{(d)}$ are less accurate than those obtained by the posterior mean. Therefore, the empirical MAP solution is only used to estimate the reflection amplitudes. Next, we calculate the current estimate $\hat{\theta}^{(d)}$ by taking the expected value over $\x,\mathcal{C}$ given $\y,\theta$ (E-step): $$\begin{aligned}
\label{eq:EM_E_step}
\nonumber\hspace{-1.0cm} && \!\!\! {\mathbb{E}_{\text{\scriptsize$\x,\!\mathcal{C}\,$}\boldsymbol{|}\text{\scriptsize$\,\y,\!\theta$}}\hspace{4.0pt} } \log {p(\hspace{0.8pt} \y,\x,\mathcal{C} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \theta \hspace{0.8pt} )}\\ \hspace{-1.0cm} &=&\!\!\!\! \int_{\mathbb{R}_+^N} \int_{\Psi}
\log {p(\hspace{0.8pt} \y,\x,\mathcal{C} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \theta \hspace{0.8pt} )}\, {p(\hspace{0.8pt} \x,\mathcal{C} \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \y,\theta \hspace{0.8pt} )}\,\mathrm{d}\mathcal{C}\,\mathrm{d}\x\\ \label{eq:Qfct}\hspace{-1.0cm} &\approx&\!\!\!\! \frac{1}{L_{\text{\tiny MC}}}\sum_{l=1}^{L_{\text{\tiny MC}}}\ \log {p(\hspace{0.8pt} \y,\{\x_l,\mathcal{C}_l\}^{(d-1)}\! \hspace{1.2pt} \boldsymbol{|} \hspace{1.2pt} \theta \hspace{0.8pt} )}\ \triangleq\ Q(\,\theta\,\boldsymbol{|}\,\hat{\theta}^{(d-1)}).\end{aligned}$$ Herein, $\Psi$ is the product space formed by the individual domains of all variables in $\mathcal{C}$. In the $M$-step, a locally optimal value, $\hat{\theta}^{(d)}$, is obtained by maximizing $Q(\,\theta\,\boldsymbol{|}\,\hat{\theta}^{(d-1)})$ with respect to $\theta$ over the set $\Theta$, i.e. $$\label{eq:EM_M_step}
\hat{\theta}^{(d)}\ =\ \text{arg}\max_{\hspace{-0.3cm}\theta\, \in\, \Theta}\ Q(\,\theta\,\boldsymbol{|}\,\theta^{(d-1)}\,)\,.$$
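Because $\Theta$ is a finite grid, the M-step reduces to evaluating the Monte Carlo estimate of $Q$ at each grid point and taking the maximizer. The sketch below keeps only the $\theta$-dependent Gaussian log-likelihood part of $\log p(\y,\x,\mathcal{C}\,|\,\theta)$, since the prior terms do not depend on $\theta$; `make_A` and the fixed `sigma2` are illustrative stand-ins for $\mathbf{A}(\theta)$ and the sampled noise variance:

```python
import numpy as np

def m_step(theta_grid, samples_x, y, Phi, make_A, sigma2):
    # Discrete M-step: pick the theta in the grid that maximizes the
    # Monte Carlo estimate of Q, i.e. the average Gaussian log-likelihood
    # over the Gibbs samples of x.
    def Q(theta):
        A = make_A(theta)
        return np.mean([-0.5 * np.sum((y - Phi @ A @ x) ** 2) / sigma2
                        for x in samples_x])
    return max(theta_grid, key=Q)
```

This costs one dictionary construction per grid point, which is why the paper restricts $\Theta$ to a moderate number $\Rtheta$ of candidates.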
![Dependency relations for the complete hierarchical model. The variables $\theta$ and $\boldsymbol{\Xi}$ appear exclusively in ***S2***.[]{data-label="fig:dependency_graph"}](./dependency_FULL_BAYES_2.pdf){width="0.63\columnwidth"}
### Initialization of $\theta$ via bisectional search
An adequate initialization, $\theta^{(0)}\!$, can alleviate the problem of local optima in the EM algorithm. In CFS, the desired sparsity level is known to be the number of reflections, $K$. Hence, a good choice for $\theta^{(0)}\!$ yields a solution for $\x$ with $K$ significant non-zero elements. Starting at an arbitrary value $\theta^{(0)}\!\in\Theta$, a bisectional search within $\Theta$ can quickly determine a suitable initial value. After choosing the first value at random, $\Theta$ is subdivided into two parts, containing all larger and all smaller values, respectively. When the number of peaks is too high, the next trial is chosen as the median of the lower division. If it is too low, the next trial is the median of the upper division, and so on. For a properly selected $\theta^{(0)}\!$, ***S1*** converges faster and is more likely to approach (or even attain) the global optimum.
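The bisectional search described above can be sketched as follows. It assumes an ordered grid and a user-supplied `count_peaks` callback (an illustrative abstraction of running the sparse estimator for a trial $\theta$ and counting significant peaks), and it assumes the peak count grows with $\theta$, consistent with the bandwidth discussion in Section \[sec:weak\_sparsity\_model\]:

```python
def init_theta(theta_grid, count_peaks, K):
    # Bisectional search over the ordered grid: halve the candidate range
    # until the trial solution shows (close to) K significant peaks.
    lo, hi = 0, len(theta_grid) - 1
    idx = (lo + hi) // 2
    while lo < hi:
        n_peaks = count_peaks(theta_grid[idx])
        if n_peaks == K:
            break
        if n_peaks > K:          # too many peaks -> median of the lower half
            hi = idx - 1
        else:                    # too few peaks -> median of the upper half
            lo = idx + 1
        idx = (lo + hi) // 2
    return theta_grid[max(idx, 0)]
```

With a grid of $\Rtheta$ values, at most $\lceil\log_2 \Rtheta\rceil$ trial runs of the sparse estimator are needed.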
Bayesian DL: joint estimation of ($\x,\mathcal{C},\theta$)$\ $ (***S2***)
-------------------------------------------------------------------------
In strategy ***S2***, we treat $\theta$ as a random variable. Due to its discrete nature, each element is assigned a probability mass, $p_r = p(\theta_r)\,, r=1,\dots,\Rtheta$, where $\sum_{r=1}^{\Rtheta}p(\theta_r)=1$. Then, $\theta$ is distributed over the set of discrete dictionary parameters, $\Theta$, with corresponding probability masses in $\boldsymbol{\Xi} = \{p_1,\dots,p_{\text{\tiny $R_{\small\Theta}$}}\}$. Uncertainty in the *a priori* assigned probability masses is taken into account in terms of a prior on $\boldsymbol{\Xi}$. The Dirichlet (Dir) distribution can be used as the conjugate prior with parameters $\boldsymbol{\nu} = [\nu_1,\dots,\nu_{\text{\tiny$\Rtheta$}}]^{\!\top}$, i.e. $$p(\boldsymbol{\Xi}) =\ \frac{1}{B(\boldsymbol{\nu})}\prod_{r=1}^{\Rtheta}p_r^{\nu_r-1}\,,$$ where $B(\boldsymbol{\nu})$ denotes the *Beta* function and the variables $\nu_r$, $r=1,\dots,\Rtheta$, describe the number of occurrences of the values in $\Theta$. When a new element, $\theta_q\in\Theta$, is sampled, a posterior count is assigned to that value. After sampling another value in the next iteration, this count is reassigned to the new value. Let $\mathbf{\breve{c}}\in \mathbb{N}^{\Rtheta}$ indicate the current sample, i.e. $\breve{c}_q=1$ for one index $q\in\{1,\dots,\Rtheta\}$, while all other elements are zero. A non-informative prior is obtained if all values $\theta_q\in\Theta$ are equally likely and each element is assigned a single count. Then, $\nu_r=1\ \forall\ r=1,\dots,\Rtheta$ and a new sample has a strong impact on the posterior distribution. In contrast, for large values, e.g. $\nu_r\! =\! 1000\ \forall\ r\!=\!1,\dots,\Rtheta$, a new count leaves the distribution almost unchanged. The complete model is then given by (\[eq:sparse\_model\_summary\]) and, in addition, $$\begin{aligned}
\boldsymbol{\Xi} &\sim & \text{Dir}(\,\boldsymbol{\Xi}\,\boldsymbol{|}\,\boldsymbol{\nu}\,)\\[0.15cm]
\theta\,\boldsymbol{|}\,\boldsymbol{\Xi} &\sim& \text{Cat}(\theta\,\boldsymbol{|}\,\Rtheta,\boldsymbol{\Xi}) \, .\end{aligned}$$ To accomplish approximate inference in this model, the variables $\theta$ and $\boldsymbol{\Xi}$ are included in the Gibbs sampling procedure of Section \[sec:approx\_inference\]. Therefore, the conditional distributions must be determined. Based on the dependencies in Fig. \[fig:dependency\_graph\], and since $\x,\sigma_n^2$ and $\theta$ are assumed to be mutually independent, we find $$\begin{aligned}
\label{eq:theta_Gibbs}
\boldsymbol{\Xi} \,\boldsymbol{|}\,\theta\ =\ \boldsymbol{\widetilde{\Xi}}\ & \sim &\ \text{Dir}(\,\boldsymbol{\Xi}\,\boldsymbol{|}\,\boldsymbol{\nu} + \mathbf{\breve{c}}\,),\\[0.15cm]
\label{eq:Xi_Gibbs}
\theta \,\boldsymbol{|}\,\y,\boldsymbol{\widetilde{\Xi}}\ &\sim&\ \text{Cat}(\,\theta\,\boldsymbol{|}\,\Rtheta,\boldsymbol{\widetilde{\Xi}}\,).
$$
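One joint Gibbs update of $(\boldsymbol{\Xi},\theta)$ according to (\[eq:theta\_Gibbs\])-(\[eq:Xi\_Gibbs\]) can be sketched as below; function and variable names are illustrative:

```python
import numpy as np

def sample_theta(theta_grid, nu, c_prev, rng):
    # One Gibbs update of (Xi, theta): draw the probability masses from
    # Dir(nu + c) and then a new theta from the resulting categorical
    # distribution; the single posterior count moves to the new value.
    xi = rng.dirichlet(nu + c_prev)
    q = rng.choice(len(theta_grid), p=xi)
    c = np.zeros_like(c_prev)
    c[q] = 1                      # count reassigned to the newly sampled value
    return theta_grid[q], xi, c
```

With $\nu_r=1$ everywhere, the single moving count dominates the posterior masses, whereas large $\nu_r$ keep them essentially at the prior.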
(1,0)[280]{}\
**Algorithm:** Sparse estimation and PDL, strategy ***S1*** & ***S2***
(1,0)[280]{}
----------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**Input:** $\mathbf{y},M,\PHI,N,L,T_d,\delta t,r(t,\theta),K,L_{\text{\tiny MC}},d_{\text{max}}$
**Output:** $\hat{\mathcal{S}},\hat{\mathbf{x}},\hat{\theta},\hat{\sigma}_n,d,ee $
**Parameters:** $a,a',a'',a_\sigma,b,b',b'',b_\sigma,d\,',\boldsymbol{\nu}, \Rtheta, \{\theta_r\}_{r=1}^{\Rtheta}$,
internal HMC parameters (c.f. [@Neal2011; @Homan2014]).
**0. Initialize:** $\theta$ at random $\rightarrow \hat{\theta}^{(0)}$ via bisectional search,
$\mathbf{A}(\hat{\theta}^{(0)}),\{\hat{\mathbf{x}}^{(0)}\!\!,\, \hat{\mathcal{C}}^{(0)}\}\!$ as in (\[eq:sparse\_model\_summary\]), (***S2***):$\,d_{\text{max}}\!\!=\!1$
**1. for** $d = 1$ to $d_{\text{max}}$ **do**
**2.** **for** $l=1$ to $L_{\text{\tiny MC}}$ **do**
**3.** Gibbs sampling: (i) $\mathcal{C}_l^{(d)}$ using (\[eq:posterior\_zeta\_no\_kw\]) and (\[eq:posterior\_kw\]),
\(ii) $\x_l^{(d)}$ via HMC.
(***S2***):$\,$ (iii) $\theta_l^{(d)}\!\!,\, \boldsymbol{\Xi}_l^{(d)}$ using (\[eq:theta\_Gibbs\]) and (\[eq:Xi\_Gibbs\])
**4.** **end for**
**5.** Estimate: $\hat{\mathcal{S}}^{(d)}$ from $\hat{\x}^{(d)}$ in (\[eq:posterior\_mean\_zeta\]) with $\zeta_{\,l}^{(d)}\!\rightarrow \x_l^{(d)}\!\!$,
$\hat{\mathcal{C}}^{(d)}$ from (\[eq:posterior\_mean\_zeta\]), $\hat{\x}^{(d)}_{\text{\tiny MAP}}$ from (\[eq:EM\_MAP\_x\_approx\]),
**5.a** (***S1*:**) $\,\hat{\theta}^{(d)} = \text{arg}\max_{\theta\in\Theta}\ Q(\theta\,\boldsymbol{|}\,\hat{\theta}^{(d-1)})$.
**5.b** (***S2*:**) $\,\hat{\theta}^{(d)}$ from (\[eq:posterior\_mean\_zeta\]) with $\zeta_{\,l}^{(d)}\!\rightarrow \theta_l^{(d)}$.
**6.** **if** $\hat{\theta}^{(d)} ==\, \hat{\theta}^{(d-1)}$ **or** $d==d_{\text{max}}$
**7.** **return** $\, \hat{\mathcal{S}}^{(d)}\!,\, \hat{\x}^{(d)}_{\text{\tiny MAP}},\,\hat{\mathcal{C}}^{(d)}\!,\, \hat{\theta}^{(d)}, ee\!=\!\|\y\!-\!\PHI\mathbf{A}(\hat{\theta}^{(d)}\!)\hat{\x}^{(d)}\|_2^2$.
**8.** **end if**
**9. end for**
----------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(1,0)[280]{}
Simulations and experimental data {#sec:performance}
=================================
Let us now evaluate the proposed sparse model and DL strategies. First, we show the qualitative behavior of the algorithms, followed by a quantitative performance analysis in comparison to the method in [@Weiss2016]. To this end, we consider several scenarios of different SNRs, CS sampling matrices and sample sizes. Finally, we apply our algorithms to experimental data taken from a real fiber-optic sensor.
Simulation setup
----------------
We consider $K=3$ uniform FBGs in the sensing fiber, where the observed reflections have a common amplitude, $A_x$, and two reflections are closely spaced. Their delays are indicated by the indices of the $K$ most significant elements in $\x$, contained in the set $\mathcal{S}$. Subsequently, the dictionary parameter is re-defined relative to its true value, i.e. $\hat{\theta}$ is replaced by $\hat{\theta}/\theta$. Further, we use $\Rtheta = 100$ discrete parameter values, equally spaced between $30\%$ and $150\%$ of the true value. The original signal (prior to CS) contains $L=134$ samples of the measured photocurrent. The dictionary atoms are created using $L$ samples of $r(t\!-\!i\delta t), i=1,\dots,N$, with a delay spacing of $\delta t$. We use two types of CS matrices, $\PHI$, with i.i.d. entries drawn from the distributions below:
(a) Gauss: $\,\mathcal{N}(0,1)$;

(b) DF [@Achlioptas2003]: $\{-1,0,1\}$ with probabilities $\{\frac{1}{6},\frac{2}{3},\frac{1}{6}\}$.
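As a sketch, the two matrix types and the shift-invariant dictionary structure can be generated as follows. The Gaussian pulse shape, the unit sampling and delay grids, and the random seed are assumptions for illustration only; the actual FBG reflection shape is not given in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 134          # original samples of the photocurrent
M = L // 2       # compressed samples, M/L = 50%
N = L            # atoms on the delay grid (the experimental section uses N = 2L)

# (a) Gaussian CS matrix: i.i.d. N(0,1) entries.
Phi_gauss = rng.standard_normal((M, L))

# (b) Database-friendly (DF) matrix: entries in {-1, 0, 1} with
# probabilities {1/6, 2/3, 1/6}, so two thirds of all projections are zero.
Phi_df = rng.choice([-1.0, 0.0, 1.0], size=(M, L), p=[1 / 6, 2 / 3, 1 / 6])

# Shift-invariant dictionary: column i holds L samples of r(t - i*dt).
# The Gaussian pulse below is only a stand-in for the FBG reflection.
def r(t, theta=3.0):
    return np.exp(-0.5 * (t / theta) ** 2)

dt = 1.0
t = np.arange(L, dtype=float)[:, None]
A = r(t - dt * np.arange(1, N + 1)[None, :])

# Toy CS measurement of two closely spaced reflections with common amplitude:
y = Phi_gauss @ (A[:, 20] + A[:, 23])
```

Zeroing two thirds of the projections is what makes the DF matrices attractive in practice: the same sampling hardware performs, on average, only a third of the multiply-accumulate operations.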
The variables $\{\mathcal{C},\theta,\boldsymbol{\Xi}\}$ are sampled according to their respective conditional posterior densities. For $\x$ we use the ’No-U-Turn’ variant of the HMC, which is efficiently implemented in the software package *Stan* [@STAN]. The algorithm ***S1*** is initialized based on a bisectional search and runs at most $d_{\text{max}}=35$ iterations. In ***S2***, we use a non-informative prior for $\theta$, with $p(\theta_r)=1/\Rtheta$ and $\nu_r = 1\ \forall\ r=1,\dots,\Rtheta$.
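The discrete Gibbs update for $\theta$ in ***S2*** amounts to categorical sampling over the $\Rtheta$ grid values. The sketch below uses a toy quadratic log-likelihood peaked at the true (relative) value $1$ in place of the model-dependent conditional, together with the flat prior $p(\theta_r)=1/\Rtheta$; only the grid and the stabilized-softmax sampling step reflect the actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

R = 100
theta_grid = np.linspace(0.3, 1.5, R)   # 30% .. 150% of the true value

# Stand-in unnormalized log-posterior over the grid: a quadratic
# log-likelihood peaked at the true value 1.0 plus the flat prior log(1/R).
log_w = -0.5 * ((theta_grid - 1.0) / 0.05) ** 2 + np.log(1.0 / R)

p = np.exp(log_w - log_w.max())         # subtract the max for numerical stability
p /= p.sum()
theta_sample = rng.choice(theta_grid, p=p)
```

Subtracting the maximum before exponentiating avoids underflow when the log-weights span a wide range, which is common once the likelihood becomes sharply peaked.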
Visualization and Working Principle
-----------------------------------
The working principle of the algorithms is presented for a fixed SNR and a Gaussian CS matrix using $M/L=50\%$ of the original samples. Fig. \[fig:visualization\] (top left) depicts the MAP solution for $\x$, obtained by HMC within Gibbs sampling according to Section \[sec:approx\_inference\], where $\theta$ is fixed to the true value. It shows that collective shrinkage, imposed by the local similarity assumption in the joint prior density of $\x$, yields a highly improved sparsity level in the presence of strong dictionary coherence. Fig. \[fig:visualization\] (top right) shows the posterior density of $\x$ in one dimension. For a non-significant component, it is strongly peaked around zero; for a significant component, it is multimodal with a strong mode around the true amplitude and a smaller mode around zero.
\[fig:visualization\]
The second row in Fig. \[fig:visualization\] delineates the evolution of the EM algorithm in ***S1*** over several iterations. Fig. \[fig:visualization\] (center left) shows the current MAP solutions for $\x$, i.e. $\x_{\text{\tiny MAP}}^{(d)}$, zoomed on the two left-sided peaks. Due to a bad initial value for $\theta$, more than $K$ peaks appear in the first iterations. However, as the algorithm proceeds, significant peaks are formed only at the positions of the true significant components (black bullets). Fig. \[fig:visualization\] (center right) shows that $\theta$ also approaches the true value. Fig. \[fig:visualization\] (bottom left) delineates a typical shape of the function $Q(\theta\,|\,\hat{\theta}^{(d-1)})$ of ***S1*** in (\[eq:Qfct\]), for a well-chosen and a badly chosen initial value, $\theta^{(0)}$. A good choice leads to faster convergence, while for a bad choice, the algorithm might either get stuck at a local optimum or require many EM iterations before the maximum of the Q-function appears close to the true value of $\theta$. Finally, Fig. \[fig:visualization\] (bottom right) depicts for ***S2*** the non-informative prior of $\theta$ and a typical posterior density when $\nu_r=1\ \forall\ r=1,\dots,\Rtheta$.
Performance evaluation
----------------------
The performance is evaluated in terms of the root mean-squared error (RMSE). For a vector $\mathbf{v}$ and an estimator $\mathbf{\hat{v}}$, it is given by $\text{RMSE}(\mathbf{v},\mathbf{\hat{v}}) = ({\mathbb{E}_{}\hspace{4.0pt} \!\|\mathbf{v}-\mathbf{\hat{v}}\|_2^2 })^{1/2}$.
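The expectation in this RMSE is approximated below by averaging over independent trials, as done for $\overline{\text{RMSE}}$ in the evaluation; the unbiased Gaussian estimator in the check is a toy example, not part of the paper's setup.

```python
import numpy as np

def rmse(v, v_hat_samples):
    """Monte Carlo approximation of RMSE(v, v_hat) = (E ||v - v_hat||_2^2)^(1/2),
    replacing the expectation by an average over independent estimates."""
    v = np.asarray(v, dtype=float)
    sq = [np.sum((v - np.asarray(vh, dtype=float)) ** 2) for vh in v_hat_samples]
    return float(np.sqrt(np.mean(sq)))

# Toy check: an unbiased estimator with i.i.d. N(0, sigma^2) errors per
# coordinate has E ||v - v_hat||^2 = K * sigma^2, i.e. RMSE = sigma * sqrt(K).
rng = np.random.default_rng(2)
v = np.array([1.0, 2.0, 3.0])
sigma = 0.1
estimates = [v + sigma * rng.standard_normal(3) for _ in range(10000)]
val = rmse(v, estimates)   # ≈ 0.1 * sqrt(3) ≈ 0.173
```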
\[fig:JOINT\_performance\_50\]
We define $\overline{\text{RMSE}}$ as the approximation, where the expectation is replaced by averaging estimates over 100 Monte Carlo trials. We compare ***S1***, ***S2*** to the PDL-OIAI algorithm in [@Weiss2016], which considers a deterministic sparse model and incorporates a pre-processing routine to handle strong dictionary coherence. To calculate the CRB of Section \[sec:CRB\], the derivative of $r(t,\theta)$ with respect to $\theta$ must be determined for all dictionary elements. Since $r(t,\theta)$ is not a simple function of $\theta$, it can be approximated for a certain value $\theta_0$. For the $(l,\!i)$-th element in $\mathbf{A}'(\theta)$, we obtain $$\hspace{0.01cm} \left.{\frac{\partial }{\partial \theta}}[\mathbf{a}_i(\theta)]_l\right|_{\theta_0}\!\! \approx\ \frac{r(lT_d-\tau_i,\theta_0) - r(lT_d-\tau_i,\theta_0-\Delta\theta)}{\Delta\theta}.\hspace{-0.30cm}
$$ Figs. \[fig:EM\_performance\_50\]-\[fig:JOINT\_performance\_30\] show the results of the proposed and the competing methods. Herein, $\mathbf{s}\in \mathbb{N}^K$ contains all elements in $\mathcal{S}$, and $\x_{\mathcal{S}}\in \mathbb{R}_+^K$ contains the coefficients of $\x$ with indices in $\mathcal{S}$. The $\overline{\text{RMSE}}(\x_{\mathcal{S}},\hat{\x}_{\mathcal{S}})$ compares the estimated amplitudes at the positions in $\hat{\mathcal{S}}$ to the true common amplitude, $A_x$, at positions in $\mathcal{S}$. The lower bound of the RMSE for jointly estimating *deterministic* parameters $(\x_\mathcal{S},\theta)$, induced by the CRB derived in Section \[sec:CRB\], is denoted by ’RCRB’.\
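The backward difference above is straightforward to implement. In the sketch below a Gaussian pulse stands in for $r(t,\theta)$, chosen only so that the approximation can be checked against an exact derivative; the delays, grid, and step size $\Delta\theta$ are assumed values.

```python
import numpy as np

def r(t, theta):
    """Stand-in pulse r(t, theta): a Gaussian of width theta (the true FBG
    reflection is not a simple function of theta, hence the approximation)."""
    return np.exp(-0.5 * (t / theta) ** 2)

def dA_dtheta(theta0, taus, L, Td=1.0, dtheta=1e-4):
    """Backward-difference approximation of the (l, i)-th element of A'(theta):
    (r(l*Td - tau_i, theta0) - r(l*Td - tau_i, theta0 - dtheta)) / dtheta."""
    t = np.arange(L, dtype=float)[:, None] * Td - np.asarray(taus)[None, :]
    return (r(t, theta0) - r(t, theta0 - dtheta)) / dtheta

# Check against the exact derivative of the Gaussian stand-in,
# d/dtheta exp(-t^2 / (2 theta^2)) = exp(-t^2 / (2 theta^2)) * t^2 / theta^3:
theta0, taus, L = 2.0, [3.0, 7.5], 16
num = dA_dtheta(theta0, taus, L)
t = np.arange(L, dtype=float)[:, None] - np.asarray(taus)[None, :]
exact = np.exp(-0.5 * (t / theta0) ** 2) * t ** 2 / theta0 ** 3
```

With a step of $\Delta\theta = 10^{-4}$ the first-order truncation error of the backward difference is negligible relative to the signal scale, which is all the CRB computation requires.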
Fig. \[fig:EM\_performance\_50\] shows the results for $\{\mathbf{s},\x_{\mathcal{S}},\theta,\sigma_n\}$, obtained by ***S1*** using $50\%$ of the original samples. For fewer samples, the EM algorithm in ***S1*** becomes unstable. Figs. \[fig:JOINT\_performance\_50\] and \[fig:JOINT\_performance\_30\] depict the results obtained by ***S2*** using $50\%$ and $30\%$ of the original samples, respectively. They show that ***S2*** is more robust against small sample sizes and missing data than ***S1***. Generally, the error is only marginally affected by the type of the CS sampling matrix, i.e. (a) or (b). In all scenarios, ***S1*** and ***S2*** achieve a significantly lower error in estimating $\mathbf{s}$ than PDL-OIAI. At low SNRs, ***S1*** performs better than ***S2***, while ***S2*** becomes better at high SNRs. However, PDL-OIAI estimates $\theta$ with slightly higher accuracy than ***S1*** and ***S2***. When $50\%$ of the original samples are used, the error closely adheres to the RCRB. The amplitudes, $\x_{\mathcal{S}}$, are estimated with similar accuracy by both ***S1*** and ***S2***, and no improvement is achieved compared to PDL-OIAI. Also, the distance to the RCRB is almost constant at all SNRs. Regarding the noise level, $\sigma_n^2$, ***S1*** yields a slightly smaller estimation error than ***S2***. PDL-OIAI does not provide a simple means for estimating $\sigma_n^2$, which is an advantage of ***S1*** and ***S2***. In the presented results for PDL-OIAI, it is assumed that pure noise samples are available to estimate $\sigma_n^2$. The instability of the RMSE between SNRs of 15 and 17.5 dB in Fig. \[fig:EM\_performance\_50\] might arise from averaging over an insufficient number of samples. It is also possible that the MCMC algorithm took longer to converge to the stationary distribution for SNR$=$20 dB, e.g. due to an unlucky initialization, thus increasing the error.
Experimental Data
-----------------
To complete our study, we apply **S1** and **S2** to experimental data taken from the real fiber sensor system in [@Nakazaki2009; @Yamashita2009]. It was acquired at the Yamashita laboratory of photonic communication devices at The University of Tokyo, Japan. We consider $L\!=\!134$ original samples of the received sensor signal and use $M/L\!=\!50\%$ of the original samples. The delay spacing between the $N\!\!=\!\!2L$ dictionary atoms is $\delta t\approx 50$ ns. The sensing fiber contains $K=4$ FBGs and the delays of the reflected signals are potentially off-grid. Their positions are approximately at \[7.79, 9.05, 10.27, 12.30\] $\mu$s. We perform 100 Monte Carlo trials to estimate $\{\mathcal{S}, \x_\mathcal{S}, \theta\}$. We show the original sensor signal and one estimated reflection from FBG$_3$, where the shaded area indicates the standard deviation in estimating $\theta$. $\textbf{\emph{S1}}$ estimates a narrower reflection, which also results in slightly different estimates of $\mathcal{S}$. We further depict $\hat{\x}_{\hat{\mathcal{S}}}$ at the estimated positions in $\hat{\mathcal{S}}$, where the shaded areas represent the standard deviation for $\mathcal{S}$ and the vertical error bars indicate the standard deviation for $\x_{\mathcal{S}}$. Essentially, the results of **S1** and **S2** are comparable, although the variance in the estimates of $\mathcal{S}$ is marginally smaller in the case of $\textbf{\emph{S2}}$. Similar performance was reported for PDL-OIAI in [@Weiss2016].
Discussion {#sec:discussion}
==========
Based on simulations and experimental results, we demonstrate that the proposed sparsity model and our DL strategies, ***S1***, ***S2***, are useful in CFS and can be used for an automated estimation of the reflection delays. In comparison to PDL-OIAI in [@Weiss2016], where the underlying model treats $\x$ and $\theta$ as deterministic parameters, the following general observation can be made: the methods $\textbf{\emph{S1}}$ and $\textbf{\emph{S2}}$, based on a probabilistic sparse model, show comparable performance to PDL-OIAI but do not exceed the performance limit imposed by the non-Bayesian CRB. However, a significant improvement is achieved in estimating $\mathcal{S}$. It should be emphasized that $\mathcal{S}$ is of major importance in WDM-based CFS: it indicates the reflection delays that are used to infer the quantity or nature of impairments at the FBGs. The amplitudes, $\x_{\mathcal{S}}$, can be used to determine the sparsity level and the amount of optical power reflected from the FBGs. We find that all competing methods estimate $\x_{\mathcal{S}}$ with similar accuracy. The real data example shows that ***S1***, ***S2*** are insensitive to signal features that are not explicitly modeled, e.g. the skewness of the reflections or a signal-dependent noise amplitude. This was also reported for PDL-OIAI in [@Weiss2016].\
Our results for $\mathcal{S}$ indicate that the proposed sparsity model is better able to handle strong dictionary coherence than PDL-OIAI, which adopts a dictionary pre-processing routine to reduce the dictionary coherence. We ascribe this ability in part to the favorable selective shrinkage properties of the Weibull prior. Such behavior was previously reported for general heavy-tailed priors in [@Seeger2008; @Polson2010]. Regarding the relation to non-convex optimization, we find that constraints imposed on the $\ell_p$-norm, with $0\!<\!p\!<\!1$, are indeed useful in the presence of strong dictionary coherence. In this context, we support the findings in [@Chartrand2007; @Chartrand2008], that report relaxed RIP conditions when $\ell_1$-minimization is replaced by non-convex optimization methods. Another important factor, that contributes to the ability of handling strong dictionary coherence, is the local similarity model introduced in the joint prior density of $\x$. We observe much sparser solutions due to its collective shrinkage property, without the need for any dictionary pre-processing as in PDL-OIAI. Although this model is designed to deal with the unique features of the CFS dictionary, it can be used for general shift-invariant dictionaries with similar structures and high coherence levels. Therefore, it offers a broader applicability beyond the CFS problem. For parametric DL, all compared methods seem equally suitable for estimating $\theta$, but ***S2*** and PDL-OIAI are more stable for small sample sizes. Since the type of the CS matrix has only marginal impact, DF matrices in [@Achlioptas2003] are favorable. They are easy to implement, require low storage, and reduce the average sampling rate by 66%, since 2/3 of all projections are zero.\
The computational complexity of ***S1*** and ***S2*** is dominated by drawing samples from the posterior of $\x$ using HMC. HMC shows high efficacy in sampling this high-dimensional space in the presence of correlation. It yields samples from the desired posterior that are weakly sparse, with sharp peaks only close to the true positions of the significant components. Compared to optimization methods such as PDL-OIAI, MCMC is slower (cf. [@Mohamed2012]), but optimization methods require some preliminary effort for choosing a proper regularization parameter in the $\ell_1$-minimization problem. The run-time complexity of PDL-OIAI is dominated by a costly but essential data-dependent pre-processing routine to deal with severe dictionary coherence. This can be implemented using parallel processing and might be more efficient in situations where CFS is used for permanent perturbation monitoring. Nonetheless, it requires an initial estimate of the non-perturbed reference system. For this task, $\mathcal{S}$ can be more accurately estimated using ***S1*** and ***S2***. In contrast to PDL-OIAI, they are also able to estimate the noise level. Combining these methods for calibration and permanent monitoring is a promising perspective for practical systems. A limiting factor in ***S1*** and ***S2*** is the MCMC runtime, i.e. the number of available samples for Monte Carlo integration. Depending on the initial point, sufficient time has to be given for the algorithms to converge to the stationary distribution. Also, ***S1*** may get stuck in local optima, but the proposed initialization using a bisectional search can lower this chance and helps to speed up the convergence of the algorithm. A possible extension of this work can include multiple CS sample vectors to improve the SNR conditions. This might yield more accurate results and stable behavior. A similar technique was proposed in [@Malioutov2005]. Also, as pointed out in [@Weiss2016], additional local dictionary parameters can be considered.
Since the reflections in the experimental data are non-uniform, this might improve both robustness and accuracy.
Conclusion {#sec:conclusion}
==========
We present a sparse estimation and parametric dictionary learning framework for Compressed Fiber Sensing (CFS) based on a probabilistic hierarchical sparse model. The significant components in the sparse signal indicate reflection delays that can be used to infer the quantity and nature of external impairments. In order to handle severe dictionary coherence and to accommodate specific characteristics of the signal, a Weibull prior is employed to promote selective shrinkage. This choice can be related to non-convex optimization based on the $\ell_p$-norm with $0\!<\!p\!<\!1$. To further alleviate the problem of dictionary coherence, we leverage the particular structure of the dictionary and assign a local variance to the differential sparse coefficients. This model can be useful for general shift-invariant dictionaries with similar structure and strong coherence. We propose two parametric dictionary learning strategies, ***S1*** and ***S2***, to estimate the dictionary parameter, $\theta$. In $\textbf{\emph{S1}}$, $\theta$ is treated as a deterministic parameter and estimated using a Monte Carlo EM algorithm. In $\textbf{\emph{S2}}$, a probabilistic hierarchical model for $\theta$ is considered. A hybrid MCMC method based on Gibbs sampling and Hamiltonian Monte Carlo is used for approximate inference. In simulations and on experimental data, we show the applicability and efficacy of the proposed sparse model, together with the methods ***S1*** and ***S2***, for an automated estimation of the reflection delays and the dictionary parameter in CFS. In a comparative analysis with an existing method based on a deterministic sparse model, we highlight advantages, disadvantages and limitations that can serve as a guidance for choosing an adequate method for practical systems. To better assess the performance gain of a probabilistic sparse model, the Cramér-Rao bound is derived for the joint estimation of deterministic sparse coefficients and the dictionary parameter in CFS.
Drawbacks of the proposed methods are the generally high computational costs of MCMC methods, and the lack of simple diagnostic tools for Markov chain convergence and sample independence. Also, ***S1*** suffers from the problem of local optima. As a remedy, we propose a bisectional search to find a proper initialization. In subsequent investigations, multiple CS sample vectors and additional local dictionary parameters can be taken into account. Also, variational Bayes methods can be used to speed up computations.
Acknowledgment {#acknowledgment .unnumbered}
==============
This work was supported by the ’Excellence Initiative’ of the German Federal and State Governments and the Graduate School of Computational Engineering at Technische Universität Darmstadt.\
The authors would like to thank Professor S. Yamashita and his group at The University of Tokyo, Japan, for kindly providing experimental data of the fiber sensor.
---
abstract: 'Additive tree functionals represent the cost of many divide-and-conquer algorithms. We derive the limiting distribution of the additive functionals induced by toll functions of the form (a) $n^\alpha$ when $\alpha > 0$ and (b) $\log{n}$ (the so-called shape functional) on uniformly distributed binary trees, sometimes called Catalan trees. The Gaussian law obtained in the latter case complements the central limit theorem for the shape functional under the random permutation model. Our results give rise to an apparently new family of distributions containing the Airy distribution ($\alpha = 1$) and the normal distribution \[case (b), and case (a) as $\alpha \downarrow 0$\]. The main theoretical tools employed are recent results relating asymptotics of the generating functions of sequences to those of their Hadamard product, and the method of moments.'
address:
- 'Department of Applied Mathematics and Statistics, The Johns Hopkins University, 3400 N. Charles St., Baltimore MD 21218, USA'
- 'Department of Computer Science, California Institute of Technology, MC 256-80, 1200 E. California Blvd., Pasadena CA 91125, USA'
author:
- James Allen Fill
- Nevin Kapur
bibliography:
- 'msn.bib'
- 'leftovers.bib'
date: 'June 4, 2003. Revised April 1, 2004.'
title: Limiting distributions for additive functionals on Catalan trees
---
[^1]
Introduction {#sec:introduction}
============
Binary trees are fundamental data structures in computer science, with primary application in searching and sorting. For background we refer the reader to Chapter 2 of the excellent book [@MR93f:68045]. In this article we consider additive functionals defined on uniformly distributed binary trees (sometimes called Catalan trees) induced by two types of toll sequences \[$(n^{\alpha})$ and $(\log n)$\]. (See the simple Definition \[def:additive-functional\].) Our main results, Theorems \[thm:limit-dist\] and \[thm:shape\_clt\], establish the limiting distribution for these induced functionals.
A competing model of randomness for binary trees—one used for binary search trees—is the *random permutation model* (RPM); see Section 2.3 of [@MR93f:68045]. While there has been much study of additive functionals under the RPM (see, for example, [@MR93f:68045 Section 3.3] and [@MR92f:68028; @MR97f:68021; @01808498; @MR1871558]), little attention has been paid to the distribution of functionals defined on binary trees under the uniform (Catalan) model of randomness. Fill [@MR97f:68021] argued that the functional corresponding to the toll sequence $ (\log{n}) $ serves as a crude measure of the “shape” of a binary tree, and explained how this functional arises in connection with the move-to-root self-organizing scheme for dynamic maintenance of binary search trees. He derived a central limit theorem under the RPM, but obtained only asymptotic information about the mean and variance under the Catalan model. (The latter results were rederived in the extension [@MR99j:05171] from binary trees to simply generated rooted trees.) In this paper (Theorem \[thm:shape\_clt\]) we show that there is again asymptotic normality under the Catalan model.
In [@MR88h:68033 Prop. 2] Flajolet and Steyaert gave order-of-growth information about the mean of functionals induced by tolls of the form $ n^\alpha $. (The motivation is to build a “repertoire” of tolls from which the behavior of more complicated tolls can be deduced by combining elements from the repertoire. The corresponding results under the random permutation model were derived by Neininger [@MR1910527].) Takács established the limiting (Airy) distribution of path length in Catalan trees [@MR92m:60057; @MR93e:60175; @MR92k:60164], which is the additive functional for the toll $n-1$. The additive functional for the toll $n^2$ arises in the study of the Wiener index of the tree and has been analyzed by Janson [@janson:_wiener]. In this paper (Theorem \[thm:limit-dist\]) we obtain the limiting distribution for Catalan trees for toll $n^{\alpha}$ for any $\alpha > 0$. The family of limiting distributions appears to be new. In most cases we have a description of the distribution only in terms of its moments, although other descriptions in terms of Brownian excursion, as for the Airy distribution and the limiting distribution for the Wiener index, may be possible. This is currently under investigation by the authors in collaboration with others.
The uniform model on binary trees has also been used recently by Janson [@janson02:_ideal] in the analysis of an algorithm of Koda and Ruskey [@MR94f:94015] for listing ideals in a forest poset.
This paper serves as the first example of the application of recent results [@FFK], extending singularity analysis [@MR90m:05012], to obtain limiting distributions. In [@FFK], it is shown how the asymptotics of generating functions of sequences relate to those of their Hadamard product. First moments for our problems were treated in [@FFK] and a sketch of the technique we employ was presented there. (Our approach to obtaining asymptotics of Hadamard products of generating functions differs only marginally from the Zigzag Algorithm as presented in [@FFK].) As will be evident soon, Hadamard products occur naturally when one is analyzing moments of additive tree functionals. The program we carry out allows a fairly mechanical derivation of the asymptotics of moments of each order, thereby facilitating application of the method of moments. Indeed, preliminary investigations suggest that the techniques we develop are likewise applicable to the wider class of simply generated trees; this is work in progress.
The organization of this paper is as follows. Section \[sec:preliminaries\] establishes notation and states certain preliminaries that will be used in the subsequent proofs. In Section \[sec:toll-sequence-nalpha\] we consider the toll sequence $(n^\alpha)$ for general $\alpha > 0$. In Section \[sec:asympotics-mean\] we compute the asymptotics of the mean of the corresponding additive functional. In Section \[sec:higher-moments\] the analysis diverges slightly as the nature of asymptotics of the higher moments differs depending on the value of $\alpha$. Section \[sec:asymptotics-moments\] employs singularity analysis [@MR90m:05012] to derive the asymptotics of moments of each order. In Section \[sec:limit-distr\] we use the results of Section \[sec:asymptotics-moments\] and the method of moments to derive the limiting distribution of the additive tree functional. In Section \[sec:shape-functional\] we employ the approach again to obtain a normal limit theorem for the shape functional. Finally, in Section \[sec:suff-cond-asympt\], we present heuristic arguments that may lead to the identification of toll sequences giving rise to a normal limit.
Notation and Preliminaries {#sec:preliminaries}
==========================
Additive tree functionals {#sec:addit-tree-funct}
-------------------------
We first establish some notation. Let $T$ be a binary tree. We use $|T|$ to denote the number of nodes in $T$. Let $L(T)$ and $R(T)$ denote, respectively, the left and right subtrees rooted at the children of the root of $T$.
\[def:additive-functional\] A functional $ f $ on binary trees is called an *additive tree functional* if it satisfies the recurrence $$\label{eq:2.1}
f(T) = f( L(T) ) + f( R(T) )+ b_{|T|},$$ for any tree $ T $ with $ |T| \geq 1 $. Here $ (b_n)_{n \geq
1} $ is a given sequence, henceforth called the *toll function*.
We analyze additive functionals defined on binary trees uniformly distributed over $\{ T\!:\, |T|=n \}$ for given $n$. Let $X_n$ be such an additive functional induced by the toll sequence $(b_n)$. It is well known that the number of binary trees on $n$ nodes is counted by the $n$th Catalan number $$\beta_n := \frac1{n+1}\binom{2n}{n},$$ with generating function $$\label{eq:36}
\operatorname{CAT}(z) := \sum_{n=0}^\infty \beta_n z^n = \frac1{2z}(1 - \sqrt{1-4z}).$$ In our subsequent analysis we will make use of the identity $$\label{eq:50}
z \operatorname{CAT}^2(z) = \operatorname{CAT}(z) - 1.$$ The mean of the cost function $a_n := {\mathbf{E}\,X_n} $ can be obtained recursively by conditioning on the size of $L(T)$ as $$a_n = \sum_{j=1}^n \frac{\beta_{j-1}\beta_{n-j}}{\beta_n}(a_{j-1} +
a_{n-j}) + b_n, \qquad n \geq 1.$$ This recurrence can be rewritten as $$\label{eq:35}
(\beta_n a_n) = 2 \sum_{j=1}^n (\beta_{j-1} a_{j-1}) \beta_{n-j} +
(\beta_n b_n), \qquad n \geq 1.$$ Recall that the *Hadamard product* of two power series $F$ and $G$, denoted by $F(z) \odot G(z)$, is the power series defined by $$( F \odot G)(z) \equiv F(z) \odot G(z) := \sum_{n} f_n g_n z^n,$$ where $$F(z) = \sum_{n} f_n z^n \qquad \text{ and } \qquad G(z) = \sum_{n}
g_n z^n.$$ Multiplying (\[eq:35\]) by $z^n/4^n$ and summing over $n \geq 1$ we get $$\label{eq:1}
A(z)\odot\operatorname{CAT}(z/4) = \frac{B(z)\odot\operatorname{CAT}(z/4)}{\sqrt{1-z}}, $$ where $A(z)$ and $B(z)$ are the ordinary generating functions of $(a_n)$ and $(b_n)$ respectively.
\[rem:Catalan\] Catalan numbers are ubiquitous in combinatorial applications; see [@MR2000k:05026] for a list of 66 instances and <http://www-math.mit.edu/~rstan/ec/> for more.
In the sequel the notation \[$\cdots$\] is used both for Iverson’s convention [@knuth97 1.2.3(16)] and for the coefficient of certain terms in the succeeding expression. The interpretation will be clear from the context. For example, $ [ \alpha > 0 ] $ has the value 1 when $ \alpha > 0 $ and the value 0 otherwise. In contrast, $ [z^n]
F(z) $ denotes the coefficient of $ z^n $ in the series expansion of $ F(z) $. Throughout this paper $\Gamma$ and $\zeta$ denote Euler’s gamma function and the Riemann zeta function, respectively.
Singularity analysis {#sec:singularity-analysis}
--------------------
*Singularity analysis* is a systematic complex-analytic technique that relates asymptotics of sequences to singularities of their generating functions. The applicability of singularity analysis rests on the technical condition of *$\Delta$-regularity*. Here is the definition. See [@FFK] or [@MR90m:05012] for further background.
\[def:delta-regular\] A function defined by a Taylor series about the origin with radius of convergence equal to $1$ is *$\Delta$-regular* if it can be analytically continued in a domain $$\Delta(\phi,\eta) := \{z: |z| < 1 + \eta, |\arg(z-1)| > \phi\},$$ for some $\eta > 0$ and $0 < \phi < \pi/2$. A function $f$ is said to admit a *singular expansion* at $z=1$ if it is $\Delta$-regular and $$f(z) = \sum_{j=0}^J c_j(1-z)^{\alpha_j} + O(|1-z|^A)$$ uniformly in $z \in \Delta(\phi,\eta)$, for a sequence of complex numbers $(c_j)_{0 \leq j \leq J}$ and an increasing sequence of real numbers $(\alpha_j)_{0 \leq j \leq J}$ satisfying $\alpha_j < A$. It is said to satisfy a singular expansion *“with logarithmic terms”* if, similarly, $$f(z) = \sum_{j=0}^J c_j\left(L(z)\right)(1-z)^{\alpha_j} + O(|1-z|^A),
\qquad
L(z):=\log\frac{1}{1-z},$$ where each $c_j(\cdot)$ is a polynomial.
Following established terminology, when a function has a singular expansion with logarithmic terms we shall say that it is *amenable* to singularity analysis.
Recall the definition of the *generalized polylogarithm*:
\[def:li\] For $\alpha$ an arbitrary complex number and $r$ a nonnegative integer, the *generalized polylogarithm* function $\operatorname{Li}_{\alpha,r}$ is defined for $|z| < 1$ by $$\label{eq:4.1.50}
\operatorname{Li}_{\alpha,r}(z) := \sum_{n=1}^\infty \frac{(\log{n})^r}{n^\alpha} z^n.$$
The key property of the generalized polylogarithm that we will employ is $$\operatorname{Li}_{\alpha,r} \odot \operatorname{Li}_{\beta,s} = \operatorname{Li}_{\alpha+\beta,r+s}.$$ We will also make extensive use of the following consequences of the singular expansion of the generalized polylogarithm. Neither this lemma nor the ones following make any claims about uniformity in $\alpha$ or $r$. Note that $\operatorname{Li}_{1,0}(z) = L(z) = \log\bigl((1-z)^{-1}\bigr)$.
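The Hadamard-product property of the polylogarithms is immediate from the coefficients, and can be verified on truncated series; this small check (ours, not the paper's) multiplies coefficients term-by-term for $\operatorname{Li}_{1/2,1} \odot \operatorname{Li}_{1,2} = \operatorname{Li}_{3/2,3}$.

```python
import numpy as np

def li_coeffs(alpha, r, nmax):
    """Taylor coefficients of Li_{alpha,r}(z) = sum_{n>=1} (log n)^r n^{-alpha} z^n,
    truncated at z^nmax (index k holds the coefficient of z^k)."""
    c = np.zeros(nmax + 1)
    n = np.arange(1, nmax + 1, dtype=float)
    c[1:] = np.log(n) ** r / n ** alpha
    return c

nmax = 50
# The Hadamard product multiplies coefficients term-by-term, so
# (log n)^1 n^{-1/2} * (log n)^2 n^{-1} = (log n)^3 n^{-3/2}:
lhs = li_coeffs(0.5, 1, nmax) * li_coeffs(1.0, 2, nmax)
rhs = li_coeffs(1.5, 3, nmax)
```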
\[lem:litoomz\] For any real $\alpha < 1$ and nonnegative integer $r$, we have the singular expansion $$\operatorname{Li}_{\alpha,r}(z) = \sum_{k=0}^r \lambda_k^{(\alpha,r)}
(1-z)^{\alpha-1} L^{r-k}(z) +
O(|1-z|^{\alpha-\epsilon}) + (-1)^r \zeta^{(r)}(\alpha)[\alpha > 0],$$ where $\lambda_k^{(\alpha,r)} \equiv \binom{r}{k}
\Gamma^{(k)}(1-\alpha)$ and $\epsilon > 0$ is arbitrarily small.
By Theorem 1 in [@MR2000a:05015], $$\label{eq:37}
\operatorname{Li}_{\alpha,0}(z) \sim \Gamma(1-\alpha) t^{\alpha-1} + \sum_{j
\geq 0} \frac{(-1)^j}{j!} \zeta(\alpha-j) t^j, \qquad t = -\log z
= \sum_{l=1}^\infty \frac{(1-z)^l}{l},$$ and for any positive integer $r$, $$\operatorname{Li}_{\alpha,r}(z) = (-1)^r \frac{\partial^r}{\partial\alpha^r}
\operatorname{Li}_{\alpha,0}(z).$$ Moreover, as also shown in [@MR2000a:05015], the singular expansion for $\operatorname{Li}_{\alpha,r}$ is obtained by performing the indicated differentiation of (\[eq:37\]) term-by-term. To establish the claim we set $ f = \Gamma(1-\alpha) $ and $ g =
t^{\alpha-1} $ in the general formula for the $r$th derivative of a product: $$(fg)^{(r)} = \sum_{k=0}^r \binom{r}{k} f^{(k)} g^{(r-k)}$$ to first obtain $$(-1)^r \frac{\partial^r}{\partial\alpha^r}
[\Gamma(1-\alpha)t^{\alpha-1}] = (-1)^r \sum_{k=0}^r \binom{r}{k}
(-1)^k\Gamma^{(k)}(1-\alpha) t^{\alpha-1} (\log t)^{r-k}.$$ The claim then follows easily.
The following “inverse” of Lemma \[lem:litoomz\] is very useful for computing with Hadamard products.
\[lem:omztoli\] For any real $\alpha < 1$ and nonnegative integer $r$, there exists a region $\Delta(\phi,\eta)$ as in Definition \[def:delta-regular\] such that $$(1-z)^{\alpha-1}L^r(z) = \sum_{k=0}^r
\mu_k^{(\alpha,r)} \operatorname{Li}_{\alpha,r-k}(z) + O(|1-z|^{\alpha-\epsilon})
+ c_r(\alpha)[\alpha > 0]$$ holds uniformly in $z \in \Delta(\phi,\eta)$, where $\mu_0^{(\alpha,r)}=1/\Gamma(1-\alpha)$, $ c_r(\alpha) $ is a constant, and $\epsilon > 0$ is arbitrarily small.
We use induction on $r$. For $r=0$ we have $$\operatorname{Li}_{\alpha,0}(z) = \Gamma(1-\alpha)(1-z)^{\alpha-1} +
O(|1-z|^{\alpha-\epsilon}) + \zeta(\alpha) [ \alpha > 0 ]$$ and the claim is verified with $$\mu_0^{(\alpha,0)} = \frac{1}{\Gamma(1-\alpha)} \qquad \text{ and
} \qquad c_0(\alpha) = -\frac{\zeta(\alpha)}{\Gamma(1-\alpha)}.$$ Let $r \geq 1$. Then using Lemma \[lem:litoomz\] and the induction hypothesis we get $$\begin{aligned}
&\operatorname{Li}_{\alpha,r}(z) \\&=
\Gamma(1-\alpha)(1-z)^{\alpha-1}
L^r(z) \\
& \quad + \sum_{k=1}^r \lambda_{k}^{(\alpha,r)} \left[
\sum_{l=0}^{r-k}\mu_l^{(\alpha,r-k)} \operatorname{Li}_{\alpha,r-k-l}(z) +
O(|1-z|^{\alpha-\epsilon}) +
c_{r-k}(\alpha) [\alpha > 0 ] \right]\\
& \quad + O(|1-z|^{\alpha-\epsilon})
+ (-1)^r\zeta^{(r)}(\alpha)[\alpha > 0]\\
&=
\Gamma(1-\alpha)(1-z)^{\alpha-1}L^r(z)
+ \sum_{k=1}^r \lambda_{k}^{(\alpha,r)} \sum_{s=0}^{r-k}
\mu_{r-k-s}^{(\alpha,r-k)} \operatorname{Li}_{\alpha,s}(z) \\
& \quad {}+ O(|1-z|^{\alpha-\epsilon}) +
\left(
\sum_{k=1}^r \lambda_k^{(\alpha,r)} c_{r-k}(\alpha)
+ (-1)^r\zeta^{(r)}(\alpha)
\right)[\alpha > 0]\\
&=
\Gamma(1-\alpha)(1-z)^{\alpha-1}
L^r(z)
+ \sum_{s=0}^{r-1} \nu_s^{(\alpha,r)} \operatorname{Li}_{\alpha,s}(z)\\
& \quad + O(|1-z|^{\alpha-\epsilon}) + \gamma_r(\alpha)[\alpha >
0],
\end{aligned}$$ where, for $0 \leq s \leq r-1$, $$\nu_s^{(\alpha,r)} := \sum_{k=1}^{r-s} \lambda_k^{(\alpha,r)}
\mu_{r-s-k}^{(\alpha,r-k)},$$ and where $$\gamma_r(\alpha) := \sum_{k=1}^r
\lambda_k^{(\alpha,r)}c_{r-k}(\alpha) + (-1)^r \zeta^{(r)}(\alpha).$$ Setting $$\mu_0^{(\alpha,r)} = \frac{1}{\Gamma(1-\alpha)},\qquad
\mu_k^{(\alpha,r)} =
-\frac{\nu_{r-k}^{(\alpha,r)}}{\Gamma(1-\alpha)}, \quad 1 \leq k
\leq r,$$ and $$c_r(\alpha) = - \frac{\gamma_r(\alpha)}{\Gamma(1-\alpha)},$$ the result follows.
For the calculation of the mean, the following refinement of a special case of Lemma \[lem:litoomz\] is required. It is a simple consequence of Theorem 1 of [@MR2000a:05015].
\[lem:li0omz\] When $\alpha < 0$, we have the singular expansion $$\operatorname{Li}_{\alpha,0}(z) = \Gamma(1-\alpha)(1-z)^{\alpha-1} -
\Gamma(1-\alpha)\frac{1-\alpha}2 (1-z)^{\alpha} +
O(|1-z|^{\alpha+1}) + \zeta(\alpha)[\alpha > -1] .$$
For the sake of completeness, we state a result of particular relevance from [@FFK].
\[thm:hadamard\] If $f$ and $g$ are amenable to singularity analysis and $$f(z) = O(|1-z|^a) \qquad\text{ and }\qquad g(z) = O(|1-z|^b)$$ as $z \to 1$, then $f \odot g$ is also amenable to singularity analysis. Furthermore
(a) If $a+b+1 < 0$ then $$f(z) \odot g(z) = O(|1-z|^{a+b+1}).$$\[item:hadamard1\]
(b) If $k < a+b+1 < k+1$ for some integer $-1 \leq k < \infty$, then $$f(z) \odot g(z) = \sum_{j=0}^k \frac{(-1)^j}{j!} (f \odot
g)^{(j)}(1) (1-z)^j + O(|1-z|^{a+b+1}).$$ \[item:hadamard2\]
(c) If $a+b+1$ is a nonnegative integer then $$f(z) \odot g(z) = \sum_{j=0}^{a+b} \frac{(-1)^j}{j!} (f \odot
g)^{(j)}(1) (1-z)^j + O(|1-z|^{a+b+1}|L(z)|).$$ \[item:hadamard3\]
The toll sequence $n^\alpha$ {#sec:toll-sequence-nalpha}
=====================
In this section we consider additive functionals when the toll function $b_n$ is $n^\alpha$ with $\alpha > 0$.
Asymptotics of the mean {#sec:asympotics-mean}
----------------------
The main result of this Section \[sec:asympotics-mean\] is a singular expansion for $A(z) \odot \operatorname{CAT}(z/4)$. The result is (\[eq:9\]), (\[eq:7\]), or (\[eq:8\]) according as $\alpha <
1/2$, $\alpha = 1/2$, or $\alpha > 1/2$.
Since $b_n = n^\alpha$, by definition $B=\operatorname{Li}_{-\alpha,0}$. Thus, by Lemma \[lem:li0omz\], $$B(z) = \Gamma(1+\alpha)(1-z)^{-\alpha-1} -
\Gamma(1+\alpha)\frac{\alpha+1}2(1-z)^{-\alpha} +
O(|1-z|^{-\alpha+1}) + \zeta(-\alpha)[\alpha < 1].$$ We will now use (\[eq:1\]) to obtain the asymptotics of the mean.
First we treat the case $\alpha < 1/2$. From the singular expansion $\operatorname{CAT}(z/4) = 2 + O(|1-z|^{1/2})$ as $z
\to 1$, we have, by part (\[item:hadamard2\]) of Theorem \[thm:hadamard\], $$B(z)\odot\operatorname{CAT}(z/4) = C_0 + O(|1-z|^{-\alpha+\tfrac12}),$$ where $$\label{eq:20}
C_0 := B(z)\odot\operatorname{CAT}(z/4) \Bigr\rvert_{z=1} = \sum_{n=1}^\infty n^\alpha
\frac{\beta_n}{4^n}.$$ We now know the constant term in the singular expansion of $B(z)\odot\operatorname{CAT}(z/4)$ at $z=1$, and henceforth we need only compute lower-order terms. The symbol $\bar{c}$ is used in the sequel to denote an unspecified (possibly zero) constant, possibly different at each appearance.
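Since $\beta_n/4^n \sim n^{-3/2}/\sqrt\pi$, the series defining $C_0$ converges (slowly) for $\alpha < 1/2$ and can be estimated directly. The sketch below is illustrative only: it updates the weights by the exact ratio $\beta_{n+1}/4^{n+1} = \frac{2n+1}{2(n+2)}\,\beta_n/4^n$ and adds an integral approximation for the truncated tail.

```python
import math

def C0(alpha, N=100_000):
    """Estimate C_0 = sum_{n >= 1} n^alpha * beta_n / 4^n for alpha < 1/2.

    The weight w_n = beta_n / 4^n is updated by the exact ratio
    (2n + 1) / (2(n + 2)); the truncated tail is approximated by
    the integral of n^(alpha - 3/2) / sqrt(pi) over [N, infinity).
    """
    w = 0.25  # beta_1 / 4 = 1/4
    s = 0.0
    for n in range(1, N + 1):
        s += n ** alpha * w
        w *= (2 * n + 1) / (2 * (n + 2))
    tail = N ** (alpha - 0.5) / ((0.5 - alpha) * math.sqrt(math.pi))
    return s + tail

print(C0(0.25))  # the constant C_0 at alpha = 1/4
```

The tail correction matters: the raw partial sum converges only at rate $N^{\alpha-1/2}$.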
Let us write $B(z) = L_1(z) + R_1(z)$, and $\operatorname{CAT}(z/4) = L_2(z) +
R_2(z)$, where $$\begin{aligned}
L_1(z) &:= \Gamma(1+\alpha)(1-z)^{-\alpha-1} -
\Gamma(1+\alpha)\frac{\alpha+1}2(1-z)^{-\alpha} + \zeta(-\alpha),\\
R_1(z) &:= B(z) - L_1(z) = O(|1-z|^{1-\alpha}),\\
L_2(z) &:= 2(1 - (1-z)^{1/2}),\\
R_2(z) &:= \operatorname{CAT}(z/4) - L_2(z) = O(|1-z|).\end{aligned}$$ We will analyze each of the four Hadamard products separately. First, $$\begin{aligned}
L_1(z)\odot{}L_2(z) &= -
2\Gamma(1+\alpha)\bigl[(1-z)^{-\alpha-1}\odot(1-z)^{1/2}\bigr]\\
& \quad + 2\Gamma(1+\alpha)\frac{\alpha+1}2
\bigl[(1-z)^{-\alpha}\odot(1-z)^{1/2}\bigr] + \bar{c}.\end{aligned}$$ By Theorem 4.1 of [@FFK], $$(1-z)^{-\alpha-1}\odot(1-z)^{1/2} = \bar{c} +
\frac{\Gamma(\alpha-\tfrac12)}{\Gamma(\alpha+1)\Gamma(-1/2)}
(1-z)^{-\alpha+\tfrac12} + O(|1-z|),$$ and $$(1-z)^{-\alpha}\odot(1-z)^{1/2} = \bar{c} + O(|1-z|)$$ by another application of part (\[item:hadamard2\]) of Theorem \[thm:hadamard\], this time with $k=1$. Hence $$L_1(z)\odot{}L_2(z) = \bigl[L_1(z)\odot{}L_2(z)\bigr]\Bigr\rvert_{z=1} +
\frac{\Gamma(\alpha-\tfrac12)}{\sqrt\pi}
(1-z)^{-\alpha+\tfrac12} + O(|1-z|).$$ The other three Hadamard products are easily handled as $$\begin{aligned}
L_1(z)\odot{}R_2(z) &= \bigl[L_1(z)\odot{}R_2(z)\bigr]\Bigr\rvert_{z=1} +
O(|1-z|^{-\alpha+1}), \\
L_2(z)\odot{}R_1(z) &= \bigl[L_2(z)\odot{}R_1(z)\bigr]\Bigr\rvert_{z=1} +
O(|1-z|), \\
R_1(z)\odot{}R_2(z) &= \bigl[R_1(z)\odot{}R_2(z)\bigr]\Bigr\rvert_{z=1} +
O(|1-z|).\end{aligned}$$ Putting everything together, we get $$B(z)\odot\operatorname{CAT}(z/4) = C_0 +
\frac{\Gamma(\alpha-\tfrac12)}{\sqrt\pi}
(1-z)^{-\alpha+\tfrac12} + O(|1-z|^{-\alpha+1}).$$ Using this in (\[eq:1\]), we get $$\label{eq:9}
A(z)\odot\operatorname{CAT}(z/4) = C_0(1-z)^{-1/2} +
\frac{\Gamma(\alpha-\tfrac12)}{\sqrt\pi} (1-z)^{-\alpha} +
O(|1-z|^{-\alpha+\tfrac12}).$$ To treat the case $\alpha \geq 1/2$ we make use of the estimate $$\label{eq:21}
(1-z)^{1/2} = \frac1{\Gamma(-1/2)}[ \operatorname{Li}_{3/2,0}(z) - \zeta(3/2) ]
+ O(|1-z|),$$ a consequence of Theorem 1 of [@MR2000a:05015], so that $$B(z) \odot (1-z)^{1/2} = \operatorname{Li}_{-\alpha,0}(z) \odot (1-z)^{1/2} =
\frac{1}{\Gamma(-1/2)} \operatorname{Li}_{\tfrac32-\alpha,0}(z) + R(z),$$ where $$\label{eq:55}
R(z) =
\begin{cases}
\bar{c} + O(|1-z|^{1-\alpha}) & 1/2 \leq \alpha < 1 \\
O(|L(z)|) & \alpha=1 \\
O(|1-z|^{1-\alpha}) & \alpha > 1.
\end{cases}$$ Hence $$B(z)\odot\operatorname{CAT}(z/4) = -\frac2{\Gamma(-1/2)}\operatorname{Li}_{\tfrac32-\alpha,0}(z) +
\widetilde{R}(z),$$ where $\widetilde{R}$, like $R$, satisfies (\[eq:55\]) (with a possibly different $\bar{c}$). When $\alpha=1/2$, this gives us $$B(z)\odot\operatorname{CAT}(z/4) = -\frac2{\Gamma(-1/2)}L(z) +
\bar{c} + O(|1-z|^{1/2}),$$ so that $$\label{eq:7}
A(z)\odot\operatorname{CAT}(z/4) =
\frac1{\sqrt\pi}(1-z)^{-1/2}L(z) +
\bar{c}(1-z)^{-1/2} + O(1).$$ For $\alpha > 1/2$ another singular expansion leads to the conclusion that $$\label{eq:8}
A(z)\odot\operatorname{CAT}(z/4) =
\frac{\Gamma(\alpha-\tfrac12)}{\sqrt\pi}(1-z)^{-\alpha} + \widehat{R}(z),$$ where $$\widehat{R}(z) =
\begin{cases}
O(|1-z|^{-\tfrac12}) & 1/2 < \alpha < 1 \\
O(|1-z|^{-\tfrac12}|L(z)|) & \alpha = 1 \\
O(|1-z|^{-\alpha+\tfrac12}) & \alpha > 1.
\end{cases}$$
We defer deriving the asymptotics of $a_n$ until Sections \[sec:higher-moments\]–\[sec:asymptotics-moments\].
Higher moments {#sec:higher-moments}
--------------
We will analyze separately the cases $0 < \alpha < 1/2$, $\alpha = 1/2$, and $\alpha > 1/2$. The reason for this will become evident soon; though the technique used to derive the asymptotics is induction in each case, the induction hypothesis is different for each of these cases.
### Small toll functions {#sec:small-toll-functions}
We start by restricting ourselves to tolls of the form $n^\alpha$ where $0 < \alpha < 1/2$. In this case we observe that by singularity analysis applied to (\[eq:9\]), $$\frac{a_n \beta_n}{4^n} = \frac{C_0}{\sqrt\pi} n^{-1/2} + O(n^{-3/2}) +
O(n^{\alpha-1}) = \frac{C_0}{\sqrt\pi} n^{-1/2} +
O(n^{\alpha-1}),$$ so $$a_n = n^{\tfrac32}[1 + O(n^{-1})][ C_0n^{-\tfrac12} +
O(n^{\alpha-1})] = C_0 n + O(n^{\alpha+\tfrac12}) = (C_0 + o(1)) (n+1).$$ The leading-order term of the mean $a_n = {\mathbf{E}\,X_n}$ is thus linear, irrespective of the value of $0 < \alpha < 1/2$ (though the coefficient $C_0$ does depend on $\alpha$). We next perform an approximate centering to bring out further dependence on $\alpha$.
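The linear growth of the mean can be observed numerically from the exact recurrence $a_n = n^\alpha + 2\sum_{j=0}^{n-1} p_{n,j}\,a_j$, where $p_{n,j} = \beta_j\beta_{n-1-j}/\beta_n$ is the standard Catalan splitting probability for the left-subtree size (an assumption of this sketch, not spelled out in the text above):

```python
import math

def mean_toll(alpha, N):
    """Exact means a_n of the additive functional with toll n^alpha,
    via a_n = n^alpha + 2 * sum_j p(n, j) * a_j, where the left-subtree
    size j has probability p(n, j) = beta_j * beta_{n-1-j} / beta_n
    (the standard Catalan splitting rule; an assumption of this sketch).
    Weights are kept as w_n = beta_n / 4^n to stay in floating range.
    """
    w = [1.0]  # w[0] = beta_0 / 4^0 = 1
    for n in range(N):
        w.append(w[-1] * (2 * n + 1) / (2 * (n + 2)))
    a = [0.0] * (N + 1)
    for n in range(1, N + 1):
        # p(n, j) = w[j] * w[n-1-j] / (4 * w[n]) sums to 1 over j = 0..n-1
        s = sum(w[j] * w[n - 1 - j] * a[j] for j in range(n))
        a[n] = n ** alpha + 2 * s / (4 * w[n])
    return a

a = mean_toll(0.25, 400)
print(a[200] / 200, a[400] / 400)  # ratios creep up toward the constant C_0
```

At $\alpha = 1/4$ the ratios $a_n/n$ approach $C_0$ only at the slow rate $O(n^{\alpha - 1/2})$, consistent with the error term above.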
Define $\widetilde{X}_n := X_n -
C_0(n+1)$, with $X_0 := 0$; $\tilde{\mu}_n(k) :=
{\mathbf{E}\,\widetilde{X}_n^k}$, with $\tilde{\mu}_n(0) = 1$ for all $n \geq
0$; and $ \hat{\mu}_n(k) :=
\beta_n\tilde{\mu}_n(k)/4^n $. Let $\widehat{M}_k(z)$ denote the ordinary generating function of $\hat{\mu}_n(k)$ in the argument $n$.
By an argument similar to the one that led to (\[eq:35\]), we get, for $k \geq 2$, $$\hat{\mu}_n(k) = \frac12 \sum_{j=1}^n \frac{\beta_{n-j}}{4^{n-j}}
\hat{\mu}_{j-1}(k) + \hat{r}_n(k), \qquad n \geq 1,$$ where $$\begin{aligned}
\hat{r}_n(k) &:= \frac14\sum_{j=1}^n \sum_{\substack{k_1+k_2+k_3=k\\
k_1,k_2 < k}} \binom{k}{k_1,k_2,k_3}
\hat{\mu}_{j-1}(k_1)
\hat{\mu}_{n-j}(k_2) b_n^{k_3}\\
&= \frac14 \sum_{\substack{k_1+k_2+k_3=k\\
k_1,k_2 < k}} \binom{k}{k_1,k_2,k_3} b_n^{k_3} \sum_{j=1}^n
\hat{\mu}_{j-1}(k_1) \hat{\mu}_{n-j}(k_2),\end{aligned}$$ for $n \geq 1$ and $\hat{r}_0(k) := \hat{\mu}_0(k) = \tilde{\mu}_0(k) =
(-1)^kC_0^k$. Let $\widehat{R}_k(z)$ denote the ordinary generating function of $\hat{r}_n(k)$ in the argument $n$. Then, mimicking (\[eq:1\]), $$\label{eq:4}
\widehat{M}_k(z) = \frac{\widehat{R}_k(z)}{\sqrt{1-z}}$$ with $$\label{eq:3}
\widehat{R}_k(z) = (-1)^kC_0^k + \sum_{\substack{k_1+k_2+k_3=k\\
k_1,k_2<k}} \binom{k}{k_1,k_2,k_3} \bigl( B(z)^{\odot k_3}
\bigr) \odot
\bigl[\frac{z}4 \widehat{M}_{k_1}(z)\widehat{M}_{k_2}(z)\bigr],$$ where for $k$ a nonnegative integer $$B(z)^{\odot k} := \underbrace{B(z)\odot\cdots\odot B(z)}_{k}.$$ Note that $\widehat{M}_0(z) = \operatorname{CAT}(z/4)$.
\[thm:0alpha14\] Let $\epsilon > 0$ be arbitrary, and define $$c :=
\begin{cases}
2\alpha - \epsilon & 0 < \alpha \leq 1/4 \\
1/2 & 1/4 < \alpha < 1/2.
\end{cases}$$ Then we have the singular expansion $$\widehat{M}_k(z) = C_k(1-z)^{-k(\alpha+\tfrac12) +\tfrac12} +
O(|1-z|^{-k(\alpha+\tfrac{1}{2}) + \tfrac12 + c}).$$ The $C_k$’s here are defined by the recurrence $$\label{eq:10}
C_k = \frac14 \sum_{j=1}^{k-1} \binom{k}{j} C_{j} C_{k-j} +
kC_{k-1} \frac{ \Gamma(k\alpha+\tfrac{k}2-1)}{\Gamma((k-1)\alpha +
\tfrac{k}2 -1)}, \quad k \geq 2; \quad C_1 =
\frac{\Gamma(\alpha-\tfrac12)}{\sqrt\pi}.$$
For $k=1$ the claim is true as shown in (\[eq:9\]) with $C_1$ as defined in (\[eq:10\]). We will now analyze each term in (\[eq:3\]) for $k \geq 2$.
One can analyze separately the cases $0 < \alpha \leq 1/4$ and $1/4
< \alpha < 1/2$. The proof technique in either case is induction. We shall treat here the case $0 < \alpha \leq 1/4$; the details in the other case can be found in [@FK-catalan-arXiv].
For notational convenience, define $\alpha' := \alpha+\tfrac12$. Also, observe that $$B(z)^{\odot{}k} = \operatorname{Li}_{-k\alpha,0}(z)
= \Gamma(1+k\alpha)(1-z)^{-k\alpha-1} + O(|1-z|^{-k\alpha-\epsilon})$$ by Lemma \[lem:litoomz\]. We shall find that the dominant terms in the sum in (\[eq:3\]) are those with (i) $k_3 = 0$, (ii) $(k_1,
k_2, k_3) = (k-1, 1, 0)$, and (iii) $(k_1, k_2, k_3) = (0, k-1,
1)$.
For this paragraph, consider the case that $k_1$ and $k_2$ are both nonzero. It follows from the induction hypothesis that $$\begin{aligned}
\frac{z}4 \widehat{M}_{k_1}(z)\widehat{M}_{k_2}(z) &= \frac14(1-(1-z))
\bigl[ C_{k_1}(1-z)^{-k_1\alpha'+\tfrac12} +
O(|1-z|^{-k_1\alpha'+\tfrac12+(2\alpha-\epsilon)})
\bigr]\\
&\times
\bigl[ C_{k_2}(1-z)^{-k_2\alpha'+\tfrac12} +
O(|1-z|^{-k_2\alpha'+\tfrac12+(2\alpha-\epsilon)})
\bigr]\\
&= \frac14C_{k_1}C_{k_2} (1-z)^{-(k_1+k_2)\alpha' + 1}
+ O(|1-z|^{-(k_1+k_2)\alpha'+1+(2\alpha-\epsilon)}).
\end{aligned}$$ If $k_3=0$ then the corresponding contribution to $\widehat{R}_k(z)$ is $$\frac14\binom{k}{k_1}
C_{k_1}C_{k_2} (1-z)^{-k\alpha' + 1}
+ O(|1-z|^{-k\alpha'+1+(2\alpha-\epsilon)}).$$ If $k_3 \ne 0$ we use Lemma \[lem:omztoli\] to express $$\begin{gathered}
\frac{z}4 \widehat{M}_{k_1}(z)\widehat{M}_{k_2}(z) =
\frac{C_{k_1}C_{k_2}}{4\Gamma((k_1+k_2)\alpha'-1)}
\operatorname{Li}_{-(k_1+k_2)\alpha'+2,0}(z)\\
+
O(|1-z|^{-(k_1+k_2)\alpha'+1+(2\alpha-\epsilon)}) -
\frac{C_{k_1}C_{k_2}}{4}
[(k_1+k_2)\alpha' <
2]\frac{\zeta(-(k_1+k_2)\alpha'+2)}
{\Gamma((k_1+k_2)\alpha'-1)}.
\end{gathered}$$ The corresponding contribution to $\widehat{R}_{k}(z)$ is then $\binom{k}{k_1,k_2,k_3}$ times: $$\frac{C_{k_1}C_{k_2}}{4\Gamma((k_1+k_2)\alpha'-1)}
\operatorname{Li}_{-k\alpha'+\tfrac{k_3}2+2,0}(z)
+ \operatorname{Li}_{-k_3\alpha,0}(z)
\odot O(|1-z|^{-(k_1+k_2)\alpha'+1+(2\alpha-\epsilon)}).$$ Now $k_3 \leq k-2$ so $-k\alpha'+\tfrac{k_3}2 + 2 < 1$. Hence the contribution when $k_3 \ne 0$ is $$O(|1-z|^{-k\alpha'+\tfrac{k_3}2+1}) =
O(|1-z|^{-k\alpha'+\tfrac32}) =
O(|1-z|^{-k\alpha'+1+(2\alpha-\epsilon)}).$$
Next we consider the case when $k_1$ is nonzero but $k_2=0$. In this case using the induction hypothesis we see that $$\begin{aligned}
\frac{z}4 \widehat{M}_{k_1}(z)\widehat{M}_{k_2}(z) &=
\frac{z}4 \operatorname{CAT}(z/4)\widehat{M}_{k_1}(z)\\
&= \frac{1 - (1-z)^{1/2}}{2} \bigl[
C_{k_1}(1-z)^{-k_1\alpha' + \tfrac12}\bigr]
+
O(|1-z|^{-k_1\alpha' + \tfrac12 +
(2\alpha-\epsilon)})\\
&= \frac{C_{k_1}}2 (1-z)^{-k_1\alpha'+\tfrac12} +
O(|1-z|^{-k_1\alpha' + \tfrac12 + (2\alpha-\epsilon)}).
\end{aligned}$$ Applying Lemma \[lem:omztoli\] to the last expression we get $$\begin{gathered}
\frac{z}4 \widehat{M}_{k_1}(z)\widehat{M}_{k_2}(z) =
\frac{C_{k_1}}{2\Gamma(k_1\alpha'-\tfrac12)}
\operatorname{Li}_{-k_1\alpha'+\tfrac32,0}(z) \\
+ O(|1-z|^{-k_1\alpha' + \tfrac12 + (2\alpha-\epsilon)})
- \frac{C_{k_1}}{2}[k_1\alpha'-\tfrac12 < 1]
\frac{\zeta(-k_1\alpha'+\tfrac32)}
{\Gamma(k_1\alpha'-\tfrac12)}.
\end{gathered}$$ The contribution to $\widehat{R}_{k}(z)$ is hence $\binom{k}{k_1}$ times: $$\frac{C_{k_1}}{2\Gamma(k_1\alpha'-\tfrac12)}
\operatorname{Li}_{-k\alpha'+\tfrac{k_3}2+\tfrac32,0}(z) +
\operatorname{Li}_{-k_3\alpha,0}(z) \odot O(|1-z|^{-k_1\alpha' +
\tfrac12 + (2\alpha-\epsilon)}).$$ Using the fact that $\alpha > 0$ and $k_3 \leq k-1$, we conclude that $-k\alpha'+\tfrac{k_3}2+\tfrac32 < 1$ so that, by Lemma \[lem:litoomz\] and part (\[item:hadamard1\]) of Theorem \[thm:hadamard\], the contribution is $$O(|1-z|^{-k\alpha'+\tfrac{k_3}2+\tfrac12}) =
O(|1-z|^{-k\alpha'+\tfrac32})$$ where the displayed equality holds unless $k_3=1$. When $k_3=1$ we get a corresponding contribution to $\widehat{R}_k(z)$ of $\binom{k}{k-1}$ times: $$\frac{C_{k-1}\Gamma(k\alpha'-1)}
{2\Gamma((k-1)\alpha'-\tfrac12)}
(1-z)^{-k\alpha'+1} +
O(|1-z|^{-k\alpha'+1+(2\alpha-\epsilon)}),$$ since for $k \geq 2$ we have $k\alpha' > 1 +
(2\alpha-\epsilon)$. The introduction of $\epsilon$ handles the case when $k\alpha' = 1 + 2\alpha$, which would otherwise, according to part (\[item:hadamard3\]) of Theorem \[thm:hadamard\], have introduced a logarithmic remainder. In either case the remainder is $ O(|1-z|^{-k\alpha'+1+(2\alpha-\epsilon)})$. The case when $k_2$ is nonzero but $k_1=0$ is handled similarly by exchanging the roles of $k_1$ and $k_2$.
The final contribution comes from the single term where both $k_1$ and $k_2$ are zero. In this case the contribution to $\widehat{R}_k(z)$ is, recalling that $\operatorname{CAT}(z/4) = 1 + \frac{z}4 \operatorname{CAT}^2(z/4)$, $$\label{eq:5}
\operatorname{Li}_{-k\alpha,0}(z) \odot [\frac{z}4 \operatorname{CAT}^2(z/4)] =
\operatorname{Li}_{-k\alpha,0}(z) \odot
(\operatorname{CAT}(z/4)-1)= \operatorname{Li}_{-k\alpha,0}(z) \odot \operatorname{CAT}(z/4).$$ Now, using Theorem 1 of [@MR2000a:05015], $$\begin{aligned}
\operatorname{CAT}(z/4) & = 2-2(1-z)^{1/2} + O(|1-z|) \\
& = 2 + 2\frac{\zeta(3/2)}{\Gamma(-1/2)}
- \frac{2}{\Gamma(-1/2)}\operatorname{Li}_{3/2,0}(z) +
O(|1-z|),
\end{aligned}$$ so that is $$-\frac2{\Gamma(-1/2)} \operatorname{Li}_{\tfrac32-k\alpha,0}(z) +
O(|1-z|^{1-k\alpha}) +
\begin{cases}
0 & 1-k\alpha < 0, \\
O(|1-z|^{-\epsilon}) & 1-k\alpha=0, \\
O(1) & 1-k\alpha > 0.
\end{cases}$$ When $\tfrac32-k\alpha < 1$ this is $O(|1-z|^{-k\alpha+\tfrac12})$; when $\tfrac32-k\alpha \geq 1$, it is $O(1)$. In either case we get a contribution which is $O(|1-z|^{-k\alpha'+1+(2\alpha-\epsilon)})$.
Hence $$\begin{aligned}
\widehat{R}_k(z) &= \Biggl[ \sum_{\substack{k_1+k_2=k\\k_1,k_2<k}}
\binom{k}{k_1} \frac{C_{k_1}C_{k_2}}4 + 2k \frac{C_{k-1}}2
\frac{\Gamma(k\alpha+\tfrac{k}2-1)}{\Gamma((k-1)\alpha+\tfrac{k}2-1)}
\Biggr] (1-z)^{-k\alpha'+1}\\
& \qquad\qquad + O(|1-z|^{-k\alpha'+1+(2\alpha-\epsilon)})\\
&= C_k(1-z)^{-k\alpha'+1} + O(|1-z|^{-k\alpha'
+ 1 + (2\alpha-\epsilon)}),
\end{aligned}$$ with the $C_k$’s defined by the recurrence (\[eq:10\]). Now using (\[eq:4\]), the claim follows.
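The recurrence (\[eq:10\]) is easy to iterate numerically, which also makes the limiting moment constants $C_k\sqrt\pi/\Gamma(k(\alpha+\tfrac12)-\tfrac12)$ concrete; a sketch (the function names and spot-checks are ours, not from the text). At $\alpha = 1$ the $k=1$ value reproduces the classical path-length mean constant $\sqrt\pi$ mentioned later in connection with the Airy distribution.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def C(k, alpha):
    """Constants C_k of the recurrence (eq. (10)):
    C_1 = Gamma(alpha - 1/2) / sqrt(pi) and, for k >= 2,
    C_k = (1/4) sum_{j=1}^{k-1} binom(k, j) C_j C_{k-j}
          + k C_{k-1} Gamma(k a + k/2 - 1) / Gamma((k-1) a + k/2 - 1)."""
    if k == 1:
        return math.gamma(alpha - 0.5) / math.sqrt(math.pi)
    s = sum(math.comb(k, j) * C(j, alpha) * C(k - j, alpha)
            for j in range(1, k)) / 4
    return s + k * C(k - 1, alpha) * (math.gamma(k * alpha + k / 2 - 1)
                                      / math.gamma((k - 1) * alpha + k / 2 - 1))

def moment(k, alpha):
    # k-th moment of the limit law: C_k sqrt(pi) / Gamma(k(alpha + 1/2) - 1/2)
    return C(k, alpha) * math.sqrt(math.pi) / math.gamma(k * (alpha + 0.5) - 0.5)

print(moment(1, 1.0))  # approximately sqrt(pi), the path-length constant
```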
### Large toll functions {#sec:large-toll-functions}
When $\alpha \geq 1/2$ there is no need to apply the centering techniques. Define ${\mu}_n(k) := {\mathbf{E}\,{X}_n^k}$ and $ \bar{\mu}_n(k) := \beta_n{\mu}_n(k)/4^n $. Let $\overline{M}_k(z)$ denote the ordinary generating function of $\bar{\mu}_n(k)$ in $n$. Observe that $\overline{M}_0(z) = \operatorname{CAT}(z/4)$. As earlier, conditioning on the key stored at the root, we get, for $k \geq 2$, $$\bar{\mu}_n(k) = \frac12 \sum_{j=1}^n \frac{\beta_{n-j}}{4^{n-j}}
\bar{\mu}_{j-1}(k) + \bar{r}_n(k),
\qquad n \geq 1,$$ where $$\bar{r}_n(k) :=
\frac14 \sum_{\substack{k_1+k_2+k_3=k\\
k_1,k_2 < k}} \binom{k}{k_1,k_2,k_3} b_n^{k_3} \sum_{j=1}^n
\bar{\mu}_{j-1}(k_1) \bar{\mu}_{n-j}(k_2),$$ for $n \geq 1$ and $\bar{r}_0(k) := \bar{\mu}_0(k) = \mu_0(k) = 0$. Let $\overline{R}_k(z)$ denote the ordinary generating function of $\bar{r}_n(k)$ in $n$. Then $$\overline{M}_k(z) = \frac{\overline{R}_k(z)}{\sqrt{1-z}}$$ and $$\label{eq:12}
\overline{R}_k(z) = \sum_{\substack{k_1+k_2+k_3=k\\
k_1,k_2<k}} \binom{k}{k_1,k_2,k_3} \bigl( B(z)^{\odot k_3}
\bigr) \odot
\bigl[ \frac{z}4 \overline{M}_{k_1}(z)\overline{M}_{k_2}(z)\bigr].$$
We can now state the result about the asymptotics of the generating function $\overline{M}_k$ when $\alpha > 1/2$. The case $\alpha=1/2$ will be handled subsequently, in Proposition \[thm:alpha=12\].
\[thm:12alpha\] Let $\epsilon > 0$ be arbitrary, and define $$\label{eq:56}
c :=
\begin{cases}
\alpha - \tfrac12 & \tfrac12 < \alpha < 1 \\
\tfrac12 - \epsilon & \alpha = 1 \\
\tfrac12 & \alpha > 1.
\end{cases}$$ Then the generating function $\overline{M}_k(z)$ of $\bar{\mu}_n(k)$ has the singular expansion $$\overline{M}_k(z) = C_k(1-z)^{-k(\alpha+\tfrac12)+\tfrac12} +
O(|1-z|^{-k(\alpha+\tfrac12)+\tfrac12+c})$$ for $k \geq 1$, where the $C_k$’s are defined by the recurrence (\[eq:10\]).
The proof is very similar to that of Proposition \[thm:0alpha14\]. We present a sketch. The reader is invited to compare the cases enumerated below to those in the earlier proof.
When $k=1$ the claim is true by (\[eq:8\]). We analyze the various terms in (\[eq:12\]) for $k \geq 2$, employing the notational convenience $\alpha' := \alpha+\tfrac12$.
When both $k_1$ and $k_2$ are nonzero then the contribution to $\overline{R}_k(z)$ is $$\frac14\binom{k}{k_1} C_{k_1}C_{k_2} (1-z)^{-k\alpha'+1} +
O(|1-z|^{-k\alpha'+c+1})$$ when $k_3=0$ and is $O(|1-z|^{-k\alpha'+c+1})$ otherwise.
When $k_1$ is nonzero and $k_2=0$ the contribution to $\overline{R}_k(z)$ is $$k \frac{C_{k-1} \Gamma(k\alpha'-1)}{2\Gamma((k-1)\alpha'-\tfrac12)}
(1-z)^{-k\alpha'+1} + O(|1-z|^{-k\alpha'+c+1})$$ when $k_3=1$ and $O(|1-z|^{-k\alpha'+c+1})$ otherwise. The case when $k_2$ is nonzero and $k_1=0$ is identical.
The final contribution comes from the single term when both $k_1$ and $k_2$ are zero. In this case we get a contribution of $O(|1-z|^{-k\alpha+\tfrac12})$ which is $O(|1-z|^{-k\alpha'+c+1})$. Adding all these contributions yields the desired result.
The result when $\alpha=1/2$ is as follows. Recall that $L(z) :=
\log((1-z)^{-1})$.
\[thm:alpha=12\] Let $\alpha=1/2$. In the notation of Proposition \[thm:12alpha\], $$\overline{M}_k(z) = (1-z)^{-k+\tfrac12} \sum_{l=0}^k C_{k,l}
L^{k-l}(z) + O(|1-z|^{-k+1-\epsilon})$$ for $k \geq 1$ and any $\epsilon > 0$, where the $C_{k,l}$’s are constants. The constant multiplying the leading-order term is given by $$\label{eq:11}
C_{k,0} = \frac{(2k-2)!}{2^{2k-2}(k-1)!\pi^{k/2}}.$$
We omit the proof, referring the interested reader to [@FK-catalan-arXiv].
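As a consistency check (ours, using the standard Legendre duplication formula): transferring the lead term $C_{k,0}(1-z)^{-k+\tfrac12}L^k(z)$ by singularity analysis and dividing by $\beta_n/4^n \sim n^{-3/2}/\sqrt\pi$ forces $C_{k,0}\sqrt\pi/\Gamma(k-\tfrac12) = \pi^{-k/2}$, matching the moment asymptotics (\[eq:17\]) below. Numerically:

```python
import math

def C_k0(k):
    # closed form (eq. (11)) for the lead constant when alpha = 1/2
    return (math.factorial(2 * k - 2)
            / (2 ** (2 * k - 2) * math.factorial(k - 1) * math.pi ** (k / 2)))

for k in range(1, 10):
    # Legendre duplication: Gamma(k - 1/2) = (2k-2)! sqrt(pi) / (4^(k-1) (k-1)!),
    # so C_{k,0} = Gamma(k - 1/2) * pi^(-(k+1)/2)
    alt = math.gamma(k - 0.5) * math.pi ** (-(k + 1) / 2)
    # and the lead moment constant collapses to pi^(-k/2), as in the asymptotics
    lead = C_k0(k) * math.sqrt(math.pi) / math.gamma(k - 0.5)
    print(k, C_k0(k), alt, lead)
```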
Asymptotics of moments {#sec:asymptotics-moments}
----------------------
For $0 < \alpha< 1/2 $, we have seen in Proposition \[thm:0alpha14\] that the generating function $\widehat{M}_k(z)$ of $ \hat{\mu}_n(k) = \beta_n\tilde{\mu}_n(k)/4^n $ has the singular expansion $$\widehat{M}_k(z) = C_k(1-z)^{-k(\alpha+\tfrac{1}2)+\tfrac12} +
O(|1-z|^{-k(\alpha+\tfrac{1}{2})+\tfrac12 + c}),$$ where $c := \min\{2\alpha-\epsilon,1/2\}$. By singularity analysis [@MR90m:05012], $$\frac{\beta_n\tilde{\mu}_n(k)}{4^n} =
C_k
\frac{n^{k(\alpha+\tfrac12)-\tfrac32}}{\Gamma(k(\alpha+\tfrac12)-\tfrac12)}
+ O(n^{k(\alpha+\tfrac{1}{2})-\tfrac32-c}).$$ Recall that $$\beta_n =
\frac{4^n}{\sqrt{\pi}n^{3/2}} \left(1 + O(\tfrac1n)\right),$$ so that $$\label{eq:15}
\tilde{\mu}_n(k)
= \frac{C_k\sqrt{\pi}}{\Gamma(k(\alpha+\tfrac{1}2)-\tfrac12)}
n^{k(\alpha+\tfrac{1}2)} +
O(n^{k(\alpha+\tfrac{1}2) - c}).$$
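The Stirling-type estimate for $\beta_n$ just used is easy to confirm numerically; the following sketch works in logarithms (via `math.lgamma`) to avoid enormous integers, with illustrative values of $n$:

```python
import math

def log_beta(n):
    # log of beta_n = binom(2n, n) / (n + 1), via log-gamma
    return math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1) - math.log(n + 1)

def log_asym(n):
    # log of the leading asymptotic 4^n / (sqrt(pi) * n^(3/2))
    return n * math.log(4) - 0.5 * math.log(math.pi) - 1.5 * math.log(n)

for n in (10, 100, 10_000):
    print(n, log_beta(n) - log_asym(n))  # gaps shrink like O(1/n)
```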
For $\alpha > 1/2$ a similar analysis using Proposition \[thm:12alpha\] yields $$\label{eq:16}
\mu_n(k) = \frac{C_k\sqrt{\pi}}{\Gamma(k(\alpha+\tfrac{1}2)-\tfrac12)}
n^{k(\alpha+\tfrac{1}2)} +
O(n^{k(\alpha+\tfrac{1}2) - c}),$$ with $c$ now as defined at (\[eq:56\]). Finally, when $\alpha=1/2$ the asymptotics of the moments are given by $$\label{eq:17}
\mu_n(k) =
\left( \frac{1}{\sqrt\pi} \right)^k(n\log{n})^k +
O(n^k(\log{n})^{k-1}).$$
The limiting distributions {#sec:limit-distr}
--------------------------
In Section \[sec:alpha-ne-12\] we will use our moment estimates (\[eq:15\]) and (\[eq:16\]) with the method of moments to derive limiting distributions for our additive functionals. The case $\alpha=1/2$ requires a somewhat delicate analysis, which we will present separately in Section \[sec:alpha=12\].
### $\alpha \ne 1/2$ {#sec:alpha-ne-12}
We first handle the case $0 < \alpha < 1/2$. (We assume this restriction until just before Proposition \[thm:limit\_law\_alpha\_ne\_12\].) We have $$\label{eq:39}
\tilde\mu_n(1) = {\mathbf{E}\,\widetilde}{X}_n = {\mathbf{E}\,[}X_n - C_0(n+1)]
= \frac{C_1\sqrt\pi}{\Gamma(\alpha)} n^{\alpha+\tfrac12} +
O(n^{\alpha+\tfrac12-c})$$ with $c := \min\{2\alpha-\epsilon,1/2\}$ and $$\tilde\mu_n(2) = {\mathbf{E}\,\widetilde}{X}_n^2 =
\frac{C_2\sqrt\pi}{\Gamma(2\alpha+\tfrac12)} n^{2\alpha+1} +
O(n^{2\alpha+1-c}).$$ So $$\label{eq:40}
{\mathbf{Var}\,X_n} = {\mathbf{Var}\,\widetilde{X}_n} = \tilde\mu_n(2) -
[\tilde\mu_n(1)]^2 = \sigma^2 n^{2\alpha+1} + O(n^{2\alpha+1-c}),$$ where $$\label{eq:46}
\sigma^2 := \frac{C_2\sqrt\pi}{\Gamma(2\alpha+\tfrac12)} -
\frac{C_1^2\pi}{\Gamma^2(\alpha)}.$$ We also have, for $k \geq 1$, $$\label{eq:41}
{\mathbf{E}\,\left}[ \frac{\widetilde{X}_n}{n^{\alpha+\tfrac12}} \right]^k
= \frac{\tilde{\mu}_n(k)}{n^{k(\alpha+\tfrac12)}}
= \frac{C_k\sqrt\pi}{\Gamma(k(\alpha+\tfrac12)-\tfrac12)} + O(n^{-c}).$$
The following lemma provides a sufficient bound on the moments facilitating the use of the method of moments.
\[lem:Ckbound\] Define $\alpha' := \alpha+\tfrac12$. There exists a constant $A
< \infty $ depending only on $\alpha$ such that $$\left|\frac{C_k}{k!}\right| \leq A^k k^{\alpha'k}$$ for all $k \geq 1$.
The proof is fairly similar to those of Propositions \[thm:0alpha14\], \[thm:12alpha\], and \[thm:shape\_moments\]. We omit the details, referring the reader to [@FK-catalan-arXiv].
It follows from Lemma \[lem:Ckbound\] and Stirling’s approximation that $$\label{eq:19}
\left|\frac{C_k\sqrt\pi}{k! \Gamma(k(\alpha+\tfrac12)-\tfrac12)}\right|
\leq B^k$$ for large enough $B$ depending only on $\alpha$. Using standard arguments [@MR95k:60001 Theorem 30.1] it follows that $X_n$ suitably normalized has a limiting distribution that is characterized by its moments. Before we state the result, we observe that the argument presented above can be adapted with minor modifications to treat the case $\alpha > 1/2$, with $\widetilde{X}_n$ replaced by $X_n$. We can now state a result for $\alpha \ne 1/2$. We will use the notation $\stackrel{\mathcal{L}}{\to}$ to denote convergence in law (or distribution).
\[thm:limit\_law\_alpha\_ne\_12\] Let $X_n$ denote the additive functional on Catalan trees induced by the toll sequence $(n^\alpha)_{n \geq 0}$. Define the random variable $Y_n$ as follows: $$Y_n :=
\begin{cases}
\displaystyle\frac{X_n-C_0(n+1)}{n^{\alpha+\tfrac12}} & 0 <
\alpha < 1/2,\\
\displaystyle\frac{X_n}{n^{\alpha+\tfrac12}} & \alpha > 1/2,
\end{cases}$$ where $$C_0 := \sum_{n=0}^\infty n^\alpha \frac{\beta_n}{4^n}, \qquad
\beta_n = \frac{1}{n+1} \binom{2n}{n}.$$ Then $$Y_n \stackrel{\mathcal{L}}{\to} Y;$$ here $Y$ is a random variable with the unique distribution whose moments are $$\label{eq:49}
{\mathbf{E}\,Y}^k = \frac{C_k
\sqrt\pi}{\Gamma(k(\alpha+\tfrac12)-\tfrac12)},$$ where the $C_k$’s satisfy the recurrence $$C_k = \frac{1}{4} \sum_{j=1}^{k-1} \binom{k}{j} C_j C_{k-j} + k
\frac{\Gamma(k\alpha + \tfrac{k}2 -1)}{\Gamma((k-1)\alpha +
\tfrac{k}2 - 1)} C_{k-1}, \quad k \geq 2; \quad C_1 =
\frac{\Gamma(\alpha-\tfrac12)}{\sqrt\pi}.$$
The case $\alpha=1/2$ is handled in Section \[sec:alpha=12\], leading to Proposition \[prop\_alpha\_12\], and a unified result for all cases is stated as Theorem \[thm:limit-dist\].
\[remark\_y\_alpha\_properties\] We now consider some properties of the limiting random variable $Y
\equiv Y(\alpha)$ defined by its moments at (\[eq:49\]) for $\alpha
\ne 1/2$.
(a) When $ \alpha=1 $, setting $\Omega_k := C_k/2$ we see immediately that $${\mathbf{E}\,Y}^k = \frac{-\Gamma(-1/2)}{\Gamma((3k-1)/2)}\Omega_k,$$ where $$2\Omega_k = \sum_{j=1}^{k-1} \binom{k}{j} \Omega_j \Omega_{k-j} + k(3k - 4)
\Omega_{k-1}, \qquad \Omega_1 = \frac12.$$ Thus $Y$ has the ubiquitous Airy distribution and we have recovered the limiting distribution of path length in Catalan trees [@MR92m:60057; @MR92k:60164]. The Airy distribution arises in many contexts including parking allocations, hash tables, trees, discrete random walks, mergesorting, etc.—see, for example, the introduction of [@MR2002j:68115] which contains numerous references to the Airy distribution.
(b) When $\alpha = 2$, setting $\eta := Y/\sqrt2$ and $a_{0,l} :=
2^{2l-1} C_l$, we see that $${\mathbf{E}\,\eta^l} = \frac{\sqrt\pi}{2^{(5l-2)/2} \Gamma((5l-1)/2)} a_{0,l},$$ where $$a_{0,l} = \frac{1}{2} \sum_{j=1}^{l-1} \binom{l}{j} a_{0,j}
a_{0,l-j} + l(5l-4)(5l-6), \qquad a_{0,1} = 1.$$ We have thus recovered the recurrence for the moments of the distribution $\mathcal{L}({\eta})$, which arises in the study of the Wiener index of Catalan trees [@janson:_wiener proof of Theorem 3.3 in Section 5].
(c) Consider the variance $\sigma^2$ defined at (\[eq:46\]).
(i) Figure \[fig:variance\], plotted using `Mathematica`, suggests that $\sigma^2$ is positive for all $\alpha > 0$. We will prove this fact in Theorem \[thm:limit-dist\].
![$\sigma^2$ of (\[eq:46\]) as a function of $\alpha$.[]{data-label="fig:variance"}](variance)
There is also numerical evidence that $\sigma^2$ is unimodal with $\max_{\alpha} \sigma^2(\alpha) \doteq 0.198946$ achieved at $\alpha \doteq
0.682607$. (Here $\doteq$ denotes approximate equality.)
(ii) As $\alpha \to \infty$, using Stirling’s approximation one can show that $\sigma^2 \sim (\sqrt{2}-1)\alpha^{-1}$.
(iii) As $\alpha \downarrow 0$, using a Laurent series expansion of $\Gamma(\alpha)$ we see that $\sigma^2 \sim 4(1-\log2) \alpha$.
(iv) Though the random variable $Y(\alpha)$ has been defined only for $\alpha \ne 1/2$, the variance $\sigma^2$ has a limit at $\alpha=1/2$: $$\label{eq:51}
\lim_{\alpha \to 1/2} \sigma^2(\alpha) = \frac{8 \log 2}{\pi} -
\frac{\pi}2.$$
(d) Figure \[fig:mc3\] shows the third central moment ${\mathbf{E}\,[}Y - {\mathbf{E}\,Y}]^3$ as a function of $\alpha$. The plot suggests that the third central moment is positive for each $\alpha
> 0$, which would also establish that $Y(\alpha)$ is not normal for any $\alpha > 0$. However we do not know a proof of this positive skewness. \[Of course, the law of $Y(\alpha)$ is not normal for any $\alpha > 1/2$, since its support is a subset of $[0,\infty)$.\]
![${\mathbf{E}\,[}Y - {\mathbf{E}\,Y}]^3$ of Proposition \[thm:limit\_law\_alpha\_ne\_12\] as a function of $\alpha$.[]{data-label="fig:mc3"}](mc3)
\[item:1\]
(e) When $\alpha=0$, the additive functional with toll sequence $(n^\alpha=1)_{n \geq 1}$ is $n$ for all trees with $n$ nodes. However, if one considers the random variable $\alpha^{-1/2}Y(\alpha)$ as $\alpha \downarrow 0$, using the moment recurrence and induction one can show that $\alpha^{-1/2}Y(\alpha)$ converges in distribution to the normal distribution with mean 0 and variance $4(1-\log2)$.
(f) Finally, if one considers the random variable $\alpha^{1/2}Y(\alpha)$ as $\alpha \to \infty$, again using the moment recurrence and induction we find that $\alpha^{1/2}Y(\alpha)$ converges in distribution to the unique distribution with $k$th moment $\sqrt{k!}$ for $k=1,2,\ldots$. In Remark \[remark:sqrkfact\] next, we will show that the limiting distribution has a bounded, infinitely smooth density on $(0,
\infty)$.
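The numerical claims about $\sigma^2(\alpha)$ in item (c) can be probed directly, since the recurrence gives the closed forms $C_1 = \Gamma(\alpha-\tfrac12)/\sqrt\pi$ and $C_2 = \tfrac12 C_1^2 + 2C_1\Gamma(2\alpha)/\Gamma(\alpha)$; a sketch (grid and tolerances are arbitrary choices):

```python
import math

def sigma2(alpha):
    """Variance constant of eq. (46):
    C_2 sqrt(pi) / Gamma(2 alpha + 1/2) - C_1^2 pi / Gamma(alpha)^2,
    for alpha > 0, alpha != 1/2 (Gamma has a pole at alpha = 1/2)."""
    c1 = math.gamma(alpha - 0.5) / math.sqrt(math.pi)
    c2 = c1 * c1 / 2 + 2 * c1 * math.gamma(2 * alpha) / math.gamma(alpha)
    return (c2 * math.sqrt(math.pi) / math.gamma(2 * alpha + 0.5)
            - c1 * c1 * math.pi / math.gamma(alpha) ** 2)

# positivity on a grid, the location of the maximum, and the limit at 1/2
print(max((sigma2(x / 100), x / 100) for x in range(10, 300) if x != 50))
print(sigma2(0.5001), 8 * math.log(2) / math.pi - math.pi / 2)
```

The first line should reproduce the stated maximum near $(0.199, 0.68)$; the second exhibits the removable singularity at $\alpha = 1/2$.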
\[remark:sqrkfact\] Let $Y$ be the unique distribution whose $k$th moment is $\sqrt{k!}$ for $k=1,2,\ldots$. Taking $Y^*$ to be an independent copy of $Y$ and defining $X := Y Y^*$, we see immediately that $X$ is Exponential with unit mean. It follows by taking logarithms that the distribution of $\log Y$ is a convolution square root of the distribution of $\log X$. In particular, the characteristic function $\phi$ of $\log Y$ has square equal to $\Gamma(1 + it)$ at $t \in (-\infty, \infty)$; we note in passing that $\Gamma(1 + it)$ is the characteristic function of $-G$, where $G$ has the Gumbel distribution. By exponential decay of $\Gamma(1 + it)$ as $t \to
\pm\infty$ and standard theory (see, e.g., [@MR42:5292 Chapter XV]), $\log Y$ has an infinitely smooth density on $(-\infty, \infty)$, and the density and each of its derivatives are bounded.
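The moment identity underlying the argument, $\mathbf{E}X^k = (\mathbf{E}Y^k)^2 = k!$, which are exactly the moments of a unit-mean Exponential, can be illustrated by a quick Monte Carlo check of the Exponential moments (seed and sample size are arbitrary):

```python
import math
import random

random.seed(42)
n = 200_000
samples = [random.expovariate(1.0) for _ in range(n)]
for k in (1, 2, 3):
    emp = sum(x ** k for x in samples) / n
    print(k, emp, math.factorial(k))  # empirical k-th moment vs exact k!
```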
So $Y$ has an infinitely smooth density on $(0, \infty)$. By change of variables, the density $f_Y$ of $Y$ satisfies $$f_Y(y) = \frac{f_{\log Y}(\log y)}{y}.$$ Clearly $f_Y(y)$ is bounded for $y$ *not* near 0. (We shall drop further consideration of derivatives.) To determine the behavior near 0, we need to know the behavior of $f_{\log Y}(\log y)/y$ as $y \to 0$. Using the Fourier inversion formula, we may equivalently study $$e^x f_{\log{Y}}(-x) = \frac{1}{2\pi} \int_{-\infty}^\infty
e^{(1+it)x} \phi(t)\, dt,$$ as $x \to \infty$. By an application of the method of steepest descents \[(7.2.11) in [@MR89d:41049], with $g_0=1$, $\beta =
1/2$, $w$ the identity map, $z_0=0$, and $\alpha=0$\], we get $$f_Y(y) \sim \frac{1}{\sqrt{\pi\log{(1/y)} }} \quad \text{as $y
\downarrow 0$}.$$ Hence $f_Y$ is bounded everywhere.
Using the Cauchy integral formula and simple estimates, it is easy to show that $$f_Y(y) = o( e^{-My} ) \quad \text{as $y \to \infty$}$$ for any $M < \infty$. Computations using the <span style="font-variant:small-caps;">WKB</span> method [@MR30:3694] suggest $$\label{eq:57}
f_Y(y) \sim (2/\pi)^{1/4} y^{1/2} \exp(-y^2/2) \quad \text{as $y
\to \infty$},$$ in agreement with numerical calculations using `Mathematica`. \[In fact, the right-hand side of (\[eq:57\]) appears to be a highly accurate approximation to $f_Y(y)$ for all $y
\geq 1$.\] Figure \[fig:sqrtfactdensity\] depicts the salient features of $f_Y$. In particular, note the steep descent of $f_Y(y)$ to 0 as $y \downarrow 0$ and the quasi-Gaussian tail.
![$f_Y$ of Remark \[remark:sqrkfact\].[]{data-label="fig:sqrtfactdensity"}](little1 "fig:"){width="2.4in"} ![$f_Y$ of Remark \[remark:sqrkfact\].[]{data-label="fig:sqrtfactdensity"}](little2 "fig:"){width="2.4in"}
### $\alpha = 1/2$ {#sec:alpha=12}
For $\alpha=1/2$, from (\[eq:17\]) we see immediately that $${\mathbf{E}\,\left}[ \frac{X_n}{n\log{n}} \right]^k = \left(
\frac1{\sqrt\pi} \right)^k + O\left( \frac{1}{\log n} \right).$$ Thus the random variable $X_n/(n\log n)$ converges in distribution to the degenerate random variable $1/\sqrt\pi$. To get a nondegenerate distribution, we carry out an analysis similar to the one that led to (\[eq:7\]), getting more precise asymptotics for the mean of $X_n$. The refinement of (\[eq:7\]) that we need is the following, whose proof we omit: $$A(z) \odot \operatorname{CAT}(z/4) = \frac{1}{\sqrt\pi} (1-z)^{-1/2} L(z) +
D_0(1-z)^{-1/2} + O(|1-z|^{\tfrac12-\epsilon}),$$ where $$\label{eq:58}
D_0 = \sum_{n=1}^\infty n^{1/2} [ 4^{-n} \beta_n - \frac{1}{\sqrt{\pi}}
n^{-3/2} ].$$ By singularity analysis this leads to $$\label{eq:43}
{\mathbf{E}\,X_n} = \frac{1}{\sqrt\pi} n \log n + D_1 n + O(n^\epsilon),$$ where $$\label{eq:59}
D_1 = \frac{1}{\sqrt{\pi}} ( 2 \log2 + \gamma + \sqrt{\pi} D_0 ).$$ Now analyzing the random variable $X_n -
\pi^{-1/2} n \log n$ in a manner similar to that of Section \[sec:small-toll-functions\] we obtain $$\label{eq:44}
{\mathbf{Var}\,[ X_n - \pi^{-1/2} n \log n ]} =
\left(
\frac{8}{\pi} \log 2 - \frac{\pi}{2}
\right)
n^2 + O(n^{\tfrac32 + \epsilon}).$$ Using (\[eq:43\]) and (\[eq:44\]) we conclude that $${\mathbf{E}\,\left}[
\frac{X_n - \pi^{-1/2} n \log n - D_1 n}{n}
\right]
= o(1)$$ and $$\label{eq:45}
{\mathbf{Var}\,
\left[
\frac{X_n - \pi^{-1/2} n \log n - D_1 n}{n}
\right]}
\longrightarrow \frac{8}{\pi} \log 2 - \frac{\pi}{2} = \lim_{\alpha \to 1/2}
\sigma^2(\alpha),$$ where $\sigma^2 \equiv \sigma^2(\alpha)$ is defined at (\[eq:46\]) for $\alpha \ne 1/2$. \[Recall Remark \[remark\_y\_alpha\_properties\].\]
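The constants $D_0$ and $D_1$ lend themselves to direct numerical evaluation: the summand in (\[eq:58\]) is $O(n^{-2})$, so the partial sums converge at rate $O(1/M)$. The following sketch (our own illustration in Python, not part of the paper; the value of the Euler–Mascheroni constant $\gamma$ is hard-coded) computes both constants.

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def D0(M):
    """Partial sum of D_0 = sum_{n>=1} n^{1/2} [4^{-n} beta_n - n^{-3/2}/sqrt(pi)],
    where beta_n is the n-th Catalan number.  The summand is O(n^{-2})."""
    s = 0.0
    ratio = 1.0  # binom(2n, n) / 4^n, updated iteratively to avoid big integers
    for n in range(1, M + 1):
        ratio *= (2 * n - 1) / (2 * n)
        s += math.sqrt(n) * (ratio / (n + 1) - 1.0 / (math.sqrt(math.pi) * n ** 1.5))
    return s

d0 = D0(200_000)
# D_1 = (2 log 2 + gamma + sqrt(pi) D_0) / sqrt(pi), as in (eq:59)
d1 = (2 * math.log(2) + GAMMA + math.sqrt(math.pi) * d0) / math.sqrt(math.pi)
```

The assertion used to check this sketch tests stability of the partial sums rather than a published value.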
It is possible to carry out a program similar to that of Section \[sec:higher-moments\] to derive asymptotics of higher order moments using singularity analysis. However we choose to sidestep this arduous, albeit mechanical, computation. Instead we will derive the asymptotics of higher moments using a somewhat more direct approach akin to the one employed in [@MR97f:68021]. The approach involves approximation of sums by Riemann integrals. To that end, define $$\label{eq:47}
\widetilde{X}_n := X_n - \pi^{-1/2}(n+1) \log (n+1) -
D_1(n+1), \quad\text{ and }\quad \hat\mu_n(k) := \frac{\beta_n}{4^{n+1}}
{\mathbf{E}\,\widetilde}{X}_n^k.$$ Note that $\widetilde{X}_0 = -D_1$, $\hat\mu_n(0) = \beta_n/4^{n+1}$, and $\hat\mu_0(k) = (-D_1)^k/4$. Then, in a now familiar manner, for $n
\geq 1$ we find $$\hat\mu_n(k) = 2 \sum_{j=1}^n \frac{\beta_{j-1}}{4^j}
\hat\mu_{n-j}(k) + \hat{r}_n(k),$$ where now we define $$\begin{gathered}
\hat{r}_n(k) := \sum_{\substack{k_1+k_2+k_3=k\\k_1,k_2 < k}}
\binom{k}{k_1,k_2,k_3} \sum_{j=1}^n \hat\mu_{j-1}(k_1)
\hat{\mu}_{n-j}(k_2)\\
\times \left[
\frac{1}{\sqrt\pi} ( j\log j + (n+1-j) \log (n+1-j) - (n+1) \log
(n+1) + \sqrt\pi n^{1/2})
\right]^{k_3}\end{gathered}$$ Passing to generating functions and then back to sequences one gets, for $n \geq 0$, $$\hat{\mu}_n(k) = \sum_{j=0}^n (j+1) \frac{\beta_j}{4^j}
\hat{r}_{n-j}(k).$$ Using induction on $k$, we can approximate $\hat{r}_n(k)$ and $\hat{\mu}_n(k)$ above by integrals and obtain the following result. We omit the proof, leaving it as an exercise for the ambitious reader.
\[prop\_alpha\_12\] Let $X_n$ be the additive functional induced by the toll sequence $(n^{1/2})_{n \geq 1}$ on Catalan trees. Define $\widetilde{X}_n$ as in [(\[eq:47\])]{}, with $D_1$ defined at (\[eq:59\]) and $D_0$ at (\[eq:58\]). Then $${\mathbf{E}\,[}
{\widetilde{X}_n}/{n}
]^k
= m_k + o(1) \text{ as $n \to \infty$},$$ where $m_0=1$, $m_1=0$, and, for $k \geq 2$, $$\begin{gathered}
\label{eq:48}
m_k = \frac{1}{4\sqrt\pi} \frac{\Gamma(k-1)}{\Gamma(k-\tfrac12)}\\
\times \left[
\sum_{\substack{k_1+k_2+k_3=k\\k_1,k_2 < k}}
\binom{k}{k_1,k_2,k_3} m_{k_1} m_{k_2}
\left(
\frac{1}{\sqrt\pi}
\right)^{k_3}
J_{k_1,k_2,k_3}
+ 4 \sqrt\pi k m_{k-1}
\right],
\end{gathered}$$ where $$J_{k_1,k_2,k_3} := \int_{0}^1 x^{k_1-\tfrac32}
(1-x)^{k_2-\tfrac32} [ x \log x + (1-x) \log (1-x) ]^{k_3} \, dx.$$ Furthermore $\widetilde{X}_n/(n+1) \stackrel{\mathcal{L}}{\to} Y$, where $Y$ is a random variable with the unique distribution whose moments are ${\mathbf{E}\,Y^k} = m_k$, $k \geq 0$.
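As a consistency check (ours, not from the paper) between (\[eq:48\]) and (\[eq:45\]): for $k=2$, only the composition $(k_1,k_2,k_3)=(0,0,2)$ survives in (\[eq:48\]) because $m_1=0$, giving $m_2 = J_{0,0,2}/(2\pi^2)$, and this should equal the variance limit $\sigma^2 = \frac8\pi\log2-\frac\pi2$. The sketch below evaluates $J_{0,0,2}$ by composite Simpson's rule after the substitution $x=\sin^2\theta$, which tames the endpoint singularities (the transformed integrand tends to $0$ at both ends).

```python
import math

def J002(n_intervals=200_000):
    """J_{0,0,2} = int_0^1 x^{-3/2}(1-x)^{-3/2} g(x)^2 dx,
    g(x) = x log x + (1-x) log(1-x).  With x = sin^2(theta),
    x^{-3/2}(1-x)^{-3/2} dx = 2 dtheta / (sin^2 cos^2)."""
    def f(theta):
        s2 = math.sin(theta) ** 2
        if s2 <= 0.0 or s2 >= 1.0:
            return 0.0  # limiting value at both endpoints
        g = s2 * math.log(s2) + (1.0 - s2) * math.log(1.0 - s2)
        return 2.0 * g * g / (s2 * (1.0 - s2))
    a, b = 0.0, math.pi / 2
    h = (b - a) / n_intervals
    total = f(a) + f(b)  # composite Simpson's rule; n_intervals is even
    for i in range(1, n_intervals):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

J = J002()
m2 = J / (2 * math.pi ** 2)                      # k = 2 case of (eq:48)
sigma2 = 8 / math.pi * math.log(2) - math.pi / 2  # variance limit (eq:45)
```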
### A unified result {#sec:unified-result}
The approach outlined in the preceding section can also be used for the case $\alpha \ne 1/2$. For completeness, we state the result for that case here (without proof).
\[prop:alpha\_ne\_12\_riemann\] Let $X_n$ be the additive functional induced by the toll sequence $(n^{\alpha})_{n \geq 1}$ on Catalan trees. Let $\alpha' := \alpha
+ \tfrac12$. Define $\widetilde{X}_n$ as $$\label{eq:52}
\widetilde{X}_n :=
\begin{cases}
X_n - C_0(n+1) -
\displaystyle\frac{ \Gamma(\alpha-\tfrac12
)}{\Gamma(\alpha)}(n+1)^{\alpha'}
& 0 < \alpha <
1/2, \\
X_n - \displaystyle\frac{ \Gamma(\alpha-\tfrac12
)}{\Gamma(\alpha)}(n+1)^{\alpha'} & \alpha > 1/2,
\end{cases}$$ where $$C_0 := \sum_{n=1}^\infty n^\alpha \frac{\beta_n}{4^n}.$$ Then, for $k=0,1,2,\ldots$, $${\mathbf{E}\,\left}[ {\widetilde{X}_n}/{n^{\alpha'}} \right]^k =
m_k + o(1) \text{ as $n \to \infty$},$$ where $m_0=1$, $m_1=0$, and, for $k \geq 2$, $$\begin{gathered}
\label{eq:42}
m_k = \frac{1}{4\sqrt\pi}
\frac{\Gamma(k\alpha'-1)}{\Gamma(k\alpha'-\tfrac12)}\\
\times \left[
\sum_{\substack{k_1+k_2+k_3=k\\k_1,k_2 < k}}
\binom{k}{k_1,k_2,k_3} m_{k_1} m_{k_2}
\left( \frac{\Gamma(\alpha-\tfrac12)}{\Gamma(\alpha)}\right)^{k_3}
J_{k_1,k_2,k_3}
+ 4 \sqrt\pi k m_{k-1}
\right],
\end{gathered}$$ with $$J_{k_1,k_2,k_3} := \int_{0}^1 x^{k_1\alpha'-\tfrac32}
(1-x)^{k_2\alpha'-\tfrac32} [ x^{\alpha'} + (1-x)^{\alpha'} - 1
]^{k_3} \, dx.$$ Furthermore, $\widetilde{X}_n/n^{\alpha'}
\stackrel{\mathcal{L}}{\to} Y$, where $Y$ is a random variable with the unique distribution whose moments are ${\mathbf{E}\,Y^k} = m_k$.
\[The reader may wonder why we have chosen to state Proposition \[prop:alpha\_ne\_12\_riemann\] using several instances of $n+1$, rather than $n$, in (\[eq:52\]). The reason is that use of $n+1$ is somewhat more natural in the calculations that establish the proposition.\]
In light of Propositions \[thm:limit\_law\_alpha\_ne\_12\], \[prop\_alpha\_12\], and \[prop:alpha\_ne\_12\_riemann\], there are a variety of ways to state a unified result. We state one such version here.
\[thm:limit-dist\] Let $X_n$ denote the additive functional induced by the toll sequence $(n^\alpha)_{n \geq 1}$ on Catalan trees. Then $$\frac{X_n - {\mathbf{E}\,X_n}}{\sqrt{{\mathbf{Var}\,X_n}}}
\stackrel{\mathcal{L}}{\to} W,$$ where the distribution of $W$ is described as follows:
\(a) For $\alpha \ne 1/2$, $$W = \frac{1}{\sigma}
\left(
Y - \frac{C_1\sqrt\pi}{\Gamma(\alpha)}
\right), \quad \text{ with } \quad \sigma^2 :=
\frac{C_2\sqrt\pi}{\Gamma(2\alpha+\tfrac12)} -
\frac{C_1^2\pi}{\Gamma^2(\alpha)} > 0,$$ where $Y$ is a random variable with the unique distribution whose moments are $${\mathbf{E}\,Y^k} =
\frac{C_k\sqrt\pi}{\Gamma(k(\alpha+\tfrac12)-\tfrac12)},$$ and the $C_k$’s satisfy the recurrence [(\[eq:10\])]{}.
\(b) For $\alpha=1/2$, $$W = \frac{Y}{\sigma}, \quad \text{ with } \quad \sigma^2 :=
\frac{8}{\pi} \log 2 - \frac{\pi}{2},$$ where $Y$ is a random variable with the unique distribution whose moments $m_k = {\mathbf{E}\,Y^k}$ are given by [(\[eq:48\])]{}.
Define $$W_n := \frac{X_n - {\mathbf{E}\,X_n}}{\sqrt{{\mathbf{Var}\,X_n}}}$$
\(a) Consider first the case $\alpha < 1/2$ and let $\alpha' := \alpha +
\tfrac12$. By , $$\label{eq:53}
{\mathbf{E}\,X_n} = C_0(n+1) + \frac{C_1\sqrt\pi}{\Gamma(\alpha)}
n^{\alpha'} + o(n^{\alpha'}).$$ Since $\widetilde{X}_n$ defined at (\[eq:52\]) and $X_n$ differ by a deterministic amount, ${\mathbf{Var}\,X_n}={\mathbf{Var}\,\widetilde{X}_n}$. Now by Proposition \[prop:alpha\_ne\_12\_riemann\], $$\label{eq:54}
{\mathbf{Var}\,\widetilde{X}_n} = {\mathbf{E}\,\widetilde{X}_n^2} -
({\mathbf{E}\,\widetilde{X}_n})^2 = (m_2 + o(1))n^{2\alpha'} - (m_1^2 +
o(1))n^{2\alpha'} = (m_2 + o(1))n^{2\alpha'}.$$ So $\sigma^2$ equals $m_2$ defined at (\[eq:42\]), namely, $$\frac{1}{4\sqrt\pi}\frac{\Gamma(2\alpha'-1)}{\Gamma(2\alpha'-\tfrac12)}
\left(\frac{\Gamma(\alpha-\tfrac12)}{\Gamma(\alpha)}\right)^2 J_{0,0,2}.$$ Thus to show $\sigma^2 > 0$ it is enough to show that $J_{0,0,2} > 0$. But $$J_{0,0,2} = \int_{0}^1 x^{-3/2} (1-x)^{-3/2} [ x^{\alpha'} +
(1-x)^{\alpha'} - 1]^2 \,dx,$$ which is clearly positive. Using (\[eq:53\]) and (\[eq:54\]), $$W_n = \frac{X_n - C_0(n+1) - \frac{C_1\sqrt\pi}{\Gamma(\alpha)}
n^{\alpha'} + o(n^{\alpha'})}{ (1 + o(1))\sigma
n^{\alpha'}},$$ so, by Proposition \[thm:limit\_law\_alpha\_ne\_12\] and Slutsky’s theorem [@MR95k:60001 Theorem 25.4], the claim follows.
The case $\alpha > 1/2$ follows similarly.
\(b) When $\alpha=1/2$, $${\mathbf{E}\,X_n} = \frac{1}{\sqrt\pi} n\log{n} + D_1 n + o(n)$$ by (\[eq:43\]) and $${\mathbf{Var}\,X_n} =
\left(
\frac{8}{\pi}\log2 - \frac{\pi}{2} + o(1)
\right)n^2$$ by (\[eq:44\]). The claim then follows easily from Proposition \[prop\_alpha\_12\] and Slutsky’s theorem.
The shape functional {#sec:shape-functional}
====================
We now turn our attention to the shape functional for Catalan trees. The shape functional is the cost induced by the toll function $b_n \equiv \log{n}$, $n \geq 1$. For background and results on the shape functional, we refer the reader to [@MR97f:68021] and [@MR99j:05171].
In the sequel we will improve on the mean and variance estimates obtained in [@MR97f:68021] and derive a central limit theorem for the shape functional for Catalan trees. The technique employed is singularity analysis followed by the method of moments.
Mean {#sec:shape_function_mean}
----
We use the notation and techniques of Section \[sec:asympotics-mean\] again. Observe that now $B(z) =
\operatorname{Li}_{0,1}(z)$, and we get the singular expansion $$\begin{gathered}
\operatorname{CAT}(z/4) = 2 - \frac2{\Gamma(-1/2)}[ \operatorname{Li}_{3/2,0}(z) -
\zeta(3/2)]\\
+ 2\left(1-\frac{\zeta(1/2)}{\Gamma(-1/2)} \right)(1-z) +
O(|1-z|^{3/2}).\end{gathered}$$ So $$B(z)\odot\operatorname{CAT}(z/4) = -\frac2{\Gamma(-1/2)} \operatorname{Li}_{3/2,1}(z)
+ \bar{c} + \bar{\bar{c}}(1-z) + O(|1-z|^{\tfrac32-\epsilon}),$$ where $ \bar{c} $ and $ \bar{\bar{c}} $ denote unspecified (possibly 0) constants. The constant term in the singular expansion of $ B(z)\odot\operatorname{CAT}(z/4) $ is already known to be $$C_0 = B(z)\odot\operatorname{CAT}(z/4) \Bigr\rvert_{z=1} = \sum_{n=1}^\infty
(\log{n}) \frac{\beta_n}{4^n}.$$ Now using the singular expansion of $ \operatorname{Li}_{3/2,1}(z) $, we get $$B(z) \odot \operatorname{CAT}(z/4) = C_0 - 2(1-z)^{1/2}L(z) -
2(2(1-\log(2))-\gamma)(1-z)^{1/2} + O(|1-z|),$$ so that $$\label{eq:22}
A(z)\odot\operatorname{CAT}(z/4) = C_0(1-z)^{-1/2} - 2L(z) -
2(2(1-\log2)-\gamma) + O(|1-z|^{1/2}).$$ Using singularity analysis and the asymptotics of the Catalan numbers we get that the mean $ a_n $ of the shape functional is given by $$\label{eq:24}
a_n = C_0(n+1) - 2\sqrt\pi n^{1/2} + O(1),$$ which agrees with the estimate in Theorem 3.1 of [@MR97f:68021] and improves the remainder estimate.
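The estimate (\[eq:24\]) can be verified numerically. Assuming the standard subtree decomposition of Catalan trees (for a tree of size $n \geq 1$ there are $\beta_{j-1}\beta_{n-j}$ trees whose left subtree has size $j-1$), the mean satisfies the exact recurrence $a_n = \log n + (2/\beta_n)\sum_{j=1}^{n}\beta_{j-1}\beta_{n-j}\,a_{j-1}$ with $a_0=0$. The sketch below (illustrative only; the integral tail estimate for $C_0$ is our own) computes $a_n$ exactly and checks that $(a_n - C_0(n+1))/\sqrt{n}$ is close to $-2\sqrt\pi \approx -3.545$.

```python
import math

N = 2000
# c[n] = beta_n / 4^n (beta_n = n-th Catalan number), computed iteratively
c = [1.0] * (N + 1)
ratio = 1.0  # binom(2n, n) / 4^n
for n in range(1, N + 1):
    ratio *= (2 * n - 1) / (2 * n)
    c[n] = ratio / (n + 1)

# exact mean via a_n = log n + (2/beta_n) sum_j beta_{j-1} beta_{n-j} a_{j-1};
# in c-units the split probability beta_{j-1}beta_{n-j}/beta_n is c[j-1]c[n-j]/(4 c[n])
a = [0.0] * (N + 1)
for n in range(1, N + 1):
    s = sum(c[j - 1] * c[n - j] * a[j - 1] for j in range(1, n + 1))
    a[n] = math.log(n) + s / (2.0 * c[n])

# C_0 = sum_{n>=1} log(n) beta_n/4^n: partial sum plus an integral tail estimate,
# using log(x) x^{-3/2}/sqrt(pi) as the asymptotic form of the summand
M = 10 ** 6
C0, r = 0.0, 1.0
for n in range(1, M + 1):
    r *= (2 * n - 1) / (2 * n)
    C0 += math.log(n) * r / (n + 1)
C0 += (2 * math.log(M) + 4) / (math.sqrt(math.pi) * math.sqrt(M))

coeff = (a[N] - C0 * (N + 1)) / math.sqrt(N)  # should be near -2*sqrt(pi)
```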
Second moment and variance {#sec:shape_functional_variance}
--------------------------
We now derive the asymptotics of the approximately centered second moment and the variance of the shape functional. These estimates will serve as the basis for the induction to follow. We will use the notation of Section \[sec:small-toll-functions\], centering the cost function as before by $C_0(n+1)$.
It is clear from (\[eq:22\]) that $$\label{eq:38}
\widehat{M}_1(z) = -2L(z) - 2(2(1-\log2)-\gamma) +
O(|1-z|^{1/2}),$$ and the case $ k=2 $ gives us $$\label{eq:34}
\widehat{R}_2(z) = C_0^2 + \operatorname{CAT}(z/4)\odot\operatorname{Li}_{0,2}(z) + 4\operatorname{Li}_{0,1}(z)
\odot [\frac{z}4 \operatorname{CAT}(z/4)
\widehat{M}_1(z)] + \frac{z}2\widehat{M}_1^2(z).$$ We analyze each of the terms in this sum. For the last term, observe that $z/2 \to 1/2$ as $z \to 1$, so that $$\frac{z}2 \widehat{M}_1^2(z) = 2L^2(z) + 4(2(1-\log2)-\gamma)L(z) +
2(2(1-\log2)-\gamma)^2 + O(|1-z|^{\tfrac12-\epsilon}),$$ the $ \epsilon $ introduced to avoid logarithmic remainders. The first term is easily seen to be $$\operatorname{CAT}(z/4) \odot \operatorname{Li}_{0,2}(z) = K + O(|1-z|^{\tfrac12-\epsilon}),$$ where $$K := \sum_{n=1}^\infty (\log{n})^2 \frac{\beta_n}{4^n}.$$ For the middle term, first observe that $$\frac{z}4 \operatorname{CAT}(z/4)\widehat{M}_1(z) = -L(z) - (2(1-\log2)-\gamma) +
(1-z)^{1/2}L(z) + O(|1-z|^{1/2})$$ and that $ L(z) = \operatorname{Li}_{1,0}(z) $. Thus the third term on the right in (\[eq:34\]) is 4 times: $$-\operatorname{Li}_{1,1}(z) + \bar{c} + O(|1-z|^{\tfrac12-2\epsilon}) = -\frac12
L^2(z) + \gamma L(z) + \bar{c} + O(|1-z|^{\tfrac12-\epsilon}).$$ \[The singular expansion for $ \operatorname{Li}_{1,1}(z) $ was obtained using the results at the bottom of p. 379 in [@MR2000a:05015]. We state it here for the reader’s convenience: $$\operatorname{Li}_{1,1}(z) = \frac12 L^2(z) - \gamma L(z) + \bar{c} + O(|1-z|),$$ where $\bar{c}$ is again an unspecified constant.\] Hence $$\widehat{R}_2(z) = 8(1-\log2)L(z) + \bar{c} +
O(|1-z|^{\tfrac12-\epsilon}),$$ which leads to $$\label{eq:23}
\widehat{M}_2(z) = 8(1-\log2)(1-z)^{-1/2}L(z) +
\bar{c}(1-z)^{-1/2} + O(|1-z|^{-\epsilon}).$$ We draw the attention of the reader to the cancellation of the ostensible lead-order term $ L^2(z) $. This kind of cancellation will appear again in the next section when we deal with higher moments.
Now using singularity analysis and estimates for the Catalan numbers we get $$\label{eq:25}
\tilde\mu_n(2) = 8(1-\log2)n\log{n} + \bar{c}n + O(n^{\tfrac12+\epsilon}).$$ Using (\[eq:24\]), $${\mathbf{Var}\,X_n} = \tilde\mu_n(2) - \tilde\mu_n(1)^2 =
8(1-\log2)n\log{n} + \bar{c}n + O(n^{\tfrac12+\epsilon}),$$ which agrees with Theorem 3.1 of [@MR97f:68021] (after a correction pointed out in [@MR99j:05171]) and improves the remainder estimate. In our subsequent analysis we will not need to evaluate the unspecified constant $\bar{c}$.
Higher moments {#sec:shape_function_higher-moments}
--------------
We now turn our attention to deriving the asymptotics of higher moments of the shape functional. The main result is as follows.
\[thm:shape\_moments\] Define $\widetilde{X}_n := X_n - C_0(n+1)$, with $X_0 := 0$; $\tilde{\mu}_n(k) := {\mathbf{E}\,\widetilde{X}_n^k} $, with $\tilde{\mu}_n(0) = 1$ for all $n \geq 0$; and $\hat{\mu}_n(k)
:= \beta_n\tilde{\mu}_n(k)/4^n $. Let $\widehat{M}_k(z)$ denote the ordinary generating function of $\hat{\mu}_n(k)$ in the argument $n$. For $ k \geq 2 $, $ \widehat{M}_k(z) $ has the singular expansion $$\widehat{M}_k(z) = (1-z)^{-\tfrac{k-1}2}
\sum_{j=0}^{\lfloor {k}/2 \rfloor} C_{k,j}
L^{\lfloor {k}/2 \rfloor-j}(z) +
O(|1-z|^{-\tfrac{k}2+1-\epsilon}),$$ with $$C_{2l,0} = \frac14 \sum_{j=1}^{l-1} \binom{2l}{2j}
C_{2j,0}C_{2l-2j,0}, \qquad C_{2,0} = 8(1-\log2).$$
The proof is by induction. For $ k=2 $ the claim is true by (\[eq:23\]). We note that the claim is *not* true for $ k=1 $. Instead, recalling (\[eq:38\]), $$\label{eq:28}
\widehat{M}_1(z) = -2L(z) -
2(2(1-\log2)-\gamma) + O(|1-z|^{1/2}).$$ For the induction step, let $ k \geq 3 $. We will first get the asymptotics of $ \widehat{R}_k(z) $, defined as before but now with $B(z) = \operatorname{Li}_{0,1}(z)$. In order to do that we will obtain the asymptotics of each term in the defining sum. We remind the reader that we are only interested in the form of the asymptotic expansion of $ \widehat{R}_k(z) $ and the coefficient of the lead-order term when $ k $ is even. This allows us to “define away” all other constants, deferring their determination until the need arises.
For this paragraph suppose that $ k_1 \geq2 $ and $ k_2 \geq 2 $. Then by the induction hypothesis $$\begin{gathered}
\label{eq:26}
\frac{z}4 \widehat{M}_{k_1}(z)\widehat{M}_{k_2}(z) = \frac14
(1-z)^{-\tfrac{k_1+k_2}2+1}
\sum_{l=0}^{\lfloor {k_1}/2 \rfloor +
\lfloor {k_2}/2 \rfloor} A_{k_1,k_2,l}
L^{\lfloor {k_1}/2 \rfloor + \lfloor {k_2}/2
\rfloor - l}(z)\\
{}+ O(|1-z|^{-\tfrac{k_1+k_2}2+\tfrac32-\epsilon}),
\end{gathered}$$ where $ A_{k_1,k_2,0}= C_{k_1,0}C_{k_2,0} $. (a) If $ k_3=0 $ then $ k_1+k_2=k $ and the corresponding contribution to $
\widehat{R}_k(z) $ is given by $$\begin{gathered}
\label{eq:27}
\frac14 \binom{k}{k_1} (1-z)^{-\tfrac{k}2+1}\\
\times
\sum_{l=0}^{\lfloor {k_1}/2 \rfloor +
\lfloor ({k-k_1})/2 \rfloor} A_{k_1,k-k_1,l}
L^{\lfloor {k_1}/2 \rfloor + \lfloor ({k-k_1})/2 \rfloor -
l}(z) + O(|1-z|^{-\tfrac{k}2+\tfrac32-\epsilon}).
\end{gathered}$$ Observe that if $ k $ is even and $ k_1 $ is odd the highest power of $ L(z) $ in (\[eq:27\]) is $ \lfloor {k}/2 \rfloor-1 $. In all other cases the highest power of $ L(z) $ in (\[eq:27\]) is $ \lfloor {k}/2 \rfloor $. (b) If $ k_3 \ne 0 $ then we use Lemma \[lem:omztoli\] to express (\[eq:26\]) as a linear combination of $$\left\{
\operatorname{Li}_{-\tfrac{k_1+k_2}2+2,l}(z) \right\}_{l=0}^{\lfloor {k_1}/2 \rfloor
+ \lfloor {k_2}/2 \rfloor}$$ with a remainder that is $O(|1-z|^{-\tfrac{k_1+k_2}2+\tfrac32-\epsilon})$. When we take the Hadamard product of such a term with $ \operatorname{Li}_{0,k_3}(z) $ we will get a linear combination of $$\left\{
\operatorname{Li}_{-\tfrac{k_1+k_2}2+2,l+k_3}(z) \right\}_{l=0}^{\lfloor
{k_1}/2 \rfloor + \lfloor {k_2}/2 \rfloor}$$ and a smaller remainder. Such terms are all $ O(|1-z|^{-\tfrac{k_1+k_2}2+1-\epsilon}) $, so that the contribution is $ O(|1-z|^{-\tfrac{k}2+\tfrac32-\epsilon}) $.
Next, consider the case when $ k_1=1 $ and $ k_2 \geq 2 $. Using the induction hypothesis and (\[eq:28\]) we get $$\label{eq:29}
\begin{split}
\frac{z}4 \widehat{M}_{k_1}(z) \widehat{M}_{k_2}(z) = -\frac12
(1-z)^{-\tfrac{k_2-1}2} \sum_{j=0}^{\lfloor {k_2}/2 \rfloor+1}
B_{k_2,j} L^{\left\lfloor \tfrac{k_2}2 \right\rfloor + 1 - j}(z) \\
{}+ O(|1-z|^{-\tfrac{k_2}2+1-2\epsilon}),
\end{split}$$ with $ B_{k_2,0} = C_{k_2,0} $. (a) If $ k_3=0 $ then $
k_2=k-1 $ and the corresponding contribution to $ \widehat{R}_k(z) $ is given by $$\label{eq:30}
-\frac{k}{2} (1-z)^{-\tfrac{k}2+1}
\sum_{j=0}^{\lfloor ({k-1})/{2} \rfloor+1} B_{k-1,j}
L^{\left\lfloor \tfrac{k-1}2 \right\rfloor+1-j}(z) +
O(|1-z|^{-\tfrac{k}2+\tfrac32-2\epsilon}).$$ (b) If $ k_3 \ne 0 $ then Lemma \[lem:omztoli\] can be used once again to express (\[eq:29\]) in terms of generalized polylogarithms, whence an argument similar to that at the end of the preceding paragraph yields that the contribution to $\widehat{R}_k(z)$ from such terms is $ O(|1-z|^{-\tfrac{k_2-1}2-\epsilon}) $, which is $ O(|1-z|^{-\tfrac{k}2+\tfrac32-\epsilon}) $. The case when $k_1
\geq 2$ and $k_2 =1$ is handled symmetrically.
When $ k_1=k_2=1 $ then $
(z/4)\widehat{M}_{k_1}(z)\widehat{M}_{k_2}(z) $ is $ O(|1-z|^{-\epsilon}) $ and when one takes the Hadamard product of this term with $ \operatorname{Li}_{0,k_3}(z) $ the contribution will be $ O(|1-z|^{-2\epsilon}) $.
Now consider the case when $ k_1=0 $ and $ k_2 \geq 2 $. Since $ \widehat{M}_0(z) = \operatorname{CAT}(z/4) $, we have $$\label{eq:31}
\frac{z}4 \widehat{M}_{k_1}(z)\widehat{M}_{k_2}(z) = \frac12
(1-z)^{-\tfrac{k_2-1}2} \sum_{j=0}^{\lfloor {k_2}/2 \rfloor}
C_{k_2,j} L^{\lfloor {k_2}/2 \rfloor-j}(z) +
O(|1-z|^{-\tfrac{k_2}2+1-\epsilon}).$$ By Lemma \[lem:omztoli\] this can be expressed as a linear combination of $$\left\{ \operatorname{Li}_{-\tfrac{k_2-1}2+1,j}(z)
\right\}_{j=0}^{\lfloor {k_2}/2 \rfloor}$$ with a $ O(|1-z|^{-\tfrac{k_2}2+1-\epsilon}) $ remainder. When we take the Hadamard product of such a term with $ \operatorname{Li}_{0,k_3}(z) $ we will get a linear combination, call it $S(z)$, of $$\left\{ \operatorname{Li}_{-\tfrac{k_2-1}2+1,j+k_3}(z)
\right\}_{j=0}^{\lfloor {k_2}/2 \rfloor}$$ with a remainder of $ O(|1-z|^{-\tfrac{k_2}2 + 1 - 2\epsilon})
$, which is $ O(|1-z|^{-\tfrac{k}2 + \tfrac32 - 2\epsilon}) $ unless $ k_2=k-1 $. When $ k_2 = k-1 $, by Lemma \[lem:omztoli\] the constant multiplying the lead-order term $ \operatorname{Li}_{-\tfrac{k}{2}+2,\lfloor\tfrac{k-1}{2} \rfloor + 1}(z) $ in $S(z)$ is $\frac{C_{k-1,0}}2
\mu_0^{(-\tfrac{k}{2}+2,\lfloor\tfrac{k-1}{2}\rfloor)}$. When we take the Hadamard product of this term with $ \operatorname{Li}_{0,k_3}(z) $ we get a lead-order term of $$\frac{C_{k-1,0}}2
\mu_0^{(-\tfrac{k}{2}+2,\lfloor\tfrac{k-1}{2}\rfloor)}
\operatorname{Li}_{-\tfrac{k}{2}+2,\lfloor\tfrac{k-1}{2}\rfloor+1}(z).$$ Now we use Lemma \[lem:litoomz\] and the observation that $ \lambda_0^{(\alpha,r)}\mu_0^{(\alpha,s)}=1 $ to conclude that the contribution to $\widehat{R}_k(z)$ from the term with $ k_1=0 $ and $ k_2=k-1 $ is $$\label{eq:32}
\frac{k}2 (1-z)^{-\tfrac{k}2+1}
\sum_{j=0}^{\lfloor\frac{k-1}{2}\rfloor+1} D_{k,j}
L^{\lfloor\frac{k-1}{2}\rfloor+1-j}(z)
+ O(|1-z|^{-\tfrac{k}2+\tfrac{3}2-\epsilon}),$$ with $ D_{k,0}=C_{k-1,0} $. Notice that the lead order from this contribution is precisely that from (\[eq:30\]) but with opposite sign; thus the two contributions cancel each other to lead order. The case $k_2 = 0$ and $k_1 \geq 2$ is handled symmetrically.
The last two cases are $ k_1=0 $, $ k_2=1 $ (or vice-versa) and $ k_1=k_2=0 $. The contribution from these cases can be easily seen to be $ O(|1-z|^{-\tfrac{k}2+\tfrac32-2\epsilon}) $.
We can now deduce the asymptotic behavior of $ \widehat{R}_k(z)
$. The three contributions are (\[eq:27\]), (\[eq:30\]), and (\[eq:32\]), with only (\[eq:27\]) (in net) contributing a term of the form $ (1-z)^{-\tfrac{k}{2}+1} L^{\lfloor {k}/{2}
\rfloor}(z) $ when $k$ is even. The coefficient of this term when $ k $ is even is given by $$\frac14 \sum_{\substack{0 < k_1 < k\\k_1\text{ even}}}
\binom{k}{k_1} C_{k_1,0}C_{k-k_1,0}.$$ Finally we can sum up the rest of the contribution, define $ C_{k,j} $ appropriately, and conclude the result.
A central limit theorem {#sec:shape_functional_centr-limit-theor}
-----------------------
Proposition \[thm:shape\_moments\] and singularity analysis allow us to get the asymptotics of the moments of the “approximately centered” shape functional. Using arguments identical to those in Section \[sec:asymptotics-moments\], it is clear that for $ k \geq 2 $ $$\label{eq:33}
\tilde{\mu}_n(k) = \frac{C_{k,0}\sqrt\pi}{\Gamma(\tfrac{k-1}2)}
n^{{k}/{2}} [\log{n}]^{\lfloor{k}/{2}\rfloor} +
O(n^{{k}/{2}} [\log{n}]^{\lfloor{k}/{2}\rfloor-1}).$$ This and the asymptotics of the mean derived in Section \[sec:shape\_function\_mean\] give us, for $ k \geq 1 $, $$E\left[ \frac{\tilde{X}_n}{\sqrt{n\log{n}}} \right]^{2k} \to
\frac{C_{2k,0}\sqrt\pi}{\Gamma(k-\tfrac12)}, \qquad E\left[
\frac{\tilde{X}_n}{\sqrt{n\log{n}}} \right]^{2k-1} = o(1)$$ as $n \to \infty$. The recurrence for $ C_{2k,0} $ can be solved easily to yield, for $k
\geq 1$, $$C_{2k,0} = \frac{(2k)!(2k-2)!}{2^k2^{2k-2}k!(k-1)!} \sigma^{2k},$$ where $ \sigma^2 := 8(1-\log2) $. Then using the identity $$\frac{\Gamma(k-\tfrac12)}{\sqrt\pi} =
\left[2^{2k-2} \frac{(k-1)!}{(2k-2)!}\right]^{-1}$$ we get $$\frac{C_{2k,0}\sqrt\pi}{\Gamma(k-\tfrac12)} = \frac{(2k)!}{2^k
k!}\sigma^{2k}.$$ It is clear now that both the “approximately centered” and the normalized shape functional are asymptotically normal.
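The identity just derived is easy to verify mechanically. The sketch below (illustrative only) runs the recurrence for $C_{2k,0}$ from Proposition \[thm:shape\_moments\] and checks that $C_{2k,0}\sqrt\pi/\Gamma(k-\tfrac12)$ equals the Gaussian moment $\frac{(2k)!}{2^k k!}\sigma^{2k}$ for small $k$.

```python
import math

sigma2 = 8 * (1 - math.log(2))
C = {2: sigma2}  # C_{2,0} = 8(1 - log 2)
for l in range(2, 8):
    # recurrence from the proposition: C_{2l,0} = (1/4) sum_j binom(2l,2j) C_{2j,0} C_{2l-2j,0}
    C[2 * l] = 0.25 * sum(math.comb(2 * l, 2 * j) * C[2 * j] * C[2 * l - 2 * j]
                          for j in range(1, l))

def gaussian_moment(k):
    # (2k)!/(2^k k!) sigma^{2k}: the 2k-th moment of N(0, sigma^2)
    return math.factorial(2 * k) / (2 ** k * math.factorial(k)) * sigma2 ** k
```

For instance, $C_{4,0} = \tfrac14\binom42\sigma^4 = \tfrac32\sigma^4$, and $\tfrac32\sigma^4\sqrt\pi/\Gamma(\tfrac32) = 3\sigma^4$, the fourth Gaussian moment.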
\[thm:shape\_clt\] Let $ X_n $ denote the shape functional, induced by the toll sequence $(\log{n})_{n \geq 1}$, for Catalan trees. Then $$\frac{X_n-C_0(n+1)}{\sqrt{n\log{n}}} \stackrel{\mathcal{L}}{\to}
\mathcal{N}(0,\sigma^2) \quad\text{ and }\quad
\frac{X_n - {\mathbf{E}\,X_n} }{\sqrt{ {\mathbf{Var}\,X_n} }}
\stackrel{\mathcal{L}}{\to} \mathcal{N}(0,1),$$ where $$C_0 := \sum_{n=1}^\infty (\log{n}) \frac{\beta_n}{4^n}, \qquad
\beta_n = \frac1{n+1}\binom{2n}{n},$$ and $ \sigma^2 := 8(1-\log2) $.
Concerning numerical evaluation of the constant $C_0$, see the end of Section 5.2 in [@FFK].
Sufficient conditions for asymptotic normality {#sec:suff-cond-asympt}
==============================================
In this speculative final section we briefly examine the behavior of a general additive functional $X_n$ induced by a given “small” toll sequence $(b_n)$. We have seen evidence \[Remark \[remark\_y\_alpha\_properties\](\[item:1\])\] that if $(b_n)$ is the “large” toll sequence $n^{\alpha}$ for any fixed $\alpha
> 0$, then the limiting behavior is non-normal. When $b_n = \log n$ (or $b_n = n^\alpha$ and $\alpha \downarrow 0$), the (limiting) random variable is normal. Where is the interface between normal and non-normal asymptotics? We have carried out arguments similar to those leading to Propositions \[prop\_alpha\_12\] and \[prop:alpha\_ne\_12\_riemann\] (see also [@MR97f:68021]) that suggest a sufficient condition for asymptotic normality, but our “proof” is somewhat heuristic, and further technical conditions on $(b_n)$ may be required. Nevertheless, to inspire further work, we present our preliminary indications.
We assume that $b_n \equiv b(n)$, where $b(\cdot)$ is a function of a nonnegative real argument. Suppose that $x^{-3/2} b(x)$ is (ultimately) nonincreasing and that $x b'(x)$ is slowly varying at infinity. Then $${\mathbf{E}\,X_n} = C_0 (n+1) - (1 + o(1)) 2 \sqrt{\pi} n^{3/2} b'(n),$$ where $$C_0 = \sum_{n=1}^\infty b_n \frac{\beta_n}{4^n}.$$ Furthermore, $${\mathbf{Var}\,X_n} \sim 8 (1 - \log 2) [n b'(n)]^2 n\log n,$$ and $$\frac{X_n - C_0 (n + 1)}{n b'(n) \sqrt{n \log n}}
\stackrel{\mathcal{L}}{\to} \mathcal{N}(0,\sigma^2), \text{ where }
\sigma^2 = 8(1 - \log 2).$$ This asymptotic normality can also be stated in the form $$\frac{X_n - {\mathbf{E}\,X_n} }{\sqrt{{\mathbf{Var}\,X_n}}}
\stackrel{\mathcal{L}}{\to} \mathcal{N}(0,1).$$
**Acknowledgments.** We thank two anonymous referees for helpful comments.
[^1]: Research for both authors supported by NSF grants DMS-9803780 and DMS-0104167, and by The Johns Hopkins University’s Acheson J. Duncan Fund for the Advancement of Research in Statistics. Research for the second author supported by NSF grant 0049092 and carried out primarily while this author was affiliated with what is now the Department of Applied Mathematics and Statistics at The Johns Hopkins University.
---
abstract: 'For a lattice $L$, let ${\textup{Princ}(L)}$ denote the ordered set of principal congruences of $L$. In a pioneering paper, G. Grätzer characterized the ordered sets ${\textup{Princ}(L)}$ of finite lattices $L$; here we do the same for countable lattices. He also showed that each bounded ordered set $H$ is isomorphic to ${\textup{Princ}(L)}$ of a bounded lattice $L$. We prove a related statement: if an ordered set $H$ with least element is the union of a chain of principal ideals, then $H$ is isomorphic to ${\textup{Princ}(L)}$ of some lattice $L$.'
address: 'University of Szeged, Bolyai Institute. Szeged, Aradi vértanúk tere 1, HUNGARY 6720'
author:
- Gábor Czédli
date: 'May 7, 2013'
title: The ordered set of principal congruences of a countable lattice
---
[^1]
Introduction {#introsection}
============
Historical background
---------------------
A classical theorem of Dilworth [@dilwcollect] states that each finite distributive lattice is isomorphic to the congruence lattice of a finite lattice. Since this first result, the *congruence lattice representation problem* has attracted many researchers, and dozens of papers belonging to this topic have been written. The history of this problem was marked by milestones in Huhn [@huhn] and Schmidt [@schmidtidnl], reached its summit in Wehrung [@wehrung] and Růžička [@ruzicka], and was summarized in Grätzer [@grbypict]; see also Czédli [@czgrepres] for some additional, recent references.
In [@ggprincl], Grätzer started an analogous new topic of Lattice Theory. Namely, for a lattice $L$, let ${\textup{Princ}(L)}={{\langle {\textup{Princ}(L)},\subseteq\rangle}}$ denote the ordered set of principal congruences of $L$. A congruence is *principal* if it is generated by a pair ${{\langle a,b\rangle}}$ of elements. Ordered sets and lattices with 0 and 1 are called *bounded*. Clearly, if $L$ is a bounded lattice, then ${\textup{Princ}(L)}$ is a bounded ordered set. The pioneering theorem in Grätzer [@ggprincl] states the converse: each bounded ordered set $P$ is isomorphic to ${\textup{Princ}(L)}$ for an appropriate bounded lattice $L$. Actually, the lattice he constructed is of length 5. Up to isomorphism, he also characterized finite bounded ordered sets as the ${\textup{Princ}(L)}$ of finite lattices $L$.
Terminology
-----------
Unless otherwise stated, we follow the standard terminology and notation of Lattice Theory; see, for example, Grätzer [@GGLT]. Our terminology for weak perspectivity is the classical one taken from Grätzer [@grglt]. *Ordered sets* are nonempty sets equipped with *orderings*, that is, with reflexive, transitive, antisymmetric relations. Note that an ordered set is often called a *partially ordered set*, which is a rather long expression, or a *poset*, which is not tolerated by spell-checkers, or an *order*, which has several additional meanings.
Our result
----------
Motivated by Grätzer’s theorem mentioned above, our goal is to prove the following theorem. A set $X$ is *countable* if it is finite or countably infinite, that is, if $|X|\leq \aleph_0$. An ordered set $P$ is *directed* if each two-element subset of $P$ has an upper bound in $P$. Nonempty down-sets of $P$ and subsets ${\mathord\downarrow c}={\{x\in P: x\leq c\}}$ are called *order ideals* and *principal $($order$)$ ideals*, respectively.
\[thmmain\]
\[thmmaina\] An ordered set $P={\langle P;\leq\rangle}$ is isomorphic to ${\textup{Princ}(L)}$ for some *countable* lattice $L$ if and only if $P$ is a countable directed ordered set with zero.
\[thmmainb\] If $P$ is an ordered set with zero and it is the union of a chain of principal ideals, then there exists a lattice $L$ such that $P\cong {\textup{Princ}(L)}$.
An alternative way of formulating the condition in part \[thmmainb\] is to say that $0\in P$ and there is a cofinal chain in $P$. For a pair ${{\langle a,b\rangle}}\in L^2$ of elements, the least congruence collapsing $a$ and $b$ is denoted by ${\textup{con}(a,b)}$ or ${\textup{con}_{L}(a,b)}$. As was pointed out in Grätzer [@ggprincl], the rule $${\textup{con}(a_i,b_i)}\subseteq
{\textup{con}(a_1\wedge b_1\wedge a_2\wedge b_2,a_1\vee b_1\vee a_2\vee b_2)}\,
\text{ for }i\in{\{1,2\}}
\label{prdirT}$$ implies that ${\textup{Princ}(L)}$ is always a directed ordered set with zero. Therefore, the first part of the theorem will easily be concluded from the second one. To compare part \[thmmainb\] of our theorem to Grätzer’s result, note that a bounded ordered set $P$ is always a union of a (one-element) chain of principal ideals. Of course, no *bounded* lattice $L$ can represent $P$ by $P\cong{\textup{Princ}(L)}$ if $P$ has no greatest element.
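To make ${\textup{con}(a,b)}$ concrete for small finite lattices: the principal congruence is the least equivalence containing ${{\langle a,b\rangle}}$ that is compatible with $\wedge$ and $\vee$, and it can be computed by iterated closure. The Python sketch below (our illustration; the divisor lattice of $12$ and the union-find bookkeeping are not from the paper) computes the blocks of ${\textup{con}(1,2)}$ in the divisor lattice of $12$, where collapsing $1$ and $2$ forces $3\equiv 6$ (join both sides with $3$) but nothing more.

```python
from math import gcd

L = [1, 2, 3, 4, 6, 12]                   # the divisor lattice of 12
meet = gcd
def join(x, y): return x * y // gcd(x, y)  # lcm

def con(a, b):
    """Blocks of the principal congruence con(a, b): the least equivalence
    collapsing a and b that is compatible with meet and join."""
    parent = {x: x for x in L}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry:
            return False
        parent[rx] = ry
        return True
    union(a, b)
    changed = True
    while changed:  # close under the lattice operations until stable
        changed = False
        for x in L:
            for y in L:
                if x < y and find(x) == find(y):
                    for z in L:
                        if union(meet(x, z), meet(y, z)): changed = True
                        if union(join(x, z), join(y, z)): changed = True
    blocks = {}
    for x in L:
        blocks.setdefault(find(x), set()).add(x)
    return sorted(sorted(blk) for blk in blocks.values())
```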
Method
------
First of all, we need the key idea, illustrated by Figure \[fig4\], from Grätzer [@ggprincl].
Second, we feel that without the quasi-coloring technique developed in Czédli [@czgrepres], the investigations leading to this paper would have not even begun. As opposed to colorings, the advantage of quasi-colorings is that we have joins (equivalently, the possibility of generation) in their range sets. This allows us to decompose our construction into a sequence of elementary steps. Each step is accompanied by a quasiordering. If several steps, possibly infinitely many steps, are carried out, then the join of the corresponding quasiorderings gives a satisfactory insight into the construction. Even if it is the “coloring versions” of some lemmas that we only use at the end, it is worth allowing their quasi-coloring versions since this way the proofs are simpler and the lemmas become more general.
Third, the idea of using appropriate auxiliary structures is taken from Czédli [@112gen]. Their role is to accumulate all the assumptions our induction steps will need.
Auxiliary statements and structures
===================================
The rest of the paper is devoted to the proof of Theorem \[thmmain\].
Quasi-colorings and auxiliary structures
----------------------------------------
A *quasiordered set* is a structure ${\langle H;\nu\rangle}$ where $H\neq {\varnothing}$ is a set and $\nu\subseteq H^2$ is a reflexive, transitive relation on $H$. Quasiordered sets are also called preordered ones. Instead of ${{\langle x,y\rangle}}\in \nu$, we usually write $x {\mathrel{\leq_\nu}}y$. Also, we write $x{\mathrel{<_\nu}}y$ and $x{\mathrel{\parallel_\nu}}y$ for the conjunction of $x{\mathrel{\leq_\nu}}y$ and $y\not{\mathrel{\leq_\nu}}x$, and that of ${{\langle x,y\rangle}}\notin\nu$ and ${{\langle y,x\rangle}}\notin \nu$, respectively. If $g\in H$ and $x{\mathrel{\leq_\nu}}g$ for all $x\in H$, then $g$ is a *greatest element* of $H$; *least elements* are defined dually. They are not necessarily unique; if they are, then they are denoted by $1_H$ and $0_H$. If for all $x,y\in H$, there exists a $z\in H$ such that $x{\mathrel{\leq_\nu}}z$ and $y{\mathrel{\leq_\nu}}z$, then ${\langle H;\nu\rangle}$ is a *directed* quasiordered set. Given $H\neq {\varnothing}$, the set of all quasiorderings on $H$ is denoted by ${\textup{Quord}(H)}$. It is a complete lattice with respect to set inclusion. For $X\subseteq H^2$, the least quasiorder on $H$ that includes $X$ is denoted by ${\textup{quo}(X)}$. We write ${\textup{quo}(x,y)}$ instead of ${\textup{quo}({\{{{\langle x,y\rangle}}\}})}$.
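For finite $H$, the closure operator ${\textup{quo}(\cdot)}$ is simply the reflexive-transitive closure of $X$ and can be computed with a Warshall-style pass; a minimal sketch (illustrative only):

```python
def quo(H, X):
    """Least quasiorder on H containing X: the reflexive closure of X,
    followed by a Warshall-style transitive closure."""
    nu = {(a, a) for a in H} | set(X)
    for k in H:          # intermediate element
        for i in H:
            for j in H:
                if (i, k) in nu and (k, j) in nu:
                    nu.add((i, j))
    return nu
```

For example, on $H=\{1,2,3\}$ the quasiorder generated by $\{{\langle 1,2\rangle},{\langle 2,3\rangle}\}$ consists of the three diagonal pairs together with ${\langle 1,2\rangle}$, ${\langle 2,3\rangle}$, and ${\langle 1,3\rangle}$.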
![The lattice $N_6$ \[fig1\]](czg-princl1.eps)
Let $L$ be a lattice. For $x,y\in L$, ${{\langle x,y\rangle}}$ is called an *ordered pair* of $L$ if $x\leq y$. The set of ordered pairs of $L$ is denoted by ${{\textup{Pairs}^{\leq}(L)}}$. Note that we shall often use that ${{\textup{Pairs}^{\leq}(S)}}\subseteq {{\textup{Pairs}^{\leq}(L)}}$ holds for sublattices $S$ of $L$; this explains why we work with ordered pairs rather than intervals. Note also that ${{\langle a,b\rangle}}$ is an ordered pair iff $b/a$ is a quotient; however, the concept of ordered pairs fits previous work with quasi-colorings better.
By a *quasi-colored lattice* we mean a structure ${{\mathcal L}}={\langle L;\gamma,H,\nu\rangle}$ where $L$ is a lattice, ${\langle H;\nu\rangle}$ is a quasiordered set, $\gamma\colon {{\textup{Pairs}^{\leq}(L)}}\to H$ is a surjective map, and for all ${{\langle u_1,v_1\rangle}},{{\langle u_2,v_2\rangle}}\in {{\textup{Pairs}^{\leq}(L)}}$,
- if $\gamma({{\langle u_1,v_1\rangle}}){\mathrel{\leq_\nu}}\gamma({{\langle u_2,v_2\rangle}})$, then ${\textup{con}(u_1,v_1)}\leq {\textup{con}(u_2,v_2)}$;
- if ${\textup{con}(u_1,v_1)}\leq {\textup{con}(u_2,v_2)}$, then $\gamma({{\langle u_1,v_1\rangle}}){\mathrel{\leq_\nu}}\gamma({{\langle u_2,v_2\rangle}})$.
This concept is taken from Czédli [@czgrepres]. Prior to [@czgrepres], the name “coloring” was used for surjective maps onto antichains satisfying in Grätzer, Lakser, and Schmidt [@grlaksersch], and for surjective maps onto antichains satisfying in Grätzer [@grbypict page 39]. However, in [@czgrepres], [@grlaksersch], and [@grbypict], $\gamma({{\langle u,v\rangle}})$ was defined only for covering pairs $u\prec v$. To emphasize that ${\textup{con}(u_1,v_1)}$ and ${\textup{con}(u_2,v_2)}$ belong to the ordered set ${\textup{Princ}(L)}$, we usually write ${\textup{con}(u_1,v_1)}\leq {\textup{con}(u_2,v_2)}$ rather than ${\textup{con}(u_1,v_1)}\subseteq {\textup{con}(u_2,v_2)}$. It follows easily from , , and the surjectivity of $\gamma$ that if ${\langle L;\gamma,H,\nu\rangle}$ is a quasi-colored lattice, then ${\langle H;\nu\rangle}$ is a directed quasiordered set with least element; possibly with many least elements.
We say that a quadruple ${\langle a_1,b_1,a_2,b_2\rangle}\in L^4$ is an *$N_6$-quadruple* of $L$ if $$b_1\wedge b_2=a_1\wedge a_2,\quad a_1<b_1,\quad a_2<b_2,\quad a_1\vee a_2=b_1\vee b_2,$$ and ${\{b_1\wedge b_2,\, a_1,\, b_1,\, a_2,\, b_2,\, a_1\vee a_2\}}$ is a six-element sublattice, see Figure \[fig1\]. If, in addition, $b_1\wedge b_2=0_L$ and $a_1\vee a_2=1_L$, then we speak of a *spanning $N_6$-quadruple*. An $N_6$-quadruple of $L$ is called a *strong $N_6$-quadruple* if it is a spanning one and, for all $i\in{\{1,2\}}$ and $x\in L$, $$\begin{aligned}
0_L < x \leq b_i &{\mathrel{\Longrightarrow}}x\vee a_{3-i}=1_L, \text{ and} \label{labsa}\\
1_L>x \geq a_i&{\mathrel{\Longrightarrow}}x\wedge b_{3-i}=0_L\text.\label{labsb}
\end{aligned}$$ For a subset $X$ of $L^2$, the least lattice congruence including $X$ is denoted by ${\textup{con}(X)}$. In particular, ${\textup{con}({\{{{\langle a,b\rangle}}\}})}={\textup{con}(a,b)}$. The least and the largest congruence of $L$ are denoted by $\Delta_L$ and ${\nabla_{\kern -2pt L}}$, respectively.
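To make the definition above concrete, here is a small Python check (our illustration; representing a finite lattice by meet and join tables is a choice made for the sketch) of whether a quadruple is an $N_6$-quadruple, with the hexagon of Figure \[fig1\] as a test case:

```python
def sublattice(meet, join, S):
    """S is closed under the meet and join of the ambient lattice."""
    return all(meet[x][y] in S and join[x][y] in S for x in S for y in S)

def is_n6_quadruple(meet, join, a1, b1, a2, b2):
    """<a1,b1,a2,b2> is an N6-quadruple: a1 < b1, a2 < b2,
    b1^b2 = a1^a2, a1 v a2 = b1 v b2, and the six elements
    {b1^b2, a1, b1, a2, b2, a1 v a2} form a six-element sublattice."""
    bot, top = meet[b1][b2], join[a1][a2]
    strict = lambda x, y: x != y and meet[x][y] == x   # x < y
    S = {bot, a1, b1, a2, b2, top}
    return (strict(a1, b1) and strict(a2, b2)
            and meet[a1][a2] == bot and join[b1][b2] == top
            and len(S) == 6 and sublattice(meet, join, S))

def lattice_tables(elems, leq):
    """Meet and join tables from a lattice order given as a set of pairs."""
    def extreme(bounds, greatest):
        # greatest (or least) element of the set of bounds
        for m in bounds:
            if all((z, m) in leq if greatest else (m, z) in leq
                   for z in bounds):
                return m
    below = lambda x: {z for z in elems if (z, x) in leq}
    above = lambda x: {z for z in elems if (x, z) in leq}
    meet = {x: {y: extreme(below(x) & below(y), True) for y in elems}
            for x in elems}
    join = {x: {y: extreme(above(x) & above(y), False) for y in elems}
            for x in elems}
    return meet, join
```

In the hexagon, $\langle a_1,b_1,a_2,b_2\rangle$ passes the check (and is even spanning), while degenerate quadruples fail it.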
Now, we are in the position to define the key concept we need. In the present paper, by an *auxiliary structure* we mean a structure $${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}\label{auxstr}$$ such that the following eight properties hold:
- ${\langle L;\gamma, H,\nu\rangle}$ is a quasi-colored lattice;
- the quasiordered set ${\langle H;\nu\rangle}$ has exactly one least element, $0_H$, and at most one greatest element;
- $\delta $ and ${\varepsilon}$ are $H\to L$ maps such that $\delta (0_H)={\varepsilon}(0_H)$ and, for all $x\in H\setminus{\{0_H\}}$, $\delta (x)\prec{\varepsilon}(x)$; note that we often write $a_x$ and $b_x$ instead of $\delta (x)$ and ${\varepsilon}(x)$, respectively;
- for all $p\in H$, $\gamma({{\langle \delta (p),{\varepsilon}(p)\rangle}})=p$;
- if $p$ and $q$ are distinct elements of $H\setminus{\{0_H\}}$, then ${\langle \delta (p),{\varepsilon}(p), \delta (q),{\varepsilon}(q)\rangle}$ is an $N_6$-quadruple of $L$;
- if $p,q\in H$, $p{\mathrel{\parallel_\nu}}q$, and ${\langle \delta (p),{\varepsilon}(p), \delta (q),{\varepsilon}(q)\rangle}$ is a spanning $N_6$-quadruple, then it is a strong $N_6$-quadruple of $L$;
- if $L$ is a bounded lattice and $|L|>1$, then $$\begin{aligned}
\bigl|\bigl\{x\in L:{} &0_L\prec x\prec 1_L\text{ and, for all elements }y \text{ in }\cr
&L\setminus\{0_L,1_L,x\},\,\, x\text{ is a complement of }y\bigr\}\bigr|\geq 3;\end{aligned}$$
- if $1_H\in H$ and $|L|>1$, then ${\textup{con}\bigl( {\{{{\langle \delta (p),{\varepsilon}(p)\rangle}}: p\in H\text{ and } p\neq 1_H \}}\bigr)} \neq {\nabla_{\kern -2pt L}}$.
It follows from that ${\{\delta(x),{\varepsilon}(x)\}}={\{a_x, b_x\}}$ is disjoint from ${\{0_L,1_L\}}$, provided $|H|\geq 3$ and $x\in H\setminus{\{0_H\}}$.
If ${\langle H;\nu\rangle}$ is a quasiordered set, then $\Theta_\nu=\nu\cap\nu^{-1}$ is an equivalence relation, and the definition ${[x]\Theta_\nu}\leq {[y]\Theta_\nu}\iff x{\mathrel{\leq_\nu}}y$ turns the quotient set $H/\Theta_\nu$ into an ordered set ${\langle H/\Theta_\nu;\leq\rangle}$. The importance of our auxiliary structures is first shown by the following lemma.
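Continuing our toy Python illustration (our code, not part of the paper's argument), the passage from ${\langle H;\nu\rangle}$ to the ordered set ${\langle H/\Theta_\nu;\leq\rangle}$ can be sketched as follows:

```python
def quotient_poset(H, nu):
    """Blocks of Theta_nu = nu ∩ nu^{-1} and the induced order
    [x] <= [y]  iff  x <=_nu y; the result is a partial order."""
    theta = {(x, y) for (x, y) in nu if (y, x) in nu}
    block = {x: frozenset(y for y in H if (x, y) in theta) for x in H}
    leq = {(block[x], block[y]) for (x, y) in nu}
    return set(block.values()), leq
```

On $H=\{p,q,r\}$ with $\nu$ generated by $p\leq q$, $q\leq p$, and $p\leq r$, the blocks are $\{p,q\}$ and $\{r\}$, and the induced relation is antisymmetric even though $\nu$ is not.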
\[impclM\] If ${{\mathcal L}}$ in is an auxiliary structure, then the ordered set ${\textup{Princ}(L)}$ is isomorphic to ${\langle H/\Theta_\nu;\leq\rangle}$. In particular, if $\nu$ is an ordering, then ${\textup{Princ}(L)}$ is isomorphic to the ordered set ${\langle H;\nu\rangle}$.
Clearly, ${\textup{Princ}(L)}={\{{\textup{con}(x,y)}: {{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}(L)}}\}}$. Consider the map ${\varphi}\colon {\textup{Princ}(L)}\to H/\Theta_\nu$, defined by ${\textup{con}(x,y)}\mapsto {[\gamma({{\langle x,y\rangle}})]\Theta_\nu}$. If ${\textup{con}(x_1,y_1)}={\textup{con}(x_2,y_2)}$, then ${[\gamma({{\langle x_1,y_1\rangle}})]\Theta_\nu} = {[\gamma({{\langle x_2,y_2\rangle}})]\Theta_\nu}$ follows from . Hence, ${\varphi}$ is a well-defined map. It is surjective since so is $\gamma$. Finally, it is bijective and an order isomorphism by and .
We say that an auxiliary structure ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ is *countable* if $|L|\leq\aleph_0$ and $|H|\leq\aleph_0$. Next, we give an example.
\[exegy\] Let $H$ be a set, finite or infinite, such that $0_H,1_H\in H$ and $|H|\geq 3$. Let us define $\nu={\textup{quo}\bigl(({\{0_H\}}\times H) \cup (H\times {\{1_H\}})\bigr)}$; note that ${\langle H;\nu\rangle}$ is an ordered set (actually, a modular lattice of length 2). Let $L$ be the lattice depicted in Figure \[fig2\], where ${\{h,g,p,q,\dots\}}$ is the set $H\setminus{\{0_H,1_H\}}$. For $x\prec y$, $\gamma({{\langle x,y\rangle}})$ is defined by the labeling of edges. Note that, in Figure \[fig2\], we often write $0$ and $1$ rather than $0_H$ and $1_H$ to save space. Let $\gamma({{\langle x,x\rangle}})=0_H$ for $x\in L$, and let $\gamma({{\langle x,y\rangle}})=1_H$ for $x<y$ if $x\not\prec y$. Let $\delta (0_H)={\varepsilon}(0_H)=x_0$. For $s\in H\setminus{\{0_H\}}$, we define $\delta (s)=a_s$ and ${\varepsilon}(s)=b_s$. Now, obviously, ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ is an auxiliary structure. If $|H|\leq \aleph_0$, then ${{\mathcal L}}$ is countable.
![The auxiliary structure in Example \[exegy\] \[fig2\]](czg-princl2.eps)
Substructures are defined in the natural way; note that $\nu=\nu'\cap H^2$ will not be required below. Namely,
\[defbeagy\] Let ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ and ${{\mathcal L}}'={\langle L';\gamma', H',\nu',\delta ',{\varepsilon}'\rangle}$ be auxiliary structures. We say that ${{\mathcal L}}$ is a *substructure* of ${{\mathcal L}}'$ if the following hold:
$L$ is a sublattice of $L'$, $H\subseteq H'$, $\nu\subseteq\nu'$, and $0_{H'}=0_H$;
$\gamma$ is the restriction of $\gamma'$ to ${{\textup{Pairs}^{\leq}(L)}}$, $\delta$ is the restriction of $\delta'$ to $H$, and ${\varepsilon}$ is the restriction of ${\varepsilon}'$ to $H$.
Clearly, if ${{\mathcal L}}$, ${{\mathcal L}}'$, and ${{\mathcal L}}''$ are auxiliary structures such that ${{\mathcal L}}$ is a substructure of ${{\mathcal L}}'$ and ${{\mathcal L}}'$ is a substructure of ${{\mathcal L}}''$, then ${{\mathcal L}}$ is a substructure of ${{\mathcal L}}''$; this fact will be used implicitly. The following lemma indicates how easily but efficiently we can work with auxiliary structures.
For an auxiliary structure ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ and an arbitrary (possibly empty) set $K$, we define the following objects. Let ${{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ be the disjoint union $H\cup K\cup{\{1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}\}}$, and let $0_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}=0_H$. Define ${{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\in{\textup{Quord}({{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}})}$ by $${{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}={\textup{quo}\bigl(\nu \cup({\{0_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}} \}} \times {{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}} ) \cup ({{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\times{\{ 1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}\}}) \bigr)}\text.$$ Consider the lattice ${{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ defined by Figure \[fig3\], where $u,v,\dots$ denote the elements of $K$. The thick dotted lines indicate $\leq$ but not necessarily $\prec$; they are edges only if $L$ is bounded. Note that all “new” lattice elements distinct from $0_{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}$ and $1_{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}$, that is, all elements of ${{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\setminus(L\cup {\{0_{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}, 1_{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}\}})$, are complements of all “old” elements. 
Extend $\delta$ and ${\varepsilon}$ to maps ${{\delta^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\colon {{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}} \to{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}$ by letting ${{\delta ^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(w)=a_w$ and ${{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(w)=b_w$ for $w\in K\cup{\{1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}\}}$. Define ${{\gamma^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\colon {{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}})}}\to{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ by $${{\gamma^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(
{{\langle x,y\rangle}})=
\begin{cases}
\gamma({{\langle x,y\rangle}}),& \text{if } {{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}(L)}},\cr
w, &\text{if } x=a_w,\,\, y=b_w,\text{ and }w\in K,\cr
0_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}},&\text{if }x=y,\cr
1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}},&\text{otherwise.}
\end{cases}$$ To save space again, the edge label $1$ in Figure \[fig3\] stands for $1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}$. Finally, let ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}={\langle {{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}};{{\gamma^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}, {{\delta ^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\rangle}$. The straightforward proof of the following lemma will be omitted.
\[lupstp\] If ${{\mathcal L}}$ is an auxiliary structure, then so is ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$. Furthermore, ${{\mathcal L}}$ is a substructure of ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$, and if ${{\mathcal L}}$ and $K$ are countable, then so is ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$. Moreover, if $p,q\in {{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ such that ${\{p,q\}}\not\subseteq H$ and $p\parallel_{{{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}q$, then ${\langle {{\delta ^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(p), {{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(p),{{\delta ^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(q), {{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(q) \rangle}$ is a strong $N_6$-quadruple.
Since new bottom and top elements are added, we say that ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ is obtained from ${{\mathcal L}}$ by a *vertical extension*; this motivates the triangle aiming upwards in its notation.
![The auxiliary structure ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ \[fig3\]](czg-princl3.eps)
![Grätzer’s lattice ${L_{\textup{GG}}}$ \[fig4\]](czg-princl4.eps)
Horizontal extensions of auxiliary structures
=============================================
The key role in Grätzer [@ggprincl] is played by the lattice ${L_{\textup{GG}}}$; see Figure \[fig4\]. We also need this lattice. Assume that $$\begin{aligned}
{{\mathcal L}}&={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}\text{ is an auxiliary structure, }p,q\in H\text{, } p{\mathrel{\parallel_\nu}}q,\text{ and} \cr
&{\langle a_p,b_p,a_q,b_q\rangle}={\langle \delta (p),{\varepsilon}(p),\delta (q),{\varepsilon}(q) \rangle} \text{ is a}\cr
&\text{spanning or, equivalently, a strong }N_6\text{-quadruple}.
\end{aligned}\label{asumLpar}$$ The equivalence of “spanning” and “strong” in follows from . We define a structure ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ as follows, and it will take a lot of work to prove that it is an auxiliary structure. We call ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ a *horizontal extension* of ${{\mathcal L}}$; this explains the horizontal triangle in the notation. By changing the sublattice ${\{0_L,a_p,b_p,a_q,b_q,1_L\}}$ into an ${L_{\textup{GG}}}$ as it is depicted in Figure \[fig4\], that is, by inserting the black-filled elements of Figure \[fig4\] into $L$, we obtain an ordered set denoted by ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$; precise details are given later. (We will prove that ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a lattice and $L$ is a sublattice in it.) The construction of ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ from ${{\mathcal L}}$ is illustrated in Figure \[fig5\]. Note that there can be many more elements, arranged in a more complicated way than indicated. The solid lines represent the covering relation but the dotted lines are not necessarily edges. The new lattice ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is obtained from $L$ by inserting the black-filled elements. Note that while Grätzer [@ggprincl] constructed a lattice of length 5, here even the interval, say, $[b_p, 1_L]$ can be of infinite length.
![Obtaining ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ from ${{\mathcal L}}$ \[fig5\]](czg-princl5.eps)
Let ${{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}=H$. In ${\textup{Quord}({{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}$, we define ${{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\textup{quo}\bigl(\nu\cup{\{{{\langle p,q\rangle}}\}}\bigr)}$. We extend $\gamma$ to ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\colon {{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}\to {{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ by $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x,y\rangle}})=
\begin{cases}
\gamma({{\langle x,y\rangle}}),&\text{if }{{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}(L)}},\cr
p,&\text{if }{{\langle x,y\rangle}}\in{\{{{\langle d_{pq},e_{pq}\rangle}}, {{\langle f_{pq},g_{pq}\rangle}} \}},\cr
q,&\text{if }{{\langle x,y\rangle}}\in{\{{{\langle c_{pq},d_{pq}\rangle}}, {{\langle c_{pq},e_{pq}\rangle}} \}},\cr
0_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}, &\text{if } x=y,\cr
1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}, &\text{otherwise.}
\end{cases}$$ The definition of ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is also illustrated in Figure \[fig5\], where the edge color $1$ stands for $1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. Finally, after letting ${{\delta ^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}=\delta $ and ${{{\varepsilon}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\varepsilon}$, we define $${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\langle {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}};{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}, {{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}},{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}},{{\delta ^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}},{{{\varepsilon}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\rangle}\text.\label{spaldef}$$
\[spalislat\] If ${{\mathcal L}}$ satisfies , then ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a lattice and $L$ is a sublattice of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$.
First, we describe the ordering of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ more precisely; this description is the real definition of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Let $$\begin{aligned}
N_6^{pq}&={{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\setminus L={\{c_{pq},d_{pq},e_{pq},f_{pq},g_{pq}\}},\cr
B_6^{pq}&={\{0_L, a_p,b_p,a_q,b_q,1_L \}}\text{, and}\cr
S_6^{pq}&=\{0_L, a_p,b_p,a_q,b_q,c_{pq},d_{pq},e_{pq},f_{pq},g_{pq}, 1_L\} = N_6^{pq}\cup B_6^{pq}\text{.} \cr
\end{aligned}\label{sodiHGrj}$$ Here $S_6^{pq}$ is isomorphic to the lattice ${L_{\textup{GG}}}$, and its “boundary”, $B_6^{pq}$, to $N_6$. The elements of $L$, $N_6^{pq}$, and $B_6^{pq}$ are called *old*, *new*, and *boundary* elements, respectively. For $x,y\in {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, we define $x\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} y\iff$ $$\begin{cases}
x\leq_L y ,&\text{if }x,y\in L\text{, or}\cr
x\leq_{S_6^{pq}} y ,&\text{if }x,y\in S_6^{pq}\text{, or}\cr
\exists z\in B_6^{pq}:
x\leq_L z\text{ and }z\leq_{S_6^{pq}}y,&\text{if }x\in L\setminus S_6^{pq}\text{ and } y\in N_6^{pq}\text{, or}\cr
\exists z\in B_6^{pq}:
x\leq_{S_6^{pq}}z\text{ and }z\leq_L y,&\text{if }x\in N_6^{pq}\text{ and } y\in L\setminus S_6^{pq}\text.
\end{cases}
\label{spaorder}$$ Observe that for $u_1,u_3\in B_6^{pq}$ and $u_2\in N_6^{pq}$, the conjunction of $u_1\leq_{S_6^{pq}}u_2$ and $u_2 \leq_{S_6^{pq}} u_3$ implies ${\{0_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}},1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}\}}\cap{\{u_1,u_3\}}\neq{\varnothing}$. Hence, it is straightforward to see that $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ is an ordering and $\leq_L$ is the restriction of $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ to $L$.
For $x\in N_6^{pq}$, there is a unique least element ${{x^\ast}}$ of $B_6^{pq}$ such that $x\leq_{S_6^{pq}}{{x^\ast}}$ (that is, $x\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}{{x^\ast}}$). If $x\in L$, then we let ${{x^\ast}}=x$. Similarly, for $x\in N_6^{pq}$, there is a unique largest element ${{x_\ast}}$ of $B_6^{pq}$ such that ${{x_\ast}}\leq_{S_6^{pq}} x$. Again, for $x\in L$, we let ${{x_\ast}}=x$. With this notation, is clearly equivalent to $$x\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} y\iff
\begin{cases}
x\leq_L y,&\text{if }x,y\in L\text{, or}\cr
x\leq_{S_6^{pq}} y,&\text{if }x,y\in S_6^{pq}\text{, or}\cr
x\leq_L {{y_\ast}},&\text{if }x\in L\setminus S_6^{pq}\text{ and } y\in N_6^{pq}\text{, or}\cr
{{x^\ast}}\leq_L y,&\text{if }x\in N_6^{pq}\text{ and } y\in L\setminus S_6^{pq}\text.
\end{cases}\label{spanorder}$$
Next, for $x\parallel y\in{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, we want to show that $x$ and $y$ have a join in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. We can assume that ${\{x,y\}}$ has an upper bound $z$ in $N_6^{pq}$, because otherwise ${{x^\ast}}\vee_L {{y^\ast}}$ would clearly be the join of $x$ and $y$ in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. If $z$ belonged to ${\{c_{pq},d_{pq},e_{pq}\}}$, then the principal ideal ${\mathord\downarrow z}$ (taken in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$) would be a chain, and this would contradict $x\parallel y$. Hence, $z\in {\{f_{pq},g_{pq}\}}$. If both $x$ and $y$ belong to $N_6^{pq}$, then $x\parallel y$ gives ${\{x,y\}}={\{e_{pq},f_{pq}\}}$, $z$ and $1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ are the only upper bounds of ${\{x,y\}}$, and $z$ is the join of $x$ and $y$. Hence, we can assume that $x\in L$. If $y$ also belongs to $L$, then $x\leq{{z_\ast}}$ and $y\leq {{z_\ast}}$ yield $x\vee_L y\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} {{z_\ast}}\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} z$, and $x\vee_L y$ is the join of $x$ and $y$ in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ since $z$ was an arbitrary upper bound of ${\{x,y\}}$ in $N_6^{pq}$.
Therefore, we can assume that $x\in L$ and $y\in N_6^{pq}$. It follows from $b_p\wedge_L b_q=0_L$ that, for each $u\in L$, ${\mathord\uparrow u}\cap B_6^{pq}$ has a smallest element; we denote it by $\widehat u$. For $u\in N_6^{pq}$, we let $\widehat u=u$. Note that, for every $u\in {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, $\widehat u$ is the smallest element of ${\mathord\uparrow u}\cap S_6^{pq}$. The existence of $z$, mentioned above, implies that $\widehat x\in {\{a_p,b_p\}}$.
We assert that $\widehat x\vee_{S_6^{pq}} y= \widehat x\vee_{S_6^{pq}}\widehat y$ is the join of $x$ and $y$ in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. (Note that $\widehat x\vee_{S_6^{pq}} y\in {\{f_{pq},g_{pq}\}}$.) We can assume $y\in{\{c_{pq},d_{pq},e_{pq}\}}$ since otherwise $1_L$ is the only upper bound of $y$ in $L$ and $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}y=\widehat x\vee_{S_6^{pq}} y$ is clear. Consider an upper bound $t\in L$ of $x$ and $y$. Since $y\in{\{c_{pq},d_{pq},e_{pq}\}}$, we have $a_q\leq t$ and $x\vee_L a_q\leq t$. From $x\parallel y\in{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ and $\widehat x\in {\{a_p,b_p\}}$, we obtain $0_L<x\leq b_p$. Since ${\langle a_p,b_p,a_q,b_q\rangle}$ is a strong $N_6$-quadruple by , the validity of for ${{\mathcal L}}$ implies $\widehat x\vee_{S_6^{pq}} y\,\leq 1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}=1_L=x\vee_L a_q\leq t$. This shows that $\widehat x\vee_{S_6^{pq}} y$ is the join of $x$ and $y$ in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. The case $x,y\in L$ showed that ${\langle L;\vee\rangle}$ is a subsemilattice of ${\langle {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}};\vee\rangle}$. For later reference, we summarize the description of join in a concise form as follows; note that $x\parallel y$ is not assumed here: $$x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}y=\begin{cases}
{{x^\ast}}\vee_L {{y^\ast}}, &\text{if } {\{x,y\}}\not\subseteq {\mathord\downarrow g_{pq}}\text{ or }{\{x,y\}}\subseteq L , \cr
\widehat x\vee_{S_6^{pq}} \widehat y,&\text{otherwise, that is, if } {\{x,y\}}\subseteq {\mathord\downarrow g_{pq}}\text{ and }{\{x,y\}}\not\subseteq L\text.
\end{cases}\label{joindeR}$$ We have shown that any two elements of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ have a join. Although $S_6^{pq}$ and the construction of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ are not exactly selfdual, by interchanging the role of ${\{f_{pq},g_{pq}\}}$ and that of ${\{c_{pq},d_{pq},e_{pq}\}}$, we can easily dualize the argument above. Thus, we conclude that ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a lattice and $L$ a sublattice of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$.
The following lemma is due to Dilworth [@dilworth1950a], see also Grätzer [@grglt Theorem III.1.2].
\[llajtorja\] If $L$ is a lattice and ${{\langle u_1,v_1\rangle}},{{\langle u_2,v_2\rangle}}\in{{\textup{Pairs}^{\leq}(L)}}$, then the following three conditions are equivalent.
${\textup{con}(u_1,v_1)}\leq {\textup{con}(u_2,v_2)}$;
${{\langle u_1,v_1\rangle}}\in {\textup{con}(u_2,v_2)}$;
there exists an $n\in\mathbb N$ and there are $x_{i} \in L$ for $i\in{\{0,\dots,n\}}$ and ${{\langle y_{i j},z_{i j}\rangle}} \in{{\textup{Pairs}^{\leq}(L)}}$ for ${{\langle i,j\rangle}}\in{\{1,\dots,n\}}\times{\{0,\dots,n\}}$ such that the following equalities and inequalities hold: $$\begin{aligned}
u_1&=x_{0}\leq x_{1}\leq\dots\leq x_{n-1}\leq x_{n}=v_1\cr
y_{i0} &=x_{i-1}\text{, }y_{in}=u_2\text{, }z_{i0}=x_i\text{, and }z_{in}=v_2\text{ for }1\leq i\leq n,\cr
y_{i,j-1}&= z_{i,j-1}\wedge y_{ij}
\text{ and } z_{i,j-1}\leq z_{ij}
\text{ for }j \text{ odd, } i,j\in{\{1,\dots,n\}},\cr
z_{i,j-1} & = y_{i,j-1}\vee z_{ij} \text{ and }y_{i,j-1}\geq y_{ij}\text{ for }j \text{ even, } i,j\in{\{1,\dots,n\}}\text.
\end{aligned}\label{lajtorjaformula}$$
The situation of Lemma \[llajtorja\] is outlined in Figure \[fig6\]; note that not all elements are depicted, and the elements are not necessarily distinct. The second half of says that, in terms of Grätzer [@grglt], ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is *weakly* up or down perspective into ${{\langle y_{ij},z_{ij}\rangle}}$; up for $j$ odd and down for $j$ even. Besides weak perspectivity, we shall also need a more specific concept; recall that ${{\langle x_1,y_1\rangle}}$ is *perspective* to ${{\langle x_2,y_2\rangle}}$ if there are $i,j\in{\{1,2\}}$ such that $i\neq j$, $x_i=y_i\wedge x_j$, and $y_j=x_j\vee y_i$.
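The equivalence of conditions (a) and (b) in Lemma \[llajtorja\] is easy to test mechanically in a small finite lattice: ${\textup{con}(u,v)}$ is the least equivalence containing ${{\langle u,v\rangle}}$ that is closed under meets and joins with arbitrary elements. The following Python sketch (our illustration; a naive fixed-point computation rather than the chains of condition (c)) computes it from meet and join tables:

```python
def principal_congruence(elems, meet, join, u, v):
    """Least congruence of the finite lattice collapsing <u, v>,
    computed as a fixed point: whenever x ~ y, force
    x ^ z ~ y ^ z and x v z ~ y v z for every z."""
    parent = {x: x for x in elems}           # union-find forest
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry:
            return False
        parent[rx] = ry
        return True
    union(u, v)
    changed = True
    while changed:
        changed = False
        for x in elems:
            for y in elems:
                if find(x) != find(y):
                    continue
                for z in elems:
                    changed |= union(meet[x][z], meet[y][z])
                    changed |= union(join[x][z], join[y][z])
    return {(x, y) for x in elems for y in elems if find(x) == find(y)}
```

On the three-element chain $0<1<2$, for instance, ${\textup{con}(0,1)}$ collapses only $\{0,1\}$, while ${\textup{con}(0,2)}$ is ${\nabla}$; in particular ${\textup{con}(0,1)}\leq{\textup{con}(0,2)}$, matching (a)$\iff$(b).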
![Illustrating Lemma \[llajtorja\] for $n=4$ \[fig6\]](czg-princl6.eps)
For a quasiordered set ${\langle H;\nu\rangle}$ and $p, q_1,\dots,q_n\in H$, we say that $p$ is a *join* of the elements $q_1,\dots,q_n$, in notation, $p=\bigvee_{i=1}^n q_i$, if $q_i{\mathrel{\leq_\nu}}p$ for all $i$ and, for every $r\in H$, the conjunction of $q_i{\mathrel{\leq_\nu}}r$ for $i=1,\dots,n$ implies $p{\mathrel{\leq_\nu}}r$. This concept is used in the next lemma. Note that even if a join exists, it need not be unique.
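A direct Python transcription (ours) of this definition also exhibits the non-uniqueness: in the small quasiordered set below, two $\nu$-equivalent elements are both joins of the same pair.

```python
def is_join(H, nu, p, qs):
    """p is a join of the elements qs in the quasiordered set <H, nu>:
    an upper bound of qs that is below every upper bound of qs."""
    if not all((q, p) in nu for q in qs):
        return False
    return all((p, r) in nu
               for r in H if all((q, r) in nu for q in qs))
```

Here `H` consists of two incomparable elements `q1`, `q2` and two mutually $\nu$-equivalent upper bounds `j1`, `j2`; both of the latter are joins of $\{q_1,q_2\}$.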
\[chainlemma\] If ${\langle L; \gamma, H,\nu\rangle}$ is a quasi-colored lattice and ${\{u_0\leq u_1\leq\dots \leq u_n\}}$ is a finite chain in $L$, then $$\gamma({{\langle u_0,u_n\rangle}})=\bigvee_{i=1}^n \gamma({{\langle u_{i-1},u_i\rangle}})\quad\text{ holds in }{\langle H;\nu\rangle}\text.\label{lancjoin}$$
Let $p=\gamma({{\langle u_0,u_n\rangle}})$ and $q_i=\gamma({{\langle u_{i-1},u_i\rangle}})$. Since ${\textup{con}(u_{i-1},u_i)}\leq {\textup{con}(u_0,u_n)}$, yields $q_i{\mathrel{\leq_\nu}}p$ for all $i$. Next, assume that $r\in H$ such that $q_i{\mathrel{\leq_\nu}}r$ for all $i$. By the surjectivity of $\gamma$, there exists a ${{\langle v,w\rangle}}\in{{\textup{Pairs}^{\leq}(L)}}$ such that $\gamma({{\langle v,w\rangle}})=r$. It follows by that ${{\langle u_{i-1},u_i\rangle}}\in {\textup{con}(u_{i-1},u_i)}\leq {\textup{con}(v,w)}$. Since ${\textup{con}(v,w)}$ is transitive and collapses the pairs ${{\langle u_{i-1},u_i\rangle}}$, it collapses ${{\langle u_{0},u_n\rangle}}$. Hence, ${\textup{con}(u_{0},u_n)}\leq {\textup{con}(v,w)}$, and implies $p{\mathrel{\leq_\nu}}r$.
Now, we are in the position to deal with the following lemma.
\[mainlemma\] The structure ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, which is defined in with assumption , is an auxiliary structure, and ${{\mathcal L}}$ is a substructure of ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Furthermore, if ${{\mathcal L}}$ is countable, then so is ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$.
Since we work both in ${{\mathcal L}}$ and ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, relations, operations and maps are often subscripted by the relevant structure. By Lemma \[spalislat\], ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a lattice. Obviously, and hold for ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Since ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is an extension of $\gamma$, ${{\delta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}= \delta$, ${{{\varepsilon}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}= {\varepsilon}$, and $L$ is a sublattice of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, we obtain that and hold in ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$.
Let $r_1,r_2\in{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Since $\nu$ is transitive, $p{\mathrel{\parallel_\nu}}q$, and ${{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\textup{quo}\bigl(\nu\cup {\{{{\langle p,q\rangle}}\}}\bigr)}$, we obtain that $$\label{twopssb}
{{\langle r_1,r_2\rangle}}\in {{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\iff
r_1{\mathrel{\leq_\nu}}p\text{ and }q{\mathrel{\leq_\nu}}r_2\text{, or }r_1{\mathrel{\leq_\nu}}r_2\text.$$ This clearly implies that holds for ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$.
It follows from that if ${{\langle x,y\rangle}}\in {{\textup{Pairs}^{\leq}(L)}}$ and $\gamma({{\langle x,y\rangle}})=1_{H}$, then we have ${\textup{con}_{L}(x,y)}={\nabla_{\kern -2pt L}}$. Combining this with , we obtain easily that for all ${{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$, $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x,y\rangle}})=1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} {\mathrel{\Longrightarrow}}{\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)} = {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}\text.\label{haegyakok}$$
Let $\Theta$ denote the congruence of $L$ described in . Consider the equivalence relation ${{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ on ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ whose classes (in other words, blocks) are the $\Theta$-classes, ${\{c_{pq},d_{pq},e_{pq}\}}$ and ${\{f_{pq},g_{pq}\}}$. Based on and its dual, a straightforward argument shows that, for all $x,y\in{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, ${{\langle x\wedge y,x\rangle}}\in{{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ iff ${{\langle y,x\vee y\rangle}}\in {{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Clearly, the intersection of ${{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ and the ordering of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is transitive. Hence, we conclude that ${{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a congruence on ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Since it is distinct from ${\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$, ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies .
Next, we prove the converse of . Assume that ${{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$ such that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x,y\rangle}})\neq 1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$; we want to show that ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)}\neq{\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$. Since this is clear if $x=y$, we assume $x\neq y$. First, if $x,y\in L$, then let $r=\gamma({{\langle x,y\rangle}})$. Applying to $\gamma$ and to ${{\mathcal L}}$, we obtain $ {\textup{con}_{L}(x,y)} ={\textup{con}_{L}(\delta (r),{\varepsilon}(r))}$. Hence $\Theta$, which we used in the previous paragraph, collapses ${{\langle x,y\rangle}}$, and ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)}\subseteq{{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\subset {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$. Second, if ${\{x,y\}}\cap L={\varnothing}$, then ${{\langle x,y\rangle}}$ is perspective to ${{\langle a_p,b_p\rangle}}$ or ${{\langle a_q,b_q\rangle}}$, whence $ {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)} \in
{\bigl\{ {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_p,b_p)}, {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_q,b_q)} \bigr\}}$, which reduces the present case to the previous one. Finally, $|L\cap{\{x,y\}}|=1$ is excluded since then ${{\langle x,y\rangle}}$ would be $1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$-colored. Having verified the converse of , we have proved that for all ${{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$, $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x,y\rangle}})=1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} \iff {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)} = {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}\text.\label{haiirkok}$$
Next, to prove that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies , assume that ${{\langle u_1,v_1\rangle}},{{\langle u_2,v_2\rangle}}\in {{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$ such that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}} ){\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}} )$. Let $r_i={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_i,v_i\rangle}} )$, for $i\in{\{1,2\}}$. We have to show ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. By , we can assume that $r_2\neq 1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. Thus, by , we have $r_1\neq 1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. We can also assume that $r_1\neq 0_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ since otherwise ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}={\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,u_1)}=\Delta_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ would clearly imply ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Thus, $r_1,r_2\in H\setminus{\{0_H,1_H\}}$.
By the construction of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, ${{\langle u_i,v_i\rangle}}$ is perspective to some ${{\langle u_i',v_i'\rangle}}\in{{\textup{Pairs}^{\leq}(L)}}$ such that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_i,v_i\rangle}})={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_i',v_i'\rangle}})$, and perspectivity implies ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_i,v_i)}= {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_i',v_i')}$. Therefore, we can assume that ${{\langle u_1,v_1\rangle}}, {{\langle u_2,v_2\rangle}} \in {{\textup{Pairs}^{\leq}(L)}}$, because otherwise we could work with ${{\langle u_1',v_1'\rangle}}$ and ${{\langle u_2',v_2'\rangle}}$.
According to , we distinguish two cases. First, assume that $r_1{\mathrel{\leq_\nu}}r_2$. Since ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ extends $\gamma$, we have $\gamma({{\langle u_1,v_1\rangle}} ) = {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}} )
=r_1{\mathrel{\leq_\nu}}r_2={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}} )= \gamma({{\langle u_2,v_2\rangle}} )$. Applying to $\gamma$, we obtain ${{\langle u_1,v_1\rangle}}\in {\textup{con}_{L}(u_1,v_1)}\leq {\textup{con}_{L}(u_2,v_2)}$. Using Lemma \[llajtorja\], first in $L$ and then, backwards, in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, we obtain ${{\langle u_1,v_1\rangle}}\in{\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$, which yields ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$.
Second, assume that $r_1{\mathrel{\leq_\nu}}p$ and $q{\mathrel{\leq_\nu}}r_2$. Since ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle a_p,b_p\rangle}})=\gamma({{\langle a_p,b_p\rangle}})=p$ and ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle a_q,b_q\rangle}})=q$ by , the argument of the previous paragraph yields that we have ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_p,b_p)}$ and ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_q,b_q)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Clearly (or applying Lemma \[llajtorja\] within $S_6^{pq})$, we have ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_p,b_p)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_q,b_q)}$. Hence, transitivity yields ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Consequently, ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies .
Next, to prove that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies , assume that ${{\langle u_1,v_1\rangle}}, {{\langle u_2,v_2\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$ such that ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)} \leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Our purpose is to show the inequality ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}} ){\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}} )$. By , we can assume ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}\neq {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$, and we can obviously assume $u_1\neq v_1$. That is, ${\{ {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)},\, {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)} \}}\cap {\{\Delta_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}},{\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}\}}={\varnothing}$. A pair ${{\langle w_1,w_2\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$ is called *mixed* if $|{\{i: w_i\in L\}}|=1$. That is, if one of the components is old and the other one is new. It follows from the construction of ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ and that none of ${{\langle u_1,v_1\rangle}}$ and ${{\langle u_2,v_2\rangle}}$ is mixed. 
If ${{\langle u_1,v_1\rangle}}$ is a new pair, that is, if ${\{u_1,v_1\}}\cap L={\varnothing}$, then we can consider an old pair ${{\langle u'_1,v'_1\rangle}}$ such that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1',v_1'\rangle}} )={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}} )$ and, by perspectivity, ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u'_1,v'_1)} = {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}$. Hence, we can assume that ${{\langle u_1,v_1\rangle}}$ is an old pair, and similarly for the other pair. That is, we assume that both ${{\langle u_1,v_1\rangle}}$ and ${{\langle u_2,v_2\rangle}}$ belong to ${{\textup{Pairs}^{\leq}( L)}}$.
The starting assumption ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)} \leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$ means that ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. This is witnessed by Lemma \[llajtorja\]. Let $x_j, y_{ij}, z_{ij}\in {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ be elements for $i\in{\{1,\dots,n\}}$ and $j\in{\{0,\dots,n\}}$ that satisfy ; see also Figure \[fig6\]. To ease our terminology, the ordered pairs ${{\langle y_{ij},z_{ij}\rangle}}$ will be called *witness pairs* (of the containment ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$). Since ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}\neq {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$, none of the witness pairs generates ${\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$. Thus, by , $$\text{none of the witness pairs is mixed or }1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}\text{-colored.}\label{nonm1col}$$
Take two consecutive witness pairs, ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ and ${{\langle y_{ij},z_{ij}\rangle}}$. Here $i,j\in{\{1,\dots,n\}}$. Our next purpose is to show that $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) {\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}})\text. \label{winprs}$$ We assume $y_{i,j-1}<z_{i,j-1}$ since trivially holds if these two elements are equal. Hence, ${y_{ij}}<{z_{ij}}$ also holds.
If both ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ and ${{\langle y_{ij},z_{ij}\rangle}}$ are old pairs, that is, if they belong to ${{\textup{Pairs}^{\leq}(L)}}$, then yields $ {\textup{con}_{L}(y_{i,j-1},z_{i,j-1})}\leq {\textup{con}_{L}(y_{ij},z_{ij})}$. From this, we conclude the relation $\gamma({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) {\mathrel{\leq_\nu}}\gamma({{\langle y_{ij},z_{ij}\rangle}})$ by , applied for ${{\mathcal L}}$, and we obtain the validity of for old witness pairs, because ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ extends $\gamma$.
If both ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ and ${{\langle y_{ij},z_{ij}\rangle}}$ are new pairs, that is, if they belong to ${{\textup{Pairs}^{\leq}(N_6^{pq})}}$, then and allow only two possibilities: ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) = {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}})$, or ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) =p$ and $ {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}})=q$. In both cases, holds.
\[old2newcase\] Assume first that $j$ is odd, that is, ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is weakly up-perspective into ${{\langle y_{ij},z_{ij}\rangle}}$. Since $y_{ij}$, being a new element, and $z_{i,j-1}$ are both distinct from $y_{i,j-1}$, $$z_{i,j-1}\parallel y_{ij}\text.\label{osidHn}$$ Since $0_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}\leq y_{i,j-1}<z_{i,j-1} < z_{ij} $, $z_{i,j-1}$ is an old element, and $z_{ij}$ is a new one, $z_{ij}\in{\{f_{pq}, g_{pq}\}}$. Taking $y_{ij}<z_{ij}$ and into account, we obtain $y_{ij}=f_{pq}$ and $z_{ij}=g_{pq}$. Applying the definition of $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ for the elements of the old witness pair and using the “weak up-perspectivity relations” from , we have $y_{i,j-1}\leq a_p<f_{pq}$. Similarly, but also taking $z_{i,j-1}\parallel y_{ij}$ into account, we obtain $z_{i,j-1}\leq b_p<g_{pq}$. We claim that ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is up-perspective to ${{\langle a_p,b_p\rangle}}$. We can assume $z_{i,j-1}<b_p$, because otherwise they would be equal, we would have $y_{i,j-1}=z_{i,j-1}\wedge f_{pq}=b_p\wedge f_{pq}=a_p$, and the two pairs would be the same. Hence, from $a_p\prec b_p$, $z_{i,j-1}<b_p$ and $z_{i,j-1}\parallel y_{ij}=f_{pq}$, we obtain $z_{i,j-1}\parallel a_p$ and $z_{i,j-1}\vee a_p=b_p$. Since $y_{i,j-1}\leq z_{i,j-1}\wedge a_p \leq z_{i,j-1}\wedge y_{ij}=y_{i,j-1}$, the old pair ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is up-perspective to the old pair ${{\langle a_p,b_p\rangle}}$. Hence, ${\textup{con}_{L}(y_{i,j-1},z_{i,j-1})}={\textup{con}_{L}(a_p,b_p)}$. Applying for ${{\mathcal L}}$, we obtain $$\begin{aligned}
{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}})
&=\gamma({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) \overset{{\textup{(C2)}}{}}=
\gamma({{\langle a_p,b_p\rangle}})
\overset{{\textup{(A4)}}{}}=p\cr
&={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle f_{pq},g_{pq}\rangle}})
= {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}}),\end{aligned}$$ which implies if $j$ is odd.
Second, let $j$ be even. That is, we assume that ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is weakly down-perspective into ${{\langle y_{ij},z_{ij}\rangle}}$. The dual of the previous argument shows that $y_{ij}=c_{pq}$ and $z_{ij}\in{\{d_{pq},e_{pq}\}}$. However, $z_{ij}=d_{pq}$ or $z_{ij}=e_{pq}$ does not make any difference, and ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}})
=q= {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle a_q,b_q\rangle}})= {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}})$ settles for $j$ even.
Like in Case \[old2newcase\], it suffices to deal with an odd $j$, because an even $j$ could be treated dually. Since ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is weakly up-perspective into ${{\langle y_{ij},z_{ij}\rangle}}$ and $1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ is the only old element above $f_{pq}$, we obtain $y_{i,j-1}\in{\{c_{pq}, d_{pq},e_{pq}\}}$. We obtain as before. Taking also into account, we obtain that $y_{i,j-1}=c_{pq}$ and $z_{i,j-1}$ is one of $d_{pq}$ and $e_{pq}$. No matter which one, an argument dual to the one used in Case \[old2newcase\] yields $a_q=b_{q}\wedge y_{ij}$ and $b_q\leq z_{ij}$. Hence, ${{\langle a_q,b_q\rangle}}$ is weakly up-perspective into ${{\langle y_{ij},z_{ij}\rangle}}$, and we obtain $${\textup{con}_{L}(a_q,b_q)}\leq {\textup{con}_{L}(y_{ij},z_{ij})} \overset{{\textup{(C2)}}}{{\mathrel{\Longrightarrow}}}
q\overset{{\textup{(A4)}}}= \gamma({{\langle a_q,b_q\rangle}} ){\mathrel{\leq_\nu}}\gamma({{\langle y_{ij},z_{ij}\rangle}}),$$ which implies $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) =q \leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}}),$$ and follows again.
Now that we have proved , observe that for $j=1,\dots,n$ and transitivity yield ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x_{i-1},x_{i}\rangle}}) =
{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i0},z_{i0}\rangle}}) {\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{in},z_{in}\rangle}}) = {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}})$. Hence, Lemma \[chainlemma\] implies ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}}){\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}})$. Therefore, ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies , and holds for ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$.
Next, to prove that ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies , assume that $r,s\in H$ such that $r\parallel_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} s$ and ${\langle \delta (r),{\varepsilon}(r), \delta (s),{\varepsilon}(s)\rangle}={\langle a_r,b_r,a_s,b_s\rangle}$ is a spanning $N_6$-quadruple. We want to show that it is a strong $N_6$-quadruple of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. The treatment for is almost the dual of that for , whence we give the details only for . Since the role of $r$ and $s$ is symmetric, it suffices to deal with the case $0<x\leq b_r$; we want to show $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} a_s=1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. Since $r\parallel_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} s$ implies $r{\mathrel{\parallel_\nu}}s$, $L$ is a ${\{0,1\}}$-sublattice of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, and holds for ${{\mathcal L}}$, we obtain $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} a_s=1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ for old elements, that is, for all $x\in L$ such that $0<x\leq b_r$.
Hence, we assume that $x$ is a new element, that is, $x\in N_6^{pq}$. Since $b_r$ is an old element and $x\leq b_r<b_r\vee_L b_s = 1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$, we obtain $x\notin{\{f_{pq},g_{pq}\}}$. Hence, $x\in {\{c_{pq},d_{pq},e_{pq}\}}$. If we had $r\neq q$, then $x\leq b_r$ and the description of $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ would imply $a_q\leq b_r$, which would be a contradiction since holds in ${{\mathcal L}}$. Consequently, $r=q$. Thus, we have $0<x\leq b_q$, and we know from $s\parallel_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} r=q$ and $p\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} q$ that $s\notin{\{p, q, 0_H\}}$ and $s{\mathrel{\parallel_\nu}}q$. We also know $p\neq 0_H$ since $p{\mathrel{\parallel_\nu}}q$.
If we had $a_s\in{\mathord\downarrow g_{pq}}$, then the description of $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ would yield $a_s\leq b_p$, which would contradict . Hence, $a_s\notin{\mathord\downarrow g_{pq}}$, and gives $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} a_s={{x^\ast}}\vee_L a_s$. Therefore, since the spanning $N_6$-quadruple ${\langle a_q,b_q,a_s,b_s\rangle}={\langle a_r,b_r,a_s,b_s\rangle}$ is strong in ${{\mathcal L}}$ by and $0<x<{{x^\ast}}\leq b_q$, we conclude ${{x^\ast}}\vee_L a_s=1_L$, which implies the desired $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} a_s=1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. Consequently, ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies . This completes the proof of Lemma \[mainlemma\].
Approaching infinity
====================
For an ordered set $P={\langle P;\leq\rangle}$ and a subset $C$ of $P$, the restriction of the ordering of $P$ to $C$ will be denoted by ${{\leq\rceil_{C}}}$. If each element of $P$ has an upper bound in $C$, then $C$ is a *cofinal subset* of $P$. The following lemma belongs to the folklore; having no reference at hand, we will outline its easy proof.
\[cofinalitylemma\] If an ordered set $P={\langle P;\leq\rangle}$ is the union of a chain of principal ideals, then it has a cofinal subset $C$ such that ${\langle C; {{\leq\rceil_{C}}}\rangle}$ is a well-ordered set.
The top elements of these principal ideals form a cofinal chain $D$ in $P$. Let $\mathcal H(D)=\{X: X\subseteq D$ and ${\langle X;{{\leq\rceil_{X}}}\rangle}$ is a well-ordered set$\}$. For $X,Y\in \mathcal H(D)$, let $X\sqsubseteq Y$ mean that $X$ is an order ideal of ${\langle Y;{{\leq\rceil_{Y}}}\rangle}$. Since the union of a $\sqsubseteq$-chain of members of $\mathcal H(D)$ is again well-ordered, Zorn’s Lemma yields a maximal member $C$ in ${\langle \mathcal H(D),\sqsubseteq\rangle}$. Clearly, $C$ is well-ordered. It is also cofinal: if some $d\in D$ had no upper bound in $C$, then, $D$ being a chain, $d$ would be strictly above every element of $C$, so $C\cup{\{d\}}$ would properly extend $C$ in ${\langle \mathcal H(D),\sqsubseteq\rangle}$, contradicting the maximality of $C$. Since $D$ is cofinal in $P$, so is $C$.
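When the cofinal chain is countable and given as a sequence, the maximality argument degenerates to a simple greedy extraction. The following Python sketch is an illustration of ours with integers, assuming this countable special case; the general case needs Zorn’s Lemma as above.

```python
def increasing_cofinal_subsequence(chain):
    """Given a chain presented as a sequence of pairwise comparable
    elements (here: integers), keep exactly the elements that exceed
    everything kept so far. The kept elements form a strictly
    increasing, hence well-ordered, cofinal subsequence."""
    kept = []
    for x in chain:
        if not kept or x > kept[-1]:
            kept.append(x)
    return kept

chain = [1, 3, 2, 2, 5, 4, 7, 6]
c = increasing_cofinal_subsequence(chain)
print(c)  # [1, 3, 5, 7]
# Cofinality: every element of the chain is below some kept element.
print(all(any(x <= k for k in c) for x in chain))  # True
```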
Now, we combine the vertical action of Lemma \[lupstp\] and the horizontal action of Lemma \[mainlemma\] into a single statement. Note that the order ideal $H$ of ${\langle {{H}^{\bullet}},{{\nu}^{\bullet}}\rangle}$ in the following lemma is necessarily a directed ordered set.
\[combinlemma\] Assume that ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ is an auxiliary structure such that ${\langle H,\nu\rangle}$ is an order ideal of a bounded ordered set ${\langle {{H}^{\bullet}},{{\nu}^{\bullet}}\rangle}$. $($In particular, $\nu$ is an ordering and $\nu={{{{\nu}^{\bullet}}\rceil_{H}}}$.$)$ Then there exists an auxiliary structure ${{{{\mathcal L}}}^{\bullet}}={\langle {{L}^{\bullet}};{{\gamma}^{\bullet}}, {{H}^{\bullet}},{{\nu}^{\bullet}},{{\delta}^{\bullet}} ,{{{\varepsilon}}^{\bullet}} \rangle}$ such that ${{\mathcal L}} $ is a substructure of ${{{{\mathcal L}}}^{\bullet}}$. Furthermore, if ${{\mathcal L}}$ and ${{H}^{\bullet}}$ are countable, then so is ${{{{\mathcal L}}}^{\bullet}}$.
We can assume $H\neq {{H}^{\bullet}}$ since otherwise ${{{{\mathcal L}}}^{\bullet}}={{\mathcal L}}$ would do. Consider the set $$D={\{{{\langle p,q\rangle}}: 0_{{{H}^{\bullet}}}<_{{{\nu}^{\bullet}}} p <_{{{\nu}^{\bullet}}} q<_{{{\nu}^{\bullet}}} 1_{{{H}^{\bullet}}}\text{ and } p\not<_\nu q \}}\text.\label{ezD}$$ Since every set can be well-ordered, we can also write $D={\{{{\langle p_\iota,q_\iota\rangle}}: \iota<\kappa\}}$, where $\kappa$ is an ordinal number. In ${\textup{Quord}({{H}^{\bullet}})}$, we define $$\nu_\lambda={\textup{quo}\bigl(\nu\cup ({\{0_{{{H}^{\bullet}}}\}}\times {{H}^{\bullet}})
\cup ({{H}^{\bullet}}\times {\{1_{{{H}^{\bullet}}}\}})
\cup {\{{{\langle p_\iota,q_\iota\rangle}}:\iota<\lambda\}}\bigr)} \label{sldiHGk}$$ for $\lambda\leq \kappa$. It is an ordering on ${{H}^{\bullet}}$, because $\nu_\lambda\subseteq {{\nu}^{\bullet}}$ implies that it is antisymmetric. Note that $\nu_\kappa={{\nu}^{\bullet}}$ and $0_{{{H}^{\bullet}}}=0_H$. For each $\lambda \leq \kappa$, we want to define an auxiliary structure ${{\mathcal L}}_\lambda={\langle L_\lambda;\gamma_\lambda, H_\lambda,\nu_\lambda,\delta_{\lambda},{\varepsilon}_{\lambda}\rangle}$ such that, for all $\lambda<\kappa$, the following properties are satisfied: $$\begin{aligned}
&\text{${{\mathcal L}}_\mu$ is a substructure of ${{\mathcal L}}_\lambda$ for all $\mu\leq \lambda$;} \label{dirun} \\
&\text{$H_\lambda=H_0$, $0_{L_\lambda}=0_{L_0} $, and $1_{L_\lambda}=1_{L_0}$};\label{djrun} \\
&\begin{aligned}
{\langle \delta_\lambda&(p),{\varepsilon}_\lambda(p),\delta_\lambda(q),{\varepsilon}_\lambda(q)\rangle}\text{ is a spanning }N_6\text{-quadruple (equivalently, } \cr
&\text{a strong }N_6\text{-quadruple) for all }{{\langle p,q\rangle}}\in D\text{ such that }p\parallel_{\nu_\lambda} q\text.
\end{aligned}\label{spnnning}\end{aligned}$$ Modulo the requirement that ${{\mathcal L}}_\lambda$ should be an auxiliary structure, the equivalence mentioned in is a consequence of . We define ${{\mathcal L}}_\lambda$ by (transfinite) induction as follows.
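For a finite $H$, the operator ${\textup{quo}(-)}$ used in the definition of the $\nu_\lambda$ is just the reflexive-transitive closure of the generating relation. The following Python sketch is an illustration under toy assumptions of ours (a four-element bounded ordered set); it is not part of the construction.

```python
def quo(base, universe):
    """Quasiorder generated by a relation: the reflexive-transitive
    closure of `base` (a set of pairs) over the finite set `universe`."""
    rel = set(base) | {(x, x) for x in universe}
    changed = True
    while changed:  # iterate transitive closure to a fixed point
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

# Toy bounded H: 0 < p, 0 < q, p < 1, q < 1, with p and q incomparable.
H = {"0", "p", "q", "1"}
nu = {("0", "p"), ("0", "q"), ("p", "1"), ("q", "1")}

# Adjoining the single pair (p, q) and closing makes p and q comparable
# while the result stays antisymmetric, hence an ordering.
nu_next = quo(nu | {("p", "q")}, H)
print(("p", "q") in nu_next and ("q", "p") not in nu_next)  # True
```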
We define ${{\mathcal L}}_0$ by a vertical extension. Let $K={{H}^{\bullet}}\setminus(H\cup{\{1_{{{H}^{\bullet}}}\}} )$, let ${{\langle {{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\rangle}}={{\langle {{H}^{\bullet}},\nu_0\rangle}}$, and let ${{\mathcal L}}_0={{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ be the auxiliary structure that we obtain from ${{\mathcal L}}$ according to Lemma \[lupstp\]. Note that, for all ${{\langle p,q\rangle}}\in D$, ${\{p,q\}}\not\subseteq H$ since $\nu={{{{\nu}^{\bullet}}\rceil_{H}}}$. Hence, by Lemma \[lupstp\], holds for $\lambda=0$.
Assume that $\lambda$ is a successor ordinal, that is, $\lambda=\eta+1$, and ${{\mathcal L}}_\eta={\langle L_\eta;\gamma_\eta, H_\eta,\nu_\eta,\delta_{\eta},{\varepsilon}_{\eta}\rangle}$ is already defined and satisfies , , and . Since $p_\eta <_{{{\nu}^{\bullet}}} q_\eta$ and $\nu_\eta\subseteq {{\nu}^{\bullet}}$, we have either $p_\eta <_{\nu_\eta} q_\eta$, or $p_\eta \parallel_{\nu_\eta} q_\eta$. These two possibilities need separate treatments. First, if $p_\eta <_{\nu_\eta} q_\eta$, then $\nu_\lambda=\nu_\eta$ and we let ${{\mathcal L}}_\lambda={{\mathcal L}}_\eta$.
Second, let $p_\eta \parallel_{\nu_\eta} q_\eta$. We define ${{\mathcal L}}_\lambda$ from ${{\mathcal L}}_\eta$ by a horizontal extension as follows. With the notation ${{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}=\nu_\lambda$, we obtain from that ${{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\textup{quo}({\nu_\eta\cup{\{{{\langle p_\eta,q_\eta\rangle}} \}} })}\in{\textup{Quord}({{H}^{\bullet}})}$. Furthermore, the validity of for ${{\mathcal L}}_\eta$ yields that ${\langle \delta_\eta(p_\eta),{\varepsilon}_\eta(p_\eta),\delta_\eta(q_\eta),{\varepsilon}_\eta(q_\eta)\rangle}$ is a spanning $N_6$-quadruple of ${{\mathcal L}}_\eta$. Thus, letting ${{\langle p_\eta,q_\eta\rangle}}$ and ${{\mathcal L}}_\eta$ play the role of ${{\langle p,q\rangle}}$ and ${{\mathcal L}}$ in and , respectively, we define ${{{\mathcal L}}_\lambda}$ as the auxiliary structure ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ taken from Lemma \[mainlemma\]. Since $L_\eta$ is a ${\{0,1\}}$-sublattice of $L_\lambda$, spanning $N_6$-quadruples of $L_\eta$ are also spanning in $L_\lambda$. Furthermore, it follows from $\nu_{\lambda}\supseteq \nu_\eta$ that $p\parallel_\lambda q{\mathrel{\Longrightarrow}}p\parallel_\eta q$. Hence, we conclude that is inherited by ${{{\mathcal L}}_\lambda}$ from ${{{\mathcal L}}_\eta}$.
Assume that $\lambda$ is a limit ordinal. Let $$\begin{aligned}
L_\lambda=\bigcup_{\eta<\lambda} L_\eta,{{\kern 5 pt}} \gamma_\lambda=\bigcup_{\eta<\lambda} \gamma_\eta,{{\kern 5 pt}}
H_\lambda={{H}^{\bullet}},{{\kern 5 pt}}
\nu_\lambda=\bigcup_{\eta<\lambda} \nu_\eta,{{\kern 5 pt}}
\delta_{\lambda }=\bigcup_{\eta<\lambda} \delta_{\eta },
{{\kern 5 pt}}
{\varepsilon}_{\lambda}=\bigcup_{\eta<\lambda} {\varepsilon}_{\eta }\text.\end{aligned}$$ We assert that ${{\mathcal L}}_\lambda={\langle L_\lambda;\gamma_\lambda, H_\lambda,\nu_\lambda,\delta_{\lambda},{\varepsilon}_{\lambda}\rangle}$ is an auxiliary structure satisfying , , and .
Since all the unions defining ${{\mathcal L}}_\lambda$ are directed unions, $L_\lambda$ is a lattice, and ${\langle H_\lambda;\nu_\lambda\rangle}$ is a quasiordered set. Actually, it is an ordered set since $\nu_\lambda\subseteq {{\nu}^{\bullet}}$. For the same reason, $\gamma_\lambda$, $\delta_{\lambda}$, and ${\varepsilon}_{\lambda}$ are maps. It is straightforward to check that all of ,…, hold for ${{\mathcal L}}_\lambda$; we only do this for , that is, we verify and , and also for .
Assume $\gamma_\lambda({{\langle u_1,v_1\rangle}})\leq_{\nu_\lambda} \gamma_\lambda({{\langle u_2,v_2\rangle}})$. Since the unions are directed, there exists an $\eta<\lambda$ such that $u_1,v_1,u_2,v_2\in L_\eta$, and we have $\gamma_\eta({{\langle u_1,v_1\rangle}})\leq_{\nu_\eta} \gamma_\eta({{\langle u_2,v_2\rangle}})$. Using that the auxiliary structure ${{\mathcal L}}_\eta$ satisfies , we obtain ${\textup{con}_{L_\eta}(u_1,v_1)} \leq {\textup{con}_{L_\eta}(u_2,v_2)}$, that is, ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{L_\eta}(u_2,v_2)}$. Using Lemma \[llajtorja\], we conclude ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{L_\lambda}(u_2,v_2)}$ in the usual way. This implies ${\textup{con}_{L_\lambda}(u_1,v_1)} \leq {\textup{con}_{L_\lambda}(u_2,v_2)}$. Therefore, ${{\mathcal L}}_\lambda$ satisfies .
Similarly, if ${\textup{con}_{L_\lambda}(u_1,v_1)} \leq {\textup{con}_{L_\lambda}(u_2,v_2)}$, then Lemma \[llajtorja\] easily implies the existence of an $\eta<\lambda$ such that ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{L_\eta}(u_2,v_2)}$ and ${\textup{con}_{L_\eta}(u_1,v_1)} \leq {\textup{con}_{L_\eta}(u_2,v_2)}$; for ${{\mathcal L}}_\eta$ yields $\gamma_\eta({{\langle u_1,v_1\rangle}})\leq_{\nu_\eta} \gamma_\eta({{\langle u_2,v_2\rangle}})$; and we conclude $\gamma_\lambda({{\langle u_1,v_1\rangle}})\leq_{\nu_\lambda} \gamma_\lambda({{\langle u_2,v_2\rangle}})$. Hence, ${{\mathcal L}}_\lambda$ satisfies and .
Next, for the sake of contradiction, suppose that fails in ${{\mathcal L}}_\lambda$. This implies that ${{\langle 0_{L_\lambda},1_{L_\lambda}\rangle}}$ belongs to $\bigvee{\bigl\{{\textup{con}_{L_\lambda}(a_p,b_p)}: p\in H_\lambda\setminus{\{1_{{{H}^{\bullet}}}\}}\bigr\}}$, where the join is taken in the congruence lattice of $L_\lambda$. Since principal congruences are compact, there exists a finite subset $T\subseteq H_\lambda\setminus{\{1_{{{H}^{\bullet}}}\}}$ such that ${{\langle 0_{L_\lambda},1_{L_\lambda}\rangle}}$ belongs to $\bigvee{\{{\textup{con}_{L_\lambda}(a_p,b_p)}: p\in T\}}$. Thus, there exists a finite chain $0_{L_\lambda}=c_0<c_1<\dots<c_k=1_{L_\lambda}$ such that, for $i=1,\dots, k$, ${{\langle c_{i-1},c_i\rangle}}\in \bigcup{\{{\textup{con}_{L_\lambda}(a_p,b_p)}: p\in T\}}$. Each of these memberships is witnessed by finitely many “witness” elements according to ; see Lemma \[llajtorja\]. Taking all these memberships into account, there are only finitely many witness elements altogether. Hence, there exists an $\eta<\lambda$ such that $L_\eta$ contains all these elements. Applying Lemma \[llajtorja\] in the converse direction, we obtain that ${{\langle 0_{L_\eta},1_{L_\eta}\rangle}}={{\langle 0_{L_\lambda},1_{L_\lambda}\rangle}}$ belongs to $\bigvee{\{{\textup{con}_{L_\eta}(a_p,b_p)}: p\in T\}}$, which is a contradiction since ${{\mathcal L}}_\eta$ satisfies . Consequently, ${{\mathcal L}}_\lambda$ is an auxiliary structure.
Clearly, ${{\mathcal L}}_\lambda$ satisfies and since so do the ${{\mathcal L}}_\eta$ for $\eta<\lambda$. If ${{\langle p,q\rangle}}\in D$ and $p\parallel_\lambda q$, then $p\parallel_\eta q$ for some (actually, for every) $\eta<\lambda$. Hence, the satisfaction of for ${{\mathcal L}}_\lambda$ follows in the same way as in the Successor Step since $L_\eta$ is a ${\{0,1\}}$-sublattice of $L_\lambda$.
We have seen that ${{\mathcal L}}_\lambda$ is an auxiliary structure for all $\lambda\leq\kappa$. Letting $\lambda$ equal $\kappa$, we obtain the existence part of the lemma. The last sentence of the lemma follows from the construction and basic cardinal arithmetic.
We are now in a position to complete the paper.
In order to prove part of the theorem, assume that $P={\langle P;\nu_P\rangle}$ is an ordered set with zero and it is the union of a chain of principal ideals. By Lemma \[cofinalitylemma\], there exist an ordinal number $\kappa$ and a cofinal chain $C={\{c_\iota: \iota<\kappa\}}$ in $P$ such that $0_P=c_0$ and, for $\iota,\mu<\kappa$, we have $\iota< \mu \iff c_\iota< c_\mu$. The cofinality of $C$ means that $P$ is the union of the principal ideals $H_\iota={\mathord\downarrow c_\iota}$, $\iota<\kappa$. We let $H_\kappa=\bigcup_{\iota<\kappa}H_\iota$ and $\nu_\kappa=\bigcup_{\iota<\kappa}\nu_{H_\iota}$, where $\nu_{H_\iota}$ denotes the restriction ${{\nu_P\rceil_{H_\iota}}}$. Clearly, $P=H_\kappa$ and $\nu_P= \nu_\kappa$, that is, ${\langle P;\nu_P\rangle}={\langle H_\kappa;\nu_\kappa\rangle}$. Note that $H_\kappa$ is not a principal ideal in general since $P$ need not be bounded.
For each $\lambda \leq \kappa$, we define an auxiliary structure ${{\mathcal L}}_\lambda={\langle L_\lambda;\gamma_\lambda, H_\lambda,\nu_\lambda,\delta_{\lambda},{\varepsilon}_{\lambda}\rangle}$ such that ${{\mathcal L}}_\mu$ is a substructure of ${{\mathcal L}}_\lambda$ for every $\mu\leq \lambda$; we do this by (transfinite) induction as follows.
We start with the one-element lattice $L_0$ and $H_0={\{c_0\}}={\{0_P\}}$, and define ${{\mathcal L}}_0$ in the only possible way.
Assume that $\lambda=\eta+1$ is a successor ordinal. We apply Lemma \[combinlemma\] to obtain ${{\mathcal L}}_\lambda$ from ${{\mathcal L}}_\eta$. This is possible since $H_\eta$ is an order ideal of $H_\lambda$. Note that Lemma \[combinlemma\] does not assert the uniqueness of ${{{{\mathcal L}}}^{\bullet}}$, and, in principle, it could be a problem later that ${{\mathcal L}}_\lambda$ is not uniquely defined. However, this is not a real problem since we can easily solve it as follows.
Let $\tau_0$ be the smallest *infinite* ordinal number such that ${|P|}\leq |\tau_0|$, let $\tau=2^{\tau_0}$, and let $\pi$ be the smallest ordinal with $|P|=|\pi|$. Note that $|\tau|$ is at least the cardinality of the continuum but $|\pi|$ can be finite. Let $P={\{h_\iota: \iota<\pi\}}$ such that $h_\iota\neq h_\eta$ for $\iota<\eta<\pi$. Also, take a set $T={\{t_\iota: \iota<\tau\}}$ such that $t_\iota\neq t_\eta$ for $\iota<\eta<\tau$. The point is that, after selecting the well-ordered cofinal chain $C$ above, we can use the well-ordered index sets ${\{\iota: \iota<\pi\}}$ and ${\{\iota: \iota<\tau\}}$ to make every part of our compound construction unique. Namely, when we well-order $D$, defined in , we use the lexicographic ordering of the index set ${\{\iota: \iota<\pi\}}\times {\{\iota: \iota<\pi\}}$. When we define lattices, their base sets will be initial subsets of $T$; a subset $X$ of $T$ is *initial* if, for all $\mu<\iota<\tau$, $\,t_\iota\in X$ implies $t_\mu\in X$. If we have to add new lattice elements, like a new top or $c_{pq}$, etc., then we always add the first element of $T$ that has not been used previously. Cardinal arithmetic shows that $T$ is never exhausted. This way, we have made the definition of ${{\mathcal L}}_\lambda$ unique.
Clearly, ${{\mathcal L}}_\iota$ is a substructure of ${{\mathcal L}}_\lambda$ for $\iota< \lambda$; either by Lemma \[combinlemma\] if $\iota=\eta$, or by the induction hypothesis and transitivity if $\iota<\eta$.
If $\lambda$ is a limit ordinal, then first we form the union $${{\mathcal L}}'_\lambda={\langle L'_\lambda;\gamma'_\lambda,H'_\lambda,\nu'_\lambda,\delta'_\lambda, {\varepsilon}'_\lambda \rangle}=
{\langle \bigcup_{\eta<\lambda}L_\eta;\bigcup_{\eta<\lambda}\gamma_\eta, \bigcup_{\eta<\lambda}H_\eta,\bigcup_{\eta<\lambda}\nu_\eta,\bigcup_{\eta<\lambda}\delta_\eta, \bigcup_{\eta<\lambda}{\varepsilon}_\eta \rangle}\text.$$ Note that $\nu'_{\lambda}={{\nu_P\rceil_{H'_\lambda}}}$. In the same way as in the proof of Lemma \[combinlemma\], we obtain that ${{\mathcal L}}'_\lambda$ is an auxiliary structure; the only difference is that now trivially holds in ${{\mathcal L}}'_\lambda$ since $H'_\lambda$ does not have a largest element. To see this, suppose for contradiction that $u$ is the largest element of $H'_\lambda$. Then $u\in H_\eta$ for some $\eta<\lambda$. Since $\lambda$ is a limit ordinal, $\eta+1<\lambda$. Hence $c_{\eta+1}\leq u\leq c_\eta$, which contradicts $c_\eta<c_{\eta+1}$.
Clearly, ${\langle H'_\lambda;\nu'_\lambda\rangle}$ is an order ideal in ${\langle H_\lambda;\nu_\lambda\rangle}$. Thus, applying Lemma \[combinlemma\] to this situation, we obtain an auxiliary structure ${{{{\mathcal L}}}^{\bullet}}$, and we let ${{\mathcal L}}_\lambda= {{{{\mathcal L}}}^{\bullet}}$. Obviously, for all $\eta<\lambda$, ${{\mathcal L}}_\eta$ is a substructure of ${{\mathcal L}}_\lambda$.
Now, we have constructed an auxiliary structure ${{\mathcal L}}_\lambda$ for each $\lambda\leq \kappa$. In particular, ${{\mathcal L}}_\kappa={\langle L_\kappa;\gamma_\kappa, H_\kappa,\nu_\kappa,\delta_{\kappa},{\varepsilon}_{\kappa}\rangle} =
{\langle L_\kappa;\gamma_\kappa, P,\nu_P,\delta_{\kappa},{\varepsilon}_{\kappa}\rangle}
$ is an auxiliary structure. Thus, by Lemma \[impclM\], ${\textup{Princ}(L_\kappa)}\cong{\langle P;\nu_P\rangle}$, which proves part of the theorem.
In order to prove part , assume that $L$ is a countable lattice. Obviously, we have $|{\textup{Princ}(L)}|\leq|{{\textup{Pairs}^{\leq}(L)}}|\leq\aleph_0$, and we already mentioned that ${\textup{Princ}(L)}$ is always a directed ordered set with 0, no matter what the size $|L|$ of $L$ is.
Conversely, let $P$ be a directed ordered set with 0 such that $|P|\leq\aleph_0$. Then there is an ordinal $\kappa\leq \omega$ (where $\omega$ denotes the least infinite ordinal) such that $P={\{p_i: i<\kappa\}}$. Note that ${\{i:i<\kappa\}}$ is a subset of the set of nonnegative integers. For $i,j<\kappa$, there exists a smallest $k$ such that $p_i\leq p_k$ and $p_j\leq p_k$; we let $p_i\sqcup p_j=p_k$. This defines a binary operation on $P$; it need not be a semilattice operation. Let $q_0=p_0$. For $0<i<\kappa$, let $q_i=q_{i-1}\sqcup p_i$. A trivial induction shows that $q_i$ is an upper bound of ${\{p_0,p_1,\dots,p_i\}}$, for all $i<\kappa$, and $q_{i-1}\leq_P q_i$ for all $0<i<\kappa$. Hence, the principal ideals ${\mathord\downarrow q_i}$ form a chain ${\{{\mathord\downarrow q_i}: i<\kappa\}}$, and $P$ is the union of these principal ideals. Therefore, part of the theorem yields a lattice $L$ such that $P$ is isomorphic to ${\textup{Princ}(L)}$. Since the ${\mathord\downarrow q_i}$ are countable and there are countably many of them, and since all the lemmas we used in the proof of part of the theorem preserve the property “countable”, $L$ is countable.
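As an aside for readers who wish to experiment, the chain construction in the preceding paragraph is effectively an algorithm on finite directed ordered sets. The following sketch is purely illustrative (the list encoding, the predicate `leq`, and all function names are mine, not part of the proof): it computes the indices of $q_0\leq q_1\leq\dots$ from an enumeration $p_0,p_1,\dots$ with least element $p_0$.

```python
# Sketch of the chain construction: define p_i ⊔ p_j as p_k for the
# smallest k with p_i <= p_k and p_j <= p_k, then set q_0 = p_0 and
# q_i = q_{i-1} ⊔ p_i.  The principal ideals ↓q_i then cover P.

def join_index(P, leq, i, j):
    """Smallest k (in the enumeration) with P[i] <= P[k] and P[j] <= P[k]."""
    for k in range(len(P)):
        if leq(P[i], P[k]) and leq(P[j], P[k]):
            return k
    raise ValueError("the ordered set is not directed")

def cofinal_chain(P, leq):
    """Indices of q_0 <= q_1 <= ..., with q_i an upper bound of p_0,...,p_i."""
    q = [0]                                   # q_0 = p_0, the least element
    for i in range(1, len(P)):
        q.append(join_index(P, leq, q[-1], i))
    return q

# Example: divisors of 12 under divisibility, enumerated with the zero first.
P = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0
chain = cofinal_chain(P, leq)                 # [0, 1, 4, 5, 5, 5]
```

On this example the chain of values $1\leq 2\leq 6\leq 12$ is cofinal and each $q_i$ bounds $p_0,\dots,p_i$, exactly as the induction asserts.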
[^1]: This research was supported by the European Union and co-funded by the European Social Fund under the project “Telemedicine-focused research activities on the field of Mathematics, Informatics and Medical sciences” of project number “TÁMOP-4.2.2.A-11/1/KONV-2012-0073”, and by NFSR of Hungary (OTKA), grant number K83219
---
abstract: 'This note is commenting on Ronald Gallant’s (2015) reflections on the construction of Bayesian prior distributions from moment conditions. The main conclusion is that the paper does not deliver a working principle that could justify inference based on such priors.'
author:
-
title: 'Some comments about A. Ronald Gallant’s “Reflections on the probability space induced by moment conditions with implications for [B]{}ayesian inference"'
---
Introduction
============
The construction of prior distributions has always been a central aspect of Bayesian analysis, arguably [*the*]{} central piece since all aspects of Bayesian inference are automatically derived from defining the model, the prior, the data and the loss function [@berger:1985]. While the prior is a mathematical object, there is no rigorous derivation of a given prior distribution from the available information or lack thereof and, while “objective Bayes" constructions are automated to some extent [@lhoste:1923; @jeffreys:1939; @broemeling:2003; @berger:bernardo:sun:2009], they are rejected by subjectivist Bayesians who argue in favour of personalistic and non-reproducible prior selection [@kadane:2011]. Defining priors via moment conditions can be traced at least back to [@jaynes:2003] and the notion of [*maximum entropy priors*]{}, even though the moment conditions only involve the parameters $\theta$ of the model. The current paper considers instead moment conditions as defined jointly on the pair $(x,\theta)$ and proposes some necessary conditions for a prior distribution to be compatible with those conditions. From a foundational perspective, a setting where the joint distribution of the data and the parameter that drives this data is a given is hard to fathom, because it implies there is no longer a fixed parameter to infer about. In addition, the term “exogeneity" used in the paper hints at a notion of the parameter being not truly a parameter, but rather including latent variables and maybe random effects. It is hard to reconcile this motivation with a computational one (also found in the paper) where complex likelihoods would justify calling for a prior as a practical tool. The additional reformulation of a (pseudo-)likelihood function defined by the method of moments (Section 2) makes the focus of the paper difficult to specify and hence analyse.
Given the introduction through Fisher’s (1930) fiducial distribution, one may wonder whether or not the author’s approach is to integrate fiducial constructs within the Bayesian paradigm or at the very least to specify under which conditions this can be achieved. As a long-time sceptic about the relevance of fiducial arguments, I doubt one’s ability to produce an arbitrary distribution on an equally arbitrary transform of the pair $(x,\theta)$, instead of a genuine prior $\times$ likelihood construct. For instance, the discussion around the various meanings of the $t$ statistic $$\dfrac{\bar{x}-\theta}{\nicefrac{s}{\sqrt{n}}}$$ does not imply that it can achieve a $t$ posterior distribution with $n-1$ degrees of freedom [*jointly*]{} for all sample sizes $n$, and I doubt it can happen outside exotic cases like Dirac masses on one of the terms. (Exchanging the randomness of terms in a random variable as if it were a linear equation is a guaranteed way to produce fiducial paradoxes and measure theoretic difficulties.)
This set of comments on [@gallant:2015] is organised as follows: in Section \[sec:moma\], I analyse various and somewhat mutually exclusive aspects of the author’s approach, while in Section \[sec:compadre\], I discuss some computational consequences and alternatives.
Deriving priors from moments {#sec:moma}
============================
Moment default likelihood
-------------------------
Gallant (2015) considers the distribution of a pivotal quantity like $$Z=\sqrt{n} W(\mathbf{x},\theta)^{-\nicefrac{1}{2}} m(\mathbf{x},\theta)$$ as induced by the hypothetical joint distribution on $(x,\theta)$, hence conversely inducing constraints on this joint, as well as an associated conditional. (The constraints may be such that the joint distribution does not exist.) However, this perspective is abandoned a few lines below to define a moment likelihood $$p(x | \theta) = (2\pi)^{-\nicefrac{M}{2}} \exp \left\{ \nicefrac{-n}{2}\,
\bar{m}(x, \theta)^\text{T}[W(x, \theta)]^{-1} \bar{m}(x, \theta) \right\}$$ as a quasi-Gaussian pseudo-likelihood in the moment $\bar{m}(x, \theta)$. This is only one among many ways of defining a likelihood from moments, but it further removes the symmetry in $x$ and $\theta$ induced by the original formulation. In addition, one may wonder why a determinant like $\text{det}\{W(x, \theta)\}^{\nicefrac{-1}{2}}$ or at least a normalising constant (obviously depending on $\theta$) does not appear in the pseudo-likelihood, since this impacts the resulting posterior density.
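To make the formula concrete, here is a small numeric sketch (entirely mine; the choice of moment function $m(x,\theta)=x-\theta$ and the use of the sample covariance of the moments as $W$ are illustrative assumptions, and, matching the remark above, no determinant or $\theta$-dependent normalising constant is included):

```python
import numpy as np

def moment_pseudo_likelihood(x, theta, m):
    """Quasi-Gaussian pseudo-likelihood built from a moment function.

    m(x_i, theta) must return a length-M vector per observation; mbar is
    the averaged moment and W the sample covariance of the moments (one
    common, but not the only, choice of weighting matrix).  No
    normalising determinant is included, mirroring the displayed formula.
    """
    mi = np.asarray([m(xi, theta) for xi in x])       # n x M moment matrix
    n, M = mi.shape
    mbar = mi.mean(axis=0)
    W = np.cov(mi, rowvar=False).reshape(M, M)
    q = n * mbar @ np.linalg.solve(W, mbar)           # n * mbar' W^{-1} mbar
    return (2 * np.pi) ** (-M / 2) * np.exp(-q / 2)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=200)
mom = lambda xi, th: np.array([xi - th])              # E[x - theta] = 0
L_true = moment_pseudo_likelihood(x, 1.0, mom)        # near the mode
L_far = moment_pseudo_likelihood(x, 3.0, mom)         # essentially zero
```

The sketch also makes the criticism visible: rescaling $m$ rescales $W$ and leaves the quadratic form unchanged, but any $\theta$-dependence of a missing normalising constant would alter the resulting posterior.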
A connected reference is Zellner’s (1997) [*Bayesian method of moments*]{} where, given moment conditions on the parameters $\theta$ and $\sigma^2$, $$\mathbb{E}[\theta|x_1,\ldots,x_n] = \bar{x}_n\,,
\quad
\mathbb{E}[\sigma^2|x_1,\ldots] = s^2_n\,,
\quad
\text{var}(\theta|\sigma^2,x_1,\ldots) = \nicefrac{\sigma^2}{n}\,,$$ [@zellner:1997] derives a [*maximum entropy*]{} posterior $$\theta|\sigma^2,x_1,\ldots\sim\mathcal{N}(\bar{x}_n,\nicefrac{\sigma^2}{n})\,,
\quad
\sigma^{-2}|x_1,\ldots\sim\mathcal{E}xp(s^2_n)\,,$$ later shown to be incompatible with the corresponding predictive distribution, besides producing an inconsistent estimator of $\sigma^2$ [@geisser:1999].[^1]
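For illustration only, sampling from this maximum-entropy posterior is immediate. The sketch below is mine; it reads $\mathcal{E}xp(s^2_n)$ as “the precision $\sigma^{-2}$ is exponential with rate $s^2_n$”, one of several parameterisations compatible with the notation:

```python
import numpy as np

def bmom_posterior_sample(x, size=10_000, rng=None):
    """Draw (theta, sigma2) from the displayed maxent posterior.

    Parameterisation is an assumption: Exp(s_n^2) is taken to mean the
    precision sigma^{-2} is exponential with rate s_n^2 (scale 1/s_n^2);
    the notation does not pin this down.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    xbar, s2 = np.mean(x), np.var(x, ddof=1)
    prec = rng.exponential(scale=1.0 / s2, size=size)    # sigma^{-2}
    sigma2 = 1.0 / prec
    theta = rng.normal(loc=xbar, scale=np.sqrt(sigma2 / n))
    return theta, sigma2

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.5, size=100)
theta, sigma2 = bmom_posterior_sample(x, rng=rng)
```

Note that under this reading $\sigma^2=1/\text{prec}$ has no finite posterior mean, so the moment condition $\mathbb{E}[\sigma^2|x_1,\ldots]=s^2_n$ cannot hold as stated, which illustrates how underdetermined the construction is.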
Measure-theoretic considerations
--------------------------------
> “If one specifies a set of moment functions collected together into a vector $m(x,\theta)$ of dimension $M$, regards $\theta$ as random and asserts that some transformation $Z(x,\theta)$ has distribution $\psi$ then what is required to use this information and then possibly a prior to make valid inference?" (p.4)
The central question in the paper is determining whether a set of moment equations $$\label{eq:mom}
\mathbb{E}[m(X_1,\ldots,X_n,\theta)]=0$$ (where both the $X_i$‘s and $\theta$ are [*a priori*]{} random) leads to a well-defined pair of a likelihood function and a prior distribution compatible with those. From a mathematical perspective, this seems to be a highly complex question as it implies the integral equation $$\int_{\Theta\times\mathcal{X}^n} m(x_1,\ldots,x_n,\theta)\,\pi(\theta)f(x_1|\theta)\cdots f(x_n|\theta)
\text{d}\theta\,\text{d}x_1\cdots\text{d}x_n=0$$ must allow for a solution [*for all*]{} $n$’s.
Still from a purely mathematical perspective, the problem as stated in Section 3.3 of Gallant (2015) is puzzling: if the distribution of the transform $Z=Z(X,\Lambda)$ is provided, what are the consequences on the joint distribution of $(X,\Lambda)$? It is conceivable but rather unlikely that this distribution $\psi$ will induce a single joint, that is, a single prior and a single likelihood. It is much more likely that the distribution $\psi$ one arbitrarily selects on $m(x,\theta)$ is incompatible with a joint distribution on $(x,\theta)$. To wit, Fisher’s example of the $t$ statistic and of its $t_{n-1}$ distribution.
> “Typically $C$ is coarse in the sense that it does not contain all the Borel sets (...) The probability space cannot be used for Bayesian inference." (p.8)
My understanding of that part of the paper is that defining a joint on $m(x,\theta)$ is not always enough to deduce a (unique) posterior on $\theta$, which is fine and correct, but definitely anticlimactic. This sounds to be what Gallant calls a “partial specification of the prior" (p.9). Hence, rather than building the minimal Borel $\sigma$-algebra on $\mathcal{X}\times\Theta$ compatible with this joint on $m(x,\theta)$, I would suggest examining the range of prior$\times$likelihood pairs that agree with this partial property when using the regular Borel $\sigma$-algebra.
The general solution found in Section 3.5 (“The Abstraction") relies on the assumptions that $Z(\cdot,\theta)$ is a surjective function for all $\theta$’s and on the axiom of choice, namely that an antecedent of the function can be selected for each $z\in\mathcal{Z}$, say $\Upsilon(z,\theta)=x^\star$ such that $Z(x^\star,\theta)=z$. Under these assumptions, $Z$ and $\Upsilon(Z,\theta)$ are in one-to-one correspondence and hence can enjoy the same distribution modulo the proper change of variable. The distribution over $X$ is then obtained by assuming a uniform distribution over the orbit of $Z(x,\theta)=Z(x^*,\theta)$, leading to $$p^\star(x|\theta) = \psi(Z(x,\theta))$$ as defined in Equation (17). There is no issue with this derivation but, as noted previously, neither is there a compelling reason to adopt the smallest $\sigma$-algebra $\mathcal{C}^*$ to make the above a proper density in $\mathcal{X}$. I see little appeal in using this new measure and further wonder in which sense this defines a likelihood function, i.e., the product of $n$ densities of the $X_i$’s conditional on $\theta$. To me this is the central issue, which remains unsolved by the paper.
Computational motivations
-------------------------
> “A common situation that requires consideration of the notions that follow is that deriving the likelihood from a structural model is analytically intractable and one cannot verify that the numerical approximations one would have to make to circumvent the intractability are sufficiently accurate." (p.7)
This computational perspective then is a completely different issue, namely that defining a joint distribution by mean of moment equations prevents regular Bayesian inference when the likelihood function is intractable. This point of view is much more exciting because (i) there are alternatives available, from approximate Bayesian computation (ABC) [@marin:pudlo:robert:ryder:2011] to INLA [@rue:martino:chopin:2008], to EP [@barthelme:chopin:2014], to variational Bayes [@jaakkola:jordan:2000]. In particular, the moment equations are strongly and even insistently suggesting that empirical likelihood techniques [@owen:2001; @lazar:2003] could be well-suited to this setting. And (ii) it is no longer a mathematical puzzle: there exists a joint distribution on $m(x,\theta)$, induced by one (or many) joint distribution(s) on $(x,\theta)$. Hence, the question of finding whether or not this item of information leads to a single proper prior on $\theta$ becomes irrelevant. However, in the event one wants to rely on ABC, being given the distribution of $m(x,\theta)$ seems to mean one can solely generate new values of this transform while missing a natural distance between observations and pseudo-observations, although the log-likelihood of $m(x^\text{obs},\theta)$ could itself be used as a distance.
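A minimal version of the ABC solution sketched in this paragraph, with the averaged moment function itself serving as summary statistic, runs as follows (all names and the toy Gaussian setting are mine, for illustration only):

```python
import numpy as np

def abc_moment_rejection(x_obs, prior, simulate, m, n_sims=20_000,
                         keep=0.01, rng=None):
    """Rejection ABC using the averaged moment function as summary.

    For each draw theta ~ prior, pseudo-data are simulated and the
    distance compares mbar(x_sim, theta) with mbar(x_obs, theta) at the
    same theta; the draws in the lowest `keep` fraction are retained.
    """
    if rng is None:
        rng = np.random.default_rng()
    thetas = np.empty(n_sims)
    dists = np.empty(n_sims)
    for s in range(n_sims):
        th = prior(rng)
        x_sim = simulate(th, len(x_obs), rng)
        gap = m(x_sim, th).mean(axis=0) - m(x_obs, th).mean(axis=0)
        thetas[s], dists[s] = th, np.linalg.norm(gap)
    eps = np.quantile(dists, keep)
    return thetas[dists <= eps]

# Toy setting: x ~ N(theta, 1), prior theta ~ N(0, 10), m(x, theta) = x - theta.
rng = np.random.default_rng(2)
x_obs = rng.normal(1.5, 1.0, size=100)
post = abc_moment_rejection(
    x_obs,
    prior=lambda r: r.normal(0.0, np.sqrt(10.0)),
    simulate=lambda th, n, r: r.normal(th, 1.0, size=n),
    m=lambda x, th: (x - th)[:, None],
    rng=rng,
)
```

In this toy case the $\theta$ terms cancel in the distance, so the scheme reduces to standard ABC on the sample mean; with genuinely joint moment conditions the cancellation no longer occurs, which is precisely the difficulty raised above about missing a natural distance.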
As an aside, the author mentions marginal likelihood estimation by harmonic means à la [@newton:raftery:1994], but I would like to point out this usually is a rather poor solution with potential for disaster, while it requires the likelihood function to be available in closed form. It is also unclear to me why marginal likelihood is mentioned at this stage.
A form of ABC? {#sec:compadre}
==============
> “These characteristics are (1) likelihood is not available; (2) prior information is available; (3) a portion of the prior information is expressed in terms of functionals of the model that cannot be converted into an analytic prior on model parameters; (4) the model can be simulated. Our approach depends on an assumption that (5) an adequate statistical model for the data are available." R. Gallant and R. McCulloch (2009)
As a final comment connected with the computational aspect of the current paper, I would like to point out Gallant’s and McCulloch’s (2009) connections with the ABC approach, to wit the above quote.
In [@gallant:mcculloch:2009], the true (scientific) model parametrised by $\theta$ is replaced with a (statistical) substitute that is available in closed form and parametrised by $g(\theta)$, under Assumption 1, which states that the intractable density is equal to a closed-form density. This latter model is over-parametrised when compared with the scientific model. Take, e.g., a $\mathcal{N}(\theta,\theta^2)$ scientific model versus a $\mathcal{N}(\mu,\sigma^2)$ statistical model. In addition, the prior information is only available on the parameter $\theta$. However, this does not seem to matter very much since (a) the Bayesian analysis is operated on $\theta$ only and (b) the Metropolis approach adopted by the authors involves simulating a massive number of pseudo-observations, given the current value of the parameter $\theta$ and the scientific model, so that the transform $g(\theta)$ can be estimated by maximum likelihood over the statistical model. The paper suggests using a secondary Markov chain algorithm to find this MLE. The pseudo-model is then used in a primary MCMC step.
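In the toy $\mathcal{N}(\theta,\theta^2)$ versus $\mathcal{N}(\mu,\sigma^2)$ example above, step (b) can be sketched in a few lines; since the Gaussian MLEs are simply the sample mean and variance, no secondary Markov chain is needed in this case (the code and all names are mine, not from Gallant and McCulloch):

```python
import numpy as np

def g_hat(theta, n_pseudo=100_000, rng=None):
    """Estimate g(theta) = (mu, sigma2) by simulated maximum likelihood:
    draw pseudo-data from the scientific model N(theta, theta^2) and fit
    the over-parametrised statistical model N(mu, sigma2) by MLE, which
    for a Gaussian is just the sample mean and (biased) sample variance."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.normal(loc=theta, scale=abs(theta), size=n_pseudo)
    return x.mean(), x.var()          # Gaussian MLEs of (mu, sigma2)

rng = np.random.default_rng(3)
mu_hat, s2_hat = g_hat(2.0, rng=rng)  # true g(2.0) = (2.0, 4.0)
```

The Monte Carlo error in $\hat g(\theta)$ is what the massive number of pseudo-observations is meant to control, at an obvious computational cost inside each Metropolis step.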
Hence, the approach of [@gallant:mcculloch:2009] is not truly an ABC algorithm. In the same setting, ABC would indeed use one simulated dataset, with the same size as the observed dataset, compute the MLEs for both and compare them (as in [@drovandi:pettitt:faddy:2011; @martin:etal:2010]). This approach is faster if less accurate when Assumption 1—that the statistical model holds for a restricted parametrisation—does not stand.
Conclusion
==========
One overall interrogation about this paper is the validation of the outcome. As noted in [@fraser:2011], Bayesian posterior distributions are not naturally endowed with an epistemic validity. The same questioning obviously applies to entities defined outside the Bayesian paradigm, the present one included, in that producing a posterior or pseudo-posterior distribution on the parameter offers no guarantee [*per se*]{} about the efficiency of the inference it produces. Using asymptotically convergent approximations to the likelihood function does not always lead to consistent Bayesian approximations [@marin:pillai:robert:rousseau:2011] and thus requires further validation of the procedures proposed here.
Another global interrogation that remains open is the validation of the outcome outside of the Bayesian paradigm. The production of the equation has to occur as a byproduct of defining a joint probability model on the space $\mathcal{X}\times\Theta$, which seems to logically exclude both non-Bayesian perspectives and [*ex nihilo*]{} occurrences of moment conditions. The only statistics example worked out in the paper, namely habit persistence asset pricing, starts with a given prior distribution (31), which makes this example irrelevant for the stated goal of checking whether or not “the assertion of a distribution for moment functions either partially or completely specifies the prior".
Despite these difficulties in apprehending the paper postulate, I would like to conclude with a more positive perspective, namely that the problematic of partially defining models and priors via moment conditions is of considerable interest in an era of Big Data, small worlds [@savage:1954], and limited information. Acknowledging that inference and in particular Bayesian inference cannot always handle big worlds (see, e.g., the paradox exposed in [@robins:wasserman:2000]) and constructing coherent and efficient tools for restricted inference about some aspects of the model are very current questions that beg addressing in full generality.
Acknowledgements {#acknowledgements .unnumbered}
================
I am quite grateful to Robert Kohn (UNSW) and Mark Steel (University of Warwick) for helpful comments and references. This discussion was first delivered at the 6th French Econometrics Conference, on Dec. 5, 2014, conference held in honour of Christian Gouriéroux, to whom I am indebted for his help and support throughout my academic career.
[^1]: In essence, the prior changes for each sample size $n$. [@geisser:1999] designates the Maxent principle [*per se*]{} as the culprit for this incoherence.